Written on: Sep 17, 2019
By: Shari Hicks-Graham

September is the month designated for Alopecia Areata Awareness, so I thought it would be fitting to use this platform to help contribute to this cause. Alopecia areata (AA) is a common form of hair loss that is caused by inflammation and presents with characteristic circular, smooth patches without hair. We know that the emotional difficulty of having AA is great. Patients with AA have higher rates of depression and anxiety. Sometimes the disease can affect the majority of the scalp and eyebrows, inviting assumptions that the condition is life-threatening, which may leave AA patients feeling isolated and self-conscious. It is frustrating to lose control over a physical feature that is tied so closely to one's appearance and self-esteem. These feelings are more than what can be unpacked during a visit to the dermatologist, but I try to communicate to my patients that I see their struggles and that I understand. I recommend that people with AA seek professional counseling from licensed therapists on a regular basis to help manage these normal emotions. Additionally, organizations like the National Alopecia Areata Foundation focus on patient and community education and awareness. Visit naaf.org to learn more.

Now, let's discuss some statistics, what's going on biologically when AA happens, and how to identify and treat the condition. Alopecia areata is an equal-opportunity disease – the condition may affect children or adults of any gender or ethnicity. In the United States, the cumulative lifetime incidence is 2%. In one study, the average age at diagnosis was 31.5 years for women and 36.2 for men. AA is caused by inflammation that attacks the hair follicle within certain portions of hair-bearing skin. The affected areas may be small or large, and may even involve the entire scalp.
These hairless patches may also involve the face and body, including eyelashes, eyebrows, arms, legs, underarms, and genital skin. This inflammatory dysregulation is considered autoimmune because the body's immune system effectively turns against itself and attacks the hair bulb portion of the hair follicle. In recent years, it has become clear that the protected zone (immune privilege) of this portion of the hair follicle is lost in AA. The resulting inflammation disrupts the hair growth cycle and forces premature cell death of the hair epithelium, pushing the follicle into its resting phase and shedding the hair. The development of AA is often sudden and may not be obvious to the individual who is affected. Many patients have told me that it was brought to their attention by a hairdresser or barber, or they may have noticed it while grooming their own hair. People often feel a sense of irritation or itching prior to the development of a spot. Others recognize that a stressful situation may precede the development of the hair loss. Associated features seen on examination of the body include nail changes like pitting (regularly spaced dotted indentations) of the nail and redness of the underlying nail surface. People with atopic dermatitis, environmental allergies and asthma, or immune-mediated conditions like type 1 diabetes, thyroid disease, vitiligo, or psoriasis are also more likely to develop AA. There are other hair loss conditions that can look similar to alopecia areata, so it's important to see your doctor – or, even better, a board-certified dermatologist – who can help differentiate it from other forms of alopecia. Remember that not all hair loss is managed the same way. Telogen effluvium (generalized hair shedding) can often look like AA, as can early central centrifugal cicatricial alopecia, lupus, secondary syphilis, trichotillomania, and the less common temporal triangular alopecia.
Fortunately, the inflammation of AA does not cause scarring of the scalp or affected skin, which means hair regrowth is possible. A dermatologist can determine this either by examination or with a simple scalp biopsy. Treatment options for AA include topical and injectable anti-inflammatory steroid medications, topical minoxidil, topical sensitization therapy – using an irritant to "trick" the immune system into stopping the attack at the hair follicle – and even systemic medications like prednisone or methotrexate for more resistant cases. Newer medications, called JAK inhibitors, are on the horizon. These targeted therapies work by blocking the signals that turn on inflammation in the affected areas of AA. They are not currently FDA approved for AA, but we are hopeful that they will be in the next 1-2 years. Prognosis is extremely variable from case to case. Without treatment, hair regrowth may occur at rates up to 60%, but for some, medical treatment may be required to see regrowth. Use of some of the aforementioned therapies has been associated with hair regrowth upwards of 80%. Unfortunately, cases with more extensive areas of involvement are more resistant to treatment, and pediatric cases are also associated with a less favorable prognosis. We are hopeful, however, that with advances in genetic research, new targeted therapies will be developed that are more effective, particularly for these resistant pediatric and adult cases. (Source: J Am Acad Dermatol 2018;78:15-24)

It is important to understand the many different forms of alopecia and that they are managed in different ways. Increasing awareness about AA and other types of hair loss is important not just for our own well-being and education but also so that we may become more sympathetic to what others go through. Chances are high that by your 30s or 40s you have already seen or know someone dealing with AA.
The good thing is, despite the high variability of outcomes for the condition, it is fairly easy for a board-certified dermatologist to identify specifically what is happening to your hair and provide some clarity on treatment options. Resolution of AA is certainly possible, and we are optimistic that with ongoing research, the future is bright for those suffering with this disease.

Live free & clear,

Of course, all of our products are carried here at livso.com. Alternatively, you can also purchase our products at the following retailers, salons, and barbershops. Check back frequently as our stocklist continues to grow.

- Ambushed Salon - Gahanna, Ohio
- Adrian Fanus Grooming - Brooklyn, New York
- DTLA Cuts - Los Angeles, California
- Downtown Dermatology - Columbus, Ohio
- Ms. Melanin Beauty Supply & Salon - Canal Winchester, Ohio
- Panache Hair Designs - Oak Park, Illinois
- Skin Specialty Dermatology - New York, New York
- Skin Wellness Center of Alabama - Homewood, Alabama
- Willis Beauty Supply Co - Columbus, Ohio
- W Style Lounge - Columbus, Ohio

If you are a professional stylist, barber, or retailer and would like to carry our LivSo products, please email firstname.lastname@example.org

Yes! We offer samples for $2 each on our website. Each sample includes a 0.33 oz package of each of our three products. There is no limit to the number of samples you can purchase. You may require more than one sample depending on the length and thickness of your hair and how many times you would like to use each product. Samples are a great way to introduce friends and family to the product line as well!

We fulfill orders within 48 hours, and once an order is shipped, shipping time is typically 2-3 business days.

Moisturizing Shampoo - Apply to wet scalp and hair, lather and rinse. Repeat once or twice as needed. Our gentle, sulfate-free formula is intended to effectively cleanse without overdrying your hair and skin.
Massaging the scalp while applying is also recommended to improve circulation throughout the scalp. Use at least weekly for best results. Be careful to avoid direct contact with eyes. View a quick tutorial video here.

Moisturizing Conditioner - Apply to scalp and hair and leave on for at least three minutes, detangle or comb gently using a wide-toothed comb, then rinse out completely. Be careful to avoid direct contact with eyes. View a quick tutorial video here.

Moisturizing Lotion - Apply to scalp after washing and conditioning, before drying & styling hair. May also be applied to the scalp daily as a moisturizer. This product also works well on the hair itself for extra moisturization to improve manageability. View a quick tutorial video here.

The products are designed to be used at least once per week but may be used as often as three times per week, depending on your hair and scalp condition and moisturizing needs.

Yes! The Moisturizing Lotion is a versatile product. Although it is formulated to be a light moisturizer applied directly to the scalp to alleviate dryness and itchiness, many use it directly on the hair for additional hydration from the roots to the tips. Men also find it useful for moisturizing their beards without leaving their hair with an oily finish or strong fragrance.

Yes. Although we did not do a unique test for color safety, we are familiar with all of our ingredients and have yet to receive feedback relating to issues with use on color-treated hair.

Our independent clinical trial consisted of only adult participants. Please be advised that our products do not have a tear-free formula and they were not studied for use on children. Be careful to avoid direct contact with the eyes.

LivSo was designed for people with dry, itchy scalp and curly or textured hair, regardless of ethnicity or gender. If you have a textured hair type that dries out easily with typical dandruff shampoos or sulfate products, this product is for you.
Curly or textured hair, by nature, requires more moisture to prevent breakage and splitting and to preserve its luster. The cuticle of curly and textured hair tends to be ruffled rather than flat, which prevents oil from the scalp from wicking from the root to the tip of the hair, causing it to look and feel drier. Beyond those who have dry, itchy scalp, if you have dry curly hair, our products were expertly crafted to promote scalp health and meet your hair cleansing and moisturizing needs. To our friends with straight hair and dry, itchy scalp, LivSo products are formulated to meet your needs as well. Our line of products may feel more moisturizing than alternative products without a focus on hydration, but they will still work to alleviate dry, itchy scalp.

Creating products that are safe and gentle for your skin and hair without compromising efficacy is our highest priority. We include numerous natural ingredients in all of our products, but we do not focus on being all-natural, because natural ingredients – and therefore natural products – are not necessarily safe and gentle. Our products are gluten-free, paraben-free, sulfate-free, and are not derived from animals in any way.

Cetyl alcohol will not dry out your hair; in fact, it does just the opposite. Cetyl alcohol is a conditioning agent and a fatty alcohol, which actually has a moisturizing effect on the hair and skin, as opposed to isopropyl alcohol, which is your common rubbing alcohol.

Glycolic acid is a valuable and effective ingredient included in our Moisturizing Shampoo and Conditioner, and it can increase the skin's photosensitivity. As such, we want to ensure that those with sun sensitivity are aware of its presence. We believe the sunburn risk from using these products is still fairly low because the hair on the scalp protects the scalp from direct sunlight.
If you are wearing a protective style where parts of your scalp are exposed, or if you have a very short haircut, we advise that you apply sunscreen to any exposed areas.

Roughly two years.

We have plans to ship products to select countries in 2018. Please sign up for our newsletter to be notified when we begin shipping internationally.

LivSo, LLC
500 East Main Street
Suite 310
Columbus, Ohio 43215
Two interesting news items related to postpartum depression popped up this past week. The first is news that researchers have identified a link between an oxytocin receptor blood marker in some women and an increased likelihood of experiencing postpartum depression. What does this mean? Well, if there were a blood test to give pregnant women to identify which ones were more likely to experience postpartum depression, we could proactively identify those women, and doctors and families could put supports into place for the postpartum period ahead of time. The second story is about a change in recommendations from the US Preventive Services Task Force about screening adults for depression. Now, if you're like me, you might be asking yourself what the US Preventive Services Task Force (USPSTF) is and what they do. Turns out, the Task Force is convened by Congress and reviews current clinical research to "improve the health of all Americans by making evidence-based recommendations about clinical preventive services such as screenings, counseling services, and preventive medications." The Task Force is now recommending that all adults be screened for depression because of its prevalence (1 in 10 adults in the US will experience depression), and they specifically identified that all pregnant and postpartum women be screened. It's great when postpartum depression gets media attention. It increases awareness of the huge number of families affected by emotional complications in pregnancy and postpartum. Screening and identifying those who are suffering is a critical first step. However, there's an immense gap between screening and treatment. Postpartum women – particularly low-income mothers and mothers of color – obtain treatment for postpartum mood and anxiety disorders at abysmally low rates, even after they've been positively screened. There's also a little fact in the original research about the oxytocin receptor that's interesting.
The study found that there was no connection between the oxytocin receptor and risk for PPD in women who had depression during pregnancy - the link was only in women who were not depressed prenatally. Not only does depression in pregnancy increase the risk for postpartum depression, but untreated prenatal depression is also a risk factor for unfavorable pregnancy outcomes including low-birth weights and pre-term births. So, we still need to screen all pregnant women AND treat those who are depressed. The Massachusetts legislature overturned the Governor's veto of funding for the pilot program I mentioned above. I've been asking many of you in Massachusetts to contact your legislators about this recently, so thank you for all your advocacy! Meanwhile, Congresswoman Katherine Clark and Congressman Ryan Costello introduced legislation, the Bringing Postpartum Depression Out of the Shadows Act, to increase and improve screening AND treatment for women with postpartum depression through grants to the states to develop new programs. What's caught your eye in the news lately related to pregnancy or postpartum emotional complications? I went to the screening of the maternal mental health documentary, Dark Side of the Full Moon, last night, organized by Leslie McKeough, LICSW - a Lynnfield therapist - and the North Shore Postpartum Depression Task Force. The documentary highlights the experiences of several women who experienced perinatal mood and anxiety disorders, the dismal state of screening for emotional complications in pregnancy and postpartum, and the barriers to treatment for these women. Interspersed are the news stories of the lives lost to maternal mental illness while they were filming the documentary. Women feel guilty, self-conscious, isolated, and overwhelmed when they're experiencing emotional complications in the postpartum. Supporting women with perinatal emotional complications is about more than a 10-item questionnaire, though that's a good first step. 
It's about more than having a therapist's phone number, though that's needed too, and hopefully many people have that therapist's number or know where to look. It's knowing that if they reveal to you how they feel, they're doing so with fear and worry about not being a good mother, about their baby being "taken away," about never feeling like themselves again. Supporting women with perinatal emotional complications is about having effective systems of care in the community. These issues, this stigma, these barriers to care are why I and three colleagues founded the Every Mother Project with the belief that every mother deserves comprehensive perinatal support. We developed a Perinatal Toolkit for women's health professionals to better understand, recognize, know how to talk about, and support women through perinatal emotional complications. We've had lactation counselors, doulas, pelvic floor physical therapists, midwives, acupuncturists, and many other birth and postpartum professionals download the toolkit. Our hope is that with more training and awareness for the myriad people who come into contact – and often develop quite close and important relationships – with pregnant and postpartum women and new parents, more women will feel heard and understood and will be able to be connected to the right supports. The movie didn't get into the racial and socioeconomic disparities that exist in maternal mental health, but I'd be remiss in not mentioning them here. There's been yet another study that examined stress in pregnancy and risk of postpartum depression, finding that more stressful events (financial, partner, trauma, or emotional) in a woman's life were directly correlated with a higher risk for emotional complications.
Other studies have identified that experiences of racial discrimination during pregnancy (which can be prevalent within medical systems) not only affect the pregnant woman's own emotional and physical health, but also impact the infant's stress physiology response. So yes, institutional racism and systemic oppression have real effects on pregnant and postpartum women of color and women in poverty, increasing their risk for perinatal emotional complications, all while making it harder for them to be identified and access treatment. I'm so thankful for the chance to view Dark Side of the Full Moon, and that so many others did, too. We have much to do still to better support women through perinatal emotional complications - even in Massachusetts. Please, at least take a look at the trailer if you missed it. And maybe we can organize another viewing... So, I promised to come back to the idea of intuition, how to find it and stick with it. Because, while it may feel easier to "stop the chattering mind" in order to make space for intuition in solitude, most of us aren't parenting in a peaceful, zen place away from all meddling, well-meaning voices that feed the rational mind's desire for a plan: a promise of steps to follow to succeed in raising this baby/toddler/teenager seeking - messily, noisily, and sometimes inconveniently! - to get their needs met. So, what then? Here are three things to help tap into your own intuition: And lastly, just a word about posts on the internet - including this one! If something I've written here resonates, fantastic. If it doesn't land well, that's ok. I strive to write the way I approach my private practice: from a supportive, nonjudgmental place. I hope you "take what works and leave the rest" from my writing and from all the rest of the internet, too! On 6/16/14, the NY Times published two articles on maternal mental health. This was above the fold, front-page media coverage for the emotional complications that mothers face. 
Let's talk about the cover story: 'Thinking of Ways to Harm Her': New Findings on Timing and Range of Maternal Mental Illness. First, a quick point about language. I don’t love that the headline led with the attention-grabbing reference to intrusive thoughts of harming a baby. And the article fell into the pattern of referencing either “postpartum depression,” which it makes the point of saying doesn’t accurately encompass the range of experiences, or “maternal mental illness,” which can come across as very medical. But the alternatives, like "perinatal mood disorders" or "emotional complications" have their limitations as well. So, I try to use a range to best speak to women's experiences. But back to the article... Women and families shared their experiences with depression, intrusive thoughts, and anxiety, demonstrating incredible vulnerability and courage. Writer Pam Belluck discussed the new research that backs up what clinical experience tells us: that emotional complications often start in pregnancy and are not easily identified as "just” depression, but frequently include overlapping features of depression, anxiety, obsessive compulsive disorder, and bipolar disorders. She touched on some of the factors that are associated with these experiences, the prevalence, and the range in timing when symptoms arise (during pregnancy and throughout the postpartum year). And there was mention of treatment: medications, therapy, support groups, and help to address impacts on bonding and attachment, though I wish there was more exploration of what treatment looks like for women and families. 
Belluck highlighted efforts to increase screening for postpartum mood disorders and the frustrating fact that more screening does not necessarily mean improved health: "A study in New Jersey of poor women on Medicaid found that required screening has not resulted in more women being treated...the law educated pediatricians and obstetricians, but did not compensate them for screening." I am thankful that in Massachusetts we are taking some steps to increase screening. A 2010 law authorized the Department of Public Health to "develop a culture of awareness, de-stigmatization, and screening for perinatal depression." But changing a culture and eliminating stigma take time. Even if they are given a questionnaire, new moms often hide the truth of how they're feeling from their doctors and pediatricians out of shame and fear of judgment. And an OB who sees a woman for a mere 15 minute follow up appointment at 2 or 6 weeks postpartum may feel reluctant to ask further questions because they're unsure of where or to whom they would even refer her. The Massachusetts Child Psychiatry Access Project (MCPAP) aims to address some of these barriers by expanding its focus to include maternal mental health. Starting next month, doctors will be able to call a toll-free number to speak to a care coordinator to help find a mental health provider for their patient. MCPAP for Moms will be a great resource for doctors, but what about for mothers, their partners and families, and other providers? Granted there are some resources like the Massachusetts Postpartum Support International warmline (866-472-1897) and regional and community task forces creating systems of care for maternal mental health, but there are still gaping cracks women and families can fall through. 
What I’m most hopeful about in Massachusetts is a relatively small pilot project focused on preventing postpartum depression by putting postpartum doulas who can provide support and screenings in a few community health centers. A friend and colleague, Divya Kumar, Sc.M., is a certified postpartum doula and certified lactation counselor who works in one of these community health centers. Excited about the integration of services to address maternal health, Kumar says, “We need to change the way we do this...it's not just about preventing postpartum depression, but it's about promoting postpartum wellness and overall emotional health in new moms.” When a new mom brings her baby in for his well baby visit, Kumar is able to spend time with her helping with breastfeeding challenges, screen her for postpartum depression, and if needed, refer her to the mental health clinician down the hall who can see her that same day. And this is true even if the mother is not a patient of the health center. Plus, the community health center has midwives who also provide prenatal care so there’s the possibility for connection during pregnancy – important for the women who experience depression and anxiety during pregnancy and/or those who have a known mental health history. "Timely screening for perinatal emotional complications can save lives—especially in a community health center lucky enough to have comprehensive postpartum support AND mental health services right under one roof. [I am] so thankful for this pilot money and for centralized, accessible services," says Kumar, "We are offering services to families where the baby is seen at the clinic even if mom is not...We have caught a couple cases of PPD that way--huge, huge victories!" 
Until this pilot project can be replicated to reach more women, a woman (or someone in her family) needs the knowledge to recognize that what she's feeling isn't just new mom exhaustion, the courage to ask for help, and the resources to be able to find/afford/get to treatment. On top of all that, treatment must be specialized, connected to community-based supports, and welcoming. Dr. Kozhimannil, quoted in the article, speaks to the barriers: "There are also not enough treatment options…If a woman comes with a baby, and it’s a place treating people with substance abuse or severe mental illness, she may be uncomfortable.” (And yet, let's not forget that these are not mutually exclusive groups). When everything falls into place, it works. Timely, accessible treatment can help. As Jeanne Marie Johnson was quoted saying, once she received help, “It’s just a whole world of difference.” When I was looking for my office space, I thought about what it would be like for pregnant or new moms coming to see me. I looked for an office with an elevator, easy bathroom access, and parking. A chair that rocks for a breastfeeding mother, a hidden box of toys to distract an infant, water or a cup of tea to offer some comfort and hydration: these are all small ways I hope that the environment welcomes pregnant women and new mothers. And my connections to other resources—psychiatrists, acupuncturists, sleep consultants, lactation counselors, groups—form the foundation of a potential community of support for isolated new mothers and families. Ultimately, national media coverage of perinatal emotional complications like these NY Times articles helps to decrease isolation and stigma. I hope that this leads to more screening, more treatment, and more health for mothers and families. A bow of gratitude to the women who shared their stories and to Pam Belluck for writing these pieces. What's your take on the article? Please share in the comments.
The Battleship Potemkin (Russian: Броненосец «Потёмкин», Bronenosets "Potyomkin") is a silent Soviet film directed by Sergei Eisenstein, released in 1925. It deals with the mutiny of the battleship Potemkin in the port of Odessa in 1905, and with the insurrection and repression that ensued in the city. The film was long banned in many Western countries for "Bolshevik propaganda" and "incitement to class violence". It is considered one of the greatest propaganda films of all time, and in 1958 it was voted the best film of all time by 117 international critics at the Brussels World's Fair. The film has entered the public domain in most countries of the world.

The film consists of five parts:
- "Men and Maggots" (Люди и черви): sailors protest against eating rotten meat.
- "Drama in the Bay" (Драма на тендре): the sailors and their leader Vakulinchuk revolt; Vakulinchuk is killed.
- "Death Demands Justice" (Мёртвый взывает): Vakulinchuk's body is carried by the crowd of Odessa citizens, who come to acclaim the sailors as heroes.
- "The Odessa Steps" (Одесская лестница): soldiers of the Imperial Guard massacre the population of Odessa on a staircase that seems endless.
- "Meeting the Squadron" (Встреча с эскадрой): the squadron sent to stop the Potemkin's revolt refuses its orders.

The revolt of the crew of the battleship Potemkin on 14 June 1905 (Julian calendar), during the Russian Revolution of 1905, is presented as a precursor to the October Revolution (1917) and from the point of view of the insurgents. The battleship reproduces, in the microcosm of its crew, the divisions and inequalities of Russian society. One of the causes of the mutiny is the issue of food: the officers, presented as cynical and cruel, force the crew to consume rotten meat while themselves maintaining a privileged lifestyle aboard (the scene of the plates, "Give us this day our daily bread").
The staircase scene

The most famous scene of the film is the massacre of civilians on the steps of the monumental Odessa staircase (also called the Primorsky Stairs or the "Potemkin Stairs"). In this scene, Tsarist soldiers in their white summer tunics descend the seemingly endless staircase at a rhythmic, machine-like pace, firing into the crowd, while a detachment of mounted Cossacks charges the crowd at the bottom of the stairs. The victims shown on screen include an old woman with a pince-nez, a young boy with his mother, a student in uniform, and a teenage schoolgirl. The scene lasts six minutes. The shot of a mother who dies on the ground, releasing a pram that rolls down the steps, uses a high-angle forward tracking shot – revolutionary filming for the time. In reality, this scene never took place; Eisenstein used it to dramatize the film and to demonize the Tsarist guard and the political power in place. In 1991, the staircase scene was echoed by Russian photographer Alexey Titarenko to dramatize human suffering during the collapse of the Soviet Union. The scene is, however, rooted in fact: there were many demonstrations in Odessa itself (though not on the stairs) following the Potemkin's arrival in the port. The London Times and the British consul reported that troops fired on the crowd, resulting in a significant loss of life (the exact number of casualties is not known).

Impact of the pram scene on culture

The motif of the pram escaping the mother's grasp and rolling down the stairs was taken up by Brian De Palma in The Untouchables, except that there the scene plays out in slow motion in a train station. Terry Gilliam borrowed the scene in Brazil, but this time it is a vacuum cleaner that tumbles down the steps after a maid is killed in the exchange of gunfire following Sam Lowry's escape. The scene was also used parodically in The Simpsons, by Woody Allen in Love and Death as well as in Bananas, and by David Zucker in Naked Gun 33⅓: The Final Insult
(which actually parodies The Untouchables), by Ettore Scola in We All Loved Each Other So Much, by Les Nuls in La Cité de la peur, by Anno Saul in Kebab Connection, and by Peter Jackson in Braindead. [citation needed]

- Director: Sergei Eisenstein
- Screenplay: Sergei Eisenstein, from Nina Agadjanova-Chutko's story
- Editing: Grigori Aleksandrov, Sergei Eisenstein
- Sets: Vassili Rakhals
- Assistants: A. Antonov, Mikhail Gomarov, Levshine, Maxime Schtrauch
- Production manager: Yakov Bliokh
- Intertitles: Nikolai Aseyev
- Music: Edmund Meisel, Dmitry Shostakovich, Nikolay Kryukov
- Cinematography: Edouard Tissé, Vladimir Popov
- Production: Goskino (Moscow)
- Producer: Yakov Bliokh
- Filming locations: port and city of Odessa, and Sevastopol
- Distribution: Goskino – Mosfilm
- Production format: 35 mm
- Projection format: 1.33:1
- Country of origin: Soviet Union
- Language: silent, with Russian intertitles
- Genre: historical drama
- Running time: 68 to 80 min, depending on the version
- Release dates:
- Soviet Union: (world premiere at the Bolshoi Theater in Moscow)
- France:
- United States: (premiere in New York)

- Grigory Alexandrov: Lieutenant Guiliarovski, second in command
- Alexander Antonov: Grigory Vakulinchuk, the Bolshevik sailor
- Vladimir Barsky: Commander Golikov
- Ivan Bobrov: the young sailor, the rookie conscript struck in his sleep
- Julia Eisenstein: the woman with the piglet
- Sergei Eisenstein: a citizen of Odessa
- Andreï Faït: a recruit
- Constantin Isodorovich Feldman: the student delegated by the revolutionaries of Odessa to the crew of the Potemkin, a role he had actually played in real life
- A. Glaouberman: Aba, the boy killed on the stairs
- Glotov: the antisemitic provocateur
- Mikhail Gomorov: Matushenko
- Korobei: the legless veteran sailor
- Alexandre Levchine: the petty officer
- Marusov: an officer
- Vladimir Mikhailovich Uralsky
- N. Poltavseva: the teacher with the pince-nez
- Prokopenko: Aba's mother
- Protopopov: an old man
- Repnikova: a woman on the stairs
- Maxime Maximovich Strauch
- Beatrice Vitoldi: the woman with the pram
- Zerenine: the student
- Uncredited actors:
- a chauffeur: the ship's doctor
- a gardener: the Orthodox priest

Genesis of the film

Battleship Potemkin was a commissioned film: the State Commission ordered a film from Sergei Mikhailovich Eisenstein to celebrate the twentieth anniversary of the 1905 Revolution. It is therefore a didactic work, but the director retained great freedom of artistic creation in treating the subject. The Soviet state had decided to use cinema as an instrument of propaganda, but during the period of the New Economic Policy (a period of economic and political easing initiated by Lenin), filmmakers were able to produce films that did not follow the Communist Party line to the letter. Eisenstein, who had directed a much-noticed feature the previous year, Strike (La Grève), had four months to shoot and edit the film. He therefore cut down his original scenario, a copious "monograph of an era" written in collaboration with Nina Agadjanova, focusing the action on a single episode: the mutiny of the sailors of a warship in the Black Sea, near the port of Odessa.

After Strike, released the year before, Eisenstein continued to experiment with his theories of editing. Initially propaganda, like all Soviet films of the period, the film was a huge success in the Soviet Union and marked the history of cinema by its inventions and technical qualities and by the epic sweep Eisenstein gave it. Several scores have been set to Eisenstein's silent images. They are the work of Dmitry Shostakovich, Nikolay Kryukov (for the restored Soviet version of 1976), and Edmund Meisel, whose score was the one originally used.
Eisenstein, however, ended his collaboration with Meisel from the day a London performance – with a brisk tempo imposed by Meisel – made the whole audience laugh at one point. It was then that the importance of the concordance – or discordance – between image and sound was understood. A "new version" was shown at the Berlin Film Festival. It includes intertitles containing speeches by Trotsky, removed at the time because he was no longer part of the official pantheon of communism wanted by Stalin. The genius of the editing is also its weakness: Eisenstein, who had trained his hand by re-editing Western films, assimilated the power of montage to that of discourse. Today this montage is fragmentary: it has been retouched many times for propaganda purposes by the Soviet regime.

Around the film

- "Designed to commemorate the anniversary of the failed revolution of 1905, the film was originally intended as The Year 1905, evoking all the events that had marked it. However, prisoner of the deadlines imposed on him – the film had to be finished before the end of the year – and delayed by terrible weather, Eisenstein decided to abandon the initial scenario and to retain only the episode of the mutiny of the Potemkin, which had the advantage that it could be shot on the Black Sea, where the weather is milder."
- "… Its public screening was only authorized in France in 1953! Until that date, it could be seen only in cinematheques and film clubs."
Two excerpts from the Ciné-club booklet whose publication director was Jean-François Davy.
- Julia Eisenstein, who plays the woman holding a piglet, is the director's mother.
- For the historical facts about the riots in Odessa and the mutiny of the Potemkin, the magazine L'Illustration, now online, provides a large number of articles written by its special envoys and photographs taken by its reporters.
Notes and references
1. a and b "The Battleship Potemkin" [archive], on Le Monde diplomatique (accessed March 14, 2016).
2. "Battleship Potemkin" [archive], on the Cité de la musique website (accessed January 17, 2017).
3. Protzman, Ferdinand. Landscape: Photographs of Time and Place. National Geographic, 2003 (ISBN 0-7922-6166-6).
4. AlloCiné, "Trivia for the film Battleship Potemkin" [archive], on Screenrush (accessed March 14, 2016).
What are clinical studies?
Clinical studies are research studies in which real people participate as volunteers. Clinical research studies (sometimes called trials or protocols) are a means of developing new treatments and medications for diseases and conditions. There are strict rules for clinical trials, which are monitored by the National Institutes of Health and the U.S. Food and Drug Administration. Some of the research studies at the Clinical Center involve promising new treatments that may directly benefit patients. Research is vital to help us understand diseases and conditions and their complex interactions. Because of the answers research can provide, it is a powerful source of hope for people experiencing mental health conditions and their families.

Why should I participate?
The health of millions has been improved because of advances in science and technology, and the willingness of thousands of individuals like you to take part in clinical research. The role of volunteer subjects as partners in clinical research is crucial in the quest for knowledge that will improve the health of future generations. With your help, we can pave the path for future cures.

Will I be compensated?
Many clinical trials offer compensation for volunteering. Please ask if compensation is available for you.

What is a "healthy volunteer"?
A volunteer subject with no known significant health problems who participates in research to test a new drug, device, or intervention is known as a "healthy volunteer" or "clinical research volunteer." The clinical research volunteer may be a member of the community, an NIH investigator or other employee, or a family member of a patient volunteer. Research procedures with these volunteers are designed to develop new knowledge, not to provide direct benefit to study participants. Clinical research volunteers have always played a vital role in medical research.
We need to study healthy volunteers for several reasons: When developing a new technique such as a blood test or imaging device, we need clinical research volunteers to help us define the limits of “normal.” These volunteers are recruited to serve as controls for patient groups. They are often matched to patients on such characteristics as age, gender, or family relationship. They are then given the same test, procedure, or drug the patient group receives. Investigators learn about the disease process by comparing the patient group to the clinical research volunteers. What are Phase I, Phase II and Phase III studies? The phase 1 study is used to learn the “maximum tolerated dose” of a drug that does not produce unacceptable side effects. Patient volunteers are followed primarily for side effects, and not for how the drug affects their disease. The first few volunteer subjects receive low doses of the trial drug to see how the drug is tolerated and to learn how it acts in the body. The next group of volunteer subjects receives larger amounts. Phase 1 studies typically offer little or no benefit to the volunteer subjects. The phase 2 study involves a drug whose dose and side effects are well known. Many more volunteer subjects are tested, to define side effects, learn how it is used in the body, and learn how it helps the condition under study. Some volunteer subjects may benefit from a phase 2 study. The phase 3 study compares the new drug against a commonly used drug. Some volunteer subjects will be given the new drug and some the commonly used drug. The trial is designed to find where the new drug fits in managing a particular condition. Determining the true benefit of a drug in a clinical trial is difficult. What is a placebo? Placebos are harmless, inactive substances made to look like the real medicine used in the clinical trial. Placebos allow the investigators to learn whether the medicine being given works better or no better than ordinary treatment. 
In many studies, there are successive time periods, with either the placebo or the real medicine. In order not to introduce bias, the patient, and sometimes the staff, are not told when or what the changes are. If a placebo is part of a study, you will always be informed in the consent form given to you before you agree to take part in the study. When you read the consent form, be sure that you understand what research approach is being used in the study you are entering.

What is the placebo effect?
Medical research is dogged by the placebo effect – the real or apparent improvement in a patient's condition due to wishful thinking by the investigator or the patient. Medical researchers use three methods to rid clinical trials of this problem; these methods have helped discredit some previously accepted treatments and validate new ones. The methods are: randomization, single-blind or double-blind studies, and the use of a placebo.

What is randomization?
Randomization is when two or more alternative treatments are selected by chance, not by choice. The treatment chosen is given with the highest level of professional care and expertise, and the results of each treatment are compared. Analyses are done at intervals during a trial, which may last years. As soon as one treatment is found to be definitely superior, the trial is stopped. In this way, the fewest patients receive the less beneficial treatment.

What are single-blind and double-blind studies?
In single- or double-blind studies, the participants don't know which medicine is being used, so they can describe what happens without bias. Blind studies are designed to prevent anyone (doctors, nurses, or patients) from influencing the results, which allows scientifically accurate conclusions. In single-blind ("single-masked") studies, only the patient is not told what is being given.
In a double-blind study, only the pharmacist knows; the doctors, nurses, patients, and other health care staff are not informed. If medically necessary, however, it is always possible to find out what the patient is taking. Are there risks involved in participating in clinical research? Risks are involved in clinical research, as in routine medical care and activities of daily living. In thinking about the risks of research, it is helpful to focus on two things: the degree of harm that could result from taking part in the study, and the chance of any harm occurring. Most clinical studies pose risks of minor discomfort, lasting only a short time. Some volunteer subjects, however, experience complications that require medical attention. The specific risks associated with any research protocol are described in detail in the consent document, which you are asked to sign before taking part in research. In addition, the major risks of participating in a study will be explained to you by a member of the research team, who will answer your questions about the study. Before deciding to participate, you should carefully weigh these risks. Although you may not receive any direct benefit as a result of participating in research, the knowledge developed may help others. Are clinical trials safe? There are many processes in place to ensure the highest safety standards for those entering a clinical trial. The following are areas of protection for the volunteer. Ethical guidelines: The goal of clinical research is to develop knowledge that improves human health or increases understanding of human biology. People who take part in clinical research make it possible for this to occur. The path to finding out if a new drug is safe or effective is to test it on patients in clinical trials. The purpose of ethical guidelines is both to protect patients and healthy volunteers, and to preserve the integrity of the science. 
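The random assignment described in the randomization section above can be sketched in a few lines of code. This is an illustrative toy, not how real trials generate assignments (those use pre-prepared, often stratified, randomization lists held by the study statistician); the function name and participant IDs are hypothetical.

```python
import random

def randomize(participant_ids, arms=("treatment", "placebo"), seed=None):
    """Assign each participant to a study arm by chance, not by choice.

    Sketch of simple (unstratified) randomization. In a blinded trial
    the resulting mapping is withheld from patients (single-blind) or
    from both patients and clinical staff (double-blind).
    """
    rng = random.Random(seed)  # seeded so an assignment list is reproducible
    return {pid: rng.choice(arms) for pid in participant_ids}

assignments = randomize(["P001", "P002", "P003", "P004"], seed=42)
# Every participant is mapped to exactly one of the two arms.
```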
Protocol review: As in any medical research facility, all new protocols must be approved by an institutional review board (IRB) before they can begin. The IRB, which consists of medical specialists, statisticians, nurses, social workers, and medical ethicists, is the advocate of the volunteer subject. The IRB will only approve protocols that address medically important questions in a scientific and responsible manner. Informed consent: Informed consent is the process of learning the key facts about a clinical trial before deciding whether to participate. The process of providing information to participants continues throughout the study. To help you decide whether to take part, members of the research team explain the study. The research team provides an informed consent document, which includes such details about the study as its purpose, duration, required procedures, and who to contact for various purposes. The informed consent document also explains risks and potential benefits. If you decide to enroll in the trial, you will need to sign the informed consent document. You are free to withdraw from the study at any time. IRB review: Most, but not all, clinical trials in the United States are approved and monitored by an Institutional Review Board (IRB) to ensure that the risks are minimal when compared with potential benefits. An IRB is an independent committee that consists of physicians, statisticians, and members of the community who ensure that clinical trials are ethical and that the rights of participants are protected. You should ask the sponsor or research coordinator whether the research you are considering participating in was reviewed by an IRB. 
For more information about research protections and on participants' privacy and confidentiality, see:
- HIPAA Privacy Rule
- The Food and Drug Administration, FDA's Drug Review Process: Ensuring Drugs Are Safe and Effective

What safeguards are there to protect participants in clinical research?
Patient representative: The Patient Representative acts as a link between the patient and the hospital. The Patient Representative makes every effort to assure that patients are informed of their rights and responsibilities, and that they understand what the Clinical Center is, what it can offer, and how it operates. We realize that this setting is unique and may generate questions about the patient's role in the research process. As in any large and complex system, communication can be a problem and misunderstandings can occur. If any patient has an unanswered question or feels there is a problem they would like to discuss, they can call the Patient Representative.

Patient Bill of Rights: Whether you are a clinical research volunteer or a patient volunteer, you are protected by the Clinical Center Patients' Bill of Rights. This document is adapted from the one drawn up by the American Hospital Association for use in all hospitals in the country. The bill of rights concerns the care you receive, privacy, confidentiality, and access to medical records.

Source: The Clinical Center at the National Institutes of Health.

Research is not always just about taking a new medication. You can make a difference in the advancement of scientific research in many ways. Different types of clinical research are used depending on what the researchers are studying. Below are descriptions of some different kinds of clinical research.

Treatment Research generally involves an intervention such as medication, psychotherapy, new devices, or new approaches to surgery or radiation therapy.

Prevention Research looks for better ways to prevent disorders from developing or returning.
Different kinds of prevention research may study medicines, vitamins, vaccines, minerals, or lifestyle changes. Diagnostic Research refers to the practice of looking for better ways to identify a particular disorder or condition. Screening Research aims to find the best ways to detect certain disorders or health conditions. Quality of Life Research explores ways to improve comfort and the quality of life for individuals with a chronic illness. Genetic studies aim to improve the prediction of disorders by identifying and understanding how genes and illnesses may be related. Research in this area may explore ways in which a person’s genes make him or her more or less likely to develop a disorder. This may lead to development of tailor-made treatments based on a patient’s genetic make-up. - Genetic Information Nondiscrimination Act (GINA) of 2008 - National Human Genome Research Institute - The National Advisory Mental Health Council Workgroup on Genomics Epidemiological studies seek to identify the patterns, causes, and control of disorders in groups of people. An important note: some clinical research is “outpatient,” meaning that participants do not stay overnight at the hospital. Some is “inpatient,” meaning that participants will need to stay for at least one night in the hospital or research center. Be sure to ask the researchers what their study requires. Phases of clinical trials: when clinical research is used to evaluate medications and devices Clinical trials are a kind of clinical research designed to evaluate and test new interventions such as psychotherapy or medications. Clinical trials are often conducted in four phases. The trials at each phase have a different purpose and help scientists answer different questions. - Phase I trials Researchers test an experimental drug or treatment in a small group of people for the first time. The researchers evaluate the treatment’s safety, determine a safe dosage range, and identify side effects. 
- Phase II trials The experimental drug or treatment is given to a larger group of people to see if it is effective and to further evaluate its safety. - Phase III trials The experimental study drug or treatment is given to large groups of people. Researchers confirm its effectiveness, monitor side effects, compare it to commonly used treatments, and collect information that will allow the experimental drug or treatment to be used safely. - Phase IV trials Post-marketing studies, which are conducted after a treatment is approved for use by the FDA, provide additional information including the treatment or drug’s risks, benefits, and best use. Examples of other kinds of clinical research Many people believe that all clinical research involves testing of new medications or devices. This is not true, however. Some studies do not involve testing medications and a person’s regular medications may not need to be changed. Healthy volunteers are also needed so that researchers can compare their results to results of people with the illness being studied. Some examples of other kinds of research include the following: - A long-term study that involves psychological tests or brain scans - A genetic study that involves blood tests but no changes in medication - A study of family history that involves talking to family members to learn about people’s medical needs and history. Source: FDA website.
Altman Z-Score Definition

A statistical z-score is a measurement of one value's relationship to the mean (average) of a group of values. More specifically, the z-score states the number of standard deviations a value lies from the mean of the group. A z-score of zero indicates that the value being tested is identical to the mean; a z-score of positive one indicates that the value is one standard deviation above the mean, while negative one indicates that it is one standard deviation below. Z-scores are measures of an observation's variability and can be put to use by traders in assessing market volatility.

The Altman Z-Score, despite the similar name, is a different measure: a formula consisting of five fundamental ratios, used to determine the financial condition of a company and its probability of bankruptcy. The Altman Z-Score helps investors evaluate a business's financial strength and predict the business's bankruptcy.

A Little More on the Altman Z-Score

The Altman Z-Score determines a company's strength by calculating its financial risk, highlighting the probability of bankruptcy using various financial ratios. It was introduced by Edward Altman, a professor at New York University, in the late 1960s. The Altman Z-Score is a valuable tool for evaluating a company's operations: it measures the company's viability in the long term, which helps capital investors judge the risk of the company's bankruptcy, and a poor assessment of a company's financial viability may cause investors huge losses. The Altman Z-Score model uses a multivariate statistical technique called discriminant analysis, and it is used in credit studies or to project the cash position (treasury) of a potential client.
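The plain statistical z-score described at the start of this section (not the Altman Z-Score) reduces to a one-line formula, z = (x − mean) / standard deviation. A minimal sketch, using the population standard deviation; the function name is my own:

```python
def z_score(x, values):
    """How many standard deviations x lies from the mean of `values`."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n  # population variance
    std = variance ** 0.5
    return (x - mean) / std

data = [2, 4, 4, 4, 5, 5, 7, 9]  # mean 5, population standard deviation 2
print(z_score(5, data))  # 0.0 -> the value equals the mean
print(z_score(7, data))  # 1.0 -> one standard deviation above the mean
```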
The formula

The Altman Z-score is calculated as follows:

Altman Z-score = 1.2 × T1 + 1.4 × T2 + 3.3 × T3 + 0.6 × T4 + 1.0 × T5

where:
T1 = Working Capital / Total Assets
T2 = Retained Earnings / Total Assets
T3 = EBIT (earnings before interest and taxes) / Total Assets
T4 = Market Capitalization of Equity / Total Liabilities
T5 = Net Sales / Total Assets

How to use the Altman Z-score to predict bankruptcies

The result of the Altman Z-score formula places the company in the safe zone, the grey zone, or the danger zone:
- Z-score above 2.99: safe zone.
- Z-score between 1.81 and 2.99: grey zone, meaning the company may go bankrupt within the following two years.
- Z-score below 1.81: danger zone, i.e. imminent risk of bankruptcy.

The accuracy of the Altman Z-score in the prediction of bankruptcies

The Altman Z-score formula is about 72% accurate in predicting bankruptcy two years in advance, with a false-negative rate of 6%. Over 31 years of subsequent testing, its accuracy in predicting bankruptcy one year in advance ranged between 80% and 90%, with a false-negative rate between 15% and 20%. The Altman Z-score's predictions can therefore be considered reasonably accurate. It is not an infallible formula, however, and should be used alongside a qualitative analysis of the business for more reliable predictions.

References for Altman Z Score

Predicting financial distress of companies: revisiting the Z-score and ZETA models, Altman, E. I. (2000). Stern School of Business, New York University, 9-12.

Financial ratios, discriminant analysis and the prediction of corporate bankruptcy, Altman, E. I. (1968). The Journal of Finance, 23(4), 589-609.

Considering the utility of Altman's Z-score as a strategic assessment and performance management tool, Calandro Jr, J. (2007). Strategy & Leadership, 35(5), 37-43.
– The analysis examines Carton and Hofer's findings regarding the utility of the Z-score as a strategic analysis and performance management tool.

CAN ALTMAN Z-SCORE MODEL PREDICT BUSINESS FAILURES IN GREECE?, Gerantonis, N., Vergos, K., & Christopoulos, A. (2009). In Proceedings of the 2nd International Conference: Quantitative and Qualitative Methodologies in the Economic and Administrative Sciences (p. 149). Christos Frangos.

Z scores – A guide to failure prediction, Eidleman, G. J. (1995). The CPA Journal, 65(2), 52.

Business bankruptcy prediction models: A significant study of the Altman's Z-score model, Siddiqui, S. A. (2012). Financial ratios are fundamental indicators of a business's soundness and of its operational and financial health. Altman proposed the Z-score model, which combines these ratios to predict a business's financial viability or bankruptcy up to 2–3 years in advance. The paper highlights Altman's studies on predicting business bankruptcy and summarizes the research he carried out to develop the Z-score model. In the modern economy, this model can be used to predict bankruptcy and distress one, two or three years in advance.

Distressed firm and bankruptcy prediction in an international context: A review and empirical analysis of Altman's Z-score model, Altman, E., Iwanicz-Drozdowska, M., Laitinen, E., & Suvas, A. (2014). This paper reviews the previous literature on the importance and efficacy of the Altman Z-score bankruptcy prediction model and its application in finance and other relevant areas, globally. The review analyzes 33 scientific papers published since 2000 in mainstream accounting and finance journals. The paper also uses a sample of international firms (from 31 European and 3 non-European countries) to evaluate the model's performance in predicting firms' distress and bankruptcy.
Since the sample consists primarily of private, non-financial firms, the version of the Z-score model developed for manufacturing and non-manufacturing firms was used in the testing. The overall literature review shows that the Z-score performed well in most cases: for most of the sample countries the accuracy rate was around 75%, and in some cases more than 90%.

Financial Distress Prediction in an International Context: A Review and Empirical Analysis of Altman's Z‐Score Model, Altman, E. I., Iwanicz‐Drozdowska, M., Laitinen, E. K., & Suvas, A. (2017). Journal of International Financial Management & Accounting, 28(2), 131-171. This paper evaluates the classification performance of the Z-score model, especially for banks that operate internationally and must evaluate firms' failure risk. The model's performance is assessed for 31 European and 3 non-European countries.

The Altman Z-score revisited, Russ, R., Peffley, W., & Greenfield, A. (2004). This study revisits the Altman Z-score measure of bankruptcy. To answer criticisms of the original study, it uses a larger sample, data from recent years, and updated statistical methods, and it eliminates the matched-pair design of the original study in rescaling the Z-score, which significantly improves the model's power to predict bankruptcy two years in advance.

Altman's Z-Score models of predicting corporate distress: Evidence from the emerging Sri Lankan stock market, Samarakoon, L., & Hasan, T. (2003).
This study evaluates the ability of three variations of Altman's Z-Score model (Z, Z', and Z"), developed for distress prediction in the U.S., to predict corporate distress in the emerging market of Sri Lanka. The findings show that the models are remarkably accurate in predicting distress using financial ratios calculated from the financial statements of the year preceding the distress. The overall success rate of the Z-score was 81%. The paper concludes that Z-Score models hold great potential for assessing the risk of corporate distress in emerging markets as well.

An evaluation of Altman's Z-score using cash flow ratio to predict corporate failure amid the recent financial crisis: Evidence from the UK, Almamy, J., Aston, J., & Ngwa, L. N. (2016). Journal of Corporate Finance, 36, 278-285. This paper assesses an extension of the Z-score model for determining the viability of UK companies, using discriminant analysis and performance ratios to evaluate which ratios are statistically significant in predicting the health of UK companies from 2000 to 2013. The findings show that cash flow, when used with the original Z-score variables, is helpful in predicting the health of UK companies. A J-UK model was established to test the health of UK companies; compared with the Z-score model, its accuracy rate was 82.9%, which is consistent with Taffler's (1982) UK model.
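The five-ratio formula and the zone thresholds discussed above can be sketched as a small calculator. This sketch follows Altman's original 1968 specification (EBIT in T3, total liabilities in T4); the function names and the example figures are illustrative, not data from any real company.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_cap, total_liabilities, sales, total_assets):
    """Original (1968) Altman Z-score for public manufacturing firms."""
    t1 = working_capital / total_assets
    t2 = retained_earnings / total_assets
    t3 = ebit / total_assets
    t4 = market_cap / total_liabilities
    t5 = sales / total_assets
    return 1.2 * t1 + 1.4 * t2 + 3.3 * t3 + 0.6 * t4 + 1.0 * t5

def zone(z):
    """Map a Z-score to the safe / grey / distress zones described above."""
    if z > 2.99:
        return "safe"
    if z >= 1.81:
        return "grey"
    return "distress"

# Made-up balance-sheet figures, all in the same currency unit:
z = altman_z(working_capital=200, retained_earnings=300, ebit=150,
             market_cap=800, total_liabilities=500, sales=1500,
             total_assets=1000)
print(round(z, 3), zone(z))  # 3.615 safe
```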
Redox Reactions

Cu(s) + 2 Ag+(aq) + 2 NO3−(aq) → Cu2+(aq) + 2 NO3−(aq) + 2 Ag(s)

Removing the spectator ions, we get our net ionic equation:

Cu(s) + 2 Ag+(aq) → Cu2+(aq) + 2 Ag(s)

Copper began as a neutral atom with no charge but changes into an ion with a 2+ charge. This happens when it loses 2 electrons:

Cu(s) → Cu2+(aq) + 2 e−

Copper was oxidized because it lost electrons. Silver went from an ion (Ag+) to a neutral atom (Ag). The only way this can happen is to gain electrons: it has been reduced.

LEO the Lion Says GER
LEO: Loss of Electrons is Oxidation
GER: Gain of Electrons is Reduction

Oxidation numbers are our system for keeping track of what gains and what loses electrons. An oxidation number is a positive or negative number assigned to an atom in a molecule or ion that reflects a partial gain or loss of electrons.

Main Rules:
1. The oxidation number of a pure element (not an ion) is zero (0).
2. The oxidation number of a monatomic ion (by itself or in an ionic compound) is equal to its charge.
3. The oxidation number of hydrogen is almost always +1 when it is in a compound. Examples: in HCl, H is +1 and Cl is −1; in H2S, H is +1 and S is −2.
4. The oxidation number of oxygen is almost always −2 when it is in a compound. Two exceptions: in peroxides O is −1; with fluorine O is +2.
5. The sum of the oxidation numbers in a compound is zero.
Example: Mn2O7. O is −2 (rule 4), giving −2 × 7 = −14 in total. Since the sum of the oxidation numbers must be zero, the total for the two Mn atoms must be +14, and +14 / 2 = +7. The oxidation number of Mn must be +7.
6. The sum of the oxidation numbers of a polyatomic ion is equal to the charge on that ion.
Example: Cr2O7 2−. Oxygen: −2 × 7 = −14. The sum of the oxidation numbers must be −2 (instead of zero), as that is the overall charge on the ion: −14 + 12 = −2, so +12 / 2 = +6 for each Cr.

The oxidation number always refers to each individual atom in the compound, not the total for that element.
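Rules 5 and 6 above are just arithmetic: the known oxidation numbers are summed, and the unknown element makes up the difference to the overall charge. A small sketch (the function name is my own):

```python
def unknown_oxidation_number(known_total, n_unknown, overall_charge=0):
    """Solve rules 5 and 6: oxidation numbers in a species sum to its
    overall charge (0 for a neutral compound).

    known_total    -- sum of the oxidation numbers already assigned
    n_unknown      -- how many atoms of the unknown element are present
    overall_charge -- 0 for a compound; the ion's charge otherwise
    """
    return (overall_charge - known_total) / n_unknown

# Mn2O7: seven O at -2 each gives a known total of -14; two Mn atoms.
print(unknown_oxidation_number(7 * -2, 2))      # 7.0 -> Mn is +7
# Cr2O7 2-: same oxygens, but the ion carries an overall 2- charge.
print(unknown_oxidation_number(7 * -2, 2, -2))  # 6.0 -> Cr is +6
```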
Oxidation cannot occur without reduction.
Reducing agent: the substance that is oxidized.
Oxidizing agent: the substance that is reduced.
The oxidizing agent is usually the entire compound containing the element that is reduced, not just that element. Agents are always on the reactant side.

Sodium metal reacts with chlorine gas:
2 Na + Cl2 → 2 NaCl
Na: 0 → +1, lost 1 e−, oxidized (reducing agent)
Cl: 0 → −1, gained 1 e−, reduced (oxidizing agent)
This is a redox reaction.

Zn + HNO3 → Zn(NO3)2 + NO2 + H2O
Assign oxidation numbers. Is this a redox reaction? What is the oxidizing agent?

Remember: an increase in the oxidation number means oxidation; a decrease in the oxidation number means reduction.

4 HCl + O2 → 2 H2O + 2 Cl2

element | initial ox. no. | final ox. no. | e− lost or gained | oxidized or reduced | agent
H       | +1              | +1            | 0                 | –                   | –
Cl      | −1              | 0             | 1 lost            | oxidized            | reducing agent
O       | 0               | −2            | 2 gained          | reduced             | oxidizing agent

Observations: Cu(s) + Ag+(aq) → Cu2+(aq) + Ag(s)
Looking at the number of atoms, the net ionic equation appears balanced. The charges, however, ARE NOT.

element | initial ox. no. | final ox. no. | change in e− | coefficient × change = total e−
Cu      | 0               | 2+            | 2 lost       | 1 × 2 = 2
Ag      | +1              | 0             | 1 gained     | 2 × 1 = 2

Cu(s) + 2 Ag+ → Cu2+(aq) + 2 Ag(s)

SnCl2 + HgCl2 → SnCl4 + HgCl
Sn: +2 → +4, loses 2 e−, coefficient 1, total 2
Cl: −1 → −1, no change
Hg: +2 → +1, gains 1 e−, coefficient 2, total 2
SnCl2 + 2 HgCl2 → SnCl4 + 2 HgCl

MnO4− + Fe2+ + H+ → Mn2+ + Fe3+ + H2O
Mn: +7 → +2, gains 5 e−, 5 × 1 = 5
Fe: +2 → +3, loses 1 e−, 1 × 5 = 5
MnO4− + 5 Fe2+ + H+ → Mn2+ + 5 Fe3+ + H2O
But the H and O are not yet balanced:
MnO4− + 5 Fe2+ + 8 H+ → Mn2+ + 5 Fe3+ + 4 H2O

NH3 + O2 → NO2 + H2O
N: −3 → +4, loses 7 e−; 7 × 4 atoms = 28 e−
O: 0 → −2, each O2 gains 4 e−; 4 × 7 molecules = 28 e−
Which compounds do the coefficients go in front of? Balance the N first, the diatomic O second, and the H last:
4 NH3 + 7 O2 → 4 NO2 + 6 H2O

Using this method we will break an equation into the oxidation reaction and the reduction reaction.
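The cross-multiplication used in the tables above (electrons lost × coefficient = electrons gained × coefficient) is a least-common-multiple calculation, and can be sketched in a few lines. The function name is my own:

```python
from math import gcd

def balance_electrons(e_lost_per_unit, e_gained_per_unit):
    """Smallest coefficients making electrons lost equal electrons gained.

    Returns (coefficient for the species losing electrons,
             coefficient for the species gaining electrons,
             total electrons transferred).
    """
    lcm = e_lost_per_unit * e_gained_per_unit // gcd(e_lost_per_unit,
                                                     e_gained_per_unit)
    return lcm // e_lost_per_unit, lcm // e_gained_per_unit, lcm

# MnO4-/Fe2+: Fe loses 1 e-, Mn gains 5 e-  ->  5 Fe2+ per 1 MnO4-
print(balance_electrons(1, 5))  # (5, 1, 5)
# NH3/O2: N loses 7 e-, each O2 gains 4 e-  ->  4 NH3 per 7 O2
print(balance_electrons(7, 4))  # (4, 7, 28)
```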
These separate equations are referred to as half-reactions because the two halves cannot occur without each other (two halves make a whole).
- The spectator ions are removed from the equations.
- Each half-reaction is balanced separately.
- Electrons are added to balance the charges. The electrons lost must equal the electrons gained.
- Everything is put back together, including the spectator ions.

Mg(s) + Cl2(g) → MgCl2(s)
What is oxidized? What is reduced? Mg is oxidized (0 to +2); Cl is reduced (0 to -1).

Mg → Mg2+ + 2 e-
Cl2 + 2 e- → 2 Cl-

The electrons are equal, so it is already balanced: Mg(s) + Cl2(g) → MgCl2(s)

Cu(s) + AgNO3(aq) → Cu(NO3)2 + Ag(s)

Cu → Cu2+ + 2 e-
Ag+ + 1 e- → Ag

The electrons do not balance:

Cu → Cu2+ + 2 e-
2 Ag+ + 2 e- → 2 Ag
Cu + 2 Ag+ → Cu2+ + 2 Ag

Return the spectator ions: Cu(s) + 2 AgNO3(aq) → Cu(NO3)2 + 2 Ag(s)

MnO4- + Fe2+ + H+ → Mn2+ + Fe3+ + H2O
Fe is oxidized (+2 to +3); Mn is reduced (+7 to +2).

Fe2+ → Fe3+ + 1 e-
Mn7+ + 5 e- → Mn2+
5 Fe2+ → 5 Fe3+ + 5 e-

MnO4- + 5 Fe2+ + H+ → Mn2+ + 5 Fe3+ + H2O
(The hydrogen and the oxygen must be included in the half-reaction and balanced.)
MnO4- + 5 Fe2+ + 8 H+ → Mn2+ + 5 Fe3+ + 4 H2O

During redox reactions, electrons pass (flow) from one substance to another. Electrochemistry is the branch of chemistry that deals with the conversion of chemical energy to electrical energy.
1. Electrochemical cells: spontaneous chemical reactions convert chemical energy into electrical energy. Batteries are an example.
2. Electrolytic cells: electrical energy is used to cause nonspontaneous chemical reactions to occur. Recharging batteries and electroplating are examples.

The electrons released by the oxidation half-reaction are passed along to the reduction reaction. An external circuit needs to be created; reactions will occur without one, but electricity will not be produced.
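The rule that electrons lost must equal electrons gained is just a least-common-multiple calculation on the two half-reactions' electron counts. A small sketch of that bookkeeping (hypothetical helper; `math.lcm` requires Python 3.9+):

```python
from math import lcm  # Python 3.9+

def balance_electrons(e_lost, e_gained):
    """Return multipliers for the oxidation and reduction half-reactions
    so that total electrons lost equal total electrons gained."""
    total = lcm(e_lost, e_gained)  # total electrons transferred
    return total // e_lost, total // e_gained

# Cu -> Cu2+ + 2 e-  paired with  Ag+ + 1 e- -> Ag
print(balance_electrons(2, 1))   # (1, 2): Cu + 2 Ag+ -> Cu2+ + 2 Ag
# Fe2+ -> Fe3+ + 1 e-  paired with  Mn(+7) + 5 e- -> Mn(+2)
print(balance_electrons(1, 5))   # (5, 1): 5 Fe2+ per MnO4-
```

This reproduces the coefficients found by hand above; hydrogen and oxygen still have to be balanced afterward, as in the permanganate example.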
Zn(s) + Cu2+(aq) → Zn2+(aq) + Cu(s)
Zinc is oxidized (reducing agent); copper is reduced (oxidizing agent).

An electrolytic solution (one that conducts electricity due to the presence of ions) is needed. Each beaker is a half-cell. The metal strips become the electrodes. The electrodes are connected by a wire; this becomes the external circuit. A salt bridge contains an electrolytic solution. This allows charge to flow and becomes the internal circuit.

Anode = oxidation; the - post of the cell; the source of electrons. (An Ox)
Cathode = reduction; the + post of the cell; consumes electrons. (Red Cat)

Electrons travel from the Zn anode to the Cu cathode through the wire of the external circuit. At the anode, Zn2+ ions go into solution. The excess positive charges attract the negative NO3- ions from the salt bridge.

External circuit: electrons flow from anode to cathode.
Internal circuit: anions move to the anode; cations move to the cathode.

An entire complete cell is comprised of:
- 2 half-cells (the electrodes in their solutions)
- an internal circuit (the salt bridge and the half-cells)
- an external circuit (the wire connecting the two electrodes)

When the cell reaches equilibrium, the voltage will be zero.

What determines which element is oxidized and which is reduced? Metals like to lose electrons, so they tend to be oxidized. Metals are arranged according to the activity series. In our cell example, Zn is the anode (oxidation) while Cu is the cathode (reduction). Where are these elements on the activity series? Zinc is quite a ways higher on the series and therefore more easily oxidized.

The Table of Standard Reduction Potentials allows us to determine the voltage of electrochemical cells. All values on the table are determined relative to a hydrogen half-cell:

2 H+(aq) + 2 e- → H2(g)    E° = 0.00 V

(° means standard conditions: 25 °C, 100 kPa, 1 mol/L.)

Positive values on this table mean the species are better at competing for electrons and will be reduced; the hydrogen will be oxidized.
Cu2+(aq) + 2 e- → Cu(s)                   E° = 0.34 V
H2(g) → 2 H+(aq) + 2 e-                   E° = 0.00 V
-----------------------------------------------------
Cu2+(aq) + H2(g) → 2 H+(aq) + Cu(s)       E° = 0.34 V

In the Table of Standard Reduction Potentials, zinc has a negative E°, indicating that it is not as good at competing for electrons as hydrogen.

Zn2+(aq) + 2 e- → Zn(s)    E° = -0.76 V

Therefore, if zinc and hydrogen are paired together in an electrochemical cell, the hydrogen would be reduced (gain the electrons) and the zinc would be oxidized (lose electrons). To determine the net redox reaction as well as the voltage of the electrochemical cell, we reverse the zinc equation (write it in oxidation form), and also reverse its sign, before adding the equations and E° values together:

Zn(s) → Zn2+(aq) + 2 e-                   E° = +0.76 V
2 H+(aq) + 2 e- → H2(g)                   E° = 0.00 V
-----------------------------------------------------
Zn(s) + 2 H+(aq) → Zn2+(aq) + H2(g)       E° = +0.76 V

We can now use the table to calculate the voltage of our zinc-copper cell, as well as to explain why zinc is the anode (oxidized) and copper is the cathode (reduced). Locate the two half-reactions:

Cu: Cu2+(aq) + 2 e- → Cu(s)    E° = 0.34 V
Zn: Zn2+(aq) + 2 e- → Zn(s)    E° = -0.76 V

The copper value is larger than the zinc value, so copper will be reduced and zinc will be oxidized.

Zn → Zn2+ + 2 e-                          E° = +0.76 V
Cu2+ + 2 e- → Cu                          E° = +0.34 V
-----------------------------------------------------
Cu2+(aq) + Zn(s) → Zn2+(aq) + Cu(s)       E° = +1.10 V

Always reverse the half-reaction that will result in a positive value for E° when the equations are added together. A positive E° value means that the reaction is spontaneous, and electrochemical cells always involve a spontaneous chemical reaction.

Batteries are electrochemical cells used to generate power. Types of batteries:
1. Dry cells (primary batteries): non-rechargeable; the electrolyte is a paste, not a liquid; used for flashlights, radios, toys, etc. A dry cell consists of a zinc case (anode), a graphite rod (cathode) and an electrolytic paste.
2. Secondary batteries: rechargeable. An example is a car battery (lead-acid).
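The zinc-copper arithmetic above (reverse the half-reaction with the lower reduction potential, flip its sign, add) is equivalent to E°cell = E°(cathode) - E°(anode). A minimal sketch using the potentials quoted in the notes; the dictionary keys and function name are illustrative only:

```python
# Standard reduction potentials (V) quoted in the notes.
E_RED = {"Cu2+/Cu": 0.34, "H+/H2": 0.00, "Zn2+/Zn": -0.76}

def cell_voltage(half_a, half_b):
    """The half-cell with the higher reduction potential is reduced
    (cathode); the other is reversed and oxidized (anode)."""
    cathode = max(half_a, half_b, key=E_RED.get)
    anode = min(half_a, half_b, key=E_RED.get)
    return cathode, anode, E_RED[cathode] - E_RED[anode]

cathode, anode, volts = cell_voltage("Cu2+/Cu", "Zn2+/Zn")
print(f"{cathode} reduced, {anode} oxidized, E0cell = {volts:.2f} V")
# Cu2+/Cu reduced, Zn2+/Zn oxidized, E0cell = 1.10 V
```

A positive result confirms the pairing is spontaneous, consistent with the rule above that electrochemical cells always involve a spontaneous reaction.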
What is Excessive Sleeping?

Many of you may be surprised to know that there is also a condition known as excessive sleeping; the insomniacs out there especially will be shocked to know that some people struggle with the problem of sleeping too much. People suffering from excessive sleeping find it difficult to stay awake during the daytime or while doing daily work and other activities, and are constantly bogged down by drowsiness or the need to sleep. They struggle to wake up in the morning and get out of bed. Around 20% of adults have excessive sleepiness severe enough to affect their daily activities. There is usually some sort of medical problem causing excessive sleeping; to find out what the causes are, read on.

When Does Sleepiness Become a Problem? What are the Symptoms of Excessive Sleeping?

You are said to suffer from excessive sleeping when:
- You feel sleepy during the daytime and find it difficult to perform daily activities.
- You have difficulty waking up in the morning.
- Taking a daytime nap does not relieve your excessive sleepiness.

Other than excessive sleeping, you may also suffer from:
- Trouble with memory or thinking.
- Loss of appetite.
- Feelings of anxiety or irritability.

What are the Causes of Excessive Sleeping?

Excessive Sleeping Caused by Not Getting Sufficient Sleep

Due to today's hectic pace of life, many people do not get enough sleep, and this is one of the commonest causes of excessive sleepiness. Other causes of excessive sleeping include alcohol, drugs, obesity, lack of physical activity, and sleeping during the day while working at night.

Excessive Sleeping Caused by Sleep Apnea

Sleep apnea is a leading cause of excessive sleeping in adults and children. Sleep apnea is a condition where the upper airway collapses for around 10 seconds while the person is asleep, and this can happen hundreds of times every night.
A blockage in the airway causes obstructive sleep apnea. When the brain fails to transmit signals to the muscles which control breathing, it causes central sleep apnea. As the airway reopens, the patient gasps for air and snores. A person suffering from sleep apnea often only becomes aware of it after their bed partner tells them what has been going on. Because sleep apnea interrupts breathing and disturbs the patient's sleep, it causes sleepiness during the daytime, at work, at school, etc. The ability to sleep anytime should not be mistaken for being a "good sleeper," because falling asleep at work or in traffic is very hazardous. Sleep apnea should be treated immediately, which will also reduce excessive sleeping. Other problems caused by sleep apnea are wide fluctuations in the heart rate and decreased oxygen levels. Sleep apnea is also associated with other medical conditions such as heart disease, high blood pressure, diabetes, increased hemoglobin, depression and fatigue.

Treatment for Sleep Apnea: The common treatments for sleep apnea are:

CPAP, or continuous positive airway pressure, is the most common treatment for obstructive sleep apnea. In this treatment, a nasal device is attached to a machine with a blower unit, which helps keep the airway open.

Oral appliance therapy consists of devices which move the lower jaw, tongue or soft palate forward. This helps in opening the airway.

Stimulant medications such as modafinil and armodafinil are given to relieve excessive sleepiness and increase alertness in people who do not respond to CPAP alone.

Weight loss helps a lot with sleep apnea, especially if you are obese. Losing the excess weight decreases the risk for sleep apnea by decreasing the fat deposits in the neck.
Additionally, losing weight also decreases the risk of heart disease associated with sleep apnea. Surgery is considered if other treatment options do not work.

Excessive Sleeping Caused by Restless Legs Syndrome

RLS is a condition where the person has a strong urge to move his/her legs and has unpleasant sensations in the legs. Restless legs syndrome can also cause jerky leg movements lasting around 30 seconds during the night, and it can affect other parts of the body as well. The symptoms of restless legs syndrome occur or worsen when a person is sleeping or at rest. As the symptoms of RLS are often worse at night, they greatly disturb a person's sleep, and this causes excessive sleepiness when one should be awake. In some cases, restless legs syndrome can be mistaken for insomnia.

Treatment for Restless Legs Syndrome:

Moving the Legs: The symptoms of restless legs syndrome are greatly relieved by moving the legs.

Treating Any Deficiencies: If there is a deficiency of iron or vitamin B12, taking the necessary supplements helps in relieving RLS and daytime sleeping.

Medication Changes: Sometimes, certain medications taken for colds, high blood pressure, allergies, depression and heart problems worsen restless legs syndrome and cause excessive sleeping. So you can talk to your doctor about changing the medication if possible.

Lifestyle Modification: It is also important to follow a healthy diet, exercise regularly, and use meditation, massage and hot baths to relax. One should also avoid caffeine, alcohol and nicotine.

Medications for Treating RLS: If the above steps do not relieve restless legs syndrome and excessive sleeping, the doctor will prescribe medications to treat the symptoms of RLS and induce deep sleep. These medications include:
- Anti-Parkinsonian medicines, such as levodopa/carbidopa, pramipexole, pergolide and ropinirole.
- Anti-seizure medicines, such as gabapentin, carbamazepine and valproate.
- Benzodiazepines which include diazepam, clonazepam, temazepam and lorazepam. - Opiates, such as methadone, codeine and oxycodone are prescribed for severe cases of Restless Legs Syndrome. Excessive Sleeping Caused By Narcolepsy Narcolepsy is a sleep disorder where the patient suffers from debilitating daytime sleepiness along with other symptoms. Narcolepsy is associated with REM (rapid eye movement) sleep. The REM periods occur during the day in narcolepsy. Patients with narcolepsy not only suffer from persistent drowsiness, but also suffer from sleep attacks that are brief, uncontrollable moments of sleep which occur without any warning. Another daytime characteristic feature of narcolepsy is cataplexy where there is sudden loss of muscle control. This can range from a slight feeling of weakness to complete body collapse. This condition can last from seconds to a minute. Cataplexy is associated with muscle immobility which is a part of REM sleep and is commonly triggered by fatigue or emotions. When a person is suffering from narcolepsy, he/she can have insomnia or hallucinations or vivid frightening dreams and sleep paralysis. These may occur when the patient is trying to sleep or is waking up. Patients with narcolepsy will not only suffer from excessive sleeping, but will also experience depression, poor attention/ concentration and poor memory. All this occurs as a result of extreme fatigue due to daytime sleepiness and lack of good-quality sleep at night. Treatment for Narcolepsy: These medications are prescribed for treating narcolepsy: Antidepressants such as serotonin reuptake inhibitors or tricyclics help with hallucinations, cataplexy and sleep paralysis. A central nervous system depressant known as sodium oxybate helps in managing cataplexy. Stimulant medications such as modafinil, armodafinil, methylphenidate or dextroamphetamine are used to help people combat excessive sleeping, help them stay awake and be more focused and alert. 
Taking a couple of naps during the day can help improve daytime sleepiness from narcolepsy. It is also important to follow a good diet and regular exercise program to combat narcolepsy symptoms, including excessive sleeping.

Excessive Sleeping Caused by Depression

Depression is a condition where a person has persistent feelings of sadness, hopelessness, anxiety, lack of concentration, forgetfulness, lethargy, back pain and stomach problems. Excessive sleeping is one of the symptoms of depression. Depression and sleep problems go hand in hand; they often share the same risk factors and respond to the same types of treatment. Depression is also linked to different types of sleep disorders, such as insomnia, restless legs syndrome and obstructive sleep apnea. The risk of suffering from depression is about 10 times higher in patients with insomnia than in people who sleep normally.

Treatment for Depression: Given below are some of the effective treatments for depression:

Cognitive behavioral therapy (CBT) involves targeting thoughts which cause depressive feelings and replacing these negative thoughts with positive ones. The patient is also trained to change behaviors which worsen the depression.

Medications such as antidepressants, anxiolytics, mood stabilizers, lithium or anticonvulsants are prescribed for depression associated with bipolar disorder.

Daily exercise, meditation, and dietary changes such as restricting caffeine and alcohol are also very beneficial for treating depression and excessive sleeping.

Self-Care Tips for Improved Sleep Hygiene and to Combat Excessive Sleeping

These strategies help in developing good sleep habits and can be used to combat excessive sleeping:
- At bedtime, perform relaxing rituals, such as meditating, taking a hot bath, or drinking warm milk.
- It is important to follow a consistent sleep schedule and go to sleep at a fixed time daily.
- Never watch TV before sleeping, and do not use your bed for anything other than sleep and sex.
- Avoid heavy meals for at least 2 hours before going to sleep.
- Avoid the use of any gadgets, including cell phones, for at least 30 minutes before sleeping.
- This article is about the concept of dragons in general throughout the series. For the specific class in Fates known simply as Dragon in Japanese, see Feral Dragon. For the final boss of Fire Emblem: The Blazing Blade see Dragon (The Blazing Blade). Dragons (Japanese: 竜 dragon), collectively referred to as the dragonkin (Japanese: 竜族 dragon tribe), are a species of powerful and legendary giant reptiles which frequently play a significant role in the Fire Emblem series. There are numerous subspecies of dragon, each with command over a unique elemental power or other trait. In the eras in which each Fire Emblem game is set, they usually appear in the form of Manaketes, shapeshifted into a human-like form in order to preserve their strength and avoid consequences to their health. In antiquity, Archanea was the epicenter of an advanced and enlightened dragon civilization prior to the rise of humanity, one overseen by the wisdom of the divine dragons. The dragon civilization reigned supreme and uncontested for at least three thousand years, until the species entered a period of decline: a mysterious wave of deteriorating health struck them, rendering them infertile and slowly driving them mad. In response, the dragonkin learned that the only way to survive the condition was to take the forms of humans, and so sought to convince all dragons to do so. While many (including almost all earth dragons) refused to accept this and eventually devolved into mindless beasts, others accepted the word of the dragon elders and took human forms, thus becoming Manaketes. By sealing their powers into dragonstones, Manaketes retained the ability to temporarily become dragons, but never for long owing to the risks posed to their health. Meanwhile, those dragons who shunned the Manakete form ultimately turned into wild, volatile beasts with no remaining semblance of self, and eventually came to populate the wilds of Anri's Way in the continent's untamed north. 
The earth dragon clans, meanwhile, engaged humanity in an aggressive war, only to be repelled and defeated by Naga and the divine dragons; at this war's conclusion the defeated earth dragons were sealed away by Naga in the Dragon's Table and remained there for millennia under the watch of one of their own kind, the Manakete Medeus. In the following millennia as humans became the dominant power over the Archanean continent, Manaketes became the target of racism from humans in an exercise of their new-found power over the land. In their weak humanoid forms, the Manaketes were easily abused and mistreated, and were condemned to living in poverty. This ultimately led to the rise of Medeus and his Dolhr Empire, which sought to exact vengeance upon humanity for their crimes against the Manaketes and to retake Archanea as the land of the dragonkin. Although he succeeded in ruling the world for two periods of time adding up to several decades, Medeus was thrice thwarted and killed: first by Anri, then twice more by his descendant, the Hero-King Marth. During his third reign of terror, wild dragons served Medeus and his minions, Gharnef and Hardin, in fighting Marth's forces; this included a number of earth dragons, as the seal in the Dragon's Table had been progressively weakening for centuries. It is uncertain what became of dragons in the post-Marth world. In the era of Chrom of Ylisse, Manaketes continued to exist, but little is known of what social presence they may or may not have held, other than that they are regarded as rare and mythical entities. The Anri's Way region was ultimately settled by humans to form the kingdom of Ferox, implying the wild dragon populations of the region were wiped out or relocated. In ancient times, Elibe hosted a dragon population which coexisted peacefully with humanity. This changed with the start of the Scouring, a war waged by humans to take Elibe as wholly their own and to purge dragons from it entirely. 
Despite their immense power, the dragons were hindered by their lower population and far slower birth rate compared to humans, prompting the creation of a demon dragon intended to artificially bolster the number of dragons fighting in the war. In retaliation, humans forged the eight legendary weapons, extremely powerful weapons forged solely to fight dragons. As with their Archanean counterparts, the dragons of Elibe were also forced to become Manaketes as a result of the Ending Winter, a distortion of the laws of nature caused by the power of the divine weapons which resulted in dragons no longer being able to exist in their present forms. In the Scouring's aftermath, the vast majority of surviving dragons fled Elibe through the Dragon's Gate on Valor; most of those who remained took up residence in Arcadia, while one more—Jahn—hid in the Dragon Temple for the following centuries to heal wounds sustained in the Scouring. Occasionally, dragons crossed back into Elibe through the Dragon's Gate: three fire dragons were summoned through it by Nergal in the year 980. Only two dragons are seen to still be living in Magvel during the course of The Sacred Stones, both of whom take the form of Manaketes. These two Manaketes, Morva and his adopted daughter Myrrh, are said by the people of the village of Caer Pelyn to act as watchful guardians over humans. In the original conflict with the Demon King Fomortiis in Magvel's antiquity, Morva's power was a crucial factor in ensuring the Five Heroes were able to use the Sacred Stones to defeat Fomortiis and seal him away. For the following eight centuries, Morva remained in Darkling Woods, the last resting place of Fomortiis, to protect the world from his ongoing influence. Although Morva's role in combating Fomortiis is not acknowledged in retellings of the tale by the majority of Magvel's population, the village of Caer Pelyn holds Morva, and Myrrh by extension, in reverence for their contributions. 
While Myrrh and Morva are the only living dragons seen in Magvel, the presence of reanimated dragon corpses suggests more lived there at one time. Myrrh also mentions being orphaned during the original war with the Demon King; this, together with the reference to Morva leading an entire tribe and the presence of the reanimated corpses, suggests that dragons and Manaketes were largely wiped out at some point, or simply rarely interact with humans. Myrrh initially expresses some doubts about protecting humanity as her father does, possibly suggesting other dragons avoid interaction with humans. The dragons of Tellius are but one of the tribes of laguz, shapeshifters who can take both human and animalistic forms, and so have a very different origin compared to the dragons of other lands. Like all laguz and beorc, the dragon laguz evolved from the Zunanma, an ancient race of the first non-divine sentients to dwell in Tellius. In the centuries following the Great Flood and the ensuing sealing of the dark god, the dragon tribe resided in their own nation, Goldoa, which maintained a strict policy of total isolation and neutrality. The dragon king Dheginsea believed that the dragons were too powerful to co-exist with the other races and that, to preserve the balance of peace, they must not interfere with the outside world; nonetheless, on several occasions dragons left Goldoa anyway to explore the rest of the world. Several dragons are known to have been subjected to the experiments of Izuka and transformed into Feral Ones. Across their numerous subspecies, dragons take a wide variety of forms and present great diversity in their appearances.
- Fire dragons: The most frequently seen type of dragon. Fire dragons are red in hue, typically bulky and grounded, and have the ability to wield powerful flame breath. They have a heated history of war with humanity.
- Ice dragons: Typically found dwelling in frigid regions, the ice dragons are thinner, distinctly serpentine in shape, and can exhale blasts of ice to attack.
- Mage dragons: This breed of dragon is oriented around magical powers, and is completely immune to opposing magical attacks from humans. They have a close relationship with the Earth tribe.
- In Elibe they are instead called demon dragons, a soulless one-of-a-kind breed born by transforming a divine dragon and intended solely to produce war dragons.
- Wyverns: An entirely airborne dragon species whose agility is unrivalled among the dragonkin.
- Maligs: A race of "evil dragons" sharing characteristics with the wyvern, with darker scales and glowing red eyes.
- Divine dragons: Hailed as the greatest of all the dragonkin, the divine dragons typically act as wise leaders among dragons. In Archanea, the divine dragon Naga is worshipped as a god by humanity, and has long fought to protect humans from the dangers of dragons.
- Earth dragons: A prideful breed of dragons whose might rivals that of the divine dragons. Imposing and armored, earth dragons have the ability to exude a sinister shadow power on their foes. Historically the earth dragons have hated humans and sought to dominate them, exemplified by Medeus and Loptous.
- Shadow dragons: A twisted evolved form of earth dragons, born of dark powers. The only known shadow dragon is Medeus.
- Astral dragons: A dragon species native to the astral plane with the ability to travel between worlds and Deeprealms.
- First Dragons: A group of godlike dragons that held unimaginable power before their decline.
- Main article: Laguz
The laguz dragon tribes do not share the elemental classifications of the other dragons, and instead are divided into three castes which, while all physically similar, differ in strength, power and body color. Their relation to the dragons of other lands is unknown.
- Red dragons: A caste of dragon laguz who tend toward being bulky and physically strong.
- White dragons: An arcane dragon laguz caste which wields power more in line with the magic of humans.
- Black dragons: The rarest of the laguz dragons and their ruling caste. Black dragons are extremely powerful and possess longevity greater than any other laguz clan, and by extension vastly beyond that of beorc.
- Blight Dragon: A unique draconic form taken by the Empty Vessel, Garon, upon drawing the strength of Anankos. Looks identical to (and has the same name in Japanese as) Nohr's Dusk Dragon depiction of Anankos.
- War dragons: Artificial dragons spawned by a demon dragon such as Idunn. They all possess no emotions, and exist only to fight on behalf of their master. All known war dragons are fire dragons.
- Necrodragons: The corpse of a dead dragon reanimated through the fell magic of a dark god such as Duma or Fomortiis.
- White dragons: A more powerful and rare variant of Necrodragons.
- In Shadow Dragon and the Blade of Light, Mystery of the Emblem, and some supplementary material, the dragon races were also given clan names derived from mythical reptiles: the fire dragons were called the Salamander (Japanese: サラマンダー), mage dragons were called the Basilisk (Japanese: バジリスク), and the divine dragons were called the Naga (Japanese: ナーガ). The references to all but the Naga tribe were removed in Mystery of the Emblem and Shadow Dragon, and all the dragon types introduced in Mystery of the Emblem and later, as well as the earth dragons, lack clan names in this style. However, unused and supplementary material say the earth tribe is known as the Gaia. Notably, Salamander and Naga are names of the gods of the fire and divine dragons, respectively, although the reference to Salamander was removed in the DS remake.

Etymology and other languages

|Names, etymology and in other regions|
|Language||Name||Definition, etymology and notes|
|Japanese||竜||Dragon. Used in the dialogue of most games.|
|Japanese||ドラゴン||Transliteration of English "dragon". Used in some Shadow Dragon and the Blade of Light dialogue.|

- "If you really wanna know... Tens of thousands of years ago, the dragon tribe settled down on this continent, and created a civilization. They possessed intellect and abilities far exceedin' those of humans. But suddenly, outta nowhere, their day of destruction came. At first, they couldn't bear children. Then they began to lose their minds, goin' berserk one after the next. The elders warned that the end of dragons as a species was approachin'. There was no longer any way to prevent it. However, there was one way they could survive: to discard their identities as dragons and live on as humans. The dragons fell into a panic. Those who believed the elders sealed their forms within stones and became humans. But those who couldn't throw away their pride as a dragon; those who adamantly refused to become human... They eventually lost their minds and became naught but beasts..." — Xane, Fire Emblem: New Mystery of the Emblem
- "Y'see, I don't like humans. I've got nothin' but contempt for those who treated the defenseless Manaketes like insects. So I can understand why Medeus despised you humans so. Medeus, an earth dragon prince, was the only one of his tribe who become a Manakete. And, as ordered by Naga, he guarded the Dragon's Table. But the once peaceful human race, drunk with power, began to rule with tyranny. They oppressed the dragons who had done nothin' wrong. Furious at their betrayal by humans, the Manaketes gathered in Dolhr, and they created a nation for their people. Then they fought to conquer humanity." — Xane, Fire Emblem: New Mystery of the Emblem
- "By the gods, she's a manakete... I never thought I'd see one." — Chrom, Fire Emblem Awakening
- "Tiki: The realm you call Ferox certainly brings back memories... How do the people there fare today? I remember only a cold, harsh land.
Have you found a way to cope with the heavy snowfall and barren soil? Flavia: Well, we've struggled with the harvest for generations. Honestly, it took years and years of work before the soil was worth a damn... Still, I hope we've improved it some from what you remember." — Tiki and Flavia, Fire Emblem Awakening
- "Flavia: Tiki, tell me more about the Regna Ferox you remember. Tiki: Well, all right... After all, I slept there within the ice for several centuries. As I recall, it was a frozen hell plagued by barbarians and mage dragons." — Flavia and Tiki, Fire Emblem Awakening
- "Jahn: When the order of nature collapsed, we dragons suffered the most. With nature weakened, we could not maintain our dragon forms. And so, we sealed our power into gemstones and took human form. Roy: The dragonstones... Jahn: Yes. We were utterly powerless against the humans. In human form, we were even more feeble than the humans themselves. The humans took the opportunity to slaughter us. Roy: Why did you choose the form of humans? Why not some other shape? Jahn: In the new order of nature, the human form required the least energy to transform into." — Jahn and Roy, Fire Emblem: The Binding Blade
- "Eirika: Demon King? Are you speaking of the legend of the Sacred Stones? The hero Grado used the power of the five Sacred Stones to defeat and seal away-- Dara: No, no, that's not right at all. Ah, how quickly did mankind forget its debt to the Great Dragon. To hear the story now, one would think humans alone brought about victory. That is a gross mistelling of the tale. Only through the Great Dragon's strength could the Demon King be sealed away!" — Eirika and Dara, Fire Emblem: The Sacred Stones
- "Mankind may have forgotten its debt, but the Great Dragon never forgets. It watches over the bones of the Demon King in Darkling Woods. It keeps the Demon King's dark brood from swarming the world of men. The Great Dragon's vigilance alone has kept us safe from their blind rage."
— Dara, Fire Emblem: The Sacred Stones - "Uh-huh... My foster father leads the dragon tribe. In the last great war... both of my true parents were killed. Morva took me in and raised me as though I were his own child." — Myrrh, Fire Emblem: The Sacred Stones - "... My father has dedicated his life to protecting humans. For the longest time, I could not fathom why he would do this. But now, after spending time with all of you... I begin to understand how he felt." — Myrrh to Saleh, Fire Emblem: The Sacred Stones - "しんりゅうぞく ナーガ かりゅうぞく サラマンダー" — Malledus, Fire Emblem: Shadow Dragon and the Blade of Light - "Haha... You came back just to die? I'm the most powerful servant of Medeus: Morzas of the Basilisk." — Morzas, Fire Emblem: Shadow Dragon and the Blade of Light - "You there. Have you come across a young girl, one by the name of Tiki? She must be found, boy! Tiki is the last of the Naga, the divine-dragon clan. Without her powers, we cannot challenge the Manaketes who serve Medeus…" — Bantu, Fire Emblem: Shadow Dragon |Races and animals of the Fire Emblem series|
Attempting to describe the city's culture, I only came up with the collection of illustrations below. But it must be noted that these are not distinct groups. There is so much crossover, so many stereotypes being broken. Like other Canadians, Nanaimo-ites take part in the industrialized, western consciousness — we watch our popular TV shows and we identify with the general culture of North America and the West. But, remaining slightly on the fringe of that general culture, Nanaimo has some distinct elements that make it romantic and interesting: The Hudson's Bay Company was on the west coast to get sea otter furs, as far back as the end of the 18th century. The British government supported the HBC's operations in the area, with the condition that the HBC would foster colonization of Vancouver Island. Victoria was selected as the seat of the colony, but there were difficulties in generating enough income to support the colonists beyond mere subsistence, and the growing American interests down in the Oregon Territory were threatening to win the locals over to their side. Governor James Douglas was looking for a resource. Flash back sixty million years: the area that is now the Gulf of Georgia was a shallow tropical ocean. Huge swamps and fertile lagoons rotted and were covered over with sediments. These became vast coal fields under Nanaimo. When the colonists learned of this, there was a rush to move settlers into the mid-island region to secure a foothold and to generate wealth for the colony. The coal seemed endless, and it was high quality, so Nanaimo grew steadily for the next half century. People came from all over the place to exploit the coal, timber, sandstone, fisheries, and each other. Imagine a bustling town described by Charles Dickens, carved out of the mossy, forested coastline of the Gulf Islands region.
North America's last Hudson's Bay Company free-standing bastion (blockhouse) is here in Nanaimo. We call it the Bastion of course, and its cannons are still fired on occasion, inviting citizens to look out across the harbour and to remember the Dutch ships that broke up under fire 150 years ago, as they tried to land on the shore and pillage the fort. (I'm just making that up — but the explosion of the Oscar in 1913 was far more exciting, anyway.) So, what's the big deal about the Bastion? It's not that big - not a castle, not even as big as a schooner… The Bastion gets a lot more interesting as you read about its history – even the stories that don't mention the Bastion are at least staged with it as a backdrop. It stands immutably in the time-lapse while everything around it grows and crumbles. There's a story about all of the European families being called to take shelter in this surprisingly spacious fort. An armada of 100 canoes had arrived, filled with hostile Kwakiutl warriors from the north - three of their people had been slain and they wanted to exact revenge on the local Snuneymuxw. The Snuneymuxw chief was renowned - a great man - and he offered himself as suitable compensation for three regular men. The Kwakiutl agreed, shot him dead, and left the area. One wonders if perhaps the Bastion's armaments (cannons with grapeshot for a very effective spread) could have been used to help the Snuneymuxw to fight back against the invaders. There was friendship between the settlers and the First Nation, and this would have been a suitable gesture by the colonists, to fight alongside the local first people. The view from the Bastion during this encounter must have been surreal. The expansive harbour and distant mountains must have helped this scene to remind each European settler that they were very, very far from their homeland. Nanaimo is thriving. Nanaimo is dead.
This is an old city (for BC), and no amount of teal-coloured paint will hide the rot at our wharves. Nanaimo has a Pacific Northwest Gothic thing — that thing with tugboats and ravens and a cloudy sky. It's palpable as you walk downtown or explore the thunderous beaches and misty forests. It's a "ghost town" to the extent that there was a boom and it sort of ended 60 years ago. Or at least it changed - we're no longer digging millions of tons of high-grade anthracite coal out of the ground like we did for 100 years. There were ridiculous amounts of money moving around in those days, and the streets of Nanaimo were packed with a much denser population. It must have been very cool to be in a seaside city like Nanaimo in the days before television and cars. The expansion of the city northward, with big-box stores and huge malls every few blocks along the highway, has drained the crowds out of the older, more interesting part of Nanaimo. Many residents never even come downtown. But that's changing. Nanaimo is also a ghost town on account of its... ghosts. Whether or not you believe in the paranormal, you must be sensitive to the massive historic legacy everywhere you look: dilapidated buildings, old bottles poking out of colourful dirt, shorelines piled with shell middens of the first peoples. Strolling around Newcastle Island in particular, one is aware of human activity spanning a thousand years and more; the island was a longtime seasonal home for villages of native peoples, then it was crawling with Europeans and Chinese and Japanese between the 1850s and 1940s. Many people are certain that there are real ghosts here. Hundreds of miners died underground in various explosions and accidents (fueling decades of labour disputes), and some people think the miners' subterranean ghosts hold Nanaimo in some kind of cursed state. This region was also a great meeting place for aboriginal nations, and the site of battles and massacres.
Three large Chinatowns have been here, the final one having burned completely in 1960. And our erstwhile visitor information centre, Beban House, is nationally recognized as a haunted site. There are some other restless spirits in Nanaimo: the people addicted to hard drugs, who exist in any city — but they do seem to be more conspicuous in some parts of Nanaimo. It can't be (and shouldn't be) ignored that Nanaimo has a major problem with poverty and bad drugs. Across much of downtown, and for a few blocks southward, there is conspicuous poverty — a lot of people are obviously very down-and-out. The good news for visitors is that you're unlikely to be bothered by any of these people. Most of them wish to be left alone, too. Just consider them to be a part of our genuine maritime identity. You know, smugglers and pirates. Work with me here. But for residents, this problem is serious. It must be addressed. Theft is common, there are needles in the grass sometimes, young girls are selling themselves for crystal meth. Everybody knows it's wrong, but too many of us think there's nothing we can do about it. That's wrong, too. Try any of the following:
- Avoid thinking that people with addictions deserve them. We all have some kind of baggage. Be happy yours is less burdensome. [12 years later I'm feeling a bit less tolerant]
- Stand up to injustices when you see them. You don't have to be heroic, confronting people on the street. Just be vocal in your social circles about what you think is right. Start small, and you'll feel yourself getting stronger.
- Donate to organizations which promote healing, and which lend support to people struggling to get away from the underworld.
- Be good to children.
Pirate Themes in Nanaimo
Nanaimo dabbles in "pirate" themes. Why? Because we can. Our Gulf of Georgia (Salish Sea) is like the Caribbean was: an archipelago on the frontier, a string of lights along a dark coastline.
And come to think of it, our peoples are similar to those who took to the water 400 years ago: a blend of romantics and isolationists, at odds over how to enjoy the spoils available in the beautiful wilds. The resemblance might end there. Nanaimo is hardly Port Royal in 1660, and while our Bastion was always prepared to defend us against marauders, we never got to use its cannons offensively. However, there are many forces in Nanaimo that consciously recreate the light-hearted pirate culture of Treasure Island and Pirates of the Caribbean. And why not? We have boats, and alcohol, and open space, and money. We should really play this up more. Some businesses are doing their part. Pirate Chips down on Front Street pays a sassy tribute to peglegs and walking the plank, and the Harbour Chandlery on Esplanade even has some sort of kids' play ship out front. But nowhere in Nanaimo will you find anything that attempts to serve real "pirate" fare. We have no "pirate show" or even a "pirate playground", and Pirates Park has a dock, but no flags or sloops. No, everything here that's pirate is tongue-in-cheek, and that's fine with us. It's a lot of fun on the waterfront in our motley boats during the Marine Festival, or walking around Protection Island, where place names are deliberate: Captain Morgan's Boulevard, Spyglass Lookout, Billy Bones Bay, Treasure Trail. It's mostly thanks to Frank Ney that we can pull it off without feeling silly. He was so off-the-wall in his pirate regalia, whether it was at a child's birthday or a city council meeting. Frank was the one who had Protection Island subdivided. He also organized the Bathtub Races and was "admiral" of the Loyal Nanaimo Bathtub Society. Today, a statue of Frank Ney watches over our holy-of-holies, that most public of Nanaimo places: Swy-a-Lana Lagoon and Maffeo-Sutton Park. This is a real music city.
Not only does Nanaimo nurture the likes of David Gogo and Diana Krall, but we also keep a good stock of other musicians in the city at all times.
CHLY 101.7 FM
One extremely important centre of the music scene in Nanaimo is the independent, "campus" radio station, CHLY (101.7 FM). It really is a valuable asset for Nanaimo, disseminating an interest in everything "grassroots" while being independent and mostly non-commercial. There is so much music played on CHLY that you will never hear on the other local frequencies.
Live Music Venues
Sites for great jams are all around, but the most accessible live music is at the licensed venues downtown. Or you should look to Vancouver Island University, whose music program thrives on a steady stream of talented students and instructors. There are also numerous restaurants and cocktail lounges that bring in live acts – but currently, only the Queen's Hotel downtown has live music every night. The Port Theatre is a classy venue that brings in all kinds of acts, from famous pianists to Pink Floyd tribute bands. Or, look at the musical events listings on HarbourLiving.ca for acts all around town.
Multicultural village square
Like most cities in Canada, Nanaimo is a hub of international activity, bringing together people from all over the world. But Nanaimo has some additional factors which create an even more cosmopolitan aspect than other Canadian cities have:
We teach English well
A lot of people come to Nanaimo to study English. Vancouver Island University hosts thousands of international students from all over the globe. It's really great to see these students out in the community, adding to the local colour!
Our lifestyle attracts all kinds
As a "destination" city with a lot of appeal, Nanaimo attracts many different kinds of immigrants. People from all over the world like clean air, mild weather, natural beauty, a high standard of living, and space to breathe. Are you new to Nanaimo, and feeling like a fish out of water?
If so, please visit our "Immigrant Welcome Centre" downtown. It's managed by the Central Vancouver Island Multicultural Society, which does all kinds of work with immigrants, including translation services, helping with government forms, family counselling, and English classes.
Gulf Island town
The Gulf Islands and Vancouver Island have always appealed to those who seek a quieter, slower, naturally beautiful lifestyle. Artists and artisans, naturalists, healers, shamans, singer/songwriters, visionaries and comedians, schoolteachers, tradespeople, writers of every kind and people with hobby farms – these are examples of whom you might sit beside on BC Ferries. Nanaimo's population is diluted and varied, primarily urban – the boho artistic streak is not as concentrated as in places like Saltspring and Hornby islands. But it's certainly alive and well in Nanaimo!
First Nations town
In Nanaimo, people of aboriginal descent are not simply a memory – the native presence here is very tangible and visible, and we're proud of it. While there are certainly relics of the old native life preserved throughout the city, we are also fortunate to have a living voice from a significant population of First Nations peoples. The local nation is called Snuneymuxw, a Coast Salish people. The culture of Nanaimo has been informed and affected in cool ways by the First Nations peoples of the past and present, and it's heartening to see that our primarily non-native governments are increasingly seeking the counsels of our aboriginal neighbours and elders. Nanaimo has a lot of resources and there is room for all to prosper if we establish a vision that is inclusive and imaginative. The Arts One: First Nations program at VIU provides not only a venue for aboriginal students to get formal education in matters relating to their nation and heritage; it's also open to non-natives, providing a unique opportunity for our two cultures to have some reconciliation and mutual understanding.
Of the waves of immigrants who came to BC in the nineteenth century, the Chinese are prominent for their numbers, but also for the patient industriousness with which they endured the pioneering lifestyle and the hard work of the railroads and mines. They also endured intense bigotry from the more numerous white populations - the Chinese were also viewed as unfair competition in the labour market, because they were willing to work for cheap. The Chinese men who died in the coal mines were not even named by the bosses (they show up in the accident registers as "Chinaman #42", etc). However, the past fades, especially since many of the Chinese oldtimers moved away when the final Chinatown burned down in 1960. The city is also far more accepting of diverse peoples, now. It seems appropriately Chinese that those bad memories are being laid to rest, though not quite forgotten.
Landscape art gallery
Some fine landscape art ends up in the homes and galleries around Nanaimo. The local aesthetics are by turns beautiful, wild, and gloomy as the tides, which the local artists translate well into drunken, vibrant paintings. There are distinctive styles in Nanaimo which seem to derive from BC artists like E.J. Hughes and Emily Carr, and perhaps El Greco. Arbutus trees and sandstone, driftwood-choked inlets and the wide open sea, mountain vistas and great forests all make for stunning landscapes, and local artists are gifted in reproducing some of the unique combinations of form and colour that are Vancouver Island. Visit any of the local art galleries and you'll see a lot of great styles.
Nanaimo Marine Festival (Bathtub Races)
Beginning with the Silly Boat Regatta and culminating in the bathtub races on the following Sunday, the Nanaimo Marine Festival is considered by many residents and returning visitors to be the event of the year. "There's the marine festival, and all the days in between." That kind of thing.
The International World Championship Bathtub Race (the event around which the Marine Festival revolves) is almost 50 years old. No matter what you're into, it's likely you'll find something to entertain you during this exciting festival. The outdoors are a huge part of our cultural identity in Nanaimo. The city is spread out along a wide hump of Vancouver Island, between the Gulf of Georgia and the mountainous centre of Vancouver Island. The result is an amazing variety of outdoor recreation, from diving to mountain biking to mountaineering and kayaking — there's even world-class spelunking. People from all walks of life get out into the wilds for their entertainment. Congregations happen at trailheads and beaches, and at the pubs for wings and a pint on the way home. The Nanaimo River is a unique jewel, providing residents and visitors with the deepest, warmest swimming holes on Vancouver Island.
The term "Middle East" has become enormously elastic. The name originated with the British Foreign Office in the 19th century. The British divided the region into the Near East, the area closest to the United Kingdom and most of North Africa; the Far East, which was east of British India; and the Middle East, which was between British India and the Near East. It was a useful model for organizing the British Foreign Office and important for the region as well, since the British — and to a lesser extent the French — defined not only the names of the region but also the states that emerged in the Near and Far East. Today, the term Middle East, to the extent that it means anything, refers to the Muslim-dominated countries west of Afghanistan and along the North African shore. With the exception of Turkey and Iran, the region is predominantly Arab and predominantly Muslim. Within this region, the British created political entities that were modeled on European nation-states. The British shaped the Arabian Peninsula, which had been inhabited by tribes forming complex coalitions, into Saudi Arabia, a state based on one of these tribes, the Sauds. The British also created Iraq and crafted Egypt into a united monarchy. Quite independent of the British, Turkey and Iran shaped themselves into secular nation-states. This defined the two fault lines of the Middle East. The first was between European secularism and Islam. The Cold War, when the Soviets involved themselves deeply in the region, accelerated the formation of this fault line. One part of the region was secular, socialist and built around the military. Another part, particularly focused on the Arabian Peninsula, was Islamist, traditionalist and royalist. The latter was pro-Western in general, and the former — particularly the Arab parts — was pro-Soviet. It was more complex than this, of course, but this distinction gives us a reasonable framework. 
The second fault line was between the states that had been created and the underlying reality of the region. The states in Europe generally conformed to the definition of nations in the 20th century. The states created by the Europeans in the Middle East did not. There was something at a lower level and at a higher level. At the lower level were the tribes, clans and ethnic groups that not only made up the invented states but also were divided by the borders. At the higher level were broad religious loyalties to Islam and to its major movements, Shiism and Sunnism, which laid a transnational claim on loyalty. Add to this the pan-Arab movement initiated by former Egyptian President Gamal Abdel Nasser, who argued that the Arab states should be united into a single Arab nation. Any understanding of the Middle East must therefore begin with the creation of a new political geography after World War I that was superimposed on very different social and political realities and was an attempt to limit the authority of broader regional and ethnic groups. The solution that many states followed was to embrace secularism or traditionalism and use them as tools to manage both the subnational groupings and the claims of the broader religiosity. One unifying point was Israel, which all opposed. But even here it was more illusion than reality. The secular socialist states, such as Egypt and Syria, actively opposed Israel. The traditional royalist states, which were threatened by the secular socialists, saw an ally in Israel.
Aftershocks From the Soviet Collapse
Following the fall of the Soviet Union and the resulting collapse of support for the secular socialist states, the power of the traditional royalist states surged. This was not simply a question of money, although these states did have money. It was also a question of values. The socialist secularist movement lost its backing and its credibility.
Movements such as Fatah, based on socialist secularism — and Soviet support — lost power relative to emerging groups that embraced the only ideology left: Islam. There were tremendous cross currents in this process, but one of the things to remember was that many of the socialist secular states that had begun with great promise continued to survive, albeit without the power of a promise of a new world. Rulers like Egypt's Hosni Mubarak, Syria's Bashar al Assad and Iraq's Saddam Hussein remained in place. Where the movement had once held promise even if its leaders were corrupt, after the Soviet Union fell, the movement was simply corrupt. The collapse of the Soviet Union energized Islam, both because the mujahideen defeated the Soviets in Afghanistan and because the alternative to Islam was left in tatters. Moreover, the Iraqi invasion of Kuwait took place in parallel with the last days of the Soviet Union. Both countries are remnants of British diplomacy. The United States, having inherited the British role in the region, intervened to protect another British invention — Saudi Arabia — and to liberate Kuwait from Iraq. From the Western standpoint, this was necessary to stabilize the region. If a regional hegemon emerged and went unchallenged, the consequences could pyramid. Desert Storm appeared to be a simple and logical operation combining the anti-Soviet coalition with Arab countries. The experience of defeating the Soviets in Afghanistan and the secular regimes' loss of legitimacy opened the door to two processes. In one, the subnational groupings in the region came to see the existing regimes as powerful but illegitimate. In the other, the events in Afghanistan brought the idea of a pan-Islamic resurrection back to the fore. And in the Sunni world, which won the war in Afghanistan, the dynamism of Shiite Iran — which had usurped the position of politico-military spokesman for radical Islam — made the impetus for action clear. There were three problems. 
First, the radicals needed to cast pan-Islamism in a historical context. The context was the transnational caliphate, a single political entity that would abolish existing states and align political reality with Islam. The radicals reached back to the Christian Crusades for this context, and the United States — seen as the major Christian power after its crusade in Kuwait — became the target. Second, the pan-Islamists needed to demonstrate that the United States was both vulnerable and the enemy of Islam. Third, they had to use the subnational groups in various countries to build coalitions to overthrow what were seen as corrupt Muslim regimes, in both the secular and the traditionalist worlds. The result was al Qaeda and its campaign to force the United States to launch a crusade in the Islamic world. Al Qaeda wanted to do this by carrying out actions that demonstrated American vulnerability and compelled U.S. action. If the United States did not act, it would enhance the image of American weakness; if it did act, it would demonstrate it was a crusader hostile to Islam. U.S. action would, in turn, spark uprisings against corrupt and hypocritical Muslim states and sweep aside European-imposed borders. The key was to demonstrate the weakness of the regimes and their complicity with the Americans. This led to 9/11. In the short run, it appeared that the operation had failed. The United States reacted massively to the attacks, but no uprising occurred in the region, no regimes were toppled, and many Muslim regimes collaborated with the Americans. During this time, the Americans were able to wage an aggressive war against al Qaeda and its Taliban allies. In this first phase, the United States succeeded. But in the second phase, the United States, in its desire to reshape Iraq and Afghanistan — and other countries — internally, became caught up in the subnational conflicts.
The Americans got involved in creating tactical solutions rather than confronting the strategic problem, which was that waging the war was causing national institutions in the region to collapse. In destroying al Qaeda, the Americans created a bigger problem in three parts: First, they unleashed the subnational groups. Second, where they fought, they created a vacuum that they couldn't fill. Finally, in weakening the governments and empowering the subnational groups, they made a compelling argument for the caliphate as the only institution that could govern the Muslim world effectively and the only basis for resisting the United States and its allies. In other words, where al Qaeda failed to trigger a rising against corrupt governments, the United States managed to destroy or compromise a range of the same governments, opening the door to transnational Islam. The Arab Spring was mistaken for a liberal democratic rising like 1989 in Eastern Europe. More than anything else, it was a rising by a pan-Islamic movement that largely failed to topple regimes and embroiled one, Syria, in a prolonged civil war. That conflict has a subnational component — various factions divided against each other that give the al Qaeda-derived Islamic State room to maneuver. It also provided a second impetus to the ideal of a caliphate. Not only were the pan-Islamists struggling against the American crusader, but they were fighting Shiite heretics — in service of the Sunni caliphate — as well. The Islamic State put into place the outcome that al Qaeda wanted in 2001, nearly 15 years later and, in addition to Syria and Iraq, with movements capable of sustained combat in other Islamic countries.
A New U.S. Strategy and Its Repercussions
Around this time, the United States was forced to change strategy. The Americans were capable of disrupting al Qaeda and destroying the Iraqi army. But the U.S. ability to occupy and pacify Iraq or Afghanistan was limited.
The very factionalism that made it possible to achieve the first two goals made pacification impossible. Working with one group alienated another in an ongoing balancing act that left U.S. forces vulnerable to some faction motivated to wage war because of U.S. support for another. In Syria, where the secular government was confronting a range of secular and religious but not extremist forces, along with an emerging Islamic State, the Americans were unable to meld the factionalized non-Islamic State forces into a strategically effective force. Moreover, the United States could not make its peace with the al Assad government because of its repressive policies, and it was unable to confront the Islamic State with the forces available. In a way, the center of the Middle East had been hollowed out and turned into a whirlpool of competing forces. Between the Lebanese and Iranian borders, the region had uncovered two things: First, it showed that the subnational forces were the actual reality of the region. Second, in obliterating the Syria-Iraq border, these forces and particularly the Islamic State had created a core element of the caliphate — a transnational power or, more precisely, one that transcended borders. The American strategy became an infinitely more complex variation of President Ronald Reagan's policy in the 1980s: Allow the warring forces to war. The Islamic State turned the fight into a war on Shiite heresy and on established nation states. The region is surrounded by four major powers: Iran, Saudi Arabia, Israel and Turkey. Each has approached the situation differently. Each of these nations has internal factions, but each state has been able to act in spite of that.
Put differently, three of them are non-Arab powers, and the one Arab power, Saudi Arabia, is perhaps the most concerned about internal threats. For Iran, the danger of the Islamic State is that it would recreate an effective government in Baghdad that could threaten Iran again. Thus, Tehran has maintained support for the Iraqi Shiites and for the al Assad government, while trying to limit al Assad's power. For Saudi Arabia, which has aligned with Sunni radical forces in the past, the Islamic State represents an existential threat. Its call for a transnational Islamic movement has the potential to resonate with Saudis from the Wahhabi tradition. The Saudis, along with some other Gulf Cooperation Council members and Jordan, are afraid of Islamic State transnationalism but also of Shiite power in Iraq and Syria. Riyadh needs to contain the Islamic State without conceding the ground to the Shiites. For the Israelis, the situation has been simultaneously outstanding and terrifying. It has been outstanding because it has pitted Israel's enemies against each other. Al Assad's government has in the past supported Hezbollah against Israel. The Islamic State represents a long-term threat to Israel. So long as they fought, Israel's security would be enhanced. The problem is that in the end someone will win in Syria, and that force might be more dangerous than anything before it, particularly if the Islamic State ideology spreads to Palestine. Ultimately, al Assad is less dangerous than the Islamic State, which shows how bad the Israeli choice is in the long run. It is the Turks — or at least the Turkish government that suffered a setback in the recently concluded parliamentary elections — who are the most difficult to understand. They are hostile to the al Assad government — so much so that they see the Islamic State as less of a threat. 
There are two ways to explain their view: One is that they expect the Islamic State to be defeated by the United States in the end and that involvement in Syria would stress the Turkish political system. The other is that they might be less averse than others in the region to the Islamic State's winning. While the Turkish government has vigorously denied such charges, rumors of support to at least some factions of the Islamic State have persisted, suspicions in Western capitals linger, and alleged shipments of weaponry to unknown parties in Syria by the Turkish intelligence organization were a dominant theme in Turkey's elections. This is incomprehensible, unless the Turks see the Islamic State as a movement that they can control in the end and that is paving the way for Turkish power in the region — or unless the Turks believe that a direct confrontation would lead to a backlash from the Islamic State in Turkey itself. The Islamic State's Role in the Region The Islamic State represents a logical continuation of al Qaeda, which triggered both a sense of Islamic power and shaped the United States into a threat to Islam. The Islamic State created a military and political framework to exploit the situation al Qaeda created. Its military operations have been impressive, ranging from the seizure of Mosul to the taking of Ramadi and Palmyra. Islamic State fighters' flexibility on the battlefield and ability to supply large numbers of forces in combat raises the question of where they got the resources and the training. However, the bulk of Islamic State fighters are still trapped within their cauldron, surrounded by three hostile powers and an enigma. The hostile powers collaborate, but they also compete. The Israelis and the Saudis are talking. This is not new, but for both sides there is an urgency that wasn't there in the past. The Iranian nuclear program is less important to the Americans than collaboration with Iran against the Islamic State. 
And the Saudis and other Gulf countries have forged an air capability used in Yemen that might be used elsewhere if needed. It is likely that the cauldron will hold, so long as the Saudis are able to sustain their internal political stability. But the Islamic State has already spread beyond the cauldron — operating in Libya, for example. Many assume that these forces are Islamic State in name only — franchises, if you will. But the Islamic State does not behave like al Qaeda. It explicitly wants to create a caliphate, and that wish should not be dismissed. At the very least, it is operating with the kind of centralized command and control, on the strategic level, that makes it far more effective than other non-state forces we have seen. Secularism in the Muslim world appears to be in terminal retreat. The two levels of struggle within that world are, at the top, Sunni versus Shiite, and at the base, complex and interacting factions. The Western world accepted domination of the region from the Ottomans and exercised it for almost a century. Now, the leading Western power lacks the force to pacify the Islamic world. Pacifying a billion people is beyond anyone's capability. The Islamic State has taken al Qaeda's ideology and is attempting to institutionalize it. The surrounding nations have limited options and a limited desire to collaborate. The global power lacks the resources to both defeat the Islamic State and control the insurgency that would follow. Other nations, such as Russia, are alarmed by the Islamic State's spread among their own Muslim populations. It is interesting to note that the fall of the Soviet Union set in motion the events we are seeing here. It is also interesting to note that the apparent defeat of al Qaeda opened the door for its logical successor, the Islamic State. The question at hand, then, is whether the four regional powers can and want to control the Islamic State. 
And at the heart of that question is the mystery of what Turkey has in mind, particularly as Turkish President Recep Tayyip Erdogan's power appears to be declining.
Importance of the Arts Arts education has always appeared to be a contested field. Many arts educators have argued that the subject belongs in the school syllabus by emphasising its role in students’ ethical and personal development (Lemon & Garvis, 2013). The arts are perceived as central to the idea that education is about cultivating a love of learning and acquiring relevant knowledge. It is no coincidence that the arts are usually linked with the idea of being educated; hence, an educated person is expected to be conversant with, or involved in, the arts (Plourde, 2002). Twentieth-century German theorist Ernst Cassirer assessed the importance of the arts as follows: science gives humans order in thoughts; morality gives humans order in actions; art gives humans order in the apprehension of visible, tangible and audible appearances (Stone, 1996). A better education includes a better arts education, introducing children and young people to literature (novels, poems and short stories, plays), dance, visual arts, music and film. How a school focuses on the arts may be a matter for discussion, and depends on teachers having expert knowledge of the arts. Yet a school remains dedicated to introducing children to the most relevant forms of art in the curriculum. In the late 1980s, arts experts from the United States and the United Kingdom created discipline-based arts education (DBAE) as a method of describing what should be included in an arts syllabus (Garvis & Pendergast, 2010). Rejecting the earlier emphasis on self-expression and child-centred education, DBAE integrates four areas of the arts – skills and art making, historical knowledge, visual understanding and critical judgement – with the objective of helping students learn to think like artists and art critics do (Bamford, 2004).
A visual-arts syllabus might seek, therefore, to develop skills in, and knowledge of, a variety of art techniques, including line, colour, texture and form. Importance of Different Art Forms This section discusses four art forms: painting, sculpture, photography and computer art. Art Painting: Painting is the practice of creating pictures by applying colours to a surface. Paintings can record events; capture a likeness of a person, place or object; tell stories; decorate walls; and illustrate texts (Bamford, 2004). A painting can communicate emotions and ideas, or simply be enjoyed for its expressive qualities. Approximately 20,000 years ago, early humans used charcoal and minerals as coloured powders to create images on cave walls (Tosun, 2000). Sometimes the coloured powder was mixed with saliva or animal fat to form a liquid, which was blown through reeds or applied with the fingers. The earliest paintings are thought to depict hunting scenes. Art Sculpture: Sculpture is the branch of the visual arts that operates in three dimensions; it is one of the plastic arts. Durable sculptural processes originally used carving (the removal of material) and modelling (the addition of material, such as clay) in stone, metal, ceramics, wood and other substances (Rice & Roychoudhury, 2003). A wide range of materials may be worked by removal, such as carving; assembled by welding; modelled; or cast. Sculpture in stone survives far better than works of art in perishable materials, and frequently represents the bulk of the surviving works (other than ceramic) from ancient cultures (Martinello & Gonzalez, 1987). However, certain traditions of sculpture in wood may have been lost almost entirely. Photography: Visual impact cannot be undervalued – it is the very essence of creativity.
Creative media – art photography, video and web links – offer a way to experience a place or event without really being there (Falk & Dierking, 2013). In the business environment, whatever it is that they do or make, it is important for companies to communicate quickly and efficiently what they do in a way that connects with their audience. Art photography makes the remembrance of special events and valuable moments vivid (Dinham, 2011). Photography can, in effect, rewind time: looking at photos produces a strong recall effect. Computer Arts: Computer art is steadily becoming established in many fields and subjects. This area of art and design is exciting and dynamic, with new IT technologies constantly developing, allowing new methods of communicating and combining different art and design forms (Dinham, 2011). Understanding and keeping up to date with the changing technological needs of the art industry is important if students are to be flexible, adaptable and employable (Hein, 1998). It is vital that students engage with new art technologies and develop the skills, knowledge and understanding necessary to communicate ideas successfully in a highly competitive, technical and pioneering sector. Contextual Description of the School The school for which these lesson plans on teaching creative arts are created is a primary school located in Fort Worth, Texas. It presents a realistic and well-structured art curriculum that targets the learning needs of students in grades 3-6. Learning through the creative arts has been perceived as an effective way to encourage the integration of students’ cognitive, emotional and sensory potential (Chomley, 2005). The main approach adopted by the school is based on active participation in the arts.
Learning experiences in the arts involve broad aesthetic experiences, sustained creative engagement in different art tasks and the development of adequate skills that allow students to express themselves in a distinct, creative manner. Students have prior experience in learning the arts, which serves as a strong basis for introducing relevant art concepts and principles (Dean, 1994). As a result, the school is committed to providing an art program that suits students’ learning needs. Purpose of the Program The purpose of the program is to foster students’ awareness of the importance of the creative arts, which is reflected in their social and emotional growth. Experiences in the arts allow students to use their full potential to contribute to their local community and to society as a whole. Participation in the arts, the approach used in teaching the art lessons, can expand students’ horizons in numerous ways (Caston, 1980). The major objective of this program is to help students learn about diverse artistic practices. In addition, young people have a valuable opportunity to learn that they are part of a living, dynamic and constantly evolving culture. This understanding is important for strengthening their appreciation of, and interest in, the arts. Students can be encouraged to interpret different art forms and concepts creatively and critically (Garvis, 2010). This would eventually demonstrate a strong focus on their imaginative and innovative potential, which they can realize in practice through specific art forms. The first lesson plan presents the topic of art appreciation. It is intended for grades 3-6 of the described primary school. It is assumed that students have prior knowledge of certain art forms, mainly paintings and photographs.
The main objective of this lesson is to foster students’ understanding that every individual tends to hold a different opinion of, or attitude toward, what constitutes good art. As part of the teaching strategy for this lesson, the teacher needs to use reproductions of artworks created in different styles. Moreover, the teacher should present students with a set of different shapes, including a heart and a house. At the beginning of the lesson, it is important to place the art prints at the front so that students can see them. Each student is then given a different shape and encouraged to explore the images in detail. The shapes serve as indicators of like and dislike in students’ own perceptions of the art prints. There should be a discussion of the reasons behind students’ selections. The learning strategies are quite abstract, given the nature of the lesson; students may discuss cultural values as they relate to perceiving and interpreting different art forms. The second art lesson is intended for students in grades K-2 of the identified primary school. The topic of the lesson, “primary hands,” implies the use of portfolio assessment as a major teaching strategy. Children are expected to gain a solid knowledge of the primary colours in the process of making primary-colour handprints themselves. Practical materials needed for this lesson include markers, crayons, white drawing paper, scissors and glue. The introductory stage of the lesson is devoted to a discussion of the three primary colours: red, yellow and blue. Students need to be taught to grasp this basic concept in art. When students are ready for the practical part of the lesson, the teacher provides them with white drawing paper so they can trace their hands on it.
The next step of the teaching strategy is to ask the students to colour the traced hands in primary colours. Students then cut out the hand shapes and glue them onto construction paper. In this way, they learn the essential practical art skills of making different shapes and working with colours. The use of portfolio assessment allows the teacher to focus on learners’ progress, because it is monitored in a structured way from beginning to end.
References
Bamford, A. (2004). Art and education: New frontiers. NAVA Quarterly, 2-4.
Caston, E. (1980). The object of my affection: Commentary on museumness. Art Education.
Chomley, F. (2005). Good arts partnerships don’t just happen – they have support. Presentation at Backing Our Creativity: Education and the Arts Research Policy and Practice. Victorian College of the Arts, Melbourne.
Dean, D. (1994). Museum exhibition: Theory and practice. London: Routledge.
Dinham, J. (2011). Delivering authentic arts education. South Melbourne: Cengage.
Falk, J. H. & Dierking, L. D. (2013). The museum experience revisited. Walnut Creek, CA: Left Coast Press.
Garvis, S. (2010). An investigation of beginning teacher self-efficacy for the arts in the middle years of schooling (years 4-9). PhD thesis. School of Music, University of Queensland.
Garvis, S. & Pendergast, D. (2010). Supporting novice teachers and the arts. International Journal of Education and the Arts, 11(8), 1-22.
Hein, G. (1998). Learning in the museum. London: Routledge.
Lemon, N. & Garvis, S. (2013). What is the role of the arts in a primary school?: An investigation of perceptions of pre-service teachers in Australia. Australian Journal of Teacher Education, 38(9), 1-9.
Martinello, M. L. & Gonzalez, M. G. (1987). The university gallery as a field setting for teacher education. The Journal of Museum Education, 12(3), 16-19.
Plourde, L. A. (2002). The influence of student teaching on preservice elementary teachers’ science self-efficacy and outcome expectancy beliefs. Journal of Instructional Psychology, 29, 245-253.
Rice, D. C. & Roychoudhury, A. (2003). Preparing more confident preservice elementary science teachers: One elementary science methods teacher’s self-study. Journal of Science Teacher Education, 14(2), 97-126.
Stone, D. (1996). Preservice art education and learning in art museums. Journal of Aesthetic Education, 30(3), 83-96.
Tosun, T. (2000). The beliefs of pre-service elementary teachers toward science and science teaching. School Science and Mathematics, 100, 374-379.
<urn:uuid:bbda9dbc-be18-4dc8-9b69-505baf2f464e>
CC-MAIN-2020-16
https://awfulessays.com/creative-arts.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371606067.71/warc/CC-MAIN-20200405150416-20200405180916-00474.warc.gz
en
0.938678
2,448
3.65625
4
The fundamental ideas in John Dewey’s 1913 essay, Interest and Effort in Education, are as true today as they were when he published it more than a century ago. His key point was that interest can motivate students to undertake efforts that may not be immediately engaging, and once they are engaged, they will start to develop skills and knowledge, leading to intellectual growth and development. The importance of interest and motivation is reflected in A Framework for K–12 Science Education, which states that “Learning science depends not only on the accumulation of facts and concepts, but also on the development of an identity as a competent learner of science with motivation and interest to learn more” (NRC 2012, p. 286). This emphasis on students’ attitudes and self-concept is certainly not a surprise to teachers. Both classroom teachers and afterschool and summer program facilitators know that engaging their students’ interest is essential for learning to occur. Yet, only cognitive learning is routinely assessed. One reason why it is uncommon to assess students’ attitudes is that they are not generally included in education standards. The Next Generation Science Standards acknowledge the importance of attitudinal goals (NGSS Lead States 2013), but did not include them as capabilities for assessment. Nonetheless, even though attitude changes are not valued in the same way as cognitive accomplishments, there are good reasons for assessing them. That is especially true in afterschool and summer programs, where getting kids interested in STEM (science, technology, engineering, and math) is often the primary goal; but it is also important in classrooms, so that teachers can find out what activities and teaching methods inspire their students. 
Although it is common to “take the temperature” of the class by observing the level of activity in the room and listening to students’ conversations, assessing changes in each student’s interest, motivation, and identity as a STEM learner is uncommon. Observing student engagement alone does not pick up more subtle changes in attitudes, or differences between boys and girls, or the views of quieter students. The Common Instrument Suite for Students (CIS-S) was designed to do just that. Although it was initially developed for use outside of school, it is of equal value in the classroom. The Common Instrument Suite for Students One way to know what young people are thinking or feeling is to ask them using a self-report survey. There are many such instruments in the literature that use various formats and types of questions, usually designed to evaluate a particular program. As a way of helping program leaders and evaluators take advantage of the tools that have already been developed, The PEAR Institute: Partnerships in Education and Resilience, located at McLean Hospital, an affiliate of Harvard Medical School in Boston, Massachusetts, collected existing assessment instruments and made them available through a website: Assessment Tools in Informal Science (ATIS). Each of the 60 tools on the ATIS website has been vetted by professional researchers, briefly described, categorized, and posted so they are searchable by grade level, subject domain, assessment type, or custom criteria. In addition, links are provided to the papers where the actual instruments reside so it is easy to access the tools once a user of the website has chosen one that could be useful. ATIS is a free service developed by The PEAR Institute with support from the Noyce Foundation. Although ATIS solved one problem—the need to develop new measurement instruments for every program evaluation—there was another problem that ATIS alone could not solve.
At the time, the Noyce Foundation was providing millions of dollars in funding to several large youth organizations to infuse STEM into their camps and clubs. Each organization had its own evaluator, and each evaluator used a different tool to measure impact. As long as different instruments were being used to evaluate different programs, it was not possible to compare results and determine which programs and approaches were most effective at getting kids interested in STEM and helping them develop an identity as a STEM learner. Ron Ottinger, executive director of the Noyce Foundation (now called STEM Next), asked an important question: Why not bring together the directors and evaluators to see whether they could agree on the use of one of the instruments from the ATIS website? In July 2010, we (Sneider and Noam) facilitated a two-day meeting of several grant directors and evaluators to examine the instruments on the ATIS website to see whether we could agree on one that would be used to measure the impact of each program. The participants agreed that they all wanted youth to develop positive attitudes toward engaging in STEM activities, but none of the existing instruments were acceptable. Most were too long or applied exclusively to specific programs. Eventually the group developed a new self-report survey for student engagement that was composed of 23 items. One of us (Noam and The PEAR Institute) tested and refined the instrument on behalf of the team, eliminating questions that did not contribute significantly to its reliability. The final result was the Common Instrument (CI), a brief but highly valid and reliable self-report survey—now only 14 items—that takes only five minutes to complete, but captures students’ degree of engagement by asking them to indicate their level of agreement or disagreement with a set of statements, such as “I like to participate in science projects” (Noam et al. 2011).
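Mechanically, scoring a Likert-style survey like this comes down to mapping each response category to a numeric code and averaging across items. A minimal sketch in Python, using a hypothetical 4-point scale and made-up responses rather than the actual CI items or scoring rules:

```python
# Map Likert response labels to numeric codes (this scale is
# illustrative, not the CI's actual scoring rubric).
LIKERT = {"strongly disagree": 1, "disagree": 2, "agree": 3, "strongly agree": 4}

def engagement_score(responses):
    """Average the numeric codes across items; higher = stronger agreement."""
    codes = [LIKERT[r.lower()] for r in responses]
    return sum(codes) / len(codes)

# Hypothetical answers to four items such as
# "I like to participate in science projects."
answers = ["agree", "strongly agree", "disagree", "agree"]
print(engagement_score(answers))  # 3.0
```

A real instrument would also handle reverse-coded items and missing responses; the sketch only shows the core idea of turning agreement levels into a comparable numeric score.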
Over the next few years, practitioners, funders, and policy makers asked whether the CI could be extended to measure other dimensions of STEM attitudes, such as knowledge of and interest in STEM careers, identification as someone who can “do” STEM, and voluntary participation in STEM-related activities. Other leaders asked whether the CI might also be expanded to include outcomes related to 21st-century/social-emotional skills such as critical thinking, perseverance, and relationships with peers and adults. New items were developed and tested to measure these additional dimensions. The result was the valid and reliable CIS-S. Evaluators can use just the questions from the CIS related to STEM engagement or include additional sets of questions to measure any of the other dimensions. All nine dimensions, with accompanying sample questions, are shown in Table 1. A survey that measures all nine dimensions has 57 items, which usually takes about 15 minutes for students to complete and can be used from fifth grade and up. The shorter, 14-item version is recommended for third grade and up. The complete instrument has also been tested for validity, reliability, and potential gender and multicultural bias (Noam et al., unpublished manuscript). Importantly, all scales of the CIS-S have national norms by age band and gender, so every child and every program can be compared to a representative sample. This makes the instrument truly common, so that there are markers with which one can assess local conditions without having to collect control group data. Each user benefits from all the data that were collected previously and contributes to the common data pool. The database now has more than 125,000 responses to the CI and CIS-S. Recently, the PEAR team created a survey for STEM facilitators and teachers in the afterschool space.
This self-report survey, called the Common Instrument Suite for Educators (CIS-E), includes questions on - the training and professional development that educators have received and desire to receive, - their STEM identity and levels of interest and confidence in leading STEM activities, - their perceptions of growth in their students’ STEM skills and confidence, - their self-assessment of the quality of STEM activities that they present, and - their interactions with colleagues. The survey consists of about 55 questions and takes less than 15 minutes to complete. To help the organizations that are using these instruments, PEAR developed a dynamic data collection and reporting platform, known as Data Central. The automated platform produces an online data dashboard that displays actionable results shortly after collection is complete. Evaluators and practitioners can use these results to improve their programs and share with funders. The data reporting system also enables program leaders to compare their programs with thousands of afterschool and summer programs nationwide. Program quality and impact: A study across 11 states Dimensions of Success (DoS) is an observation instrument described in a companion article in this issue, “Planning for Quality: A Research-Based Approach to Developing Strong STEM Programming.” The instrument guides observers in examining the quality of STEM instruction through 12 dimensions of good teaching practices—such as strong STEM content, purposeful activities, and reflection. These qualities are equally important in classrooms as they are in afterschool and summer programs. With a two-day training, program leaders and teachers can learn to use the instrument themselves, so they do not have to hire a professional evaluator. Research studies have shown that the instrument produces reliable results, as two people trained in the use of DoS obtain very similar results when independently rating a lesson (Shah et al. 2018). 
DoS and the CIS-S and CIS-E instruments were used in a study of 1,599 children and youth in grades 4–12 enrolled in 160 programs across 11 states (Allen, Noam, and Little 2017; Allen et al., unpublished manuscript). Observers conducted 252 observations of program quality, and children and youth participating in the observed activities completed the CIS-S. Results show that high ratings of quality measured using the DoS instrument are strongly correlated with positive outcomes on the CIS-S, particularly with items related to positive attitudes about engagement in STEM activities, knowledge of STEM careers, and STEM identity. - 78% of students who participated in high-quality programs said they are more engaged in STEM. - 73% of students said they had a more positive STEM identity. - 80% of students said their STEM career knowledge increased. Not only did participation in high-quality STEM afterschool programs influence how students think about STEM, but more than 70% of students across all states also reported positive gains in 21st-century skills, including perseverance and critical thinking. And youth regularly attending STEM programming for four weeks or more reported significantly more positive attitudes for all instrument items than youth participating for less time. These findings provide strong support for the claim that high-quality STEM afterschool programs yield positive outcomes for youth. Pre–post tests are not essential to measure changes in attitudes Traditionally, self-report instruments such as the CIS-S are administered as pretests and posttests. That is, the youth in the program to be evaluated are given a list of statements such as “I get excited about science” before the program begins and then again, a month or two later, after the program is over. It is not unusual for there to be no change or even an apparent drop in interest or engagement, even when interviews show that the children enjoyed the program a great deal. 
One way to explain this result is that participants’ reference points change between the start and end of the program. Research studies have shown that a better way of measuring change in attitudes and beliefs is to administer a self-report survey only at the end of the program (using what is called a retrospective survey method) by asking participants to reflect on how the program affected their levels of interest, engagement, and identity (Little et al., Forthcoming). The retrospective method not only has the advantage of being more accurate when measuring change in attitudes and beliefs over time, but it also avoids asking children to fill out a questionnaire at the start of an afterschool or summer program, which may dampen their enthusiasm. It also removes the challenge of matching pretests and posttests and has the very practical effect of cutting the cost of data collection in half. An especially important feature of the CIS-S is that it is eminently practical. The assessment process is very brief, it only needs to be administered once at the end of a program, and it can be used by children as young as third grade. Providing this new set of tools has accomplished more than simply making program evaluation easier and less expensive. As illustrated by the 11-state study, when used in conjunction with the DoS observation tool, the CIS-S makes it possible to view the links between program quality and youth outcomes, and to determine which aspects of STEM programs are most influential in student growth. Given the importance of students’ interest, motivation, and self-confidence for acquiring knowledge and skills in all settings, the CIS-S can become as useful to classroom teachers as it has been to afterschool and summer STEM facilitators. 
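Because the retrospective method gathers the "before" and "now" ratings in a single sitting, summarizing change requires no matching of pretests to posttests. A small sketch, with hypothetical field names and a 1–5 scale (not the CIS-S's actual items or analysis):

```python
# Each record holds one student's retrospective ratings, collected at
# the end of the program (field names and 1-5 scale are illustrative).
students = [
    {"before": 2, "now": 4},
    {"before": 3, "now": 3},
    {"before": 1, "now": 4},
]

changes = [s["now"] - s["before"] for s in students]
mean_change = sum(changes) / len(changes)
pct_improved = 100 * sum(c > 0 for c in changes) / len(changes)

print(f"mean change: {mean_change:.2f}")  # mean change: 1.67
print(f"improved: {pct_improved:.0f}%")   # improved: 67%
```

With pre/post designs, the same summary would first require joining two survey files on a student identifier and discarding unmatched records, which is part of the practical cost the retrospective design avoids.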
The authors gratefully acknowledge the Noyce Foundation (now STEM Next Opportunity Fund), as well as the Charles Stewart Mott Foundation and the National Science Foundation for their support in developing these assessment instruments. We also want to acknowledge Dr. Patricia Allen for her careful reading, critique, and intellectual support of this paper. Dimensions of Success is based upon work supported by the National Science Foundation under Grant No. 1008591. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
References
Allen, P.J., G.G. Noam, and T.D. Little. 2017. Multi-state evaluation finds evidence that investment in afterschool STEM works. STEM Ready America. http://stemreadyamerica.org/multi-state-evaluation-finds-evidence-that-investment-in-afterschool-stem-works.
Allen, P.J., G.G. Noam, T.D. Little, E. Fukuda, R. Chang, B.K. Gorrall, and L. Waggenspack. Unpublished manuscript. From quality to outcomes: A national study of afterschool STEM programs. Science Education.
Dewey, J. 1913. Interest and effort in education. Cambridge, MA: Riverside Press. http://openlibrary.org/books/OL7104169M/Interest_and_effort_in_education.
Little, T.D., R. Chang, B. Gorrall, and E. Fukuda. Forthcoming. The retrospective pretest–posttest design redux: On its validity as an alternative to traditional pre–post measurement. International Journal of Behavioral Development.
National Research Council (NRC). 2012. A framework for K–12 science education: Practices, crosscutting concepts, and core ideas. Washington, DC: National Academies Press.
NGSS Lead States. 2013. Next Generation Science Standards: For states, by states. Washington, DC: National Academies Press. www.nextgenscience.org/next-generation-science-standards.
Noam, G.G., P.J. Allen, G. Sonnert, and P. Sadler. Unpublished manuscript. Validation of The Common Instrument: A brief measure for assessing science interest in children and youth. Belmont, MA: The PEAR Institute.
Noam, G.G., D. Robertson, A. Papazian, and M. Guhn. 2011. The Common Instrument Suite: Background and summary information about the assessment tool. Boston, MA: Program in Education, Afterschool, and Resiliency; Harvard University; and McLean Hospital. www.thepearinstitute.org/common-instrument-suite.
The PEAR Institute. 2017. A guide to PEAR’s STEM tools: Common Instrument Suite and Dimensions of Success. Boston, MA: Program in Education, Afterschool, and Resiliency; Harvard University; and McLean Hospital. www.thepearinstitute.org/stem.
The PEAR Institute. 2018. Assessment tools in informal science. Boston, MA: Program in Education, Afterschool, and Resiliency; Harvard University; and McLean Hospital. http://pearweb.org/atis.
Shah, A.M., C. Wylie, D. Gitomer, and G.G. Noam. 2018. Improving STEM program quality in out-of-school-time: Tool development and validation. Science Education 102 (2): 238–59. https://doi.org/10.1002/sce.21327.
Patching is the process of deploying software updates. Often, these updates are resolving critical security vulnerabilities that can potentially be exploited by attackers. For organizations, patching is a critical element of good cybersecurity practices – and ensuring that all devices are compliant is essential. A growing number of cybersecurity regulations are creating standards for patch management, and enterprises from every industry are going to need better patch compliance. What is patch compliance? To put it simply, patch compliance refers to the number of devices on your network that are “compliant” – meaning that the machines have been successfully patched or otherwise remediated against new threats. Deploying patches does precious little if none of your devices are compliant, so keeping tabs on the success and reach of your patch deployment efforts is a critical step for a strong patch management strategy. Organizations of every size may be affected by an array of issues that can hinder their patching efforts, ranging from low endpoint visibility to the end of support for commonly used software and servers. While there are many variables that can affect the success of patch deployment, there is also no shortage of solutions and steps organizations can take to ensure their patching efforts are working – and that all devices and systems are compliant. Consider system software upgrades for patch compliance There are lots of reasons why organizations choose to forego system software upgrades, even when the software they're using will no longer be supported or receive necessary security updates. According to cyber experts, there are many challenges organizations may face: Smaller companies may not have the resources for a full OS upgrade, while updates for large-scale enterprises require substantial research and planning ahead of time. Another big concern for just about every organization is the potential for software upgrades to impact operational workflow. 
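As defined above, patch compliance is at bottom a simple ratio: compliant devices over total devices on the network. A minimal sketch of computing it from an inventory (the hostnames, version strings, and the flat "latest patch level" model are all illustrative assumptions):

```python
# Hypothetical device inventory with each machine's installed patch level.
devices = [
    {"host": "ws-01",  "patch_level": "2024.03"},
    {"host": "ws-02",  "patch_level": "2024.01"},
    {"host": "srv-01", "patch_level": "2024.03"},
    {"host": "srv-02", "patch_level": "2024.03"},
]

LATEST = "2024.03"  # most recent patch release (assumed)

compliant = [d for d in devices if d["patch_level"] == LATEST]
rate = 100 * len(compliant) / len(devices)
print(f"{len(compliant)}/{len(devices)} devices compliant ({rate:.0f}%)")
# 3/4 devices compliant (75%)
```

Real patch-management tools track compliance per patch and per severity rather than one flat version string, but the reporting they produce reduces to this kind of ratio.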
But experts agree that systems that go without upgrades are a significant threat to an organization's cybersecurity. While the potential hit to operational workflow may seem like a huge sacrifice, it is a minor inconvenience compared to the wreckage of a data breach or ransomware attack. Microsoft recently ended support for Windows 7 and Server 2008 – which means users of those systems will no longer receive necessary patches for critical vulnerabilities. Even though Microsoft has been urging users to upgrade to Windows 10 for months, current estimates suggest that 20 percent of Windows users are still running Windows 7. That means at least one in five is running unsupported software. Failure to upgrade comes with many risks. Unsupported software is not updated nearly as often, which means vulnerabilities are not getting remediated – leaving the door wide open for attackers. Scammers may even launch targeted phishing attacks, luring Windows 7 users into opening malicious emails with “warnings” about their unsupported software. Continued use of unsupported software doesn't just hinder your overall patch compliance; it can affect your compliance with GDPR, PCI and HIPAA requirements as well. It's important to note that system software is not the only software that needs to be updated. Third-party applications also need regular updates. Applications like Java or Adobe Reader can harbor significant vulnerabilities, and if you're not running the latest versions, those vulnerabilities can be exploited by attackers too. All software and applications need to be updated regularly in order to achieve compliance with new regulatory standards.

Patching and cybersecurity standards

If the devices on your organization's network aren't receiving necessary security updates, it can affect your compliance with critical cybersecurity standards and regulations.
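As an illustration of how an IT team might flag machines running software past end-of-support, here is a minimal Python sketch. The inventory structure and hostnames are hypothetical, and the end-of-support table should always be checked against the vendor's published lifecycle documentation rather than hard-coded like this:

```python
from datetime import date

# End-of-support dates (verify against the vendor's lifecycle pages).
END_OF_SUPPORT = {
    "Windows 7": date(2020, 1, 14),
    "Windows Server 2008": date(2020, 1, 14),
    "Windows 10": date(2025, 10, 14),
}

def unsupported_devices(inventory, today=None):
    """Return devices whose OS is past its end-of-support date."""
    today = today or date.today()
    # Unknown OSes fall back to date.max, i.e. never flagged here.
    return [d for d in inventory
            if END_OF_SUPPORT.get(d["os"], date.max) < today]

inventory = [
    {"host": "ws-014", "os": "Windows 7"},
    {"host": "ws-201", "os": "Windows 10"},
]
print(unsupported_devices(inventory, today=date(2020, 3, 1)))
# [{'host': 'ws-014', 'os': 'Windows 7'}]
```

A real implementation would pull the inventory from an endpoint management agent rather than a hand-written list, but the decision logic is the same.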
A number of government institutions and industry bodies have created stringent cybersecurity standards to protect data and privacy. In the U.S., individual states may have different laws governing how organizations must protect private or sensitive digital information. Multiple prominent regulatory frameworks include some form of patch compliance in their security standards. For example, PCI, the Payment Card Industry Data Security Standard, is a set of security regulations that dictate the technical and operational standards businesses must follow to ensure the credit card information given by cardholders is properly protected. Businesses that store, process, or transmit credit card data are required to be PCI compliant – and PCI requirement 6.1 dictates that organizations “deploy critical patches within a month of release” in order to maintain their compliance. Similarly, the EU's General Data Protection Regulation (GDPR) also requires a rigorous patching protocol to keep data secure. And for healthcare organizations, there are HIPAA regulations, which likewise call for stringent patching practices. Poor patch compliance on your network can substantially impede your regulatory compliance. If your devices aren't getting patched, then they're out of patch compliance – and your organization may be out of compliance with industry-specific cybersecurity standards. For example, if you're a healthcare provider, failure to patch could put you out of compliance with HIPAA. But out-of-date system software isn't the only concern when it comes to keeping your network current on security updates. Deploying patches is only the beginning; ensuring every device receives them successfully is the next step.

What to know about endpoint visibility

Guaranteeing that every device on your network successfully receives patches for critical vulnerabilities is essential to achieving sufficient patch compliance.
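Conceptually, the patch compliance metric is just the share of devices that have successfully received their updates. A toy sketch in Python (the device records are hypothetical; real status data would come from your patch management tooling):

```python
def patch_compliance_rate(devices):
    """Percentage of devices that are fully patched."""
    if not devices:
        return 100.0  # nothing to patch counts as compliant
    compliant = sum(1 for d in devices if d["patched"])
    return 100.0 * compliant / len(devices)

fleet = [
    {"host": "db-01", "patched": True},
    {"host": "ws-07", "patched": False},
    {"host": "ws-08", "patched": True},
    {"host": "lap-22", "patched": True},
]
print(f"{patch_compliance_rate(fleet):.1f}% compliant")  # 75.0% compliant
```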
But limitations in endpoint visibility and a lack of inventory can be a real hindrance to ensuring all devices are compliant. Experts agree that creating an inventory of everything on your network – including all devices and third-party software – is a crucial element of cyber hygiene best practices. It is also a critical step toward full visibility over endpoints. With a complete inventory, organizations can track and secure all their assets more easily. A living inventory of devices and applications, kept up to date, gives organizations valuable information for overall cybersecurity – after all, you can't secure what you don't know you have. Achieving full endpoint visibility also requires the ability to “see” all your endpoints in real time. With modern patching platforms, users can see the endpoints on their network, no matter where they are located, and take action to remediate threats as needed. Full endpoint visibility lets users see what's happening on every device regardless of its location – so you can see which patches deployed successfully and which devices need more attention, in real time. Endpoints represent a substantial portion of the network for many organizations. Ensuring that all endpoints receive necessary security updates in a timely manner is critical to overall patch compliance. By maintaining visibility over endpoints, you can ensure that every device is updated and kept in compliance.

Achieve compliance with automated patch management

Automated patch management makes patch compliance more attainable for organizations of any size. Automated patching solutions like Automox make it possible for users to patch across all devices – regardless of operating system, location or third-party application – from a single interface.
Automated patching helps organizations ensure that patches for critical vulnerabilities don't end up delayed or forgotten entirely. Many regulations call for patches to be deployed within a certain time frame – and automated patching solutions can help users ensure patches are deployed on schedule. Manual patching protocols can make it difficult to adhere to the time constraints set forth by regulations like PCI, but automated tools streamline both deploying patches and keeping records. Legacy patch management solutions and manual patching processes can make record keeping overly complex – particularly if an organization is running multiple operating systems and third-party applications. With modern, automated solutions, users can compile data and keep accurate records of their patch compliance with relative ease. Instead of having to pull data from multiple systems, solutions like Automox allow users to do and see everything from a single dashboard. Automated patching solutions improve patching confidence, give users full visibility over their entire network, and can also include detailed reporting – all of which is critical to patch compliance. And with automation, faster patch deployment is within reach. Current estimates suggest that malicious actors can weaponize a known vulnerability in as little as seven days, and zero-day vulnerabilities are already being exploited in the wild at the time of disclosure. Meanwhile, estimates also suggest that it takes organizations an average of up to 102 days to patch critical vulnerabilities. Time is of the essence when it comes to patching; that's why several cybersecurity regulatory guidelines contain stipulations concerning time to patch. Patching is good, but patching faster is better.
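A time-to-patch check like the ones regulators expect can be sketched in a few lines. The patch IDs and release dates below are placeholders, and the 30-day window mirrors PCI's "within a month of release" rule; other frameworks may set different windows:

```python
from datetime import date

COMPLIANCE_WINDOW_DAYS = 30  # e.g. PCI: critical patches within a month of release

def overdue_patches(patches, today):
    """Return (patch_id, age_in_days) for undeployed patches past the window."""
    return [(p["id"], (today - p["released"]).days)
            for p in patches
            if not p["deployed"]
            and (today - p["released"]).days > COMPLIANCE_WINDOW_DAYS]

patches = [
    {"id": "patch-101", "released": date(2020, 1, 2), "deployed": False},
    {"id": "patch-102", "released": date(2020, 3, 10), "deployed": False},
    {"id": "patch-103", "released": date(2020, 1, 20), "deployed": True},
]
print(overdue_patches(patches, today=date(2020, 3, 20)))  # [('patch-101', 78)]
```

In practice this kind of check would run continuously against live deployment data, so an overdue patch is surfaced the day it crosses the window rather than at audit time.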
With a cloud-native, automated patch management solution like Automox, users can remediate zero-day vulnerabilities within 24 hours and take action against other critical vulnerabilities within 72 hours. Time is a luxury of the past; today's cyber attackers are moving faster and growing more sophisticated at a record pace – and many organizations need to do more to keep up. With automated patching tools, IT professionals can do more in less time.

The importance of patch management compliance

Automated patch management tools are a great option for ensuring patch management compliance. While “patch compliance” refers to the number of devices that have successfully received security updates, “patch management compliance” refers to meeting the cybersecurity regulations and standards that govern patch management. Many agencies require organizations to implement a routine patching process, complete with full documentation. As previously stated, regulatory standards like PCI and GDPR often have stipulations regarding patch timing and frequency. Patch management compliance casts a much wider net: in addition to regulations regarding time-to-patch and patch frequency, standards for visibility, reporting, and documentation are also being set. In other words, keeping track of the patches you've deployed is no longer enough. Regulations are growing more thorough, prompting organizations to keep in-depth documentation of a variety of reports and assessments. These can include regular baseline assessments of your network and its devices, non-compliant device reports, patch status and compliance reports, vulnerability assessments, and much more. Patch management compliance requires organizations to do more than just patch. To meet the standards of cybersecurity regulations, companies must have a documented patching protocol and conduct regular reports and analyses – as well as maintain an inventory of all assets and have visibility over those devices.
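The reporting obligations described above boil down to keeping structured, repeatable records of patch status per device. A minimal sketch of such a report generator (the device records and status labels are hypothetical; real data would come from your management platform):

```python
from collections import Counter

def compliance_report(devices):
    """Summarize patch status and list non-compliant hosts for record keeping."""
    summary = Counter(d["status"] for d in devices)
    noncompliant = sorted(d["host"] for d in devices if d["status"] != "patched")
    return {"summary": dict(summary), "noncompliant": noncompliant}

devices = [
    {"host": "db-01", "status": "patched"},
    {"host": "ws-07", "status": "pending"},
    {"host": "ws-08", "status": "patched"},
    {"host": "lap-22", "status": "failed"},
]
report = compliance_report(devices)
print(report)
```

Archiving a report like this on a schedule, alongside vulnerability assessments, is the kind of documentation trail auditors ask to see.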
While regulations may not explicitly state that things like full endpoint visibility are a must, being able to see your endpoints and monitor their patch status is crucial to overall patch compliance. Keeping your devices patch compliant will help your organization achieve overall patch management compliance, no matter what industry standards or regulations you have to meet. Using an automated patch management solution supports patch compliance across all devices, and can help organizations ensure they are compliant with the cybersecurity regulations relevant to their industry.

About Automox Automated Patch Management

Facing growing threats and a rapidly expanding attack surface, understaffed and alert-fatigued organizations need more efficient ways to eliminate their exposure to vulnerabilities. Automox is a modern cyber hygiene platform that closes the aperture of attack by more than 80% with just half the effort of traditional solutions. Cloud-native and globally available, Automox enforces OS & third-party patch management, security configurations, and custom scripting across Windows, macOS, and Linux from a single intuitive console. IT and SecOps can quickly gain control and share visibility of on-prem, remote and virtual endpoints without the need to deploy costly infrastructure. Experience modern, cloud-native patch management today with a 15-day free trial of Automox and start recapturing more than half the time you're currently spending on managing your attack surface. Automox dramatically reduces corporate risk while raising operational efficiency to deliver best-in-class security outcomes, faster and with fewer resources.
Drug Addiction and the Enabler

In many cases of addiction in Lehman, the individual's habit is influenced entirely or in part by somebody in their immediate environment. An enabler is a knowing or unwitting participant in the individual's struggle with drug addiction – someone who makes the addiction possible or easier to prolong. Enabling is typically carried out out of "concern" or "worry," but in the end it does more harm than good. A typical enabler is a family member or partner who gives a drug-addicted individual financing or housing, and may even help the person obtain their drugs in some way. The logic is often that the enabling keeps the person in a safe and secure situation, rather than on the streets or in harmful circumstances. Enablers are often the crucial component in an addicted person's life that makes addiction possible. Conversely, enablers can also be the key to helping someone get off drugs by discontinuing the enabling behaviors. Once the enabling has stopped, drug-addicted individuals will often realize that it is no longer possible to continue their habit and will reach a crisis point. This is why an enabler must recognize the situation immediately and, instead of prolonging the individual's addiction, get them into an effective drug rehabilitation program in Lehman. Only then will both the enabler and the drug addict be able to go on with their lives in a much healthier and saner manner.

What is Drug Rehab?

Drug rehab in Lehman is sometimes an addict's only route to recovery, because all other attempts at quitting have failed. If they don't seek treatment at a Lehman drug rehabilitation facility, the alternatives often include intense suffering not only for oneself but for one's friends and family, along with legal problems and a general deterioration of one's life.
What should be understood is that addiction is a complicated condition that requires treatment. A quality rehabilitation program offers the intensive treatment needed to address all areas of the addicted person's life, so that they can see clearly and respond to situations analytically. For example, abuse in one's childhood or from one's partner could easily predispose someone to drug addiction. Social anxieties commonly lead to substance abuse, as individuals try to "take the edge off" and feel more comfortable and accepted in social settings. Drug rehab helps resolve these types of issues – the real reasons the individual began using drugs in the first place. Once these issues have been handled through drug rehab, the person will be able to make it through life without using drugs as a crutch.

How Much Does Drug Rehab Cost?

It can be difficult enough to get someone to want help and agree to enter a drug treatment program in Lehman. Finding the money to pay for drug treatment can also be a challenge, but one that can be overcome if one considers the many possibilities available in Lehman. Depending on which option is chosen, the cost of drug rehabilitation can vary considerably from program to program. Some outpatient and short-term drug treatment programs, for example, may be state or federally funded and may even be free of charge. These programs are also commonly the least effective, however – a fact that should be weighed against cost. The long-term drug rehab centers in Lehman that have proven most effective are residential and inpatient drug treatment centers requiring a stay of at least 90 days. These programs are typically more costly because they are private facilities that provide their clients with food and shelter for the duration of their stay.
These programs typically cost anywhere from $4,000 to $20,000, depending on the length of stay and the amenities offered.

Drug Treatment and Detox for Withdrawal Symptoms

One of the reasons drug-addicted individuals find it difficult to stop using drugs is the physical and psychological dependency that inevitably develops when a person uses them long enough. It is no longer a matter of "willpower," because their bodies and minds will actually punish them both physically and mentally if they stop using drugs. This is called drug withdrawal, and it is a major roadblock for individuals who wish to stop using drugs. Addicts can become extremely ill during withdrawal and can even die in some cases, because seizures and strokes can occur with certain drugs and with alcohol. Depression is a very common withdrawal symptom, and it can become so severe that the person may attempt suicide. To reduce withdrawal symptoms and make detox a safer process, it is suggested that drug-addicted individuals who wish to quit do so in a suitable environment such as a drug rehab facility. Drug treatment facilities in Lehman can not only medically monitor the person through the detoxification process and help alleviate withdrawal symptoms, but also ensure that the individual doesn't relapse back into drug use. After detoxification has been completed, addiction professionals in Lehman will ensure that all underlying psychological and emotional issues tied to the individual's addiction are addressed, so that they stay off drugs once they leave the facility.

Do I Need a Drug Rehab Center?

Individuals in Lehman can get caught up in the routine of addiction so quickly that, before they know it, their addiction has spun out of control and they can no longer control their behavior or choices relating to their drug use.
One day a person may be using drugs "socially," and within a brief amount of time almost nothing else seems important. This is because drugs induce both physical and psychological dependence, which causes people to make drugs more important than anything else in their lives. Although this can be difficult to comprehend for those who don't have a problem with drugs, individuals who are "good" people can quickly get caught up in the cycle of addiction – a cycle that can rarely be stopped without proper treatment at a drug rehab program in Lehman. At a drug rehab program, individuals will first be able to detox safely and manage withdrawal symptoms with the aid of specialists and medical staff. More importantly, they will be able to address the things that brought on their drug use, including psychological and emotional issues. Layer by layer these concerns can be resolved, so that there is little chance the person will fall prey to drug abuse again in the future.

Drug Addiction and Codependency

Drug addiction and codependency go hand in hand, and many family members and loved ones of addicts in Lehman find themselves entangled in an addicted individual's addiction. This can go so far that the codependency becomes an addiction in itself. Addiction sometimes leads both the drug-addicted person and those closest to them to develop unhealthy codependent relationships, which can cause great emotional pain and ultimately ruin these relationships completely. Codependency can be difficult to recover from, especially when those affected forget how to function normally in the relationship and become fully absorbed in the drug addiction and its consequences. The only way to quit and recover from drug addiction and codependency is to seek treatment at a drug rehab facility.
Many times, it is essential not only for the person who is actually using drugs to find treatment, but also for the people in their lives who have become codependent to seek treatment as well. There are many drug rehab programs in Lehman that address not only drug addiction but unhealthy codependency, which can help repair these relationships and prepare friends and family for a far healthier relationship once treatment is finished.

What are the Different Drug Rehab Options?

For individuals who are addicted to drugs, trying to beat the habit on one's own can be a losing battle. Usually the only true solution is professional treatment at a drug rehab facility in Lehman. Because there are many things to consider when selecting an effective drug rehab center, it is helpful to know what different options are available in Lehman and which one will prove most effective in each particular situation. Many drug rehabilitation centers in Lehman are based on the belief that addiction is a disease. While this type of drug rehab option may be effective for some, there are options that treat and entirely resolve addiction during the course of rehabilitation, so that drug addiction never plagues the person again. In effect, these programs have shown time and again that addiction is not a disease but a condition that is 100% treatable and curable. Most drug rehab options that treat addiction in this way are inpatient and residential drug treatment programs, which provide various types of counseling, behavioral therapy and drug education over an extended period of time, typically 90 days or more. Treatment continues until the recovering addict is able to leave knowing that they will never feel the need to use drugs again and can make the fresh start they deserve.
Drug Intervention and Drug Treatment Programs

Drug intervention and drug rehab in Lehman are invaluable tools that can help families and loved ones of drug-addicted individuals. Addiction can take over a person's will, mind and body to the point where they cannot help themselves, and this often reaches a point of crisis where they will need an intervention from those who love and care about them. In Lehman, drug treatment programs work with professional interventionists who can help organize and supervise drug interventions so that the addicted individual can finally find their way to recovery. Most drug interventions can be organized and held within a matter of days or even hours as needed, and professional interventionists are trained and experienced in dealing with even the toughest cases to get individuals into drug treatment. The alternatives are grim, and most individuals who don't receive such an intervention will lose their lives to addiction. Once the person is confronted through a drug intervention, they will understand how much love and concern their families and loved ones have for them and what they stand to lose if they don't get help. Once the addicted individual can see solutions rather than addiction problems, they will more often than not accept treatment and start their path to recovery.

Do I Need a Lehman Drug Rehabilitation Program?

Sometimes it is difficult to know whether a person in Lehman needs a drug rehab program. Since most drug addictions start with casual or social use, it is often hard to tell when a particular person has crossed over into full-blown addiction. With drug addiction, some common symptoms and behaviors exist that can help loved ones decide whether an individual needs a drug treatment program in Lehman, Pennsylvania.

Behaviors and Signs of Drug Addiction:

If you or someone you care about in Lehman, PA.
exhibits one or more of the above signs and symptoms of drug addiction, there is a need for drug rehab in Lehman. People in Lehman who are caught in the grips of addiction usually feel hopeless. But there is hope for addicts – hope through drug rehabilitation. An effective drug treatment program can help an individual recover from addiction and allow them to take back control of their life.
Protein is the single most important nutrient for weight loss and a better-looking body. A high protein intake boosts metabolism, reduces appetite and changes several weight-regulating hormones. Protein can help you lose weight and belly fat, and it works via several different mechanisms. This is a detailed review of the effects of protein on weight loss.

Protein Changes the Levels of Several Weight-Regulating Hormones

Your weight is actively regulated by your brain, particularly an area called the hypothalamus. In order for your brain to determine when and how much to eat, it processes multiple different types of information. Some of the most important signals to the brain are hormones that change in response to feeding. A higher protein intake actually increases levels of the satiety (appetite-reducing) hormones GLP-1, peptide YY and cholecystokinin, while reducing your levels of the hunger hormone ghrelin. By replacing carbs and fat with protein, you reduce the hunger hormone and boost several satiety hormones. This leads to a major reduction in hunger and is the main reason protein helps you lose weight. It can make you eat fewer calories automatically. Protein reduces levels of the hunger hormone ghrelin, while it boosts the appetite-reducing hormones GLP-1, peptide YY and cholecystokinin. This leads to an automatic reduction in calorie intake.

Digesting and Metabolizing Protein Burns Calories

After you eat, some calories are used for the purpose of digesting and metabolizing the food. This is often termed the thermic effect of food (TEF). Although not all sources agree on the exact figures, it is clear that protein has a much higher thermic effect (20-30%) compared to carbs (5-10%) and fat (0-3%). If we go with a thermic effect of 30% for protein, this means that 100 calories of protein only end up as 70 usable calories. About 20-30% of protein calories are burned while the body is digesting and metabolizing the protein.
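The thermic-effect arithmetic above can be made concrete in a few lines of Python. The TEF ranges are the ones quoted in the text; as noted, sources disagree on the exact figures:

```python
# Thermic effect of food (TEF): the fraction of a food's calories
# spent digesting and metabolizing it. Ranges as cited in the article.
TEF = {"protein": (0.20, 0.30), "carbs": (0.05, 0.10), "fat": (0.00, 0.03)}

def usable_calories(calories, macro, tef=TEF):
    """Return the (low, high) range of usable calories after digestion costs."""
    lo, hi = tef[macro]
    return calories * (1 - hi), calories * (1 - lo)

print(usable_calories(100, "protein"))  # (70.0, 80.0)
```

So at the high end of protein's TEF, 100 calories of protein leave about 70 usable calories, matching the example in the text.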
Protein Makes You Burn More Calories (Increases “Calories Out”)

Due to the high thermic effect and several other factors, a high protein intake tends to boost metabolism. It makes you burn more calories around the clock, including during sleep. A high protein intake has been shown to boost metabolism and increase the number of calories burned by about 80 to 100 per day. This effect is particularly pronounced during overfeeding, or while eating at a caloric surplus. In one study, overfeeding with a high protein diet increased calories burned by 260 per day. By making you burn more calories, high protein diets have a “metabolic advantage” over diets that are lower in protein. A high protein intake can make you burn 80-100 more calories per day, with one study showing an increase of 260 calories during overfeeding.

Protein Reduces Appetite and Makes You Eat Fewer Calories

Protein can reduce hunger and appetite via several different mechanisms. This can lead to an automatic reduction in calorie intake. In other words, you end up eating fewer calories without having to count calories or consciously control portions. Numerous studies have shown that when people increase their protein intake, they start eating fewer calories. This works on a meal-to-meal basis, as well as a sustained day-to-day reduction in calorie intake as long as protein intake is kept high. In one study, protein at 30% of calories caused people to automatically drop their calorie intake by 441 calories per day, which is a huge amount. So, high protein diets not only have a metabolic advantage – they also have an “appetite advantage,” making it much easier to cut calories compared to lower protein diets. High-protein diets are highly satiating, so they lead to reduced hunger and appetite compared to lower protein diets. This makes it much easier to restrict calories on a high-protein diet.

Protein Cuts Cravings and Reduces Desire for Late-Night Snacking

Cravings are the dieter's worst enemy.
They are one of the biggest reasons why people tend to fail on their diets. Another major problem is late-night snacking. Many people who have a tendency to gain weight get cravings at night, so they snack in the evening. These calories are added on top of all the calories they ate during the day. Interestingly, protein can have a powerful effect on both cravings and the desire to snack at night. One study compared a high-protein diet and a normal-protein diet in overweight men. In this study, protein at 25% of calories reduced cravings by 60% and cut the desire for late-night snacking by half! Breakfast may be the most important meal to load up on protein. In one study in teenage girls, a high-protein breakfast significantly reduced cravings. Eating more protein can lead to major reductions in cravings and the desire to snack late at night. These changes should make it much easier to stick to a healthy diet.

Protein Makes You Lose Weight, Even Without Conscious Calorie Restriction

Protein works on both sides of the “calories in vs calories out” equation. It reduces calories in and boosts calories out. For this reason, it is not surprising that high-protein diets lead to weight loss, even without intentionally restricting calories, portions, fat or carbs. In one study of 19 overweight individuals, increasing protein intake to 30% of calories caused a massive drop in calorie intake. The participants lost an average of 11 pounds over a period of 12 weeks. Keep in mind that they only added protein to their diet; they did not intentionally restrict anything. Although the results aren't always this dramatic, the majority of studies do show that high-protein diets lead to significant weight loss. A higher protein intake is also associated with less belly fat, the harmful fat that builds up around the organs and causes disease.
All that being said, losing weight is not the most important factor. It is keeping it off in the long term that really counts. Many people can go on “a diet” and lose weight, but most end up gaining the weight back (28). Interestingly, a higher protein intake can also help prevent weight regain. In one study, a modest increase in protein intake (from 15 to 18% of calories) reduced weight regain after weight loss by 50%. So not only can protein help you lose weight, it can also help you keep it off in the long term. Eating a high-protein diet can cause weight loss, even without calorie counting, portion control or carb restriction. A modest increase in protein intake can also help prevent weight regain.

Protein Helps Prevent Muscle Loss and Metabolic Slowdown

Weight loss doesn't always equal fat loss. When you lose weight, muscle mass tends to be reduced as well. However, what you really want to lose is body fat, both subcutaneous fat (under the skin) and visceral fat (around organs). Losing muscle is a side effect of weight loss that most people don't want. Another side effect of losing weight is that the metabolic rate tends to decrease. In other words, you end up burning fewer calories than you did before you lost the weight. This is often referred to as “starvation mode,” and can amount to several hundred fewer calories burned each day. Eating plenty of protein can reduce muscle loss, which should help keep your metabolic rate higher as you lose body fat. Strength training is another major factor that can reduce muscle loss and metabolic slowdown when losing weight. For this reason, a high protein intake and heavy strength training are two incredibly important components of an effective fat loss plan. Not only do they help keep your metabolism high, they also make sure that what is underneath the fat actually looks good. Without protein and strength training, you may end up looking “skinny-fat” instead of fit and lean.
Eating plenty of protein can help prevent muscle loss when you lose weight. It can also help keep your metabolic rate high, especially when combined with heavy strength training.

How Much Protein is Optimal?

The DRI (Dietary Reference Intake) for protein is only 46 and 56 grams for the average woman and man, respectively. This amount may be enough to prevent deficiency, but it is far from optimal if you are trying to lose weight (or gain muscle). Most of the studies on protein and weight loss expressed protein intake as a percentage of calories. According to these studies, aiming for protein at 30% of calories seems to be very effective for weight loss. You can find the number of grams by multiplying your calorie intake by 0.075. For example, on a 2000 calorie diet you would eat 2000 * 0.075 = 150 grams of protein. You can also aim for a certain number based on your weight. For example, aiming for 0.7-1 gram of protein per pound of lean mass is a common recommendation (1.5-2.2 grams per kilogram). It is best to spread your protein intake throughout the day by eating protein with every meal. Keep in mind that these numbers don't need to be exact; anything in the range of 25-35% of calories should be effective. More details in this article: How Much Protein Should You Eat Per Day? In order to lose weight, aiming for 25-35% of calories as protein may be optimal. 30% of calories amounts to 150 grams of protein on a 2000 calorie diet.

How to Get More Protein in Your Diet

Increasing your protein intake is simple. Just eat more protein-rich foods:

Meats: Chicken, turkey, lean beef, pork, etc.
Fish: Salmon, sardines, haddock, trout, etc.
Eggs: All types.
Dairy: Milk, cheese, yogurt, etc.
Legumes: Kidney beans, chickpeas, lentils, etc.

You can find a long list of healthy high-protein foods in this article. If you're eating low-carb, then you can choose fattier cuts of meat. If you're not on a low-carb diet, then try to emphasize lean meats as much as possible.
This makes it easier to keep protein high without getting too many calories. Taking a protein supplement can also be a good idea if you struggle to reach your protein goals. Whey protein powder has been shown to have numerous benefits, including increased weight loss.

Even though eating more protein is simple when you think about it, actually integrating this into your life and nutrition plan can be difficult. I recommend that you use a calorie/nutrition tracker in the beginning. Weigh and measure everything you eat in order to make sure that you are hitting your protein targets. You don't need to do this forever, but it is very important in the beginning until you get a good idea of what a high-protein diet looks like.

There are many high-protein foods you can eat to boost your protein intake. It is recommended to use a nutrition tracker in the beginning to make sure that you are getting enough.

Protein is The Easiest, Simplest and Most Delicious Way to Lose Weight

When it comes to fat loss and a better looking body, protein is the king of nutrients. You don't need to restrict anything to benefit from a higher protein intake. It is all about adding to your diet. This is particularly appealing because most high-protein foods also taste really good. Eating more of them is easy and satisfying.

A high-protein diet can also be an effective obesity prevention strategy, not something that you just use temporarily to lose fat. By permanently increasing your protein intake, you tip the "calories in vs calories out" balance in your favor. Over months, years or decades, the difference in your waistline could be huge.

However, keep in mind that calories still count. Protein can reduce hunger and boost metabolism, but you won't lose weight if you don't eat fewer calories than you burn. It is definitely possible to overeat and negate the calorie deficit caused by the higher protein intake, especially if you eat a lot of junk food.
For this reason, you should still base your diet mostly on whole, single ingredient foods. Although this article focused only on weight loss, protein also has numerous other benefits for health.
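The protein targets described above (30% of calories, i.e. calories multiplied by 0.075, or 0.7-1 gram per pound of lean mass) are easy to compute. A minimal sketch in Python; the 4-kcal-per-gram figure for protein is standard, and the function names and default 0.85 g/lb midpoint are my own choices, not the article's:

```python
def protein_grams_from_calories(daily_calories, protein_pct=0.30):
    # Protein has ~4 kcal per gram, so grams = calories * pct / 4.
    # At 30% of calories this is the article's "calories * 0.075" rule.
    return daily_calories * protein_pct / 4

def protein_grams_from_lean_mass(lean_mass_lb, g_per_lb=0.85):
    # Common recommendation: 0.7-1 g per pound of lean mass
    # (0.85 is just a midpoint default for illustration).
    return lean_mass_lb * g_per_lb

print(protein_grams_from_calories(2000))   # 150.0 g on a 2000-calorie diet
print(protein_grams_from_lean_mass(160))   # 136.0 g for 160 lb of lean mass
```

Anything that lands in the 25-35%-of-calories band is fine, so treat these numbers as targets to aim near rather than hit exactly.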
THIS IS HUGE. From a field of 450 works by 210 artists from 14 countries, curated by five historians* exploring 16th-21st century art of the Afro-Atlantic territories, Conversation by Jamaica's Barrington Watson (1931-2016) was chosen for the catalogue cover and promotional imagery. Prepared over three years of research, the exhibition Histórias Afro-Atlânticas [Afro-Atlantic Stories], an unprecedented collaborative initiative, shows --- in two Brazilian venues: the São Paulo Art Museum Assis Chateaubriand (MASP) and the Tomie Ohtake Institute --- the impact of African cultures in Atlantic territories from the South of the United States, through the Caribbean, to South America, and draws parallels, frictions and dialogues around their art. Beauford Delaney, Dark Rapture (James Baldwin), 1941, pictured left alongside Osmond Watson, Johnny Cool, 1967. Sixteen of the exhibition's 450 works come from nine Jamaican artists and include:
- Albert Huie, Noon Time, 1943
- Barrington Watson, Conversation, 1981
- David Miller Senior, Obi, c1940
- Edna Manley, The Prophet, 1935
- Isaac Mendes Belisario, Cocoa Walks, c1840
- John Woods, Fisherman, 1943
- Mallica 'Kapo' Reynolds, Revivalists, 1969
- Osmond Watson, Johnny Cool, 1967
- Ram Geet, Untitled, n.d.

What it is. The Black Atlantic --- a term coined by Paul Gilroy --- "is a geography lacking precise borders, a fluid field where African experiences invade and occupy other nations, territories and cultures" and a "...culture that is not specifically African, American, Caribbean, or British, but all of these at once, a black Atlantic culture whose themes and techniques transcend ethnicity and nationality to produce something new and, until now, unremarked."

What it means.
It reminds us that art from the Global South need not be judged in the scheme of Eurocentric paradigms and exclusive art systems but, when considered by our own academics, historians, collectors, curators and critics, can be assessed based on our own powerful individual and collective cultural legacies, unveiling and forever changing the expectations and positioning of Black artists. "The large art galleries and galleries of museums in the world generally portray only white people, both on the canvases and in the authorship of the works. One of the objectives of the exhibition is to show that it is not a matter of the absence of black authors, of black characters, that these works have not entered into the collections of museums and great galleries," said curator Hélio Menezes (pictured below with O'Neil Lawrence, senior curator for the National Gallery of Jamaica, from which many of the works were borrowed). Read the complete interview with *curator-in-chief Adriano Pedrosa, who was assisted by Ayrson Heráclito, Hélio Menezes, Lilia Moritz Schwarcz, and Tomas Toledo. See and read more about the exhibition here.

RESULTS | More from the 25th Liguanea Lodge Art Auction staged last month. Bidding for Ken Abendana Spencer's Untitled started at $110,000 and it sold for $165,000. Richard Hall's Sunbeam started at $60,000 and sold for $100,000; his Awaiting Sale started at $55,000 and sold for $84,000. An untitled work by David Pottinger started at the highest figure of $220,000, but no bids were placed on that piece.
POP-UP | Art Gallery Decor, Kingston held a Pop-up Art Auction on Thursday 12th July 2018

QUANTUM & INDICIA | ex FIDA BONA

POLICY | Key takeaways for Jamaica's cultural and creative industries from Minister Olivia Grange's 2018/19 sectoral presentation in Parliament, From Branded to Branding for Sustainable Prosperity: Brand Jamaica on the Rise (read in full here), are:
- plans to operationalize the National Cultural and Creative Industries Council as an over-arching, inter-sectoral, one-stop shop for targeted intervention in the sector, including administering the JAD26M fund for the Jamaica Creative 100 Programme to support short- to medium-term small business entrepreneurial projects that create new products for the global marketplace or enter new markets;
- plans to establish a digital distribution and promotion platform for Jamaican music, video and fashion;
- plans to develop a Kingston Creative Media Village for increased visibility and accessibility of creative practitioners;
- plans to form a Creative Skills Council; and
- plans to create a Culture and Creative Industries Fund for Jamaica.

A detail from Caryatid, 2018 by daughter of the Rock, Kathy Stanley, who was Portland, Oregon's Karuna Contemplative exhibiting artist for July 2018. Stanley is a visionary artist whose work explores the sacred feminine, earth and Gaian spirituality, mythic images, goddesses and transformation. Stanley tells a powerful story of her coming to art and the archetypal images of the sacred feminine which emerged.

Maia Chung's work Miss Jamaica Pain was chosen to represent Jamaica in the Inter-American Development Bank (IDB) Art Collection's Sidewalk of the Americas temporary installations in Argentina and at the IDB Washington, DC headquarters. The idea is to bring the project to IDB member countries to help link the development work of the Bank with the role of creatives in the pursuit of knowledge and innovation.

It's here. It's fabulous. Get used to it.
| Starbucks opened its first store in Kingston. The new store's design showcases bespoke artwork from locally based Irish artist Fiona Godfrey. The foreground of the mural tells the story of its people, whilst the background features the expansive Blue Mountains, reminding us of the precious coffee that grows there. (Courtesy of Starbucks partners.)

For their exhibition Daylight Come… Picturing Dunkley's Jamaica (May 27 – July 29 2018) the National Gallery of Jamaica introduced its first e-catalogue. While not as extensive as their print catalogues, e-catalogues will be created for select exhibitions and will provide notable insight and information on their respective exhibitions, while being easily accessible to the general public. Click here to view.

NEWS MEDIA | continues to push below-the-radar local art. Television Jamaica's Smile Jamaica, and in particular host Simone Clarke-Cooper, delivers painstaking interviews with local artists; the Gleaner carried stories about Romaine McNeil among others; LOOP news featured the Trench Town Ceramics & Art Centre and a piece on Alicia Thomas; Pan Media's Art Events includes lengthy social media pieces on Jamaican art history; and the Jamaica Information Service features iconic Jamaican works of art in their cultural updates.

artMart Jamaica is an online platform for browsing, buying and delivering Jamaican art. Says founder and entrepreneur David Hall: "So many people want to buy local art but it isn't always easy to find across the island." Works by Alexander Cooper, Aubrey Williams, Erwin de Vries and Lloyd Van Pitterson appear on the site. Remember to always request certification.

Nanny of the Maroons. One of seven busts of Jamaica's National Heroes crafted by sculptor, the Hon. Basil Watson, for the Journey to Freedom corridor at Emancipation Park in Kingston.
Part of the Rotary Club of Kingston's special Jamaica 55 Legacy Project for the 2017/18 Rotary Year, the entire project cost $25 million, due in part to Watson's waiving of 50% of his fees as a contribution to telling the emancipation story.

ZEMIS FOUND | Minister of Culture, the Hon. Olivia Grange announced a significant archaeological find at White Marl, St Catherine of four "priceless" zemis --- religious objects carved by the Taínos to contact spiritual beings who could perform deeds on their behalf --- by a collaborative excavation team which included the Jamaica National Heritage Trust, Leiden University of the Netherlands, and the Department of History and Archaeology at UWI Mona. Research indicates that the area was occupied for more than 600 years (between 900 and 1500 AD) by the first Jamaicans — the Taínos. The Minister also announced work already begun to repatriate treasures that belong to Jamaica which are in foreign countries. The zemi figures (pictured left) were found in 1792 in Manchester by a British surveyor. Their subsequent provenance remains obscure before their acquisition and/or registration by the British Museum in 1977. No copyright infringement is intended.

Ebony G Patterson is one of 25 artists participating in the inaugural exhibition curated by Dan Cameron in Swope Park, Kansas City, MO, called Open Spaces. The project asks artists to make a new work for the public 8,000-acre park, for which Ebony will fill a defunct public pool with bouquets, wreaths, toys, candy, loose flowers, and personal effects. Around the pool will sit four gold benches to recognize the space for the neighborhood and community to meet, relax, pause, and bear witness to the site and its history. This project will only be funded if it reaches its goal by Thursday, August 9 2018 3:02 PM CDT. As of this writing, the project has achieved 80% of its financing. Click here to help reclaim this space for the park and the people who use it.
The Davidoff Art Initiative (DAI) announced that textile and fiber artist Katrina Coombs and digital animator Oneika Russell will be the fall 2018 residents for FLORA ars+natura and Residency Unlimited in Brooklyn, NY. Coombs intends to create a body of work interrogating notions of belonging and nesting interests, while Russell will explore how exotic places and people are an expression of Western desire.

The Brooklyn nonprofit arts organization BRIC has appointed Kristina Newman-Scott, the director of culture for the State of Connecticut, as its new president. The appointment at BRIC makes Newman-Scott one of the very few women of color to lead a major New York cultural institution. BRIC (which stands for Brooklyn Information & Culture) is a nonprofit arts and media organization located in Brooklyn, New York City, founded in 1979.

Leasho Johnson leaves Jamaica for Chicago, Illinois to pursue a two-year Master of Arts in Painting and Drawing at the School of the Art Institute of Chicago (SAIC). SAIC is one of America's largest accredited independent schools of art and design, is recognized as one of the top graduate art programs in the nation, and is regarded as the most influential art school in the United States. SAIC's notable alumni include Richmond Barthé, Jeff Koons and Georgia O'Keeffe.

Dr. Janice Lindsay (1974-2018) died at the University Hospital of the West Indies on Friday, July 6 after a brief illness. She was the Principal Director, Culture and Creative Industries Policy Division, in the Ministry of Culture, Gender, Entertainment and Sport, and her expertise in heritage tourism played a large part in the Ministry's accomplishments in the portfolio area of culture, nationally and internationally.

If you wish to add resources to this site, or if you own the copyright for any of the material on this website and do not consent to its use herein, please contact us for guidelines &/or material take down.
All site content is prepared using publicly available, "as-is" information with or without examining the actual works. artephemera®com has no vested interest in any art assets that appear herein. One Twickenham Park | POB 703 Spanish Town | Jamaica +1 (876)978-4718
VoIP, or Voice over Internet Protocol, is simply the process of making and receiving telephone calls via the internet, as opposed to using a standard fixed-line service. When we talk about fixed-line services we are usually talking about either ISDN-based (Integrated Services Digital Network) services, often referred to as a digital telephone service; or POTS-based (Plain Old Telephone Service) services, often referred to as an analogue telephone service.

Most VoIP services rely on a communications protocol called SIP (Session Initiation Protocol). VoIP providers will typically promote a product offering known as a “SIP Trunk” for businesses. A “SIP Trunk” will usually mention a number of lines as part of the package, and this is no different to having multiple analogue or digital lines in your existing phone system. For example: if you have a SIP Trunk with 10 lines, you can have 10 concurrent calls (inbound and/or outbound) at the same time. This would be the equivalent of having an ISDN-10 service, or 10x analogue (POTS) services.

Some VoIP providers will also allow you to attach a larger number of Direct-In-Dial (DID) services to the lines. A DID is a telephone number. You could have 2x lines and 4x DIDs, or 10x lines and a 100-number DID range. In either case you could assign DIDs to each telephone extension or to groups of extensions. You could also use the extra DIDs for marketing, so you can accurately count the number of calls that a particular marketing campaign produces.

I should point out that there is often confusion regarding the difference between VoIP and IP-based phone systems (PBXs), and the supposed need to have a new IP-based PBX to use VoIP. Put simply, when we refer to VoIP we’re referring to the communications medium or “trunk” your phone system uses when it makes the call. In this case we are using the internet to make the call, as opposed to a fixed-line service in your office.
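The lines-versus-DIDs distinction above can be modelled in a few lines: lines cap how many calls can happen at once, while DIDs are just numbers routed onto the trunk. A hypothetical sketch (the class, method names and example numbers are mine, not any provider's API):

```python
class SipTrunk:
    def __init__(self, lines, dids):
        self.lines = lines          # number of lines = max concurrent calls
        self.dids = set(dids)       # phone numbers attached to the trunk
        self.active_calls = 0

    def place_call(self):
        # Any call, inbound or outbound, occupies one line for its duration.
        if self.active_calls >= self.lines:
            return False            # all lines busy
        self.active_calls += 1
        return True

    def end_call(self):
        if self.active_calls > 0:
            self.active_calls -= 1

# 2x lines and 4x DIDs: four reachable numbers, but only two simultaneous calls.
trunk = SipTrunk(2, ["5500 0001", "5500 0002", "5500 0003", "5500 0004"])
print([trunk.place_call() for _ in range(3)])  # [True, True, False]
```

The same model explains the marketing trick mentioned above: each campaign gets its own DID, and you count calls per number without needing any extra lines.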
When we refer to IP-PBXs and handsets, we’re referring to the way in which the handsets communicate with the PBX. In this case we’re using a data network to communicate, where the PBX and the handsets plug into a network switch, as opposed to having cables coming out of the PBX which are then connected directly (via wall socket) to the handset, as is the case with a “digital PBX”. It is possible to have a phone system with digital handsets that makes and receives calls via a VoIP service. Some digital phone systems are able to be upgraded to support SIP trunks.

There are other ways to make older phone systems support VoIP, including devices called “SIP Gateways”. These devices provide an ISDN or POTS socket which is converted to a network connection. Whilst these do work, they add an extra point of failure that needs to be maintained, and so for the most part I would recommend avoiding them. It is also possible to have an IP-PBX make and receive calls via an ISDN or POTS connection. In this case the IP-PBX would have sockets for ISDN or POTS, or be connected via a “Gateway” device that converts one medium (IP) to another (ISDN or POTS).

The reason I’m telling you all this is to highlight that irrespective of where you are in the lifecycle of your phone system or your telephone contracts, you may be able to implement VoIP services or IP-based PBXs in your business NOW, which you can expand on in the future. Alternatively, if your phone system was produced back when Telstra was called Telecom, you might want to consider an upgrade so you can take advantage of some of the features that VoIP and IP-based telephone systems could offer your business.

So why might you want to start using VoIP for your business? The biggest driver for the adoption of VoIP is call cost savings.
Most VoIP providers will offer local and nation-wide untimed calls for about 10c/call, calls to mobiles at around 19c/minute and international calls from 1.9c/minute – which is usually a significant saving over standard fixed-line call costs. There are also savings to be made on inbound calls if you offer a 1300 or 1800 service. As you’re aware, if you offer a 13 or 18 number service, you pay for the incoming call. If you’re paying standard line costs per minute this can add up pretty quickly! However, if you use a VoIP-based service for your inbound calls you can benefit from the reduced call rates as well. Many of the telecommunications providers have attacked the market with aggressive pricing on fixed-line (PSTN and ISDN) services to compete with VoIP services, so it doesn’t hurt to shop around and compare what your call costs would be on a VoIP service versus one of the new fixed-line service offerings.

Another reason you might consider VoIP is for redundancy. If a telephone line is cut or accidentally disconnected, your only option is to get the provider to redirect the call – if you’ve even noticed! With VoIP, most providers have built-in features where they can automatically redirect calls to another number (for example, your mobile number) in the event that the connection between your phone system and the provider goes down. If you want additional redundancy you can use a 3G or 4G based mobile service as a “backup internet link” for your phone system. In fact, when we first moved into our new office in 2012 I had our entire office running on a Telstra 3G service, including our phone system, whilst we waited for the big “T” to install our lines!

You may have also heard about “hosted VoIP PBX” services, whereby all the brains of the phone system are provided via the internet, meaning you just need to have handsets in your office. No need to have a box on the wall anymore!
A hosted PBX offers even more flexibility, allowing you to have handsets connected to your phone system wherever there is an internet service. This could mean you have a phone in your home office which is part of the same phone system as the business one. Or you could have multiple offices all sharing the one phone system.

VoIP is not without its faults, however. No doubt you’ve heard friends, colleagues and sales reps tell you horror stories about VoIP. The biggest factor in the success or failure of a VoIP service usually comes down to the internet connection being used to deliver the service. There are a number of factors that contribute to this, from the actual “speed” of the service, to the quality of the service, to whether your VoIP lines share the internet service with your office. An easy way to test your internet speed is the free website www.speedtest.net.

As a general rule, each VoIP “line” uses around 100 kbps (kilobits per second) of bandwidth in both directions (sending and receiving). If you are putting in a 10-line system, you would need to ensure you have an internet service that can provide (at least) 1000 kbps (1 mbps) of bandwidth in each direction. If you’re using a cheap internet service, expect sub-par results. Cheaper service providers oversell access to their network, meaning there’s no guarantee of the performance you’ll get. If it’s available, get a service from the VoIP provider. If you get an internet service from the VoIP provider, your data will go directly from your network into theirs, with no interference from outside sources.

The single most common cause of poor call quality I hear of is the VoIP service sharing the same internet connection that the rest of the office uses. I’ve lost track of the number of people who’ve told me how they can implement Quality of Service (QoS) on the router to overcome this, but the truth is that once the data leaves your router all that is ignored!
The reason for this is that when the data leaves your premises and travels onto the internet, you have no control over how it will flow. It’s that simple. The only sure-fire way of ensuring excellent results with VoIP is to put in a second internet service that’s dedicated to VoIP.

Who should you trust with your business? We’ve had excellent success with MyNetFone’s VoIP services over the last 8 years and whilst I highly recommend checking their offerings out, a little birdie over at OntheNet has advised me that they are introducing business VoIP services, and I absolutely love doing business with OntheNet, so I would definitely reach out to them and see what they can offer your business.

Word of warning – watch out for ridiculously “cheap” services. I’ve seen a rush of offers from telephone salespeople lately trying to cash in on this market with cheap services. For almost all of us, having reliable telephones is an utmost necessity in our businesses – so don’t believe everything the sales person tells you (most don’t understand the terminology they use anyway!). Ask around, speak to an IT consultant or network expert, and get references from other businesses who have been using the services for a considerable time (6 months or more!). What’s the quality of the calls like? Do they get drop-outs? If they have problems, how do they find the support team? The last thing you want is to be left high and dry when things go wrong!

Have a technology-related question? Either post in the comments box below or drop me a line.
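One last footnote on the sizing rule above: the ~100 kbps-per-line figure makes it easy to sanity-check a link before committing to VoIP. A rough sketch in Python; the 100 kbps constant comes from the article, while the 25% headroom factor for signalling and bursts is my own assumption:

```python
KBPS_PER_LINE = 100  # rough per-call bandwidth, each direction, per the rule of thumb

def required_bandwidth_kbps(voip_lines, headroom=1.25):
    # Size the link for every line busy simultaneously, plus ~25% headroom
    # (the headroom figure is an assumption, not from the article).
    return voip_lines * KBPS_PER_LINE * headroom

# A 10-line system needs at least 1000 kbps (1 mbps) each way before headroom.
print(required_bandwidth_kbps(10, headroom=1.0))  # 1000.0
print(required_bandwidth_kbps(10))                # 1250.0
```

Remember this applies to upload as well as download – consumer links are often asymmetric, and it is usually the upload figure that falls short.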
Christian vegetarianism is a Christian practice based on effecting the compassionate teachings of Jesus, the twelve apostles and the early church to all sentient or living beings through vegetarianism or, ideally, veganism. Alternatively, Christians may be vegetarian for ethical, environmental, nutritional or other spiritual reasons. - Alphabetized by author or source - Vegetarianism is a way of life that we should all move toward for economic survival, physical well-being and spiritual integrity. - While we resist violence, injustice, and war, and while we practice nonviolence, seek peace, and struggle for justice for the poor, we are also invited to break down the species barrier, extending our belief in Christian compassion to the animal kingdom by, among other things, adopting a vegetarian diet. … Vegetarianism proves that we’re serious about our belief in compassion and justice, that we’re mindful of our commitment, day in and day out, every time we eat. We are reminded of our belief in mercy, and we remind others. We begin to live the nonviolent vision, right here and now. … Many Christians who agree that harming a dog or cat is wrong think nothing of harming cows, pigs, chickens, fish and other creatures. We need to understand that if we’re eating meat, we are paying people to be cruel to animals. For the simple reasons that all animals are creatures beloved by God and that God created them with a capacity for pain and suffering, we should adopt a vegetarian diet. - Father John Dear, Christianity and Vegetarianism: Pursuing the Nonviolence of Jesus (Norfolk, VA: People for the Ethical Treatment of Animals, 1990). - Our appetite for meat leads to widespread, horrific cruelty to animals—chickens pressed wing-to-wing into filthy sheds and debeaked, for example. 
And since I've always espoused creative nonviolence as the fundamental gospel value, my vegetarianism helps me not to participate in the vicious torture and destruction of billions of cows, chickens, and so many other creatures. These chickens never raise families, root in the soil, build nests, or do anything natural. … Animals have feelings, they suffer; they have needs and desires. They were created by God to breathe fresh air, raise their families, peck in the grass, or root in the soil. Today's farms don't let them do anything God designed them to do. Animal scientists attest that farm animals have personalities and interests, that chickens and pigs can be smarter than dogs and cats. I like that even Jesus identified himself as “a mother hen who longs to gather us under her wings.” - Vegans recognize the value of life to all living creatures and extend to them the compassion, kindness, and justice in The Golden Rule. Vegans see animals as free entities in nature, not slaves or vassals, nor as chattel, pieces of goods to be bought and sold. An animal has feelings, an animal has sensitivity, an animal has a place in life, and the vegan respects this life that is manifest in the animal. Vegans do not wish to harm the animal any more than they would want the animal to harm them. This is an example of The Golden Rule precisely as it should be applied. - I believe my dear Master has been pleased to try my faith and obedience, by teaching me that I ought no longer to partake of any thing that had life. - Therefore, in the light of the Truth that God is love, and that Jesus came to make his love manifest in the world, we cannot believe it is his will for men to eat meat, or to do anything else that would cause suffering to the innocent and helpless. - Jesus' message is about love and compassion, but there is nothing loving or compassionate at factory farms and slaughterhouses, where billions of animals endure miserable lives and die violent deaths. 
Jesus mandates kindness and mercy for all God's creatures. He'd be appalled by the suffering that we inflict on animals today to indulge our acquired taste for their flesh. Catholics, and all Christians, have a choice. When we sit down to eat, we can add to the violence, misery and death in the world, or we can respect God's creatures with a vegetarian diet. I believe we're obligated to make choices that are as merciful as possible, and we can all do that at the dinner table with a vegetarian diet. There won't be any factory farms and slaughterhouses in heaven. - Esaias says: “The wolf also shall feed with the lamb, and the leopard shall take his rest with the kid; the calf also, and the bull, and the lion shall eat together; and a little boy shall lead them. …” I am quite aware that some persons endeavour to refer these words to the case of savage men, both of different nations and various habits, who come to believe, and when they have believed, act in harmony with the righteous. But although this is [true] now with regard to some men coming from various nations to the harmony of the faith, nevertheless in the resurrection of the just [the words shall also apply] to those animals mentioned. For God is rich in all things. And it is right that when the creation is restored, all the animals should … revert to the food originally given by God … that is, the productions of the earth. - Just as divorce according to the Saviour's word was not permitted from the beginning, but on account of the hardness of our heart was a concession of Moses to the human race, so too the eating of flesh was unknown until the deluge. But after the deluge, like the quails given in the desert to the murmuring people, the poison of flesh-meat was offered to our teeth. … At the beginning of the human race we neither ate flesh, nor gave bills of divorce, nor suffered circumcision for a sign. Thus we reached the deluge. 
But after the deluge, together with the giving of the law which no one could fulfil, flesh was given for food, and divorce was allowed to hard-hearted men, and the knife of circumcision was applied, as though the hand of God had fashioned us with something superfluous. But once Christ has come in the end of time, and Omega passed into Alpha and turned the end into the beginning, we are no longer allowed divorce, nor are we circumcised, nor do we eat flesh. - Veganism has given me a higher level of awareness and spirituality, primarily because the energy associated with eating has shifted to other areas. … If you're violent to yourself by putting [harmful] things into your body that violate its spirit, it will be difficult not to perpetuate that [violence] onto someone else. - The biblical case for vegetarianism does not rest on the view that killing may never be allowable in the eyes of God, rather on the view that killing is always a grave matter. When we have to kill to live we may do so, but when we do not, we should live otherwise. It is vital to appreciate the force of this argument. In past ages many – including undoubtedly the biblical writers themselves – have thought that killing for food was essential in order to live. But … we now know that – at least for those now living in the rich West – it is perfectly possible to sustain a healthy diet without any recourse to flesh products. … Those individuals who opt for vegetarianism can do so in the knowledge that they are living closer to the biblical ideal of peaceableness than their carnivorous contemporaries. The point should not be minimized. In many ways it is difficult to know how we can live more peaceably in a world striven by violence and greed and consumerism. Individuals often feel powerless in the face of great social forces beyond even democratic control. To opt for a vegetarian life-style is to take one practical step towards living in peace with the rest of creation. 
One step towards reducing the rate of institutionalized killing in the world today. - In early times of Christianity, even those who used animal food themselves came to think of the vegetarian as one who lived a higher life, and approached more nearly to Christian perfection. - A man can live and be healthy without killing animals for food; therefore, if he eats meat, he participates in taking animal life merely for the sake of his appetite. And to act so is immoral. - Leo Tolstoy, Writings on Civil Disobedience and Nonviolence (1886). - Men think it right to eat animals, because they are led to believe that God sanctions it. This is untrue. No matter in what books it may be written that it is not sinful to slay animals and to eat them, it is more clearly written in the heart of man than in any books that animals are to be pitied and should not be slain any more than human beings. We all know this if we do not choke the voice of our conscience. - Leo Tolstoy, The Pathway of Life: Teaching Love and Wisdom Vol 1 (1919), p. 68. - A mystery enwrapped Pythagoras, the preacher of vegetarianism … Silent fellowships were founded, remote from turmoil of the world, to carry out this doctrine as a sanctification from sin and misery. Among the poorest and most distant from the world appeared the Saviour, no more to teach redemption's path by precept, but example; his own flesh and blood he gave as last and highest expiation for all the sin of outpoured blood and slaughtered flesh, and offered his disciples wine and bread for each day's meal:—"Taste such alone, in memory of me." … Perhaps the one impossibility, of getting all professors to continually observe this ordinance of the Redeemer's, and abstain entirely from animal food, may be taken for the essential cause of the early decay of the Christian religion as Christian Church. But to admit that impossibility, is as much as to confess the uncontrollable downfall of the human race itself. 
- Those who eat flesh are but eating grains and vegetables at second hand; for the animal receives from these things the nutrition that produces growth. The life that was in the grains and the vegetables passes into the eater. We receive it by eating the flesh of the animal. How much better to get it direct by eating the food that God provided for our use!
- Ellen G. White, The Ministry of Healing (1942), p. 313.
- The effects of a flesh diet may not be immediately realized; but this is no evidence that it is not harmful. Few can be made to believe that it is the meat they have eaten which has poisoned their blood and caused their suffering. Many die of diseases wholly due to meat eating, while the real cause is not suspected by themselves or by others. The moral evils of a flesh diet are not less marked than are the physical ills. Flesh food is injurious to health, and whatever affects the body has a corresponding effect on the mind and the soul. Think of the cruelty to animals that meat eating involves, and its effect on those who inflict and those who behold it. How it destroys the tenderness with which we should regard these creatures of God!
- Ellen G. White, The Ministry of Healing (1942), p. 315.
- God gave our first parents the food he designed that the race should eat. It was contrary to his plan to have the life of any creature taken. There was to be no death in Eden. The fruit of the trees in the garden, was the food man's wants required.
- Ellen G. White, Spiritual Gifts Vol 4 (1945), p. 120.
https://en.wikiquote.org/wiki/Christian_vegetarianism
Trogonidae – Trogons & Quetzals
The Trogonidae, or trogons and quetzals, are birds in the order Trogoniformes, which contains only one family. The family contains 43 species in seven genera. The fossil record of the trogons dates back 49 million years to the Early Eocene. They might constitute a member of the basal radiation of the order Coraciiformes or be closely related to mousebirds and owls. The word 'trogon' is Greek for 'nibbling' and refers to the fact that they gnaw holes in trees to make their nests. They are residents of tropical forests worldwide. The greatest diversity is in the Neotropics, where four genera, containing 28 species, occur. The genus Apaloderma contains the three African species. The genera Harpactes and Apalharpactes, containing twelve species, are found in southeast Asia. They feed on insects and fruit, and their broad bills and weak legs reflect their diet and arboreal habits. Although their flight is fast, they are reluctant to fly any distance. Trogons are generally not migratory, although some species undertake partial local movements. They have soft, often colourful, feathers with distinctive male and female plumage. They are the only type of animal with a heterodactyl toe arrangement. They nest in holes dug into trees or termite nests, laying 2–4 white or pastel-coloured eggs.
According to the IOC there are 43 species in this family, which are:
Eared Quetzal Euptilotis neoxenus
Pavonine Quetzal Pharomachrus pavoninus
Golden-headed Quetzal Pharomachrus auriceps
White-tipped Quetzal Pharomachrus fulgidus
Resplendent Quetzal Pharomachrus mocinno
Crested Quetzal Pharomachrus antisianus
Cuban Trogon Priotelus temnurus
Hispaniolan Trogon Priotelus roseigaster
Lattice-tailed Trogon Trogon clathratus
Slaty-tailed Trogon Trogon massena
Choco Trogon Trogon comptus
Ecuadorian Trogon Trogon mesurus
Black-tailed Trogon Trogon melanurus
Black-headed Trogon Trogon melanocephalus
Citreoline Trogon Trogon citreolus
White-tailed Trogon Trogon chionurus
Baird's Trogon Trogon bairdii
Green-backed Trogon Trogon viridis
Gartered Trogon Trogon caligatus
Amazonian Trogon Trogon ramonianus
Guianan Trogon Trogon violaceus
Blue-crowned Trogon Trogon curucui
Surucua Trogon Trogon surrucura
Black-throated Trogon Trogon rufus
Elegant Trogon Trogon elegans
Mountain Trogon Trogon mexicanus
Collared Trogon Trogon collaris
Masked Trogon Trogon personatus
Narina Trogon Apaloderma narina
Bare-cheeked Trogon Apaloderma aequatoriale
Bar-tailed Trogon Apaloderma vittatum
Javan Trogon Apalharpactes reinwardtii
Sumatran Trogon Apalharpactes mackloti
Malabar Trogon Harpactes fasciatus
Red-naped Trogon Harpactes kasumba
Diard's Trogon Harpactes diardii
Philippine Trogon Harpactes ardens
Whitehead's Trogon Harpactes whiteheadi
Cinnamon-rumped Trogon Harpactes orrhophaeus
Scarlet-rumped Trogon Harpactes duvaucelii
Orange-breasted Trogon Harpactes oreskios
Red-headed Trogon Harpactes erythrocephalus
Ward's Trogon Harpactes wardi
Elegant Trogon Trogon elegans – BirdLife Species Account
Elegant Trogon Trogon elegans – Species Account – Sound archive and distribution map.
Elegant Trogon Trogon elegans – Species Account – The elegant trogon (Trogon elegans), formerly the "coppery-tailed" trogon, is a near passerine bird in the trogon family.
Along with the eared quetzal, it is the most poleward-occurring species of trogon in the world, ranging from Guatemala in the south as far north as the upper Gila River in Arizona and New Mexico.
Elegant Trogon Trogon elegans – Cornell Species Account – Many kinds of trogons live in tropical forests, but only one species regularly occurs in North America. Easily recognized by their metallic-green and rose-red colors, as well as their unusual stout-bodied, square-tailed profile, Elegant Trogons are a prized sighting for birders who visit southeastern Arizona.
Elegant Trogon Trogon elegans – HBW Species Account – Taxonomy: Trogon elegans Gould, 1834, Guatemala. Possibly closest to T. curucui, T. rufus, T. mexicanus, T. collaris and T. personatus; DNA studies suggest that T. rufus, T. collaris and T. personatus may be nearest relatives. Usually considered conspecific with T. ambiguus. Birds in El Salvador and Honduras intermediate between nominate and lubricus. Two subspecies recognized.
Guianan Trogon Trogon violaceus – BirdLife Species Account
Guianan Trogon Trogon violaceus – HBW Species Account – Taxonomy: Trogon violaceus J. F. Gmelin, 1788, no locality = Suriname.
Guianan Trogon Trogon violaceus – IUCN Species Status
Guianan Trogon Trogon violaceus – Species Account – Sound archive and distribution map.
Guianan Trogon Trogon violaceus – Species Account – The Guianan trogon (Trogon violaceus) is a near passerine bird in the trogon family, Trogonidae. It is found in humid forests in the Amazon basin of South America and on the island of Trinidad. Until recently, this species, the gartered trogon (T. caligatus) of Mexico, Central America, and northern South America, and the Amazonian trogon (T. ramonianus) of the western Amazon were all considered to be conspecific and collectively called violaceous trogon.
Guianan Trogon Trogon violaceus – Cornell Species Account – The Guianan Trogon was recently split from Violaceous Trogon along with Gartered Trogon and Amazonian Trogon and ranges from Venezuela, the Guianas, and northern Brazil to the island of Trinidad.
Masked Trogon Trogon personatus – Species Account – The masked trogon (Trogon personatus) is a species of bird in the family Trogonidae. It is fairly common in humid highland forests in South America, mainly the Andes and tepuis.
Masked Trogon Trogon personatus – Cornell Species Account – The Masked Trogon is a widespread species of humid montane forests in South America.
Masked Trogon Trogon personatus – Species Account – Sound archive and distribution map.
Masked Trogon Trogon personatus – HBW Species Account – Taxonomy: Trogon personata Gould, 1842, Choachí (1996 m), Colombia.
Masked Trogon Trogon personatus – BirdLife Species Account
Narina Trogon Apaloderma narina – BirdLife Species Account
Narina Trogon Apaloderma narina – HBW Species Account – Taxonomy: Trogon Narina Stephens, 1815, Knysna District, Western Cape Province, South Africa. Recent molecular data suggest that this species and A. aequatoriale are sister-taxa. Races arcanum and rufiventre sometimes synonymized with nominate. Six subspecies recognized.
Narina Trogon Apaloderma narina – Species Account – Sound archive and distribution map.
Narina Trogon Apaloderma narina – Species Account – The Narina trogon (Apaloderma narina) is a largely green and red, medium-sized (32–34 cm long) bird of the family Trogonidae. It is native to forests and woodlands of the Afrotropics. Though it is the most widespread and catholic in habitat choice of the three Apaloderma species, their numbers are locally depleted due to deforestation. Some populations are sedentary while others undertake regular movements. The species name commemorates Narina, mistress of French ornithologist François Levaillant, whose name he derived from a Khoikhoi word for "flower", as her given name was difficult to pronounce.
Resplendent Quetzal Pharomachrus mocinno – Cornell Species Account – Across time and cultures, the Resplendent Quetzal has been heralded for its great beauty. With an iridescent green sheen and uppertail covert feathers longer than its entire body, the bird has attracted much attention from pre-Columbian peoples, ornithologists, collectors, market hunters, and birders.
Resplendent Quetzal Pharomachrus mocinno – Species Account – The resplendent quetzal (pronunciation: /ˈkɛtsəl/) (Pharomachrus mocinno) is a bird in the trogon family. It is found from Chiapas, Mexico to western Panama (unlike the other quetzals of the genus Pharomachrus, which are found in South America and eastern Panama). It is well-known for its colorful plumage. There are two subspecies, P. m. mocinno and P. m. costaricensis.
Resplendent Quetzal Pharomachrus mocinno – IUCN Species Status
Resplendent Quetzal Pharomachrus mocinno – Species Account – Sound archive and distribution map.
Resplendent Quetzal Pharomachrus mocinno – HBW Species Account
Resplendent Quetzal Pharomachrus mocinno – BirdLife Species Account
White-tipped Quetzal Pharomachrus fulgidus – BirdLife Species Account
White-tipped Quetzal Pharomachrus fulgidus – HBW Species Account – Taxonomy: Trogon fulgidus Gould, 1838, Guiana? = northern Venezuela. Has been considered possibly to form a group with P. pavoninus and P. auriceps or with P. mocinno and P. antisianus. Two subspecies recognized.
White-tipped Quetzal Pharomachrus fulgidus – Species Account – Sound archive and distribution map.
White-tipped Quetzal Pharomachrus fulgidus – Species Account – The white-tipped quetzal (Pharomachrus fulgidus) is a species of bird in the family Trogonidae. It is found in Venezuela, Colombia, and Guyana. In Venezuela and Colombia, three separated ranges occur, all contiguous and on the northern coasts. Its natural habitat is subtropical or tropical moist montane forests.
White-tipped Quetzal Pharomachrus fulgidus – Cornell Species Account – The White-tipped Quetzal occurs in the Santa Marta mountains of northern Colombia and in the mountain ranges of northern Venezuela. Ranging from 900 to 2500 meters, it occurs in a wide variety of habitats from sub-tropical to temperate forests, cloud forests, secondary growth and forest edge.
Number of bird species: 43
Trogons: A Natural History of the Trogonidae – Joseph M. Forshaw | Illustrated by Albert Earl Gilbert | Hardcover | 2009 | 304pp | 75 colour illustrations
ISBN: 9788496553514
Buy this book from NHBS.com
https://fatbirder.com/ornithology/trogonidae-trogons-and-quetzals/
The Blue devil damselfish is a gorgeous and insanely popular saltwater fish. In fact, a few years back, it was the #2 most imported fish. They are small, beautifully colored and as inexpensive as saltwater fish get. It can be challenging, at times, to tell the difference between different species of damselfishes–and the blue devil damselfish can be particularly problematic, because the same species of fish can look quite different, depending on what part of the world they are from. As their name implies, the blue devil damselfish does have a dark side. We'll explore the good and the bad aspects of keeping Chrysiptera cyanea in this article.
Quick Facts About the Blue Devil Damselfish:
- Scientific Name: Chrysiptera cyanea
- Common Names: blue damselfish, South sea demoiselle, blue demoiselle
- Max Size: About 3 inches
- Minimum Tank Size: 20-30 gallons (singly), 55+ gallons for 2 or more
- Aggression Level: Aggressive
- Color: Blue, Yellow, Orange
- Care Level: Beginner
- Most Active: Day
- Lifespan: 2 To 6 Years
Natural habitat of the South sea demoiselle
Like many other beloved saltwater fishes, the blue devil damselfish comes from the reefs of the Indo-Pacific Ocean. Their preferred location is in shallow lagoons or reefs, usually near coral where they can quickly retreat if threatened.
Blue devil damselfish proper tank conditions & behavior
The colorful Blue devil damselfish is a hardy fish that adapts quite well to aquarium conditions. If a tank is suitable for other reef fishes or corals, it will likely be suitable for this fish. Due to their relatively small size as adults (about 3 inches), they don't require a large tank. Anything around 20-30 gallons should be sufficient for a single damselfish in a tank. This is a bold species that will be out-and-about in your tank, staying close to liverock or the substrate and zipping in and out of rocks for cover–and defending their territory from others that travel in.
Aggression in Chrysiptera cyanea
Although they are tiny in size, the blue devil damselfish is quite large…in aggression. While the fish is otherwise perfectly suited as a beginner fish, it is best to just keep looking if your goal is to have a peaceful community tank. If, on the other hand, your ideal tank setup has a small shoal of these aggressive electric-blue fishes, my advice is to use the same strategy I used for stocking aggressive freshwater cichlids, back in the day. The theory for stocking aggressive fishes goes like this–fish tend to establish a pecking order or dominance structure. If you have two aggressive fish, or one aggressive and one non-aggressive fish, chances are good that the more aggressive of the two will spend the majority of their time letting the less dominant fish know they are in charge. If you have one dominant and two submissive fish in the tank, the dominant fish will split their time harassing the other fishes. This math would theoretically continue until the aggressive fish is no longer able to keep it all straight. So while it may seem counter-intuitive, a group of 5-7 damselfishes will likely behave better than 1-2, because the aggression gets spread out among the group, which reduces the likelihood that a single fish gets attacked enough to cause serious injuries. That approach doesn't always work but is certainly something to consider if you are thinking about adding a blue devil damselfish or any other damselfish to your tank. If you are going to add a shoal of damsels, you will want to have a tank that is 55 gallons or larger in size. They are territorial, so make sure to place them in a large tank. A large tank is especially important if you plan to house other fish with them. For a single fish set-up, you can go with a minimum of 30 gallons. If you want a community tank, then you should go no lower than a 55-gallon tank, but a 100-gallon is safest.
In some cases, a mated pair of Blue Devils can live peacefully in a 30-gallon tank, but this is not recommended, as aggression can easily pop up later on. There was also some evidence, years back, that adding cleaner fish to a tank helps reduce the number of aggressive attacks.
Compatibility with the blue devil damselfish
Compatibility is one of the more challenging issues when planning for the blue devil damselfish. As mentioned earlier, this species is aggressive, which makes them difficult to add to a community tank. These fish will harass others that swim into their territory, making it a stressful living situation for the other fish. However, unlike other aggressive fish, like eels, puffers, lionfishes and triggerfishes, Chrysiptera cyanea is tastily bite-sized, making them an equally bad fit for those larger aggressive fishes. They are, technically, reef-safe, in that they won't bother your corals or other invertebrates. The sweet spot here is to house them with other small but aggressive fish species: certain dottybacks, other damselfishes, etc. Like males in my own family (myself included), it seems that the blue devil damselfishes also get even more cranky (territorial?) with age. So keep a close eye on them when they reach their golden years. Scott Michael recommends (in Marine Fishes) keeping a single male with a few females as the maximum size of a group. Aggression is more likely between two males in the tank. One final note about compatibility–recall that by nature, this fish is likely going to be relatively substrate attached, meaning it will make part of the rock structure in your tank its 'home base'. When choosing tankmates, you will want to avoid adding other substrate-attached fishes that will be seen as a threat (clownfishes, etc.), but may have better luck with open water swimmers or fishes that will occupy an unrelated niche in the tank.
Breeding, reproduction and sexing in the South sea demoiselle
While I spent most of the time here talking about the aggressive nature of the blue devil damselfish, they apparently do have a softer side, because they pair up and spawn in captivity relatively easily, by saltwater aquarium fish standards. Male and female blue devil damselfishes display different colorations. Males have yellow/orange tails and ventral fins, while females have less yellow (mostly blue) with transparent fins. However, you might have guessed that spawning devils have even more incentive to be aggressive. The greater challenge with breeding these fishes is raising the larvae, which can be quite small and therefore more challenging than some other larger larvae (like clownfish larvae) to feed and grow through metamorphosis. It may be worth all the aggression and drama to witness the courtship dance (swim) in your tank of a male enticing the female back to the chosen spawning location. The female will often go to see the male multiple times right before they mate. If the male accepts the female's interest, then he will perform a mating dance for her. Spawning itself is similar to other substrate spawners, like clownfishes, where the female lays eggs on the walls and ceiling of the structure, allowing time for the male to fertilize in between batches. The male guards the nest of eggs for about 4 days until they hatch. To learn more about breeding damselfish, I strongly recommend you pick up a copy of The Complete Illustrated Breeder's Guide to Marine Aquarium Fishes. Despite the Blue Devil Damselfish's aggressive nature, it is an omnivore, not exclusively a carnivore, and prefers to eat a mixture of food types. C. cyanea will happily eat anything from algae to fish eggs in the wild. The tank is no different, and you can feed them a mix of flakes and meaty foods. Frozen or shredded meat is a great choice to balance out their diet.
You will need to feed this species around two times a day and make sure they are eating well. Be aware that missed feedings can cause C. cyanea to become aggressive towards tank mates. If you are trying to get them ready to spawn, feed a high calorie, meaty diet with blackworms, mysis shrimp, brine shrimp, grated squid or shrimp.
Where To Buy
The Blue devil damselfish is the #2 most imported fish in the United States. So according to the law of large numbers, it is likely available wherever you plan to buy your saltwater fish. They are also so inexpensive, and so brilliantly blue colored, that they tempt the casual hobbyist who is just window-shopping.
Whether to Buy a Blue Devil Damselfish or Not
The blue devil damselfish is a gorgeous, hardy and inexpensive saltwater fish species that will survive brilliantly in almost any marine aquarium. The biggest concern is for the other fishes you hope to add to what will quickly become THEIR TANK (not yours). They are relatively easy to spawn and will eat whatever food you offer them, but aggression gets worse over time and seriously limits your options. The only real option you have here is to create a small aggressive species tank. If that's what you're in the market for, then this is a great choice. If you are otherwise scared of the costs of a saltwater tank and want to prove you can keep some fishes alive, this could be an option. If you want a peaceful tank full of a range of amazing fish species, you should probably pass over this almost too cute to pass up fish.
What to read next
If you're planning to keep damselfishes in your tank, you may want to read up on how to deal with aggression in a saltwater tank–to have a few tricks up your sleeve in case these little devils act like…well, you get the point.
- Learn a few tips on how to better deal with aggression in saltwater fish.
- The blue devil damselfish is the second most popular saltwater fish. Find out which fish is imported more.
- Or perhaps you would rather check out the damselfish's mild-mannered cousin, the green chromis.
Michael, Scott W. Marine Fishes: 500+ Essential-to-Know Aquarium Species. T.F.H. Publications. Neptune City, NJ 2001.
Wittenrich, Matthew L. The Complete Illustrated Breeder's Guide to Marine Aquarium Fishes. T.F.H. Publications. Neptune City, NJ 2007.
https://www.saltwateraquariumblog.com/blue-devil-damselfish/
An anterior cruciate ligament (ACL) sprain or tear is one of the most common knee injuries. High demand sports like soccer, football, and basketball have a higher incidence of ACL injuries. An injury to the anterior cruciate ligament may need surgery to regain full function of the knee, depending on the severity of your injury, your activity level, and other factors.
ANATOMY OF THE KNEE
The knee is a hinged joint made up of four main parts: bones, cartilage, ligaments, and tendons. The thighbone (femur), shinbone (tibia), and kneecap (patella) meet to form the knee joint. The kneecap helps protect the front of the joint. Ligaments connect bones to one another and keep the knee stable. The knee has four primary ligaments, of two types:
- Collateral Ligaments – on the sides of the knee, controlling the sideways motion of the knee and bracing it against unusual movement.
- Medial collateral ligament (MCL) on the inside
- Lateral collateral ligament (LCL) on the outside
- Cruciate Ligaments – located inside your knee joint, controlling the back and forth motion of the knee.
- Running diagonally in the middle of the knee, the anterior cruciate ligament (ACL) prevents the shinbone (tibia) from sliding out in front of the thighbone (femur) and provides rotational stability.
- The posterior cruciate ligament (PCL) mirrors the ACL, but is attached to the back of the knee, crossing the ACL in an X.
The weight-bearing surface of the knee is covered by a layer of articular cartilage. Between the cartilage surfaces of the thighbone and shinbone on either side of the joint are the medial meniscus and lateral meniscus. They act as shock absorbers and work with the cartilage to reduce stress between the shinbone and the thighbone.
Your anterior cruciate ligament can be injured in several ways:
- Rapidly changing direction
- Deceleration coupled with cutting, pivoting or sidestepping moves
- Suddenly stopping
- Slowing down while running
- Awkward or incorrect landings from a jump
- Out of control play
- Direct contact or collision (like a football tackle)
The majority of ACL injuries occur through non-contact – a smaller percent are from direct contact with another player or object. Female athletes tend to have a higher incidence of ACL injuries than males in certain sports. This may be due to differences in physical conditioning, muscular strength, and neuromuscular control, differences in pelvis and lower leg alignment, increased looseness in ligaments, and the effects of estrogen on ligament properties. Half of ACL injuries happen along with damage to other structures in the knee, such as the meniscus, articular cartilage, or other ligaments. There can also be bruising of the bone beneath the cartilage surface. Magnetic resonance imaging (MRI) scans can help to see these additional injuries. Football players and skiers commonly injure the ACL, the MCL, and the medial meniscus – nicknamed the "unhappy triad." An injury to a ligament is called a sprain and is graded on a severity scale.
Grade 1 Sprains – The ligament is mildly damaged. It has been slightly stretched but can still help keep the knee joint stable.
Grade 2 Sprains – The ligament is stretched to the point where it becomes loose. Also referred to as a partial tear of the ligament. Partial tears are rare – most ACL injuries are complete or near complete tears.
Grade 3 Sprains – The ligament is split into two pieces, and the knee joint is unstable. Commonly called a complete ligament tear.
SYMPTOMS OF ACL INJURIES
When the anterior cruciate ligament is injured, typical symptoms include:
- A "popping" noise
- The knee gives out from under you
- Loss of range of motion
- Tenderness along the joint line
- Discomfort while walking
- Pain with swelling. Within 24 hours after the injury, the knee swells.
Sometimes, the swelling and pain resolve on their own. You risk causing further damage to the cushioning cartilage (meniscus) of your knee if you return to sports, as your knee may be unstable. Your Florida Orthopaedic Institute physician checks all the structures of your injured knee and compares them to your non-injured knee during a physical examination. Most ligament injuries can be diagnosed with a thorough physical examination of the knee. They will also ask you about your symptoms and medical history. Other tests that help your doctor confirm a diagnosis include X-rays and MRI scans. Although X-rays don't show injuries to your anterior cruciate ligament, they can show whether the injury is associated with broken bones. MRI scans (Magnetic Resonance Imaging) create a better image of your soft tissues like the anterior cruciate ligament. Your physician may also perform a Lachman test, which assesses the movement of the knee. The test helps identify the anterior cruciate ligament's integrity and gauge instability in various directions.
Treatment for an ACL tear depends upon the patient. Young athletes involved in agility sports usually need surgery to safely return to them. Less active and older individuals may not need surgery. Torn ACLs do not heal without surgery, but nonsurgical treatment may be effective for patients who are older or have very low activity levels. Non-surgical healing varies from patient to patient and depends on their activity level, the degree of injury and knee instability. A positive outcome for partially torn ACLs without surgery is possible, with the recovery and rehabilitation period typically lasting at least three months.
Some patients with partial ACL tears may still have instability symptoms. Comprehensive clinical follow-up and physical therapy help identify patients that have unstable knees from partial ACL tears. Without surgical intervention, complete ACL ruptures have a much less favorable outcome. After a complete ACL tear, some patients have instability during walking or other normal activities. Athletes are usually unable to take part in sports that involve cutting or pivoting movements, but there are a few who can participate without any symptoms of instability. It all depends on the severity of the original knee injury and the physical demands of the patient. Secondary damage to the meniscus, articular cartilage or other ligaments can occur in patients who have repeated episodes of knee instability. With chronic instability, a majority of patients have meniscus damage 10 or more years after the initial injury. Articular cartilage lesions become more common in patients whose ACL instability has lasted 10 years or more. With progressive physical therapy and rehabilitation, most knees can be restored to a condition close to their pre-injury state. Patients have to learn how to prevent instability and may need to use a hinged knee brace. These types of isolated ACL tears have better nonsurgical success:
- Partial tears with no instability symptoms
- Complete tears with no symptoms of knee instability during low-demand sports, and patients who are willing to give up high-demand sports
- Those with light manual work or sedentary lifestyle
- Children whose growth plates are still open
If the overall stability of the knee is intact, your Florida Orthopaedic Institute physician may recommend these nonsurgical options:
BRACING. Protects your knee from instability. You may also be given crutches to keep you from putting weight on your leg to further protect your knee.
PHYSICAL THERAPY. A rehabilitation program can be started as soon as the swelling goes down.
Specific exercises can restore function to the knee and strengthen the leg muscles supporting it.
ACL ARTHROSCOPIC PROCEDURE
Your physician may recommend knee arthroscopy if your condition does not respond to nonsurgical treatments and you have pain. Surgery to rebuild an anterior cruciate ligament can be done with an arthroscope using small incisions, and the procedure is less invasive. There is less pain from surgery and less time spent in the hospital, with quicker recovery times. Knee arthroscopy can also relieve painful symptoms of many problems that damage the cartilage surfaces and other soft tissues surrounding the joint. Other arthroscopic procedures for the knee include:
- Torn anterior cruciate ligament reconstruction
- Removal of inflamed synovial tissue
- Removal of loose fragments of bone or cartilage
- Removal or repair of a torn meniscus
- Treatment of knee infection (sepsis)
- Treatment of kneecap (patella) problems
- Trimming of damaged articular cartilage
With combined injuries (ACL tears in combination with other injuries in the knee), your Florida Orthopaedic Institute physician will usually recommend surgery.
REBUILDING THE LIGAMENT
To surgically repair the ACL and restore knee stability, the ligament must be reconstructed, as most ACL tears cannot be stitched (sutured) back together. ACL repairs done this way generally fail over time. The torn ligament is replaced with a tissue graft to act as a framework for a new ligament to grow on. Grafts are obtained from several sources. If they come from the patient, they are called autografts. Grafts are often taken from the patellar tendon, which runs between the kneecap and the shinbone (patellar tendon autograft). Hamstring tendons at the back of the thigh (hamstring tendon autograft) and quadriceps tendons (which run from the kneecap into the thigh, called a quadriceps tendon autograft) are also common sources of grafts.
Cadaver grafts (allografts) are also used and taken from the patellar tendon, Achilles tendon, semitendinosus, gracilis, or posterior tibialis tendon. Your Florida Orthopaedic Institute surgeon will review the advantages and disadvantages of various graft sources to help determine which is best for you. Because regrowth takes time, it can take six months or more before an athlete can return to sports after surgery.
REHABILITATION AFTER ACL TREATMENTS
Rehabilitation plays a vital role in getting you back to your daily activities, whether your treatment involves surgery or not. Physical therapy programs help regain knee strength and motion. Following surgery, physical therapy focuses initially on returning motion to the joint and surrounding muscles, followed by a strengthening program to help protect the new ligament. Strengthening exercises gradually increase the stress across the ligament. For athletes, the final phase of rehabilitation is designed to create a functional return to their particular sport. Active adult patients whose jobs involve pivoting, turning or heavy manual labor should consider surgical treatment, as well as those who actively play sports. Activity, not age, typically determines surgical consideration. Your surgeon may delay ACL surgery in young children or adolescents until they are closer to skeletal maturity. They may also change the ACL surgery technique to decrease the risk of growth plate injury and bone growth problems. In combined injuries, surgical treatment may be necessary as it generally produces better outcomes. Almost half of meniscus tears are repairable and can heal better if the repair is done along with the ACL reconstruction. All Florida Orthopaedic Institute surgeons are fellowship trained, which adds additional expertise in their specialty. They stay current on the latest ACL treatments and research and can talk to you about all your ACL repair options.
In this podcast: Janet responds to a question from a caregiver who says the family she works for is interested in teaching their son ABCs and other lessons. The child is sometimes disinterested and refuses to participate, and she wonders: “Is there a respectful approach to teaching children?” Janet responds with an alternative perspective on early childhood learning that focuses on providing the best foundation possible for children to develop their innate abilities and a lifelong love of learning. Transcript of “Be Careful What You Teach (It Might Interfere with What They Are Learning)” Hi, this is Janet Lansbury, welcome to Unruffled. Today I’m responding to a question about doing learning activities with our children, like teaching them the ABCs. Here’s the interesting email I received: “Hi Janet, I’m wondering, is there a respectful approach to teaching children, say, their ABCs or doing any learning activities? I work as a nanny, and the family I’m currently with is very interested in their son’s learning and development. Sometimes he’s happy to go along with loosely structured learning activities, and some days he’s disinterested. I’m wondering, is it best to end an activity if he’s not showing interest-slash-refusing to participate, or come back to it later, or what? I wouldn’t want to teach him to give up on things, or that he just needs simply to offer a little resistance and then he gets what he wants. I really like the respectful approach and use this approach as much as possible. Any words of wisdom for those of us who use this approach as a nanny would be very much appreciated. Thank you for any advice.” Okay, so what I’m going to suggest would apply to a nanny in this situation or to other kinds of caregivers of children, and to parents. What I’m going to suggest is an alternative. 
While there may be a respectful way to do these learning activities with children (I’m sure there is), the question I want to pose in return is: are learning activities the best way to help children learn? You can probably guess that I believe they are not, and I’m going to talk about why in this podcast. But mostly, I want to offer an alternative to teaching children their ABCs and directing them in learning activities. What this nanny says is that the family is very interested in their son’s learning and development. Most of us as parents, or nannies, or early childhood teachers are very interested in learning and development. And then she says that sometimes this boy is happy to go along with loosely structured learning activities, and some days he’s disinterested. So what this nanny is asking about is what to do when she can’t garner interest from the child she’s working with. And she doesn’t say how old this child is, but some days he’s disinterested, and sometimes he’s even refusing to participate. She’s concerned that she’s going to teach him to give up on things, or that he just needs to simply offer resistance and then he gets what he wants. The problem here is that the activities she’s doing are actually not aligned with, and complementary to, early child development. And that’s why she’s getting the sometimes frustrating results that she’s getting. Children are born with these amazing learning abilities and they’re actually able to stay with their interests for a very long time. But when we try to impose our interest in their learning, and what we believe they should be learning, it’s often a mismatch. It’s similar to trying to put up curtains on windows of a house that hasn’t been built yet. They’re still working on the foundation. Skills and knowledge that involve rote memorization are actually the easiest thing for children to learn when they’re ready.
But in these early years, they’re developing a very important foundation that will serve them throughout life. It’s a foundation of higher-order learning skills that go way beyond memorization. They’re learning how to learn, and they’re learning about themselves as learners. The best message they can receive is that they are trusted, that we understand (and research shows this) that they are the learning experts. It is an innate ability that they have, and it is all done through play. And that isn’t to say that this means we now have to find fun, playful activities that are teaching our child these little specific details, these symbols for ideas like amount, weight or letters. The foundation of learning is about using all of our senses to explore amount, weight, gravity, comparisons. To analyze, have theories, and so much more. Children need to focus on what those symbols represent. And they will naturally do this through play, through play that might not even look like anything to us. It might look like they’re just messing around or sitting there staring into space. But this is actually the important stuff. So what I would propose to this nanny is to help these parents appreciate the incredible learning that their child is engaging in every moment. This learning that goes way beyond adult-directed activities. Learning that goes deep because it is meaningful to that child. Just like for all of us, when we take a course that we’re very interested in, we learn quickly and we learn deeply. We can sustain attention in that kind of learning. We don’t refuse it or stop. We can be insatiable around it. Those are the experiences that young children need to have in this crucial window of time, the early years, building the foundation for that house of learning. So specifically, I would encourage this nanny to cultivate this child’s self-directed learning, which is the same as cultivating his self-directed play. 
Because to devise successful learning activities, we have to understand how children actually learn. So our job can be setting up a safe play area where they are free to be explorers, and letting our child be the master of this one area of life. And then letting go of the results. Young children have to conform to a lot of things that we decide, but learning through play can be a territory that they own completely. And they deserve to own it, because they are the experts at this. They need to be active in their learning, not passively receiving or following along. They need to be the ones that are creating, designing, initiating, sustaining. They can be trusted with this job. Then when play is cultivated, we can learn how to be in observer mode, in sensitive observation mode, and we will learn everything we need to know about that child. And the way to share that with someone else is to be the observer, and to write down what that child is doing, what we see, and help the parent to appreciate the amazing things children do. Much more amazing than being able to recite an alphabet or a succession of numbers. So write down what you see. “I noticed he was interested in the rug. There was a flower that he was following with his finger, and then he went over to the other side of the rug where there was a similar flower, and he seemed to connect those two ideas.” Or, “He had that ball that’s actually been in his play area since he was tiny and he never noticed it before, and today he was rolling it, and watching it, and bouncing it off the floor and other objects. He seemed to be doing an intensive study of that ball. I noticed he used it in actually 20 different ways.” And then listing those. This is how children develop a long attention span. This is how we give children the edge when they do enter a structured learning environment at age five or six. 
They get to go into that with confidence in their active learning, and with a lot of experience with how to master concepts and ideas. And a sense from the adults around them that they are accepted and appreciated for who they are. Those are things that we can give children that last a lifetime. So after offering that alternative, I want to talk a little about the reasons that self-directed learning is better than adult-child teaching. Number one: It (1) distracts children from, and can even undermine, these amazing, innate learning skills. Alison Gopnik expressed it this way, she notes in her studies that “babies as young as eight months old demonstrated astonishing capacities for statistical reasoning, experimental discovery, and probabilistic logic that allowed them to rapidly learn all about the particular objects and people surrounding them.” And then she warns, “Sadly, some parents are likely to take the wrong lessons from these experiments, and conclude that they need programs and products that will make their babies even smarter. Many think that babies, like adults, should learn in a focused, planned way. So parents put their young children in academic enrichment classes, or use flashcards.” “Instead,” she says, “infants and toddlers need plenty of open-ended play time to be able to build the brain synapses necessary for higher learning abilities.” So those products and learning activities that we try to impose on children take precious time away from them building the brain synapses that they need as lifelong learners. Number two: (2) by teaching we can impede, rather than foster, skills like sustained focus and attention span. Again, I’m amazed, in the observations I’ve done in my classes and of my own children, at the long attention span that children display when they’re following their own interests. But when they have to follow ours, they can seldom sustain their attention anywhere near as long.
One of my popular posts is called “Baby, Interrupted” and I go over other specific things that we might do as parents that actually foster a shorter attention span in our child. We’re interrupting their interests and trying to direct them to our own. And we want learning to go in deep. These little shallow things, these memorizing things, again, are just the tip of the iceberg. By focusing on those activities, we might be threatening the gold the children already have coming into this world. Their interest in mastering everything about it. We learn deepest when we are able to discover it ourselves. Piaget has some famous quotes about that idea. He says, “Every time we teach a child something, we keep him from inventing it himself. On the other hand, that which we allow him to discover by himself will remain with him visibly for the rest of his life.” I think a lot of us can relate to that, I know I can. When I have a problem I can’t figure out on my computer, I can ask one of my children or someone else to come and fix it for me, or just show me what to do. Or I can do what I don’t always do, believe me, which is figure it out myself. But guess what helps me in the long run? Figuring it out myself. Because the computer becomes a little less intimidating to me, and now I can do it myself. I don’t need someone to show me. And I will remember that solution forever because I discovered it. Similar to the way that I use this app, Waze, to get everywhere now. I’m very dependent on it. But what happens is, I don’t really know how to get places. And when I’m traveling, especially, and I don’t know the area, it helps me so much to actually have a jog around or, if I have time, to try getting around with just some basic directions, or finding it myself on a map. Then I learn that area. When I’m using Waze, I never really learn how to get somewhere. All it does is make me more dependent on Waze. 
So it makes a difference, and it especially makes a difference in these early years. Because again, this is the crucial foundation that children will draw on in everything they’re learning for the rest of their lives. The third reason that self-directed learning is better than adult-directed activities is that, without meaning to, (3) we can teach children that they need to step it up and perform for us in order for us to be interested in and appreciative of them. It can become a part of our relationship, that children feel they aren’t really enough, things they’re interested in aren’t that important, and that they need to be able to do things that they don’t feel able to do yet, in some cases. And then they get the smiles, then they get the good jobs and the kudos, and that appreciation that they long for. But again, if we see differently, if we see the way Magda Gerber saw, and I see after cultivating play with my own children, and then seeing how beautifully this foundation has served them as students and adults… You can’t buy this kind of learning in an activity book. And it has the other benefit of giving our children that confidence in themselves as capable people who are interesting as they are, for their interests and their agendas, not only interesting if they can conform to ours. The fourth point I want to make: (4) children behave better when they feel accepted and appreciated as they are, when we have that basic trust in them. Magda’s first principle, basic trust in the child, as an initiator, an explorer, and a self-learner. Feeling trusted and appreciated for who we are eliminates a lot of the stress that we can feel, and therefore helps young children to be at their best more of the time. It’s that relationship of safety and trust.
The last point I want to make is that letting go of those learning activities, those things that we want to teach, letting go of that fear that somehow if my child doesn’t know these, what are again very small details in the scheme of things that children will easily learn when they’re ready… But that fear that we might have if our child doesn’t learn this, I’m not doing a good job, they’re not going to be able to succeed in school… Those are messages that I know from parents are getting passed around a lot these days, and it concerns me… that toddlers need to be in classes and learning activities need to be created for them. If I had a magic wand, I would use it to (5) eliminate all that stress that parents have around this, so that they could trust, so that they could enjoy their experience a lot more. As Magda Gerber said, “Do less, enjoy more,” and, “Be careful what you teach. It may interfere with what they’re learning.” She also said that. So letting go of that as part of our job, that we have to teach children all these things and make sure they’re up to task… There are several studies showing that knowing letters and numbers and how to read at a very young age might give a child an edge in the first year or two of school, but then it all evens out. But if we give our child this edge of being able to reason, and experiment, and understand probabilities and be critical thinkers and engage for long periods, retain what they’ve learned because it’s going in deeply… That’s an edge that, unlike knowing letters and numbers, doesn’t even out. That is a lifelong edge that we can give children. And then, the other organic way that we teach is through caregiving tasks where we communicate and we give language to things. Where we teach children all about their bodies, and what food is, and all of these different words that we’ll use will be meaningful to children.
So following their interests, and giving language to those, taking advantage of caregiving opportunities, dressing, bathing, mealtimes, diapering, as times where we are more directive. Some people I know that I work with, they really want to teach, and that’s a time when they can. And it can still be organic and important to the child, because it’s about what’s really happening to their bodies, and our relationship with them. There’ll be specific numbers and words, of course, that we’ll be teaching in these experiences. There are hundreds of opportunities in a day when we’re communicating respectfully with a child, where we can say, “Here’s three snaps on your shirt. Let’s snap those. Okay, we’ll do this one, one. Do you want to help with that one? Two, three, we did it.” Or, “You want a second serving of that vegetable?” (Yeah, we wish it was the vegetable!) And then when they’re done with that, “Do you want a third?” And all of this can be authentic and respectful. Never pushy. Children feel the difference. So do less, enjoy more, trust more. You’ll be amazed. We need to give ourselves a break from all this performance pressure we might feel. I hope some of that helps. This is a topic that I’m passionate about. It’s one of the most valuable things I learned from Magda Gerber and have appreciated in every moment that I’ve been able to spend with children. So I hope it makes sense. And by the way, if my podcasts are helpful to you, you can help the podcast continue by giving it a positive review on iTunes. So grateful to all of you for listening! And please check out some of the other podcasts on my website, JanetLansbury.com. They’re all indexed by subject and category, so you should be able to find whatever topic you might be interested in. And both of my books are available on audio, please check them out. Elevating Child Care, A Guide To Respectful Parenting and No Bad Kids, Toddler Discipline Without Shame. 
You can even get them for free from Audible by following the link in the liner notes of this podcast, or you can go to the books section of my website and find them there. You can also get them in paperback at Amazon, and in ebook at Amazon, Barnes And Noble, and apple.com. Thanks again for listening. We can do this.
Dogs and COVID-19

Coronaviruses are a large family of viruses that are generally species-specific. Coronaviruses in humans cause one third of what is generally diagnosed as the common cold and upper respiratory infections. Rarely, a species-specific coronavirus will jump the species barrier and infect humans. This is what happened with SARS and MERS, and it is suspected in the current COVID-19 outbreak. Dogs can contract certain types of coronaviruses, such as the canine respiratory coronavirus, which is responsible for 9.8% of cases generally known as kennel cough (Wikipedia article Kennel Cough). These types of the virus do not affect humans. The American Veterinary Medical Association (AVMA) said that infectious disease experts and multiple international and domestic human and animal health organizations all agree there is no evidence at this point to indicate that pets become ill with COVID-19 or that they spread it to other animals, including people. That being said, there have been two cases of dogs testing positive for the COVID-19 virus, both from the same residence in Hong Kong. Neither of these animals showed any symptoms of the virus, and they are believed to have been infected by their owner, who did have the disease. Veterinary diagnostic company IDEXX Laboratories reports it has tested thousands of dogs and cats and has not found a single case of COVID-19. American Veterinary Medical Association Chief Veterinary Officer Gail Golab says, “We’re not overly concerned about people contracting COVID-19 through contact with dogs and cats.” And there’s science behind that: “The virus survives best on smooth surfaces, such as counter tops and doorknobs,” Golab says.
“Porous materials, such as pet fur, tend to absorb and trap pathogens, making it harder to contract them through touch.” The Centers for Disease Control and Prevention (CDC) in the United States says, "At this time, there is no evidence that companion animals can spread COVID-19 or that they might be a source of infection in the United States." But there is always a chance, and we don't know too much about the virus yet, so just watch that your dog isn't coughing more than normal and doesn't have a fever. Our dogs, as always, are a source of comfort and calm for us, especially in this stressful time. So at this time there is no need to practice physical distancing from our dogs, unless you yourself are infected and need to protect your dog.

References:
American Kennel Club, Can Dogs Get Coronavirus, posted March 20, 2020
Centers for Disease Control, Animals and Coronavirus Disease 2019 (COVID-19), updated March 16, 2020
MarketWatch, Second Dog Tests Positive for Coronavirus as owners warned not to abandon pets, March 21, 2020
WebMD, What are common symptoms of coronavirus, January 22, 2020
Wikipedia, article Kennel Cough

"Roses are grey, violets are a different shade of grey, let's go chase cars." This old joke by Bo Burnham perpetuates the myth that dogs are totally colour blind; they're not. Animals, including humans, have light receptors in the back of the retina called cones. Humans have three distinct types of cones, each of which is tuned to a different frequency of light. In our case, with some 6 million cones in each eye, we can detect frequencies in the blue, green and red range, giving us the vibrant view we have of the world. Dogs have far fewer cones and only two types, those for blue and green, and are lacking any detector for red. This results in a different perception of the colour spectrum than humans, as seen in this chart.
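To make the two-cone idea concrete, here is a crude illustrative sketch in Python of how a dichromat's collapsed red/green channel can be simulated on an RGB colour. This is not a vision-science model of dog sight; the 50/50 channel mix is an arbitrary assumption chosen only to show why red and green objects become hard to tell apart while blue stays distinct.

```python
def dog_vision(rgb):
    """Crudely simulate dichromatic (blue/yellow) dog vision.

    Dogs lack a red-sensitive cone, so red and green are collapsed
    here into a single yellowish signal; blue is kept distinct.
    The 50/50 mix is an arbitrary illustrative choice.
    """
    r, g, b = rgb
    yellow = int(0.5 * r + 0.5 * g)  # red and green merge into one signal
    return (yellow, yellow, b)

# A bright red ball and green grass come out looking similar,
# while a blue toy stays easy to tell apart:
print(dog_vision((255, 0, 0)))   # red ball    -> (127, 127, 0)
print(dog_vision((0, 200, 0)))   # green grass -> (100, 100, 0)
print(dog_vision((0, 0, 255)))   # blue toy    -> (0, 0, 255)
```

Running the sketch on a red ball and green grass shows the two colours landing on nearly the same yellowish-grey, which is the point of the chart described above.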
So that bright red ball we just threw for our ball-crazy friend will be seen as the greyish green at the top of the chart, almost camouflaged against the yellow grass in the middle of the chart. Even at that, the much lower density of cones renders the colours much less vibrant, so buddy's bright red ball may be almost invisible. However, we don't need to feel too bad for them, since what they lack in colour perception they make up for in low-light acuity. Other receptors around the outside of the retina, called rods, are sensitive to low light levels but not colour. The ratio of rods to cones is about 2.5 times higher for dogs than humans, meaning they can detect objects in low light much better than humans. Another adaptation contributing to low-light acuity is the larger pupil, allowing more light to reach the retina. Still another adaptation, called the tapetum, is a reflective layer behind the retina which reflects light back into the rods and cones. This is what causes animals' eyes to shine in the dark when illuminated (this is different from red eye in flash photos of humans, which happens because the iris cannot close quickly enough to filter the light and we see the reflection of the blood-rich retina). Humans do not have a tapetum. So it turns out dogs do see colours, although not as vibrantly nor as many hues as we do, but they have built-in night vision.

References:
Are Dogs Colorblind? Elizabeth Palermo, Live Science, June 27, 2014
Can Dogs See Colors, Stanley Coren PhD, DSc, FRSC, Psychology Today, Oct 20, 2008
Do Dogs Actually Use Color Vision? Stanley Coren PhD, DSc, FRSC, Psychology Today, July 22, 2013
Doggone: Your Best Friend Is Red-Green Colorblind, Laura Geggel, Live Science, Nov 8, 2017

Puppy Dog Eyes

Ahh. So cute. We've all seen those "puppy dog eyes" and usually with the same reaction. But why do dogs give us this endearing gaze?
Studies have shown that eye contact is an important aspect of human bonding and is associated with increased levels of oxytocin, the feel-good hormone. In a study documented in Live Science, researchers found elevated levels of oxytocin in both humans and dogs after they spent some time just gazing at each other. It has been well documented that dogs as we know them evolved from the more social wolves hanging around human encampments and scrounging for food. So have dogs actually developed this trait as a way of weaseling more food from us? Since dogs do not generally use eye contact in intra-species communication, the same study surmised that this endearing trait developed strictly to communicate with humans. Indeed, in a study documented in Current Biology, a group of wolves that had been socialized to humans were unable to locate food treats pointed out by the humans (by either touching or pointing) with the same degree of success as dogs. In a second part of the same study, the animals were given an unsolvable task, such as getting to food in a jar. The wolves in this case would eventually give up and leave; however, the dogs, presented with the same dilemma, tended to turn to the humans for help, engaging eye contact. This behavioral development actually led to a physical evolutionary development in the facial muscles of dogs. The excellent PBS program Nova looked into this and documented studies showing that wolves lack two sets of facial muscles required to manipulate the facial expression we know as puppy dog eyes. One set of these muscles is used to lift the eyebrows up, while the other pulls to the outside, resulting in the wide, expressive eyes that remind us of human babies, or a person on the verge of tears, generally resulting in an emotive response from the subject human. As a sort of "missing link" in this evolutionary development, Nova reported that one species, the Siberian Husky, has only one set of these muscles, the ones used to pull the eyebrows up.
This is because Siberian Huskies are more like their distant relatives, the wolves, and have only developed one set of these muscles. So are our four-legged friends master manipulators, or just making use of an evolutionary response taught by eliciting a beneficial response from us? Either way, those puppy dog eyes seem to benefit both species.

Why do dogs scratch the ground after peeing?

Divots all over the lawn, gravel flying all over. Someone just had a pee. But why do dogs need to scratch the ground after peeing? "Actually, only around 10 percent of dogs do it," said Rosie Bescoby, a clinical animal behaviorist with the Association of Pet Behaviour Counsellors in the United Kingdom, and she says that it appears to occur equally in males and females, although it was observed that the males that do it, do it more frequently than the females that do it, which may be why most people think it is a male thing. A 2004 paper studied 12 female Jack Russells, 6 spayed and 6 intact, and watched their urinary behaviors. The researchers observed that these dogs were more likely to urinate frequently and aim their urine at objects when away from home in comparison to when they were walked close to home, and concluded that scent marking was an important function of urination, especially away from the home area. Male dogs have also been observed to raise their legs more frequently to urinate when in the presence of another dog. We can corroborate this from observations in our boarding kennel, where all the dogs are away from home and around strange dogs. But still, why the scratching? Most studies conclude that, given the marking functions explained above, the scratching is just a method of distributing that scent over a wider territory. In addition, dogs have sweat glands in their paw pads, and by scratching at the ground they are also adding that additional scent to the already lovely odour.
Most studies also hypothesize that the marks left on the ground contribute a visual component to the marking function of the behaviour for passersby. Scientists who study this sort of thing call this a composite signal. At any rate, ground scratching is not a behaviour we need to discourage; just stand clear when the dirt starts flying. Have fun and keep heading Duenorth, the right direction to a well trained dog.

References:
Psychology Today, Ground Scratching by Dogs: Scent, Sight, and Ecstasy, Marc Bekoff PhD, March 03, 2019
Eileen and dogs, Ground Scratching: Why Does My Dog Do It? Eileen Anderson, December 02, 2014
Live Science, Why do dogs scratch the ground after they pee? Emma Bryce, August 04, 2018
PetMD, 12 Dog Peeing Positions and What They Mean, Jennifer Coates DVM

Can Dogs Feel Guilty?

You've come home to find the counter cleared off and the empty bread bag on the floor, with Fido nearby, head down, eyes averted, looking guilty as hell. But does she really feel guilt? Seventy-four percent of dog owners believe that their dogs experience guilt. It sure looks like guilt. Psychologists label feelings like happiness and fear as primary emotions, that is, direct responses to external events, and there is plenty of evidence of these emotions in dogs. But emotions like jealousy, pride, and guilt are termed secondary emotions and are feelings about feelings. There has been little evidence of secondary emotions in the animal cognition literature. That does not mean that dogs do not experience guilt, but perhaps that hangdog look is really something else. Charles Darwin observed that the types of behaviours associated with guilt - keeping one's head down, and averting one's gaze - are also seen in social non-human primate species. These behaviours have been interpreted as a means to mitigate retaliation for transgressions in social groups and as such are more pragmatic than emotion based.
Indeed, anecdotally, pet owners report that they chastise their pets less harshly when these displays of apparent guilt for transgressions are shown. Is Fido, then, attempting to lessen the anticipated reprimand rather than actually feeling guilt? A group of canine cognition researchers from Eotvos Lorand University in Budapest created an experiment to find out. Sixty-four dogs were selected and a normal greeting behaviour was established for each. Then the dogs were presented with an opportunity to misbehave when left alone in a room (stealing food from a table), and the greeting behaviour when their owners returned was recorded for both those that had misbehaved and those that had not. Keep in mind that they were all aware of the possibility that they might be in trouble. The two groups were equally likely to display guilt-type behaviours whether they had transgressed or not. It seems that the guilt-type behaviours are a means of mitigating an anticipated punishment and probably not an emotional response.

References:
Scientific American, Jason G. Goldman, May 31, 2012
Business Insider, Dogs don't experience guilt, Ben Gilbert
The Dodo, Think Your Dog Has A "Guilty" Look? Think Again, Julie Hecht

Welcome to our new feature, where each Sunday we will delve into the world of science as related to dogs. These posts will be our crude interpretation of recent scientific studies in the canine world.

Today's topic: Early exposure to pets and its effect on mental health

It has been well known (to some) that some psychiatric disorders may be linked to environmental exposure to immune system disrupters in early life. Dr. Robert Yolken of Johns Hopkins Children's Center conducted a study investigating the relationship between exposure to a household pet cat or dog during the first 12 years of life and a later diagnosis of schizophrenia or bipolar disorder.
The study found a statistically significant decrease in the prevalence of schizophrenia in those exposed to a dog early in life. Yolken found as much as 24% fewer schizophrenia diagnoses among those brought up with pet dogs before their 13th birthday. Yolken did not find any such relationship between exposure to dogs and bipolar disorder. More significantly, he found no significant relationship between exposure to cats and either schizophrenia or bipolar disorder; however, there was a slight increase in the risk of developing either disorder for those who were first in contact with cats between the ages of 9 and 12. Multiple epidemiological studies conducted since 1953 have shown there is also a statistical connection between a person's exposure to the parasite that causes toxoplasmosis and an increased risk of developing schizophrenia. Toxoplasmosis is a condition caused by a parasite for which cats are the primary hosts and which can be transmitted to humans. Some of our own thoughts on this are:
Purify Water and Air on the Earth: Planetary Potentiality from Small Rice Husks (February 21, 2020). More than 100 million tons of rice husks are generated worldwide each year. By making use of these excess by-products as raw materials, Sony is driving the Triporous™ project, which addresses the global challenge of water and air purification. So, what kind of project is this? We spoke with Seiichiro Tabata, who invented Triporous; Shun Yamanoi, who succeeded in mass-producing the material and developing products in the healthcare and apparel sectors; and Makoto Koike, senior manager of the Strategy Gp in the IP Incubation & Investment Department, Intellectual Property Division, which owns this project. Micropores, mesopores, and macropores ──First, what kind of material is Triporous and what kind of properties does it have? Seiichiro Tabata: Triporous is a porous material made from rice husks. It can adsorb high-molecular-weight organic substances in water and air that cannot be adsorbed by conventional porous materials (such as activated carbon). In particular, it has been confirmed that this material can adsorb not only organic molecules, but also viruses and bacteria in water. In addition, with large pores called mesopores and macropores, Triporous can adsorb substances at very high speed. ──Could you tell us more about mesopores and macropores? Tabata: Through a unique manufacturing method invented by Sony, Triporous has three different sizes of pores: micropores with a diameter less than 2 nm (which can also be found in conventional activated carbon), mesopores with a diameter of 2 to 50 nm, and macropores with a diameter larger than 50 nm.
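The three size bands quoted above can be captured in a small sketch (our own illustration, not Sony code): micropores below 2 nm, mesopores from 2 to 50 nm, and macropores above 50 nm.

```python
# Illustrative classifier for the pore-size bands described in the
# interview. The function name and cutoffs-as-code are our own; the
# band boundaries (2 nm and 50 nm) come from the text.

def classify_pore(diameter_nm: float) -> str:
    """Return the pore class for a given diameter in nanometres."""
    if diameter_nm < 2:
        return "micropore"
    elif diameter_nm <= 50:
        return "mesopore"
    else:
        return "macropore"

# Triporous combines all three classes in a single material:
sizes = [1.5, 10, 120]
print([classify_pore(d) for d in sizes])  # ['micropore', 'mesopore', 'macropore']
```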
To explain the manufacturing method briefly, we first carbonize the rice husks; the carbonized husks retain the silica accumulated between the cells. Then we etch off the silica to form macropores. Finally, we perform an activation process (a high-temperature process that uses water vapor) to develop mesopores and micropores. "Triporous" is named after a combination of "Tri-", meaning 3, and "porous", meaning having many small holes. ──We wouldn't easily imagine that Sony has anything to do with rice husks. How and when did you turn your attention to rice husks and build them into the Triporous business? Tabata: When I joined Sony in 2006, my research theme was the development of new electrode materials using excess biomass (excess natural resources) for power storage devices, such as lithium-ion batteries and electric double-layer capacitors. As a matter of fact, when I was a student in a university laboratory, I was researching porous carbon electrodes made from artificial resin using silica microparticles as a mold. Given this experience, I thought a natural substance could also produce the same kind of material if it contained silica. After repeated research and analysis, I discovered that rice husks are composed of silica microparticles and lignocellulose (a carbon source), and when I made the material from the husks, a porous carbon material with a unique structure was produced. At that time, we were aiming to apply it to battery electrodes, but we later discovered a unique adsorption property derived from its unique pore structure, so we secured it as a basic patent. Since then, we have conducted various laboratory experiments with our members and obtained many patents and much know-how on the Triporous technology. ──How did you come up with the idea of using it for something other than battery electrodes? Tabata: In the latter half of 2007, there was a growing movement in the laboratory to emphasize research in the environmental and medical fields.
Triporous was also expected to contribute to those fields in some way, and its adsorption characteristics inspired us to think it could be applied in environmental and medical domains. We did a lot of laboratory experiments, and when we discovered a dye molecule that can be adsorbed only by Triporous, we were thrilled. After this finding, its uniqueness was recognized by many experts through academic conferences and papers, which gave us even more confidence. What we needed was co-creative innovation ──The Triporous project is driven and promoted by the Intellectual Property Division of Sony. What kind of activities does this division usually do? Makoto Koike: The Intellectual Property Division is an organization that supports Sony's business activities by handling intellectual property rights, such as patents, designs, and trademarks. Its main mission is to protect Sony's business: gaining competitiveness through Sony's own intellectual property rights, and reducing business risks arising from external intellectual property rights. ──Of all those activities, what specific role do you play in the Triporous project? Koike: Building on the tasks I just mentioned, the IP Incubation & Investment Department, where we belong, aims to provide a new functional value as the organization responsible for intellectual property within the company (the corporate IP division). In the case of Triporous, our goal is to create a licensing business that leverages its intellectual property rights. Even if the results of R&D investment are not utilized in Sony's existing businesses, I believe that we can provide a new functional value by making use of the intellectual property rights to create a different opportunity for utilizing those results. I myself have been involved in the Triporous project since January 2018. As management, I support the Triporous project team, including Yamanoi-san as the project leader and Tabata-san as the technical leader.
Triporous aims to create a licensing business using intellectual property rights, as I mentioned. That may sound very different from the conventional Sony business model, but Sony has a lot of experience in the licensing business. For example, in the recording media business (CD, DVD, and Memory Stick), Sony has promoted licensing to create a market for products that Sony has manufactured and sold. We also made a great success of MPEG2 Video as a licensing business, which aimed to promote technology that Sony had adopted. If there is anything special about the Triporous project, it is that we actively promote this novel material to the public ourselves, in a similar manner to IT-industry platformers. This allows us to engage with interested companies and organizations, and gives us an opportunity to promote open innovation, where we can collaborate with others to strengthen our ideas for the project. We are also creating our licensing business in a "co-creative innovation" style. In other words, we seek ways to co-create with people outside of Sony and to form a business ecosystem from which we can receive license fees. We expect that this kind of scheme will lead to a new functional value for the corporate IP division. ──What challenges did you face in the mass production of Triporous, and how did you solve them? Shun Yamanoi: When we tried the same method as the lab experiment in a mini-plant, we failed in the process of removing silica by etching, which involves stirring carbonized rice husks in a reaction liquid. When I tried to separate the water from the rice husks after the reaction, the resistance of the rice husks was unexpectedly large, which caused the filter to clog. Furthermore, in the final activation process, where water vapor is applied to drive a reaction in an environment of ca.
1000 degrees C, the light rice husks flew away when exposed to wind, resulting in a very low yield. Through trial and error, we developed a method of processing the rice husks into pellets before mass production. The development of this method enabled us to successfully produce several tons of Triporous. Although we faced a series of unexpected events, I was very happy to see it produced in tons, instead of the grams we produced in Atsugi's laboratory. Since Sony did not have much know-how in the mass production of materials, we had a great amount of support from chemical and activated carbon manufacturers to mass-produce Triporous. This collaboration enabled us to combine Sony's Triporous technology with their expertise to make it a reality. Even now, our technology is advancing every day thanks to their knowledge. How will Triporous be implemented in society? ──What is Triporous currently used for? Yamanoi: Triporous was adopted by Rohto Pharmaceuticals Co., Ltd. in their body soaps and by EDIFICE for their apparel products in 2019, and products containing Triporous have started to become available in the market. Triporous can be adjusted in shape, granularity, and quality to suit various uses. In addition to the skin cleansers and deodorant fibers already sold in the healthcare and apparel fields, we are planning to expand into water filters for water purification and air filters for air purification. We have met the food additive and medicinal carbon standards as well, so we are also looking forward to developing this technology for the food and pharmaceutical sectors. Koike: We are currently proceeding with commercialization for B-to-C markets, but we also expect to enter B-to-B markets, where large amounts of Triporous powder can be consumed, in order to reduce production costs. We'd also like to expand our brand licensing business.
For this, we'd like to support the companies who have adopted Triporous for B-to-C by publishing information so that they can easily communicate the environmental value of sustainable materials made from excess biomass, and by promoting commercialization in various fields so that it will be easier to conduct marketing across industries. Tabata: It may sound unusual, but Triporous can also be used to preserve art and craft works. For the long-term storage of cultural properties, air quality must be controlled to a higher standard than in normal indoor environments. Because air purification devices and sheets using processed Triporous can efficiently remove gaseous contaminants, Triporous has started to be used in a variety of places where important cultural properties are stored, including the World Heritage Byodoin Temple. ──So, you are exploring applications as well as researching and developing how to process it. Tabata: Yes. Since development at the lab level is almost complete, we are preparing to implement it in a variety of areas around the world. Koike: It is both difficult and interesting that we must tailor our products to each customer's needs. We are developing processed products in cooperation with partner companies. Yamanoi: In the case of textiles, for example, it has been found that mixing in Triporous enhances deodorizing power, but Sony has little knowledge of how to mix it into textiles. Therefore, we need the help of textile specialists. By the way, if Triporous is mixed into textiles at a few percent of the weight of the garment, its deodorant effect starts to appear. We have also confirmed the conditions under which the deodorant effect can actually be felt, and only the fibers that satisfy those conditions are tagged as Triporous FIBER. Tabata: As we considered various business models, we found that we couldn't do everything on our own.
This reality was hard for us, but we never gave up on our dream, and thanks to our business partners' support, we stand where we are today. Sustainable growth and open innovation ──Sustainable growth is an essential aspect of the times, and open innovation is critical for large companies to survive. Koike: Yes. When a large company like Sony tries to create its own product, development often proceeds in a closed manner, but I feel the trend is shifting towards open innovation, seeking a platform for more open knowledge creation. I think that the Triporous project has shifted from the "enclosure style" of the manufacturing industry to the "co-creation style" of many IT companies. In my opinion, Triporous could be used for many things, and we shouldn't narrow down our targets at this point. If we focus on one thing, we may or may not succeed. I would rather collaborate with more people and proceed with development without limiting ourselves to particular areas. In this way, we will be able to find answers to our challenges as to where we can build a business and whether it can grow sustainably. Currently, Triporous is not produced in the same quantities as ordinary activated carbons, but if it becomes more widespread and can be produced at a similar level to conventional activated carbons, the cost will drop dramatically, opening up applications where cost used to be a bottleneck even though the performance was good from the user's point of view. We expect the Triporous business to grow even further. Tabata: We are often asked by other companies why Sony is using rice husks to make carbon. Sony is famous for offering cool, cutting-edge products, but now it uses low-tech rice husks to clean up the environment, and this gap seems to be earning a good reputation among people. We believe that Triporous can also contribute to the Sustainable Development Goals (SDGs) in Sony's unique way.
Purifying Water and Air ──What existing things do you imagine will be replaced by Triporous if the manufacturing cost drops sharply and its use spreads widely? Koike: Currently, millions of tons of coconut-shell and wood-based activated carbons are used. So, at the beginning of the Triporous project, we thought Triporous could be used as a replacement for them. But since Triporous is a new material, I think it's going to create new applications. I believe it will develop in a different way, rather than simply replacing something. For example, in water purification, activated carbon is used to clean water, but since Triporous has three different sizes of pores (large, medium, and small), it can adsorb substances more quickly. In that sense, Triporous can be used as a replacement when water needs to be purified during a short contact time. But this is a special case. I think there is little chance of Triporous simply replacing commonly used items. Yamanoi: I hope that "Triporous" will become a common word in our daily lives. "Use Triporous when drinking water." "Clean the air in the room with Triporous." "Wash yourself with Triporous" and "Go out in Triporous clothes." If we could hear such conversations here and there, I think it would mean we had achieved great success. We'd like to create a whole new world, rather than aiming to replace existing things. Koike: In terms of purifying water and air, emerging countries might come to mind. If water and air can be purified using Triporous in places where development is still progressing and purification solutions are yet to be provided, it would be wonderful. Tabata: If you calculate the quantity of rice husks generated worldwide, there are more than 100 million tons per year. Some are used for power generation, but many are discarded. If a significant portion of such waste can be transformed into Triporous, we will be able to deliver various kinds of value to people who are suffering around the world.
There are so many ways to utilize Triporous, and we are continuing to discuss ideas. Purifying water and air in regions where infrastructure is not available is what we want to pursue, and we also want to contribute to the community by realizing a circular economy. Yamanoi: Triporous alone cannot filter seawater, but it may be possible to reduce the load on the RO membrane (reverse osmosis membrane) and extend its life by attaching a Triporous filter as a prefilter to the RO membrane. In this way, there could be many situations in which Triporous achieves maximum performance when combined with other components. Koike: I anticipate that water and air purification will become part of our business, and I'd like to make every effort to realize it. But, of course, it will take time, so until then, we are going to stabilize our business in other areas, such as the Triporous FIBER and WASH brand businesses, and steadily increase the areas where we can make the best use of Triporous' unique pore structure. And if the water and air purification work we are jointly pursuing with other partners finally blooms as a business, it will be a very desirable scenario. As with Triporous, we'd like to continue to explore new ways to protect and enhance the value of technologies developed by the many people in Sony's R&D.
* Group students in pairs (by passing out tongue depressors or colored bracelets and having them locate their partner) and read the story; have students share their thoughts on questions asked (click here) before, during and after reading the story, plus narrative questions (click for pdf) * Things to share after reading the book, singing and chanting the song: a) Tell me a word that rhymes with ________? (repeat this exercise with different words from the song – Mary; little; lamb; white; day; snow; play) b) Say two words (one word from the song and a rhyming or non-rhyming word) and ask them if they rhyme ex. Mary…David? Mary…Larry? c) I am going to say a word from the song and you tell me what letter it begins with (select a word from the song) d) Listen to the word I say and let's count the syllables (clap, snap, pat or stomp syllables with the class) e) I am going to say a sentence from the story / song and you fill in the word that is missing Example: Mary had a little _____________; Its fleece was white as _______________; etc. continue with the other phrases from the song. f) I am going to say part of a sentence and you finish the sentence. Example: (teacher) Mary had a (students) little lamb (teacher) Its fleece was (students) white as snow (etc.) g) Once they are able to accomplish the task by phrases you can now do it by word. (T = Teacher and S = Students) Example: (T) Mary (S) had (T) a (S) little (T) lamb (S) little (T) lamb (continue with song) * Complete a Venn diagram and compare a lamb to another living animal, with similar characteristics in the middle (click for pdf) * Complete a story map: title, setting, characters, problem, and solution. * Ask students what is white and create a map of their answers. * Use this song to practice echo reading, choral reading, buddy reading, and work on phonological awareness activities (click for example) * Synonyms. Chant the song and substitute words to expand vocabulary. Example: (synonyms) little = small, tiny, petite, etc.
big, large, huge, etc. * Have students sing the melody by using the syllable Baa. Example: Baa baa baa baa baa-baa-baa baa-baa-baa etc. (You can change the initial letter to N, M, C, T or any other letter. You can select the first letter of each of their names.) * Create a different ending to the story, or talk about what the little lamb might be doing at school or where else she can go. * As a class, create silly alliteration sentences. Read them, chant them, whisper them, say them in a normal voice and in a loud voice. Example: Little lambs love licking lollipops. Little lambs love to leap and laugh. * Use this song to practice echo reading, choral reading, buddy reading, and work on phonological awareness. (see video example) * Letter recognition for (M)ary and (L)amb. List the words (students' names) that have this letter and sound in the beginning. * Write the nursery rhyme on a chart tablet and identify upper case, lower case, commas, and periods within the song. * Using the pocket chart, cover a word with a red cover sheet. Follow the words with your finger, and ask the kids to clap while they read the word under the red card. They can also whisper the red word. * Make multiple copies of the pdf, color them the seven colors mentioned, pass out lambs of various colors, and take turns having children call out their color while the class sings, "Mary had a little lamb, little lamb, little lamb. Mary had a little lamb, its fleece was _______ as _______." (black as night; green as grass; red as an apple; orange as an orange; blue as the sky; brown as dirt; yellow as a banana) (click for pdf) * Present a stuffed lamb / sheep animal. Call on students to listen and place the lamb in various positions in the room. (For example, on the chalkboard ledge, under the chair, beside the computer, moving through the door, moving around the table, etc.) * Discuss the process of making wool.
1) One haircut per year, usually in the summer; 2) sort the wool by color and condition (quality); 3) wash all the germs from the wool; 4) once the wool dries, comb the wool and remove tangles; 5) weave fibers to make yarn; 6) use yarn to create fabric; 7) fabric is then used to make coats, socks, carpets, jackets, pants and much more. Bring a roll of wool yarn and a piece of wool fabric…WalMart! * Introduce wool and have students touch it, walk on it barefoot, and touch their neighbor's arm to experience a static electricity shock. * Let students witness the results of static by placing balloons near their neighbors' hair to see the hair react to static. Place cheerios on the floor and let children use balloons to pick up cheerios with static…the balloon does not touch the cheerios. * Study the difference between warm and cold. Using a wool sock, have the children test to see if they can feel the cold when they hold an ice cube with the sock over their hand and without. You can also give them a wool sock, a cotton sock, and a polyester sock and see which sock is warmer. * Compare and list other animals that are “white as snow”. Play a matching / memory game. (click for pdf) * Pass around a piece of wool and cotton and compare texture (use cotton-balls to cover the lamb). (click for pdf) Social / Emotional * Act out, recite, chant, and sing the song using stick puppets, masks or by assigning roles. (click for pdf) * Discussion questions can be talked about in a group or with partners as conversation practice. Topic A: What would make you laugh at school? Record their answers. Topic B: Discuss the friendship between Mary and her lamb. Relate the discussion to how the children feel about their pets and/or friends. Talk about “The Buddy System”: being paired with someone to help them or protect them. Topic C: Discuss which points in the story could be happy or sad. * Was the lamb sad that Mary would be gone all day; is that why he followed her? * Would the lamb feel lonely by himself all day?
* Was Mary happy to go to school with her friends? Topic D: Wool is warm. “What makes you feel warm?” A sweater, hot chocolate, a blanket, a jacket, etc. Topic E: Shearing sheep is like getting a haircut. Ask the children if they ever get haircuts. Does it hurt? Did they cry? * Discuss “Rules” at school / home / etc. What rules exist at your house? Have the class make a rule for the day and change the rule each day for a week. * Time the children with a stopwatch or by counting while students run 25 yards or around the playground. * How long did it take for Mary and her little lamb to walk to school that day? Would it have been faster by car or bus? How do you get to school? How long does it take? Ask them to report. Create a graph with the answers. * Create a timeline of the events in the story. Use the picture clips from the stick puppets. * Use this printout to place the sheep in order (1-5) on their way to eat hay, or glue the assigned number of cotton balls on each lamb. (click for pdf) * Have the children estimate and then count the number of cotton-balls that will cover the lamb. Have the children select a specific number of cotton balls or pompoms. * Have children practice picking up cotton balls with chopsticks. (Child-sized chopsticks can be found in any Asian market, but regular size will work as well.) Kids learn to use chopsticks quickly and it becomes a writing-readiness skill. * Group the cotton balls and have the children discover which group is more/less, big/little. Incorporate more vocabulary by using the words: pile, heap, stack, mound or mass. Extension: Use colored pompoms, then sort and count by color. Physical / Outdoor * Sing or chant “Mary had a little lamb” as a class, forming a circle, holding hands, walking in one direction and changing directions each verse.
* Place masking tape in a circle or a straight line and encourage children to put one foot in front of the other and balance as they walk on the tape lines, repeating the rhyme / song, the alliteration above, or your own alliteration created by the class. Change movement: criss-cross over the tape. You can also have them in three lines, assign each group a different color, and have the first three go together while singing “Mary had a Little Lamb” and walking on their assigned colored line. (click for example: Illustration) * Play “Follow the Leader” to reinforce how the lamb followed Mary everywhere she went. You can create various small groups to allow everyone to be a leader. * Test their knowledge by having them squat as you give statements that are wrong. When they hear the correct statement, they jump! (sample questions) Did Mary have a cow? No. Did Mary have a horse? No. Did Mary have a lamb? YES! (kids jump) Did it follow Mary to the store? No. Did it follow her to church? No. Did it follow her to school? Yes! (kids jump) Did it make the children sad? No. Did it make the children mad? No. Did it make the children laugh? Yes! (kids jump) Continue with different questions on where it followed her and what it made the children do. A great way to promote story comprehension.
The Cray Series of Supercomputers. A detailed discussion of the most significant supercomputer line of the late 20th century. In 1976, the magazine Computerworld called the Cray–1 “the world’s most expensive love seat”. In order to understand the Cray line of computers, we must look at the personal history of Seymour Cray, the “father of the supercomputer”. Cray began work at Control Data Corporation soon after its founding in 1960 and remained there until 1972. He designed several computers, including the CDC 1604, CDC 6600, and CDC 7600. The CDC 1604 was intended simply to be a good computer; all computers beginning with the CDC 6600 were designed for speed. The CDC 6600 is often called the first RISC (Reduced Instruction Set Computer), due to the simplicity of its instruction set. The reason for its simplicity was the desire for speed. Cray also put a lot of effort into matching the memory and I/O speed to the CPU speed. As he later noted, “Anyone can build a fast CPU. The trick is to build a fast system.” The CDC 6600 led to the more successful CDC 7600. Full disclosure: I have programmed on the CDC 6600, CDC 7600, and Cray–1; I found each to be excellent. The CDC 8600 was to be a follow–on to the CDC 7600. While an excellent design, it proved too complex to manufacture successfully, and was abandoned. Cray left Control Data Corporation in 1972 to found Cray Research. In 1989, Cray left the company in order to found Cray Computer, Inc. His reason for leaving was that he wanted to spend more time on research, rather than just churning out the very profitable computers that his previous company was manufacturing. This led to an interesting name game: Cray Research, Inc. produced a large number of commercial computers, while Cray Computer, Inc. mostly invested in research on future machines. The Cray–3, a 16–processor system, was announced in 1993 but never delivered.
The Cray–4, a smaller version of the Cray–3 with a 1 GHz clock, was ended when the Cray Computer Corporation went bankrupt in 1995. Seymour Cray died on October 5, 1996. In 1993, Cray Research moved away from pure vector processors, producing its first massively parallel processing (MPP) system, the Cray T3D™. Cray Research merged with SGI (Silicon Graphics, Inc.) in February 1996. It was spun off as a separate business unit in August 1999. In March 2000, Cray Research was merged with Tera Computer Company to form Cray, Inc. Cray–1: The Physical Machine. Here is a schematic of the Cray–1. At the base, it is more than 8 feet in diameter. We may think this a large computer, but for its time the Cray–1 was surprisingly small. Processor Specifications of the Cray–1. Source: Cray–1 Computer System Hardware Reference Publication 2240004, Revision C, November 1977. Memory Specifications of the Cray–1. Note that the memory size, without error correction, would be 8 MB. Each word has 64 data bits (8 bytes) as well as 8 bits for error correction. Other material indicates that the memory was low–order interleaved. Source: Cray–1 Computer System Hardware Reference Publication 2240004, Revision C, November 1977. The Cray–1 Vector Registers. It is important to understand the structure and function of the vector registers. Each of the vector registers is best viewed as a collection of sixty–four registers, each holding 64 bits. A vector register thus held 4,096 bits. Vector registers are loaded from primary memory and store results back to primary memory. One common use would be to load a vector register from sixty–four consecutive memory words. Nonconsecutive words could be handled if they appeared in a regular pattern, such as every other word or every fourth word, etc. One might consider each register as an array, but that does not reflect its use.
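The regular-stride loads described above can be sketched in a few lines of Python. This is an illustrative model of our own, not Cray assembly: a "vector register" is filled from 64 memory words starting at a base address and separated by a constant stride.

```python
# Illustrative model of a Cray-1 style vector-register load.
# A vector register holds 64 elements; a load takes a base address
# and a constant stride (1 = consecutive words, 2 = every other
# word, 4 = every fourth word). The names here are our own.

VLEN = 64  # elements per vector register on the Cray-1

def vector_load(memory, base, stride=1):
    """Gather VLEN words from memory at base, base+stride, base+2*stride, ..."""
    return [memory[base + i * stride] for i in range(VLEN)]

memory = list(range(1024))          # toy main memory
v0 = vector_load(memory, 0)         # 64 consecutive words: 0..63
v1 = vector_load(memory, 0, 2)      # every other word: 0, 2, 4, ...
print(v0[:4], v1[:4])               # [0, 1, 2, 3] [0, 2, 4, 6]
```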
The Cray–1 Vector and Scalar Registers. One of the key design features of the Cray is the placement of a large number of registers between the memory and the CPU units. These function much as cache memory does. Note that the scalar and address registers also have auxiliary registers. Cache Memory on the Cray. Here we pay special attention to the Scalar Registers and the Address Registers. All of the registers, including the vector registers, are implemented in static RAM with six–nanosecond access time. The main memory has 50–nanosecond access time. The Cray–1 does not have explicit cache memory, but note the two pairs of register sets: the eight scalar registers, backed up by the sixty–four temporary registers, and the eight address registers, backed up by the sixty–four auxiliary address registers. In some sense, we can say that 1. the T registers function as a cache for the S registers, and 2. the B registers function as a cache for the A registers. “Without the register storage provided by the B, T, and V registers, the CRAY–1’s [memory] bandwidth of only 80 million words per second would be a serious impediment to performance.” [R. M. Russell, 1978] Each word is 8 bytes; 80 million words per second is 640 million bytes per second, or one byte every 1.6 nanoseconds. Evolution of the Cray–1. In this course, the main significance of the CDC 6600 and CDC 7600 computers lies in their influence on the design of the Cray–1 and other computers in the series. Remember that Seymour Cray was the principal designer of all three computers. Here is a comparison of the CDC 7600 and the Cray–1.

Item                      CDC 7600             Cray–1
Circuit elements          Discrete components  Integrated circuitry
Memory                    Magnetic core        Semiconductor (50 ns)
Scalar (word) size        60 bits              64 bits (plus 8 ECC bits)
Vector registers          None                 Eight, each holding 64 scalars
Scalar registers          Eight: X0–X7         Eight: S0–S7
Scalar buffer registers   None                 Sixty–four: T0–T77 (octal numbering was used)
Address Registers | Eight: A0 – A7 | Eight: A0 – A7
Address Buffer Registers | None | Sixty–four: B0 – B77

Two main changes:
1. Addition of the eight vector registers.
2. Addition of fast buffer registers for the A and S registers.

Chaining in the Cray–1
Here is how the technique is described in the 1978 article. “Through a technique called ‘chaining’, the CRAY–1 vector functional units, in combination with scalar and vector registers, generate interim results and use them again immediately without additional memory references, which slow down the computational process in other contemporary computer systems.” This is exactly the technique that we called “forwarding” when we discussed the pipelined datapaths. Consider the following example using the vector multiply and vector addition operators.
MULTV V1, V2, V3 // V1[K] = V2[K] · V3[K]
ADDV V4, V1, V5 // V4[K] = V1[K] + V5[K]
Without chaining (forwarding), the vector multiplication operation would have to finish before the vector addition could begin. Chaining allows a vector operation to start as soon as the individual elements of its vector source become available. The only restriction is that the operations being chained must belong to distinct functional units, as each functional unit can do only one thing at a time.

Vector Startup Times
Vector processing involves two basic steps: startup of the vector unit and pipelined operation. As in other pipelined designs, the maximum rate at which the vector unit executes instructions is called the “Initiation Rate”: the rate at which new vector operations are initiated when the vector unit is running at “full speed”. The initiation rate is often expressed as a time, so that a vector unit that produces 100 million operations per second would have an initiation rate of 10 nanoseconds. I know: rates are not times. This is just the common terminology. The time to process a vector depends on the length of the vector.
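A toy timing model makes the benefit of chaining concrete. The pipeline depths below are illustrative assumptions, not actual Cray–1 latencies:

```python
# Toy timing model for a chained MULTV/ADDV pair (times in clock ticks).
N = 64          # vector length
MUL_LAT = 7     # depth of the multiply pipeline (assumed)
ADD_LAT = 6     # depth of the add pipeline (assumed)

# Without chaining: the add cannot start until all 64 products are written.
unchained = (MUL_LAT + N) + (ADD_LAT + N)

# With chaining: each product is forwarded to the adder as it emerges,
# so the two pipeline depths overlap with a single pass over the vector.
chained = MUL_LAT + ADD_LAT + N

assert chained < unchained
```

With these assumed depths, the chained pair finishes in 77 ticks instead of 141, because the adder consumes each product as it emerges rather than waiting for the full result vector. The model also suggests why the chained operations must use distinct functional units: each unit can accept only one new operand per tick.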
For a vector with length N (containing N elements) we have
T(N) = Start–Up_Time + N · Initiation_Rate.
The time per result is then
T(N) / N = (Start–Up_Time) / N + Initiation_Rate.
For short vectors (small values of N), this time may exceed the per–result time of the scalar execution unit. An important measure of the balance of the design is the vector size at which the vector unit can process faster than the scalar unit. For the Cray–1, this crossover size was between 2 and 4; that is, 2 ≤ N ≤ 4. For N > 4, the vector unit was always faster.

Experimental Results: Scalar/Vector Timing
Here are some comparative data for mathematical operations (log, cosine, square root, and exponential), showing the per–result times as a function of vector length. Note the low crossover point; for vectors larger than N = 5, the vector unit is much faster. The time cost is given in clock ticks, not nanoseconds. See Russell, 1978.

The Cray X–MP and Cray Y–MP
The fundamental tension at Cray Research, Inc. was between Seymour Cray’s desire to develop new and more powerful computers and the need to keep the cash flow going. Seymour Cray realized the need for a cash flow at the start. As a result, he decided not to pursue his ideas based on the CDC 8600 design and chose to develop a less aggressive machine. The result was the Cray–1, which was still a remarkable machine. With its cash flow assured, the company then organized its efforts into two lines of work.
1. Research and development on the CDC 8600 follow–on, to be called the Cray–2.
2. Production of a line of computers that were derivatives of the Cray–1 with improved technologies. These were called the X–MP, Y–MP, etc.
The X–MP was introduced in 1982. It was a dual–processor computer with a 9.5 nanosecond (105 MHz) clock and 16 to 128 megawords of static RAM main memory. A four–processor model was introduced in 1984 with an 8.5 nanosecond clock. The Y–MP was introduced in 1988, with up to eight processors that used VLSI chips.
It had a 32–bit address space, with up to 64 megawords of static RAM main memory. The Y–MP M90, introduced in 1992, was a large–memory variant of the Y–MP that replaced the static RAM memory with up to 4 gigawords of DRAM.

While his assistant, Steve Chen, oversaw the production of the commercially successful X–MP and Y–MP series, Seymour Cray pursued his development of the Cray–2, a design based on the CDC 8600, which Cray had started while at the Control Data Corporation. The original intent was to build the VLSI chips from gallium arsenide (GaAs), which would allow much faster circuitry. The technology for manufacturing GaAs chips was not then mature enough to be useful for circuit elements in a large computer. The Cray–2 was a four–processor computer that had 64 to 512 megawords of 128–way interleaved DRAM memory. The computer was built very small in order to be very fast; as a result, the circuit boards were built as very compact stacked cards. Due to the card density, it was not possible to use air cooling. The entire system was immersed in a tank of Fluorinert™, an inert liquid originally intended to be a blood substitute. When introduced in 1985, the Cray–2 was not significantly faster than the X–MP. It sold only thirty copies, all to customers needing its large main memory capacity.

The Cray–3 and the End of an Era
After the Cray–2, Seymour Cray began another very aggressive design: the Cray–3. This was to be a very small computer that fit into a cube one foot on a side. Such a design would require retention of the Fluorinert cooling system. It would also be very difficult to manufacture, as it would require robotic assembly and precision welding. It would also have been very difficult to test, as there was no direct access to the internal parts of the machine. The Cray–3 had a 2 nanosecond cycle time (500 MHz). A single–processor machine would have a performance of 948 megaflops; the 16–processor model would have operated at 15.2 gigaflops.
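As a quick arithmetic check, the per–processor and aggregate figures quoted above are consistent (nothing here beyond the numbers in the text):

```python
# Check the Cray-3 performance scaling: 16 processors at 948 MFLOPS each.
single_mflops = 948
processors = 16
total_gflops = single_mflops * processors / 1000.0   # 15.168 GFLOPS
assert round(total_gflops, 1) == 15.2                # reported as 15.2 gigaflops
```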
The 16–unit model was never built. The Cray–3 was delivered in 1993. In 1994, Cray Research, Inc. released the T90, with a 2.2 nanosecond clock time and eight times the performance of the Cray–3. In the end, the development of traditional supercomputers ran into several problems.
1. The end of the cold war reduced the pressing need for massive computing facilities.
2. The rise of microprocessor technology allowed much faster and cheaper processors.
3. The rise of VLSI technology made multiple–processor systems more feasible.

Supercomputers vs. Multiprocessor Clusters
“If you were plowing a field, which would you rather use: two strong oxen or 1024 chickens?” Although Seymour Cray said it more colorfully, there were many objections to the transition from the traditional vector supercomputer (with a few processors) to the massively parallel computing that replaced it. This slide quotes from an overview article written in 1984. It assessed the commercial viability of traditional vector processors and multiprocessor systems. The key issue in assessing the commercial viability of a multiple–processor system is the speedup factor: how much faster is a system with N processors than a uniprocessor? Here are two opinions from the 1984 IEEE tutorial on supercomputers.
“The speedup factor of using an n–processor system over a uniprocessor system has been theoretically estimated to be within the range (log2 n, n/log2 n). For example, the speedup range is less than 6.9 for n = 16. Most of today’s commercial multiprocessors have only 2 to 4 processors in a system.”
“By the late 1980s, we may expect systems of 8–16 processors. Unless the technology changes drastically, we will not anticipate massive multiprocessor systems until the 90s.”
As we shall see soon, technology has changed drastically.

The Cray XT–5
Here is a picture of the Cray XT–5, one of the later and faster products from Cray, Inc. It is an MPP (Massively Parallel Processor) system, launched in November 2007.
This is built from a number of Quad–Core AMD Opteron™ processors. The Operating System is a variant of Linux.

References
The History of Computing Project, http://www.thocp.net/hardware/cray_1.htm
Cray, Inc., http://www.cray.com/
R. M. Russell, “The Cray–1 Computer System”, Communications of the ACM, Vol. 21, No. 1, January 1978.
K. Hwang, “Evolution of Modern Supercomputers”, the introduction to Chapter 1 in the IEEE Tutorial Supercomputers: Design and Applications, 1984. ISBN 0–8186–0581–2.
The topic of the different paracord uses is an interesting one, because it focuses attention on the fact that survival thinking can be both short term and long term. For example, a group of friends who are planning to go camping in the woods for a weekend will take a more short-term view of survival planning. On the other hand, a family or community which is preparing for the different possible nightmare scenarios (whether caused by nature or by humans) will be taking a much more long-term view of survival. The good thing about this piece of equipment is that it applies to both short-term and long-term survival planning. One of the key benefits of paracord is the fact that it is so easy to take along with you. After all, a person’s ability to survive will depend heavily on the kind of equipment he has with him. Even a person who possesses much survival knowledge will struggle if he has barely any tools or equipment with him. The great thing about paracord is that it comes in form factors, such as bracelets or keychain lanyards, which a person can have on him all the time. This means that even if you get lost in the woods without your pack, or you find yourself in some other tricky scenario, you will always have your paracord.

The key attributes of paracord
To better understand how paracord can be used in survival situations, it can help to learn more about how this material came about and its composition. As its name suggests, this cord was originally used for military purposes. More specifically, it was used in World War II in the parachutes of soldiers. This created a number of requirements for this material. First of all, it needed to be strong. Cord that couldn’t handle the weight of the soldier, his weapon, and all the other equipment he was being deployed with would have been completely unacceptable for use in parachutes.
Translated into more concrete numbers, this meant that the type 3 version of the cord, which was and is the most common version, had to be strong enough to carry 550 pounds without breaking. That is a lot of weight, especially given the fairly thin diameter of the cord. This 550-pound breaking point is also why paracord is sometimes called 550 cord. The cord is able to achieve this kind of strength because of its design. On the inside of the cord, there are generally 7 to 9 strands of nylon. Each strand is itself made up of 2 further strands which have been twisted together, so there are actually 14 to 18 strands in all on the inside of the cord. Then, the outside cover of the cord is made of 32 to 36 strands of nylon which have been braided to give it a more sheath-like appearance. This nylon construction is what gives the cord its strength, but it also allows the cord some flexibility. Flexibility was important for the parachutes during the war because the slight stretching it allowed would absorb some of the shock when the parachute opened to catch the soldier. In addition to the strength and flexibility of paracord, it is also important to note that the material is very light. For instance, a type 3 version of the cord which is 225 feet (69 meters) long will weigh, at most, 1 pound. This was very important because it helped to lighten the load that soldiers were expected to carry, and it allowed them to save their paracord for use in other situations. Today, this light weight is what allows paracord crafts, such as bracelets, belts and lanyards, to be worn comfortably all the time. In addition, current versions of paracord come in various colors, so the paracord you wear can look rather good.

Using paracord to secure gear
Because of the versatility of paracord, there are a huge number of possible uses in an outdoor or survival type of situation.
Some of these uses allow a person to set up, transport and secure important gear and equipment. In an outdoors situation, this can be useful for a number of reasons. First of all, equipment may go missing or end up broken, which will force you to come up with some sort of workable solution while you’re far away from the nearest shop. Second, securing your gear for transport is important because you don’t want to end up losing things unnecessarily while you’re in the outdoors. When it comes to how to use paracord, one of the things this material is good for is replacing lost or broken parts of gear or clothing. For example, during the transport of your tent, it is possible for the lines used to secure the tent to go missing or become damaged. This can also happen if the tent lines are exposed to high wind or other extreme conditions. If you have paracord with you, this will be less of a problem because the cord has the strength and flexibility to secure a tent to the ground. Paracord can also be used for other more portable equipment such as your clothes and bags. The cord is strong enough to be used as a replacement set of shoe laces. It can also be used to replace a damaged belt or bra. In the case of a bra, a cord of the proper length can simply be tied in order to replace a damaged strap. In the case of a belt, there are two options. You can either use a length of paracord as a makeshift belt by running it through the belt loops and tying the ends together, or you can use two lengths of cord as makeshift suspenders. Of the two, the suspenders will usually be easier to manage, since there’s no need to untie and tie the cord during regular use. It may also be necessary to transfer large piles of equipment to a lower or higher location, such as up a tree or down a steep slope. Paracord is particularly good for these kinds of heavy loads because of its 550-pound breaking point.
If the load is very heavy, it can be possible to increase the cord strength further by twisting or braiding together multiple sets of paracord. In the case of having to move equipment up, paracord can be used as part of a makeshift pulley system to maximize lifting power. As you can see, there are countless uses for paracord. What is more, thanks to the many types of knots, it can be transformed into a wide range of items that survivalists need to have on them at all times. To make things a bit more interesting, we put together some of the most interesting projects, and you can see how to do them in our article on paracord projects.

Using paracord to increase safety
Paracord doesn’t just improve convenience and carrying capacity in the outdoors. It can also be used in various ways to increase your safety as an individual or the safety of your group. One of the more straightforward uses is pulling a bear bag up into a tree, away from the reach of most animals. This allows you to transfer food to a location separate from where you or your group are camped out. Animals won’t be drawn by the scent of food into your tents, bags or supplies, and will instead end up somewhere further away. The bag will also be out of reach, so that your food will remain secure. Another thing that paracord can be used for is making sure that the various members of your party stay together. Since the cord is very light, you can carry enough of it so that you can tie people to each other with the cord, in a way that doesn’t impede movement or increase fatigue. This can be particularly useful if you’re forced to walk through the night and not all of you have light sources. The cord can be sufficient to keep all of you walking in line, within reach of each other. This can also be useful if you are moving through an area where avalanches are a possibility, or where ice appears to be thin.
The cord could allow you to find other members in your group more quickly, in the event someone gets buried in snow, or in the event someone falls through the ice. Paracord can also help with safety by allowing you to secure an area from animals, or even human intruders. Various tripwires can be set up using the cord, where the lines are attached to things that make noise, such as pieces of metal, bells or other noise makers. This will allow you to learn more quickly if something or someone is approaching your camp, so that you can prepare. With more advanced knowledge, it will also be possible to use the cord to set up lines that are intended to impede movement, or even to set traps for approaching animals. In the event of a survival situation where you find yourself with barely any gear in the outdoors, the paracord can be used to create a makeshift hammock so that you can sleep above the forest or jungle floor. This will entail knotting and tying the cord into what is essentially a net, which can then be attached to a number of trees. In all likelihood, this will not be the most comfortable place to sleep, but the cord strength will help to ensure that you do not fall to the ground below. This is particularly important because all sorts of insects, critters and predators make their way along the jungle floor at night.

Using paracord for medical emergencies
A medical emergency in the comfort of your own home is a problem. However, a medical emergency out in the wild is a much more serious matter. It may not be possible to contact emergency responders. You might have to find a way to get back to civilization while nursing a serious injury. In these kinds of situations, paracord can also prove to be exceedingly useful. One use of paracord is in the construction of a splint. The wilderness can be a dangerous place, even for experienced outdoors types, and falling injuries resulting in broken bones are not altogether uncommon.
A makeshift splint can be constructed using tree branches, clothing for padding, and paracord to lash everything together into a stable structure. Done properly, the splint can help to avoid further injury and allow for some movement, so that more help can be found. In certain cases, an individual may fall sick or be seriously wounded, so that walking is no longer an option for that person. In order to transport him or her at greater speed, it is possible to create a makeshift stretcher. The poles can be made out of longer tree branches, ski poles, or any similarly strong equipment. The main stretcher material can be made out of tarpaulin or even some types of clothing, such as jackets, which can then be supported and tied together using paracord. In the unfortunate event that no such clothing or material is available, it can be possible to create the main stretcher material out of the paracord itself. The cord will simply have to be attached to the poles, and then run from one to another, similar to a shoelace pattern. While it may not be the most comfortable surface to lie on, the priorities in this case are the prevention of further injuries and speedy transport. It’s also important to keep in mind that the paracord can be cut open in order to access the thinner, finer strands inside. These can then be used in additional ways. For instance, if a person has been wounded and the injury needs to be sewn up, it can be possible to use the interior nylon strands as makeshift suturing material. Simply open up the cord sheath to access the interior threads, and then untwist these so that you can use just a single thread. This will, however, require access to a needle or something very similar, so it will be a good idea to keep a needle or two in your emergency supplies.

Using paracord to find food
Some survival situations last only for a day or even less.
However, there are also situations where people need to survive in adverse conditions for much longer. It will soon become clear that a way has to be found to come up with food; otherwise, the situation will very quickly deteriorate. In a scenario like this one, paracord could also be very useful. If you or your party are located close to a body of water, it may be possible to catch fish. In this case, the paracord will be cut open in order to access the thinner internal threads. These can then be used as makeshift fishing lines. It can then be possible to carve fishing hooks from wood, create lures using cloth or shiny metal, and use a branch as a fishing pole or as a stake to which the line is attached. Away from the water, it is also possible to create basic snares and other traps in order to catch small game. Once you have identified where small animals are likely to pass, there are various ways of preparing traps. A simple snare trap can be created using an internal strand of the paracord, shaped into the form of a noose which is then propped up by sticks and anchored to a secure spot. With some time and careful choice of location, this could allow someone to capture and eat small animals. If you find yourself in a location which appears to have bird nests or edible fruit up in the trees, paracord can be useful in various ways. It may be possible to use the cord attached to a makeshift hook to either knock a nest or fruit out of a tree, or drag it off its position down to the ground. A more risky proposition involves creating a makeshift rope ladder which can then be used to climb up to where the nest or fruit is waiting. We also have a very helpful article about living in the wilderness and coping with everything this implies. So, if you’re interested in learning more, check out our article on surviving in the wilderness.
Keep in mind, though, that while paracord is designed to carry heavy loads, it may still be possible for it to give way, especially in the case of inferior versions produced for the civilian market. So it is important to be very cautious before committing a particularly heavy load, or the weight of a human being, solely to paracord. In certain cases, it may be better to braid several cords together, or to make use of additional means of support. While it is important to find new food sources, it is also vital to avoid injuries, especially serious ones.

There are many more options
When it comes to the various uses of paracord, these examples are only the tip of the iceberg. There are dozens of other uses, which have a bearing on a person’s convenience, comfort, safety, security, nourishment and other factors while out in the wild. Given the fact that paracord is so light and is available in many convenient form factors, there is absolutely no reason someone who is interested in survival should be without paracord on his or her person. You could wear a paracord bracelet or use a paracord lanyard or wallet. You never know when that cord could get you out of a jam, or help save someone’s life.
Iron ore is the core component of steel, which is used in many forms of modern construction. Steel can be recycled (on average it takes 17 years for a piece of steel to be reused), so demand is proportionately much higher in countries which are industrialising. Iron ore is the raw material used to make pig iron, which is one of the main raw materials used to make steel; 98% of the mined iron ore is used to make steel. Indeed, it has been argued that iron ore is "more integral to the global economy than any other commodity, except perhaps oil".

In another study on the injection of iron ore fines, it was found that the Si content of the hot metal was decreased. If sinter fines containing CaO were used, the effect was greater. When a mixture of iron ore and coal fines, or a water slurry containing iron ore, was injected, a decreased silicon content of the hot metal could be noticed.

Sinter plants that are located in a steel plant recycle iron ore fines from the raw material storage and handling operations and from waste iron oxides from steelmaking operations.

These micro fines of high-grade concentrates had to be agglomerated for use in the blast furnace, and this led to the development of the pelletizing process. These agglomerates, in turn, sharply improved blast furnace performance and led to a major shift in blast furnace burdening.
Today most of the steel manufacturing companies in the world adopt recycling. Reinforced steels such as TMT bars can be produced both from recycled scrap and from iron ore. By adopting recycling, the exploitation of existing iron ore resources is effectively reduced, and those resources will last longer for future generations.

Iron ore pelletizing systems: iron ore fines are agglomerated and then indurated using a furnace to create iron ore pellets. These are typically fed to a blast furnace or DRI plant as part of the process to make steel.

The main feed into a sinter plant is base mix, which consists of iron ore fines, coke fines and flux (limestone) fines. In addition to base mix, coke fines, flux fines, sinter fines, iron dust (collected from the plant de-dusting system and ESP) and plant waste are mixed in proportion (by weight) in a rotary drum, often called a mixing and nodulizing drum.

The CDE Mining division of United Kingdom-based CDE Global has announced a new project with Australia-based Arrium Mining to process and convert nearly 17 million short tons of low-grade iron ore fines, currently stockpiled as tailings in waste dumps, into a saleable product. The investment involves the provision of two new processing plants in Australia.

The global iron ore market's junk problem just got worse: China's push to clamp down on pollution is giving extra impetus to the use of scrap in steel-making, strengthening a long-term trend ...
The Metals & Mining division of Cangem was carved out of the parent structure that started in Canada with the export of scrap metal, mostly HMS 80:20. It has since grown into one of the largest exporters of scrap metal from North America and South America, and a trader, exporter and supplier of iron ore and coal from all over the world.

Sintering is a thermal agglomeration process that is applied to a mixture of iron ore fines, recycled ironmaking products, fluxes, slag-forming agents and solid fuel (coke). Sinter plants agglomerate iron ore fines (dust) with other fine materials at high temperature to create a product that can be used in a blast furnace. The final product, a sinter, is a small, irregular nodule of iron mixed with small amounts of other minerals.

China's steelmakers declined to renew yearly iron ore supply contracts and are instead buying on the spot market. India, which supplies about 20 percent of China's iron ore imports, has reportedly increased the duty on iron ore fines from 0 percent to 5 percent and raised the duty for lump ore from 5 percent to 10 percent.

What are iron ore lumps and iron ore fines, and what is the difference? Iron ore lumps: size 10-40 mm. Iron ore fines: granular size of up to 10 mm for up to 90% of the cargo. While lumps are crushed to 5-20 mm size in a crusher, normally 30% …

Agreement to Develop Iron Ore Fines Recycling Plants Using Direct Reduction: Tenova HYL and Diproinduca Canada Ltd. have entered into a commercial alliance agreement for the development and commercialisation of the DRB (Direct Reduced Briquettes) technology for the recycling of iron ore fines in Direct Reduction (DR) plants.

The sintering experiments were performed using flue dust as pellets as a substitute … materials to recycling in iron and steel making operations. These processes involve thermal, …
If utilized economically, such iron oxide fines have the potential to add significant benefits to the iron industry.

In the case of iron ore, the concentration tailings are fine materials, containing mostly silica together with some fines of iron oxides, alumina and other minor minerals. This constitution makes those tailings potential aggregate materials for mortar and concrete in the civil construction industry.

Iron Ore Processing for the Blast Furnace (courtesy of the National Steel Pellet Company): the following describes operations at the National Steel Pellet Company, an iron ore mining and processing facility located on the Mesabi Iron Range of Minnesota. Creating steel from low-grade iron ore requires a long process of mining, crushing, …

The sintering process helps in the utilization of iron ore fines (0-10 mm) generated during iron ore mining operations. It also helps in recycling all the iron, fuel and flux bearing waste materials in the steel plant, and it utilizes the by-product gases of the steel plant.

Iron Ore (fines) (Liquefaction): "fines" is a general term used to indicate the physical form of a mineral or similar cargo and, as the name suggests, such cargoes include a large proportion of small particles.

During the sintering process, iron ore fines, recycled iron-bearing materials (dusts and slags), fluxes (dolomite, limestone, etc.), and fossil fuels such as coke breeze and anthracite are thoroughly mixed and agglomerated into lump-like sinter at about 1300 °C, provided by the combustion of the fossil fuels.
Sintering is a process whereby iron ore fines are heated in a mixture of fluxes in order to agglomerate the particles into larger-sized pieces.

The goal of the Tenova HYL and Diproinduca alliance is to develop and commercialize the DRB (Direct Reduced Briquettes) technology, oriented to the recovery and recycling of iron ore fines in Direct Reduction (DR) plants. Low-value iron ore by-products in DR plants include the fines from the scrubbing systems of material handling, iron ore screening, and the sludge mainly produced in the …

Iron and Steel Manufacturing: Industry Description and Practices. Steel is manufactured by the chemical reduction of iron ore, using an integrated steel manufacturing process or a direct reduction process. In the conventional integrated steel manufacturing process, the iron from the blast furnace is converted to steel in a basic oxygen furnace …

Recycling of steel plant mill scale via the iron ore sintering plant: a small pile was prepared by layering the iron ore fines, coke breeze, limestone, dolomite, lime, return fines and mill scale on a weight basis on the floor …

No, iron ore is a non-renewable resource, meaning it will not grow back. Products made from iron ore can be recycled, depending on the product.

Recycling & the Future of Mining: the existing stock of materials in the urban environment is being recycled more and more, a practice known as "urban mining".
38% of iron input in the steel making process comes from scrap. ... The large iron ore miners don’t get tired of stressing that the ...
(The following can also be downloaded here in our brochure’s checklist format. Note: If printing this brochure on your home printer, please set it for borderless printing.) WHAT IS GROUP B STREP? Group B strep (GBS) is a type of bacteria that is naturally found in the digestive and reproductive tracts of both men and women. About 1 in 4 pregnant women “carry” or are “colonized” with GBS. Carrying GBS does not mean that you are unclean. Anyone can carry GBS. Unfortunately, babies can be infected by GBS before birth through several months of age due to their underdeveloped immune systems. Only a few babies who are exposed to GBS become infected, but GBS can cause babies to be miscarried, stillborn, or become very sick and sometimes even die after birth. GBS most commonly causes infection in the blood (sepsis), the fluid and lining of the brain (meningitis), and lungs (pneumonia). Some GBS survivors experience handicaps such as blindness, deafness, mental challenges, and/or cerebral palsy. Fortunately, most GBS infections that develop at birth can be prevented if women who have tested positive receive at least 4 hours of IV (through the vein) antibiotics just prior to delivery. HOW DO I KNOW IF I CARRY GBS? Although most women do not have any symptoms, GBS can cause vaginal burning/irritation and/or unusual discharge which may be mistaken for a yeast infection and treated incorrectly. (1) If you have “vaginitis” symptoms, see your care provider promptly for an exam and possible GBS testing. GBS can also cause bladder infections, with or without symptoms. Your provider should do a urine culture for GBS and other bacteria (this is not the standard prenatal urine “dipstick” check) at the first prenatal visit. GBS in your urine means that you may be heavily colonized which puts your baby at greater risk. 
(2) If your urine tests positive, your provider should consider you as "GBS colonized" for this pregnancy so that you receive IV antibiotics for GBS when labor starts or your water breaks. It is now the standard of care in the US and several other countries for all pregnant women to be routinely tested for GBS at 35-37 weeks during each pregnancy unless their urine already cultured positive in the current pregnancy. (Since levels of GBS can change, each pregnancy can be different.) Your provider will perform a swab test of both your vagina and rectum and receive the test results in 2-3 days. Inform your provider if you are using antibiotics and/or vaginal medications, which may cause false negative results. (3) Some hospitals will offer rapid, DNA-based tests which can be performed during labor or any time during pregnancy, with results in just a few hours. (2) These tests can help supplement your routine GBS testing because:
- Your GBS status can change by the time you go into labor
- Culture tests can show a false negative
- Your culture test results may not be available
HOW CAN GBS INFECT MY BABY? Carissa was born weighing 1 pound, 12 ounces because GBS caused her mother to go into preterm labor. GBS can infect your baby even before your water breaks. GBS infections before birth are called "prenatal-onset." GBS can cause preterm labor, causing your baby to be born too early. GBS infection can also cause your water to break prematurely without labor starting, causing your baby to lose a significant layer of protection. It is thought that babies are most often infected with GBS as they pass through the birth canal. GBS infections within the first week of life are called "early-onset." Babies can become infected with GBS by sources other than the mother. GBS infections after the first week of life are called "late-onset." Be aware that your womb and/or C-section wound can become infected by GBS.
HOW CAN I HELP PROTECT MY BABY?
Ask to have a urine culture for GBS and other bacteria done at your first prenatal visit. (4) If you have a significant level of GBS in your urine, your provider should prescribe oral antibiotics at the time of diagnosis. GBSI advocates a recheck ("test of cure") one month after treatment.
See your provider promptly for any symptoms of bladder (urinary tract) infection and/or vaginitis. (5) Be aware that bacteria can be passed between sexual partners, including through oral contact. (6)
Contact your provider immediately if you experience either: decreased or no fetal movement after your 20th week, or any unexplained fever.
Get tested at 35-37 weeks. If the test result is positive, you should receive IV antibiotics when labor starts or your water breaks. Get a copy of all culture test results and keep them with you! Plan ahead if you have short labors or live far from the hospital. The IV antibiotics you receive in labor generally take 4 hours to be optimally effective.
Ask about a late third-trimester penicillin shot as a possible safeguard. (7) (Note: this is not a widely accepted strategy.) Tell your provider if you are allergic to penicillin; there are IV antibiotic alternatives. (8)
Know that "alternative medicine" treatments such as garlic or tea tree oil have not been proven to prevent your baby from becoming infected. (8) Some are unsafe. Wren's mother followed an alternative regimen of acidophilus, echinacea, garlic capsules, vitamin C, grapefruit seed extract, and garlic suppositories to eradicate GBS from her body when pregnant with Wren. Wren was 7 pounds, 20.5 inches, and perfect at birth after a normal labor and delivery at home. He died 11 hours later from a group B strep infection in his lungs.
Avoid unnecessary, frequent, or forceful internal exams, which may push GBS closer to your baby. (9) (Knowing how far you are dilated does not accurately predict when your baby will be born.)
Vaginal or perineal ultrasounds are a less invasive option.(10) Discuss the benefits vs. risks of possible methods of induction with your provider well before your due date as not all providers ask before “stripping” (also known as “sweeping”) membranes. Ask your provider to not strip your membranes if you test positive for GBS. (Be aware that you may test negative, but be GBS positive later.) GBS can cross even intact membranes and procedures such as stripping membranes and using cervical ripening gel to induce labor may push bacteria closer to your baby.(11-13) If you are having a planned C-section, talk to your provider about the risks vs. benefits of starting IV antibiotics well before your incision. C-sections may not completely prevent GBS infection although the risk during a planned C-section is extremely low if performed before your labor starts and before your water breaks. Talk to your provider about whether or not to use internal fetal monitors and/or have your water broken before you have had IV antibiotics for at least 4 hours. …when my water breaks or I start labor? Call your care provider. Report any fever. Remind him or her of your GBS status. If you have already had a baby with GBS disease or have had GBS in your urine in this pregnancy, you should receive IV antibiotics regardless of this pregnancy’s GBS test results. Go to the hospital immediately if you should receive IV antibiotics. Have all test results with you. Be sure to tell the nurses that you need to start IV antibiotics for GBS. If you do not have a GBS test result, and your hospital does not offer a rapid GBS test, per the CDC guidelines you should be offered IV antibiotics based on the following risk factors: Your baby will be born before 37 weeks. Your water has been broken 18+ hours without delivering. (Even 12+ hours increases the risk.) You have a fever of 100.4 °F or higher during labor. 
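The risk factors listed above amount to a simple decision rule: when no GBS test result is available, IV antibiotics are offered if any one of them applies. A sketch of that rule follows, for illustration only; it is not clinical software, and the thresholds are simply the ones stated in the brochure.

```python
# Illustrative encoding of the risk-based rule described above: with no GBS
# test result available, IV antibiotics are offered if any listed risk
# factor is present. For illustration only -- NOT medical software.

def offer_iv_antibiotics(gestational_age_weeks,
                         hours_since_membranes_ruptured,
                         max_labor_temp_f):
    """Return True if any of the brochure's listed risk factors applies."""
    preterm = gestational_age_weeks < 37          # baby born before 37 weeks
    prolonged_rupture = hours_since_membranes_ruptured >= 18
    fever = max_labor_temp_f >= 100.4             # fever during labor
    return preterm or prolonged_rupture or fever
```

Note the brochure adds that even 12+ hours of ruptured membranes increases risk, and that in half of infections the mother has no risk factors at all, which is why testing, rather than a rule like this, is the preferred safeguard.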
In half of GBS infections, the mother has no risk factors.(15) This is why testing is so important! …after my baby is born? If you give birth before you have had 4 hours of antibiotics, the hospital may culture your baby and should observe him/her for 48 hours.(2) You can ask about your baby having antibiotics while waiting for the results of the culture. (Note: Recent research suggests antibiotic treatment may disturb the baby’s protective intestinal flora.) Breastfeeding can supply your baby with important antibodies to fight infection.(16) However, it is speculated that a few late-onset and recurrent GBS infections are possibly associated with infected breast milk. (17,18) It is currently thought that the health benefits of breastfeeding outweigh any potential risk of exposure to GBS.(19,20) Have everyone wash their hands before handling your baby. Make sure everyone who takes care of your baby knows the symptoms of GBS infection in babies and how to respond. SYMPTOMS OF GBS INFECTION IN BABIES Call your baby’s care provider immediately or take your baby to the emergency room if you notice any of these signs: Sounds - High-pitched cry, shrill moaning, whimpering, inconsolable crying, constant grunting as if constipated Crying sounds made by Wren who lived 11 hours due to GBS pneumonia Grunting sounds made by Aayan who was diagnosed with GBS meningitis. The grunting sounds he made are a common, yet often unrecognized, symptom of GBS meningitis. Although Aayan was born premature, he was healthy except for apnea until he became infected by GBS at 98 days of age. He passed away 12 days later. 
Breathing - Fast, slow, or difficult breathing
Appearance of skin - Blue, gray, or pale skin; blotchy or red skin; tense or bulging fontanel (soft spot on a baby's head); infection (pus/red skin) at base of umbilical cord or in puncture on head from internal fetal monitor
(Above: Premature but otherwise healthy Jasmine. Below: Jasmine with cardiovascular shock-induced pallor due to late-onset GBS disease.)
Eating and Sleeping Habits - Feeding poorly, refusing to eat, not waking for feedings, sleeping too much, difficulty being aroused
Behavior - Marked irritability, projectile vomiting, reacting as if skin is tender when touched, not moving an arm or leg, listless, floppy, blank stare, body stiffening, uncontrollable jerking
Body Temperature - Fever or low or unstable temperature; hands and feet may feel cold even with a fever
Culture survives not in history books or past papers, but in the smaller details of age-old monuments that have stood the test of time. The Buddhist monuments serve as testimony to a great belief. They establish a dialogue between Buddhism and today's generation, revealing the extent to which humans are linked. The Mahabodhi Stupa stands as a true symbol of Buddhist art and culture, one that is slowly fading away nowadays. Let's recall the greatness of this monument while journeying through its history and its importance in today's world. Mahabodhi Temple - where and why: If you have ever desired to witness the origin of a religion as great as Buddhism, you may love visiting Bodh Gaya. This religious town is believed to be home to the enchanting tree under which Gautama Buddha achieved enlightenment. It also hosts one of the most famous pilgrimage sites of the religion, the Mahabodhi Temple. Inscribed as a UNESCO World Heritage Site in 2002, the Mahabodhi Temple was once the Great Awakening temple of the Buddhist religion. Marking the grounds where Buddha formulated his philosophy, the temple is the spiritual heart of Bodhgaya. Most parts of the temple have been rebuilt in recent times, restored from the ancient designs and markings. The heart-melting highlight of the temple is a 50-meter pyramidal spire. The inner sanctum of this ornate structure holds a 10th-century, 2 m gilded seated Buddha image. You might be startled to know that the four original sculpted stone railings that surround the temple are still there as an emblem of the ancient art. These date back to the Shunga (Sunga) period (2nd to 1st century BCE). How to visit the Mahabodhi Temple: The temple is located in Bodhgaya, a popular destination about 90 km from Patna, an important city in the Indian state of Bihar.
The easiest way to reach here is by train, which will drop you at Gaya station, well connected to popular places like Patna, New Delhi, Varanasi, Kolkata, and Puri. If you prefer to fly, the nearest airport is Patna, which is 100 km away. Travelling by road is another good option: you may visit Bodhgaya from Gaya (a ride of only 12 km), while for those coming from Patna the journey is 178 km via Nalanda and Rajgir. At times the flow of visitors to the temple increases, so it is always best to book trains in advance so that you can travel on your desired date. When exploring options, you should also consider the Mahaparinirvan Express, which takes tourists on an eight-day, seven-night spiritual tour across different Buddhist monuments in North India, including the Mahabodhi Temple. When visiting, note that you might be asked to walk barefoot, as it is a holy place. It is best to leave your belongings, including cell phones, shoes, and wallets, in a locker located 50 m to the west of the entrance; be aware that the area is not patrolled by security. The visiting hours are from 5 am to 9 pm. You might need to pay a nominal fee if you wish to take your camera inside and click pictures. The tour - don't miss the details of this resurrected temple: The Mahabodhi Temple, an important part of Buddhist cultural history, offers more than your eyes can take in at a glance. The scenic view alone does not do justice to its architecture and inner beauty; you need to see it up close to immerse yourself in the centuries-old culture of Buddhism. The building, which dates to the 6th century AD, has seen various modifications, starting with work carried out by Emperor Ashoka some 800 years earlier. The temple was razed by foreign invaders in the 11th century and from then on underwent several major restorations. Tourists and pilgrims from all parts of the world, of every religion and walk of life, love coming here, and they find a solace that is hard to come by these days.
When you visit, take time to soak up the positive atmosphere of this sacred spot. You may start the tour by strolling around the inside perimeter of the temple compound, which is walked in an auspicious clockwise pattern. Delight your eyes watching the sea of maroon-, yellow-, and white-robed Tibetan monks performing endless prostrations on their prayer boards. If you wish to absorb those sacred vibes, spend some time in the Meditation Park located on the temple grounds. Things to look for in the Mahabodhi Temple complex: The Bodhi Tree. Whenever you visit a place as surreal as the Mahabodhi Temple, there is always a center point where you feel the spirit of the temple rising from within. The Bodhi Tree is the first such thing to look for here. This is the tree under which Buddha attained enlightenment, which you might already know. What you might not know is that the original tree was destroyed by medieval invaders, but a sapling of it was brought back to Mahabodhi to grow, and it has stood there ever since. The sprawling branches, the spellbinding shade, and the tranquil ambiance are never to be missed. Connection to Hinduism. The second thing to look for at the Mahabodhi Stupa is the connection between Hinduism and Buddhism, displayed in a feast of craftsmanship. You will find beautiful sculptures and paintings of Hindu gods such as Lakshmi; the sculpture of the Hindu sun god riding his horse-drawn chariot is one popular spot. The lotus pond. The lotus pond near the Bodhi Tree is another splendid example of the holy spirit of this place. The passage around the pond has carved lotus stones. It is believed that Buddha spent seven weeks of his life meditating near this pond, and that he even performed walking meditation here; amazingly, you can still see what are said to be the faded footprints of Lord Buddha. Embark on a circuit around the pond and you shall feel the spirit of Buddha within yourself.
The eternal slab. Beliefs and rituals flow through every spot in this place. It is believed that the spot where the tree grows is the navel of the earth, and a slab has been placed on the position where Buddha used to meditate under the tree. It is said that when the world is destroyed, only the slab will remain, and that this spot will form the basis of a new world originating from the very point where the slab is placed. The more time you spend in the Mahabodhi Temple, the more immersed you will feel in a relaxed state of mind. While these facts may stir your curiosity, you need to actually visit the place to believe that it is all real. Saving the monument: As with all monuments, the Mahabodhi Temple is in grave danger of losing its authenticity due to various factors threatening its foundation. Though considered world heritage, only a few original parts of the temple remain, because of the many reconstructions it has undergone. Even now, the giant Bodhi tree faces a catastrophe: it is leaning to one side and is being supported by the temple walls. Needless to say, this might cause serious damage to the temple. There are plans to replace the tree with a clone of itself, but when that will happen is still unknown. If we have the willpower, we can press the government authorities to continue enforcing the ban on any type of construction on the property. This can be done by posting in forums and spreading awareness. You may join hands with our website and lead the cause yourself. Everything starts with a little initiative, and you can take your first baby steps by sharing this article with your friends, family members, and anyone you know. Spreading the word is the best way to start. You can even help strengthen the legal protection of the property by spreading the word about the importance of the Mahabodhi Temple.
Consequently, the landscape surrounding the property should also be preserved as a protected cultural landscape. The threats faced by this important temple are much more serious than we can imagine. Some argue that a Delhi tourism corporation has set its eyes on the temple to turn it into an amusement park, where foreigners would be shown the ancient pilgrimage spot alongside a game of golf. Turning a Buddhist shrine into a five-star international tourist site hardly seems ethical. The monument needs saving, and we all have to work on it. By spreading the word about this enchanting shrine, and letting everyone know about its importance in ancient Buddhist culture, we can contribute to reviving it from what may be its last breaths. As far as possible, every one of us should visit the temple; if you cannot, simply look at our website to find more information about saving the monument. You will love the work we are doing to save Buddhist monuments. The core spirit of Buddhism lies in its believers, the Buddhists. The Buddhists keep their faith alive by visiting their pilgrimage sites, and the pilgrimage sites and temples remain alive thanks to the belief of the people visiting them. This is a cycle that cannot be broken by any means; if it is, Buddhism might not prevail. Buddhists have strong faith that the Mahabodhi Temple is indestructible; they believe it will remain even when the entire world is destroyed. But is that actually true? Probably not. Modern encroachment and environmental factors can ruin it if nothing is done soon. If we humans are starting to believe that nothing is sacred, then we are probably losing the basis of humanity. So wait no more, start spreading awareness, and let us know your views in the comment section. As a dedicated Buddhist community, we are doing our best to let everyone know about saving the monuments that are the epitome of Buddhism. You may reach out to us with any concerns.
Therefore, a meta-analysis would be desirable to integrate results from the existing studies, to reveal patterns of causal relationships, and to form a theoretical framework for future studies. In addition, the researcher suggests that large research institutions should work hand in hand to conduct more research in this field. According to Cushner and Brislin [1], national culture can be defined as the common ideas, values, and assumptions about life that are widely shared and that guide the behavior of a specific nation or people. Economic growth can be defined as the rate of increase in the value added produced in the economy (the GDP growth rate); alternatively, it is the rate of increase in the incomes of all factors of production in one year, always calculated in constant prices. According to Guiso et al., specialization requires trade, so that when the division of labor has extended itself sufficiently throughout a society, everyone lives by exchanging. In view of that, it is obvious that the very first writings in economics aimed at revealing the causes of the economic wealth of some nations and the reasons behind economic growth. On the other hand, knowing such reasons could pave the way for economically weak nations to try applying them in an attempt to improve their economies and increase their growth rates. Therefore, the purpose of this research paper is to review the literature with respect to the influence of national culture on economic growth, to discuss the different empirical findings of studies conducted in this field, and to examine the debates between culturalists and economists with regard to the causal link between culture and economic growth.
The role national culture plays in economic growth remained unexplored until Max Weber, in his early studies, linked the rise of an economic ideology (capitalism) to a certain religious creed (the Protestant ethic). Max Weber is widely recognized as one of the founding fathers of the discipline of sociology (Campbell [3]), yet he was also the first to suggest that there might be a certain correlation between national culture and economic growth. Arguments which prioritize culture as a prominent development factor are therefore not new: Max Weber raised awareness of the importance of a set of values to explain the success of industrial capitalism vis-a-vis pre-capitalist agrarian societies across Europe [4]. Protestantism fostered capitalism, Weber stated, by defining and sanctioning the ethic of everyday behavior that conduced to business success [5]. While these values spread widely along with the Protestant Reformation in Central Europe, traditional values of obedience and religious faith remained in Southern Europe. That explains, according to Rao [6], why decades of poorer economic performance ensued across Southern Europe. In the meantime, Cavalcanti et al. find that these differences may possibly explain why Northern Europe developed before Southern Europe, but they cannot explain why Europe developed before Latin America.
In the meantime, there are thousands of regional economies across the world that are similarly premised on strong, cohesive regional cultures, including ethnic cultures, trade union cultures, and work cultures based on particular sectoral specialization. Moreover, some of the well-known geographical examples of new industrial districts are also based on regional religious cultures. Despite this, culture in its own right has not been dealt with explicitly as a major issue by economists. Some researchers neither approve of nor support Weber's approach, while others support his ideas and beliefs, relating some of the economic growth leaps of certain countries, especially after the Second World War, to Weber's theory, to the extent that they attribute such economic success only to the cultural factor. Banfield was the first to propose a cultural explanation for underdevelopment. Cuesta suggested that a critical review of economicist and culturalist paradigms shows that the developmental role attributed to culture in general, and to specific values, beliefs, and behaviors in particular, has fluctuated between two extremes, from complete neglect to claims of explanatory superiority. Furthermore, Weber intended to claim that a Protestant ethic actually caused the rise of modern capitalism. Weiss and Hobson [10] remind us that the same revered Confucian ethics, to which culturalist theories attribute the success of East Asian countries, have been associated for centuries with stagnant economies.
They believe that while such theories may explain specific episodes of development, they fail to clearly explain the international development experience following the Second World War. They also reject the implication that Confucianism encompasses a homogeneous ethics, an argument equally applicable to Christianity or Islam. Instead, they argue that the choice of political arrangements, such as strategic industrialization guided by the state, is behind the development success of the region. National culture, therefore, has either been neglected as a determinant of economic growth by economicist theories or deemed the main explanation behind international developmental differences by culturalist theses. Empirical evidence supports neither of these arguments. Less ambitious theories connecting concrete cultural aspects, such as trust and associational participation in communities, have more convincingly estimated a significant and positive impact on economic welfare. Moreover, in recent decades both the paradigm of human development and passionate discussions of globalization have revived interest in the role of culture in economic development. The myopic approach of many analyses has contributed to a heated debate; unsurprisingly, empirical evidence does not support such claims. The Protestant work ethic worked its wonders not just in the Western world but also in Asia. Japan has been characterized as a society whose sense of duty and collective obligation, in all realms, set it apart from the individualism cultivated in the West. Along with government initiatives and a collective commitment to modernization, this work ethic and Japanese personal values made possible the so-called Japanese miracle [14]. More recently, the contribution of cultural factors to economic success or failure in different countries or regions of the world has been documented [15].
Huntington [16] predicts that, as a result, economic globalization will lead to ferocious competition among civilizations and a protracted confrontation among the most prominent cultures. He envisages that economic competition without cultural convergence will dominate future world relations: confrontation between democratic, communist, and fascist regimes has been replaced by clashes among Western, Islamic, Confucian, and other cultures. A frequent critique of this thesis, however, is its assumption that traditional civilizations are bound to conflict; there are, after all, at least as many examples in history of civilizations collapsing at the hands of others as there are of successful episodes of cooperation among them. In his attempt to link national culture with economic growth, Huntington took the example of two countries that started out with similar per capita income levels, economic sectors, and composition of exports.

There, the weather is warm and there are annual rains, more or less. My friend was arguing that this is why Thais are always late: they never had to plan. Northern Europeans had to plan ahead to get through the cold months. That argument has a long history. Second: I think you are missing some cultural items which likely impact efficiency and productivity. Perhaps a risk-taking culture is good for productivity in the long run (more innovation or learning). Perhaps a bargaining culture has higher transaction costs; or maybe it allows for more efficient price discrimination. A less truthful culture might incur higher transaction costs through an increased need for due diligence. A culture of corruption (which might include entertainment, kickbacks, purchaser rebates, birthday gifts, red packets, etc.) might lead to larger capital outflows to protect the ill-gotten gains. A culture which encourages education might lead to higher productivity. Third: I think you are being unfair in places.
Nor have I heard economists recommend that the stereotypical Malaysian work ethic is better than the stereotypical Chinese work ethic because it means less, well, work. No, I think that cultural economics informs the trade-off space. How much do siestas cost an economy? How much would an extra week of vacation cost? What is the impact of more paternity leave for fathers? The economics can tell us about the monetary cost or benefit; then it is up to the people to make the utility choice. So yes, I think the study is very worthwhile. I think your last point may be the most pertinent. If everyone is making an individual choice to conform to some cultural norm, but this has a big economic cost, then perhaps culture can be a significant factor in economic growth. In this case, culture is like an externality or coordination failure. I am interested in the role reciprocity plays in finance, and I have started looking at how different commercial cultures affect the development of the financial system. My hypothesis is that the structure of a financial network depends on whether it is based on, e.g., reciprocity; this will impact the distribution of money in the financial system and its resilience to shocks. The point of divergence, I suspect, is that I do not take it as axiomatic that I work in a framework of utility maximisation. If differences in the utility function are the only difference, then I agree. But what if the ability to make good choices differs across cultures? Have you considered the possibility that some cultures are better at maximizing their own utility than others? Effectively, what is the difference? Maximizing utility does not mean that everyone enjoys their decisions. So culture could maximize utility but not maximize happiness. Thank you for the great post.
Insofar as economic growth is the emergent phenomenon of each individual behaving in a certain way (working), and insofar as culture affects how each individual behaves, it is easy to conclude that culture must indeed have an effect on economic growth. Furthermore, the assumption that culture, whatever it might be, is static seems at least intuitively erroneous, so any regression such as the one above would be temporal at best. Alternatively, we could instead focus on ethical values which, unlike culture, can be defined as static concepts (i.e., unchanging through time), and try to observe their correlation with economic growth. Even if we cannot really escape the realm of subjectivity, at the very least we have a simpler task. The existence of a certain value in an individual is a boolean (either true or false), and as such we can easily measure how prevalent that value is within a certain community.

Culture, education, and ethics are undervalued factors of competitive development. In economic policy measures, especially in recent years, a lot of emphasis has been given to certain impediments that block both development policies and employment. Against this situation, and trying to emulate the successes achieved in other Western countries, the measures put in place by some governments have focused primarily on quantitative measures, underestimating how differently their own social systems react to such measures compared with those of other Western countries. Some governments did just that. The result is a deterioration in our economy, the latest and most serious one, because it is structural rather than cyclical and will weigh on the future of the country. The same situation occurs in other weak Eurozone countries.
This is mainly because of the acceleration in recent years of an incredible worldwide sophistication of the economic development process. A crucial role in this game must be played by politics: by politicians and then by the government, the governing parties and all political parties in general, which should start to set an example. The political parties seem no longer capable of selecting a ruling class, especially at the intermediate level, because of the need to favour clientelism and corporate interests. The parties, and therefore politics, depend too much on the voting system and on corporate interests that no longer help satisfy the general interest of countries and of the worldwide economy. These considerations, which have affected public opinion, newspapers and television on and off, and only during some election campaigns, assume greater value today, the greater the level of modernization and communication of the country to which they refer, because of the higher level of influence that foreign investors exert, directly or indirectly, on our productive system and enterprises. Basically, politics today must be conducted more seriously than in the past, because it is not only the political decision (i.e. the one that determines the direction of the strategic choices made by the government in different sectors) but mostly the ethics embodied by the political class and by the government that make the difference in gaining positive effects on long-term economic development.

I think culture influences many, many variables that turn out to have an impact on the utility function for societies and households in general. Take for example factors such as education's impact on culture, employment's impact on culture, and so on; these affect GDP growth either negatively or positively depending on people's perspective on culture and growth.
For me, I think the factors influencing culture's effect on growth are not only the above but also others, like decision making in the society and household. In some countries women stay indoors, which means lower education, lower participation in the labour force, and so on; caring only for children and acting only as housewives affects productivity, consumption, saving, and even investment. So culture will correlate either positively or negatively with economic growth. So, culture must matter, but it is impossible to prove.

Reblogged this on Things I grab, motley collection: Does Culture Matter for Economic Growth?

Thanks for this interesting post. It points in the right direction. This is comprised of individual ambitions, values, belief systems, etc. I am therefore convinced that if we explored answers to the following questions we would shed more light on why some people and societies develop in specific ways: Let's take some young entrepreneurs as an example. With being open-minded, self-motivated, passionate, pro-active, solution-seeking? If so, then how can we empower people and entrepreneurs? How can mind-sets and inner attitudes be shifted? These dimensions do matter and can be developed if taken into account. Unless someone has tried meditation or a similar practice, it is a difficult topic for economists and development practitioners to grasp. Development is what people do for themselves. It must start and end from within. Our job is to facilitate the process. Nwanze, Addis Ababa, May.

The concept of culture seems to be taken in a very wide sense, but the answer is yes. The Luo of Western Kenya were mentioned above, and they can serve as an example of culture causing relatively poor, but difficult to measure, economic growth. In the culture of the Luo, the younger brother cannot plough until the eldest brother has ploughed.
Hence, assuming the old adage that the sooner ploughing is done the greater the yield and economic growth, the delay in younger men ploughing is harmful to economic growth, caused by the above-mentioned cultural law, by some immeasurable amount. So, measure this somehow against, say, another Kenyan tribe which does not impose such delays: all things being equal as to rain, inputs, etc., you would expect a difference. It may be that because of this law they plough a lot quicker, but that is not commonly held to be the case. The issue here is not actually culture in its narrower sense but religion. It also explains why Odinga is still the leading politician amongst the Luo after losing several elections. He is a senior man of a certain house, and the law mentioned above means that no one within the Luo will seek to replace him in his lifetime. This law therefore opens up the question of economic growth, and of factors militating against it, in cultures where such laws are reflected through to the very top of society and the government of that culture or society. In effect the person of age rules, and if he does so badly, none, even though disagreeing with it, will challenge him and do their own thing for their own good. They just agree to suffer the consequences, even if harmful.
Already know the theory? Skip to 08:07 to hear OCTERMINUS. Tab and MP3 posted at my Patreon: https://bit.ly/2zFwzOO. Diminished 7th chords, aka full diminished chords, are symmetrical, and any note can be considered the root. Please note, this transcription was computer generated and has not been checked for errors. However, I do hope you find it helpful. Be sure to check out The Ultimate Modal Poster!

Welcome to the Signals music theory testing laboratory. Today's experiment involves musical portal technology, utilizing the diminished 7th chord to travel to different keys in order to further your understanding of this topic. I've uploaded today's instructions into a holographic virtual guitar teacher engaging lesson protocol in three, two, one. Hey, I'm Jake Lizzio, and in this video what I want to do is explore some of the more interesting properties of the diminished 7th chord and start writing with those properties. Mainly what we're going to do is explore the idea of these chords being symmetrical (the fact that any note can be the root of a diminished 7th chord), and also the fact that these chords will resolve to either a major chord or a minor chord. What that means is that when you play a diminished 7th chord, there are eight possibilities of keys or chords that you can go to. So to me the diminished 7th chord is like a portal to eight different universes, if you can just treat it in this ambiguous, weird way. The first half of this video will be heavy on the theory; if you're already familiar with diminished sevenths, then I suggest you just skip to the second half of the video, where I'll be putting this all to use and actually trying to write music with some of these properties. So let's get started and talk about the diminished 7th chord. They're built just by adding on notes that are three frets away, or three half steps away.
So if I'm starting on the note C, for example, and if I go three frets over, it takes me to the note E flat. If I go another three half steps over, it takes me to the note G flat, and another three half steps takes me to the note A. It should be called a B double flat, but we're not going to really call it that in this video; I want to keep things very simple. We've got these four notes, C, E flat, G flat and A, and those four notes are the four notes of a C diminished full diminished chord: C full diminished, C diminished seventh. Ugly sounding chord, right? But here's the deal: if I want to figure out the notes of, let's go to the note E flat for example, if I want to figure out the notes of an E flat full diminished, it's the same four notes. I have an E flat, I have a G flat, I have an A, and I have a C. So if these two chords have the exact same set of notes, they're the same chord; they're just inversions of each other. C diminished is the same thing as E flat full diminished, is the same thing as G flat full diminished, is the same thing as A full diminished. Those are the exact same chord; I just played the same chord four different ways by sliding it up, right? So really the point here is, when you hear a C full diminished, I don't want you to think of it only being a C full diminished. I want you to think, you know, that could be an E flat diminished, it could be a G flat diminished, it could be an A diminished, depending on the way we're treating it. Normally, whatever the bass note of the chord is, that actually gets the name, so technically we should call this C diminished and we shouldn't call it a G flat diminished. But I want you to be open-minded and think about how these things work in different contexts. Now, the diminished chord is pretty useless all on its own. Listen to that: it's really just garbage all on its own. It needs to go somewhere. The diminished 7th chord is a portal, and a portal that doesn't go anywhere isn't very useful. It's the destination that matters, right?
So this C diminished can take me to a few different places, and that's where the real magic happens. How can it take me somewhere, though? Well, I want you to think about how the diminished chord fits into our major scale. We have to look at the Roman numerals: one, two, three, four, five, six, seven, and that seven chord is supposed to be a diminished triad. The reason you're not supposed to make it a full diminished chord is that if you make that seven chord a full diminished chord, you'll be adding in a note that's outside of the scale: you'll be adding in a flat 6. But if we do that, the flat 6 will resolve down to the perfect fifth very well in that context. So what we're doing is breaking the rules: even though the seven chord is supposed to be a diminished triad, we're going to take the seventh chord and make it a full diminished chord. So in this context here, C diminished is the seven chord of D flat major, right? And that should resolve nice and fine. Same thing in minor. If I look at my minor scale, you'll see that the one chord is minor and the two chord is diminished. So if I look at the C diminished and I ask myself, C diminished is the two chord of what? C is the second note of what minor scale? The answer is B flat: if I play a B flat minor scale, the second note is C. So a C diminished will resolve to B flat minor very, very well. So even without worrying about voice leading and where things are going up and down, I can find a method of resolution just by taking my diminished chord and going up a half step, or I take my diminished chord, move it down a whole step, and make it a minor chord: two ways to resolve the same diminished chord. So let's do that in a different key, just so you see how easy this is. Let me play a D diminished, right? Here's a D full diminished. If I want to resolve this horrible chord, I could just go up a half step to a major chord. So what's a half step up from D? That's E flat, right?
You hear that? Or I could have gone down a whole step to a minor chord. What's down a whole step from D? That would be C, so C minor: D diminished, C minor. And once again, since any one of the notes of this chord could have been the root, I can perform this operation of resolving up or down. I can perform that operation off of the D, I can perform that operation off of the F; off of any one of the notes of my chord I can do the same thing. And that's how I'm able to modulate to eight different chords, or eight different tonalities, off of the exact same chord. In a song that I wrote here, this same chord has eight different functions: it functions as the seven chord of four different major tonalities, and it functions as the two chord of four different minor tonalities. So here's how I put the whole thing together. I started off with the diminished chord, which I used as C diminished in every single case here, and right after that C diminished, my first tonality came in as an E minor. The reason that works is that I was treating my C diminished as a G flat diminished, and G flat (or F sharp) is the second note of E minor. So I figured, hey, if we treat this C diminished like an F sharp diminished, it can resolve to E minor. So my very first section has three measures of the E minor tonality, the chord, and I also used the E minor scale. Here was the problem I ran into, though: if I just go back to my diminished chord and then launch you directly into a new key, then it's like a surprise every single time. It's like a magic trick; you never get used to what that diminished chord sounds like. So I found it was really important to give you a little bit of that diminished chord in the context of our current key, but then the second time that I give you that diminished chord, it's to launch you and portal you into a whole new tonality. So the way I structured this, I've got three measures of my new tonality.
Then one measure of my C diminished, three measures of my tonality, one measure of my C diminished, and that'll take us, boom, right into our new tonality. In this case what I did, I believe, is I moved up a minor 3rd to get into a G minor tonality, and I used the G blues scale. And really, on top of every one of these chords, you know, after that I think it went to a B flat major, and I'm just looking at B flat thinking, what kind of scales have the notes of the B flat chord in them? And there's a lot of options there. So, you know, I went through scales like the flat Lydian; I ended up using some harmonic major instead, and always kind of highlighting that diminished chord. Once it came around, that one little measure of diminished, I thought it would be really important to maybe play that diminished arpeggio, or maybe play some of the half-whole scale, which fits right in with that diminished 7th chord, but really prominently highlight the fact that the diminished chord keeps coming back and keeps taking us into new territory. Now, there was a lot that I wrote here, and I'm not going to go through every lick, but there was one lick I just want to comment on because I thought it was really fun. It was over the D flat major section. I decided to go into the D flat major key; I think it's the only time I just used regular major. So what I had is these five-note patterns (one-two-three-four-five, one-two-three-four-five, one-two-three-four-five) into a four-note pattern (one-two-three-four, one-two-three…), and then a three-note pattern (one-two-three, one-two-three, one-two-three) and a two-note pattern (one-two, one-two, one-two), and I ended it with a nice little bend. So it's 5 times 3, and then 4 times 3, and then 3 times 3, and then 2 times 3, and then you finally get that bend to close things off. I thought it was kind of cool.
So here's what the whole thing sounds like: going through eight different tonalities in a minute and a half, using the exact same chord with eight different functions. Nutty stuff. That was really, really difficult for me to put together and have it sound listenable. You know, I think that might be too many key changes for just a minute and a half, so I tried to pace things out, and you might see in the very middle I added a little bit of extra diminished, because I thought it was getting really monotonous to just keep changing keys like that. I figured a little bit of anxiety in the middle there, a little extra diminished, might help break things up a bit. And at the end, of course, I just decided to make things nice and happy by resolving on a major chord, the same major chord that we started off in. So to me this is really exciting and engaging and fun stuff. It was extremely difficult: this is only a minute and a half of music, and I spent probably 10 hours altogether writing it, recording it, tracking it, plotting it out. You know, it's not every day you need an Excel spreadsheet just to write a piece of music. And I really quickly want to talk about that process. Music isn't always supposed to be written like this.
You're not always supposed to, you know, decide ahead of time what you're going to write, but sometimes this is a really beneficial process, and I learned and grew as a musician just by engaging in it. I know I would never have written this unless I had decided, hey, I'm going to do this academic process of experimenting with this one chord and making myself resolve it to all eight different possibilities. You know, I've gained a lot from that process, and I wouldn't ever have done it unless I'd decided to. So, you know, I don't recommend that this is how you always write your music, with this kind of mechanical approach, but I do recommend you sometimes write music like this, because I found it very helpful and it's a lot of fun. It's a really good project, and it puts you into some unfamiliar territory. So I hope you liked this video, and I hope it got you thinking about things in a little different way, thinking about the ambiguity of your diminished 7th chord. If you really liked this video, you can consider supporting my Patreon page; the fine folks over there are sponsoring these lessons, and they really wouldn't be possible without them. So thank you to my Patreon supporters. If you can't do that, though, that's just fine. Think about liking, subscribing, commenting; all that kind of stuff helps me out. So thanks for watching, and I will see you next week.
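The symmetry and eight-way resolution logic described above is mechanical enough to check in a few lines of code. This sketch is not from the video; it just models pitch classes 0–11 (C = 0) to show why one diminished 7th chord yields eight resolution targets:

```python
# Sketch (not from the video): the symmetry of the diminished 7th chord,
# modelled with pitch classes 0-11 where C = 0.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def dim7(root):
    """Four notes, each three half steps apart: the full diminished chord."""
    return [(root + 3 * i) % 12 for i in range(4)]

def resolutions(root):
    """Each chord tone can resolve up a half step (to a major key, as the
    seven chord) or down a whole step (to a minor key, as the two chord),
    giving eight destinations from one chord."""
    targets = []
    for tone in dim7(root):
        targets.append(NOTES[(tone + 1) % 12] + " major")   # up a half step
        targets.append(NOTES[(tone - 2) % 12] + " minor")   # down a whole step
    return targets

print([NOTES[p] for p in dim7(0)])   # ['C', 'Eb', 'Gb', 'A']
print(resolutions(0))
```

Running this for C diminished reproduces the video's examples: D flat major (up a half step) and B flat minor (down a whole step) both appear, along with six more destinations from the other chord tones.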
by Simon Darroch*1 Sitting in the sweltering heat of southern Japan, I’m faced with a conundrum. The limestone cliff in front of me preserves the boundary between the Permian and Triassic periods, a point in time around 250 million years ago that witnessed the greatest mass extinction of the Phanerozoic eon. I’m collecting rock and fossil samples from around this boundary to study how the make-up of fossil communities changed in response to this extinction event: this is palaeoecology. The boundary itself couldn’t be easier to spot — the lower (and older) part of the cliff is composed of a pale white-yellow limestone packed full of fossils of shelled marine invertebrates including brachiopods, bivalves and gastropods, as well as microscopic sea-floor-dwelling (benthic) creatures called foraminifera. Some of these foraminifera have been found elsewhere in the world and are dated to the Permian period. The younger, higher rocks are an ominous-looking black, with fine layering and a blotchy texture that you might otherwise associate with old blue cheese. Fossils in this dark, rotten-looking limestone are extremely rare and dominated by one or two species of mollusc, but researchers have found fossilized teeth belonging to an eel-like animal called a conodont, of the species Hindeodus parvus, which unequivocally dates the rocks as Triassic in age. Somewhere at the boundary between these white and black limestones, 95% of all marine organisms with skeletons became extinct in the geological blink of an eye — currently thought to be less than 200,000 years. The palaeontological story would seem to be extremely simple: a diverse Permian benthic marine community suffered a mass extinction and was replaced by a community composed almost entirely of one or two species (Fig. 1). This pattern is broadly the same all over the world during this transition. So where is the conundrum? 
The problem comes with deciphering the striking colour change between the Permian and Triassic limestones. The shift from white to black actually has very little to do with the extinction itself, but instead records a dramatic environmental change. The white Permian limestone was laid down in a shallow marine lagoon. The dark and mottled Triassic limestones record something very different — an algal marsh along a shoreline, very similar to that forming on the modern-day Andros Island in the Bahamas. The fine layering and mottled texture were produced through complex interaction between fast-growing algae and sediment carried in by storms. The algae formed flat, sticky mats in low-lying areas protected from the wind, and with surprising adhesive properties. During storms, sediment (made up mostly of clumps of carbonate mud, foraminifera and gastropod shells) was stirred up into the water column, and then transported onshore as part of the storm surge. A thin layer of this sediment was trapped on top of the mat, and became fixed as the algae grew through and around it. This is what produced the layering and unsettling ‘blotchy’ texture of the limestone. The fundamental environmental change that occurred here across the Permian–Triassic boundary highlights two issues that complicate the interpretation of these bodies of limestone: 1) they represent very different environments that probably hosted very different original communities; and 2) these two settings probably preserve very different components of the community (one might preserve small creatures and the other big ones, for example, or one might preserve those with hard shells and the other those with soft bodies). As a result, the fossils record the original living communities with varying accuracy. 
Palaeontology helps us to deal with the first problem by comparing fossils found in rocks representing similar environments at different times, so that we know what sort of things are recorded in each type of rock: we compare apples with apples and blotchy oranges with blotchy oranges. Dealing with the second problem is slightly more complex. Processes such as being shifted by water currents, winnowing (whereby small and light material is swept elsewhere), selective predation (where certain species are destroyed or taken elsewhere) and disarticulation (creatures’ bodies breaking up after death) can strongly distort the appearance of the community and mask changes in community structure. Furthermore, the relative importance of these processes will vary between environments. As we go through different settings in the geological record, then, how do we know that the fossils that we find accurately represent the original make-up and ecologies of the living communities? Fortunately, live–dead studies conducted in modern environments offer a way to test the quality of the fossil record in a wide variety of sedimentary environments. How do live–dead studies work? On the face of it, live–dead studies are extremely simple. You choose a modern environment where sediment is being laid down and begin collecting members of the living and the dead communities (Fig. 2). In marine environments, the living community can typically be found: on or in the sediment (where you might find, for example, clams, sea urchins and soft-bodied worms); attached to blades of seagrass and other algae (many foraminifera and small gastropods); and at various heights in the water column (fish, squid and jellyfish, among thousands of others). Although some of these organisms may be rarer than others, and they may never interact, they all make up the living community in that environment, and in an ideal world would all enter the fossil record.
The dead community, by contrast, is largely restricted to the sea floor, making up the sediment and organic debris scattered on and in the surface. This is the precursor or ‘sub-fossil’ record, and gives a good indication of what a palaeontologist might expect to see in the rock many millions of years later. Holding any handful of sediment under a microscope will reveal the typical contents: worn and broken shells, the broken up remains of sea urchin skeletons, and perhaps the withered cuticles of a few small arthropods. How well these live and dead communities match (‘live–dead agreement’) is an effective measure of the potential quality of the fossil record in that environment. For clarity, palaeontologists refer to the living community and the death assemblage. The difference in terminology is due to the fact that the dead material is typically composed of biological remains both derived from the local environment and transported in from elsewhere (and potentially encompassing a large range of ages). Live–dead agreement can be calculated either on a presence/absence basis (who is there and who is missing?), or in terms of relative abundance (are the common species the most frequently preserved?). Both measures provide valuable information, and can be used to re-calibrate the fossil record in terms of how well the overall diversity and ecological make-up of original communities is being preserved. History and recent advances: Although the field of taphonomy, or fossil preservation, has enjoyed more than 70 years of study, the analysis of live–dead agreement as a way to interpret the past was first thrust into the limelight by the palaeobiologist Thomas Schopf in the late 1970s. Schopf undertook a comprehensive live–dead study of the organisms in the area between high tide and low tide in Friday Harbor in the US state of Washington across three environmental settings (muddy, sandy and rocky substrates). He looked at 169 genera in a wide range of animal groups. 
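As a rough illustration of the two agreement measures described above, here is a small sketch with hypothetical species counts (not data from any real study): presence/absence agreement scored with a Jaccard index, and abundance agreement scored as the overlap of proportional abundances (a Bray–Curtis-style similarity):

```python
# Illustrative sketch (hypothetical counts): scoring live-dead agreement.
# Species counts in the living community and the death assemblage; the
# soft-bodied worm is present alive but absent from the dead material.
living = {"clam": 40, "snail": 25, "urchin": 10, "worm": 15}
dead   = {"clam": 55, "snail": 30, "urchin": 5}

def presence_absence_agreement(live, dead):
    """Jaccard index on species lists: who is there and who is missing?"""
    a, b = set(live), set(dead)
    return len(a & b) / len(a | b)

def abundance_agreement(live, dead):
    """Overlap of proportional abundances (Bray-Curtis-style similarity):
    are the common species also the most frequently preserved?"""
    species = set(live) | set(dead)
    p = {s: live.get(s, 0) / sum(live.values()) for s in species}
    q = {s: dead.get(s, 0) / sum(dead.values()) for s in species}
    return sum(min(p[s], q[s]) for s in species)

print(presence_absence_agreement(living, dead))  # 0.75
print(round(abundance_agreement(living, dead), 2))
```

With these numbers, three of four species survive into the death assemblage (Jaccard = 0.75), and because the dominant clams and snails are also dominant among the dead shells, the abundance-based score stays high despite the missing soft-bodied worm.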
The principal findings were encouraging: a relatively large proportion of invertebrates visible with the naked eye (the groups typically considered in paleoecological studies) had better-than-expected frequencies of preservation. But the study also highlighted what many palaeontologists had suspected for years: not a single wholly soft-bodied group (such as marine worms, sea slugs or jellyfish) observed in the living community was found in the death assemblage. This is perhaps the most obvious taphonomic ‘megabias’ in the fossil record — soft-bodied animals are almost never preserved, so compilations of fossil diversity through time only really represent the diversity of biomineralized animals and plants, which is far from a complete picture. This observation also highlights why fossil deposits that preserve the remains of soft-bodied organisms are so important; they represent snapshots in time when live–dead agreement is much higher than usual, providing a much more complete picture of the palaeocommunity (Fig. 3). Fortunately, however, palaeontologists can achieve a great deal by looking at biomineralized organisms alone, and over the past 30 years studies in live–dead agreement have made huge advances in calibrating the accuracy of fossil assemblages, as well as isolating and quantifying the relative impacts of specific processes in different environments. More and more careful live–dead studies are providing powerful ‘taphonomic vindication’ for the study of fossil communities. For example, it has been demonstrated that in modern communities of benthic molluscs, the abundance of species is more often than not well preserved in death assemblages. Put simply, species that dominate the living community tend to be more common in the piles of dead shells that accumulate on the sea floor; that may seem a trivial finding, but it is great news for palaeontologists! 
In addition, whereas a single sample of the living community will typically contain only species that happen to be there at that ecological instant, death assemblages represent the accumulation of dead material over time, and so typically contain more of the rare species that you might otherwise miss; this means that death assemblages actually paint a better and more complete picture of a given community on reasonable ecological timescales (weeks to years). Finally, even within transects drawn through a single living community of molluscs across an area of sea floor, researchers have shown that measures such as evenness (the relative abundance of different species – a metric beloved by palaeontologists studying mass extinctions) can be replicated faithfully in their corresponding death assemblages. In these cases, death assemblages (and the ‘sub-fossil’ record) provide extraordinary records of the composition and distribution of the original communities. These studies therefore show that when we find these environments in the fossil record, we can trust the fossils in them to be an accurate record of what was once living there. Even when live–dead agreement in easily preserved organisms is shown to be poor, palaeontologists can turn it to their advantage. One of the most important reasons why living communities and death assemblages might show little agreement involves a hot-button term — human impact. It is no secret that humans are having a detrimental impact on the oceans; as the concentration of carbon dioxide in the atmosphere rises, more is absorbed by the oceans, making them more acidic. In coastal areas next to big cities, the water is being contaminated with everything from heavy metals and plastic to nitrates and organic fertilizers. Organic material will decay, using up oxygen in the process and leaving none for invertebrates such as molluscs and crustaceans. Other pollutants may act as outright poisons. 
In these settings, living communities tend to contain few organisms, and to be dominated by one or two hardy species, similar to the Triassic limestones described above. The death assemblage, however, may contain an accumulation of shell material dating back before the arrival of humans and pollutants — a species-rich and high-evenness assemblage that records the make-up of the community in its original pristine state. Here, then, the living community and the death assemblage are very different. Live–dead agreement (specifically, poor live–dead agreement) acts as an indirect measure of pollution and human impact, and so is an important tool in the emerging field of conservation palaeobiology, in which palaeontological data is used to provide information about important issues in ecology and conservation. Ecologists and palaeontologists alike can use live–dead studies to measure human impact and ecosystem health, and, if necessary, can use them to work out where and how to try to reverse any damage to the environment. So where does that leave me, apart from sitting and staring at my cliff section (still sweltering, and now scratching irritably at some insect bites)? The palaeoecological data show a transition over the Permian–Triassic boundary, from an assemblage bursting with fossils to one containing almost nothing, save for a few lonely bivalves. The story is the same the world over, but are the fossils faithfully recording the living community? The pale Permian limestones probably represent deposition in a warm shallow-water lagoon; live–dead studies in equivalent modern settings suggest that the sea floor here also played host to a rich community of plants, arthropods, and countless soft-bodied organisms. None of these have been preserved as fossils, but the biomineralized groups at least should provide a reasonable record of both the overall diversity and relative abundance of immobile molluscs and brachiopods. 
The dark Triassic limestones, by contrast, record periodic deposition and algal growth in a shoreline algal marsh; the fossil bivalve shells were probably swept onto shore during storms, but nothing living in the marsh itself stood much chance of being preserved. In the marsh, live–dead agreement was almost certainly extremely low. Sadly, in this instance there is very little we can say about the rate or pattern of Permian–Triassic extinction and recovery, because the environments represented by these two units did not preserve their original communities with equal quality. There is an interesting story here, but it doesn’t involve pre- and post-extinction palaeoecology. Fortunately, not too far away there is another Permian–Triassic section composed of limestone from an area that was almost always submerged in water; there are still changes in the types of rock across the boundary, but they record palaeoenvironments that can be (and have been) studied in the context of their live–dead agreement. In the coming years, palaeontologists will attempt to calibrate all the settings we see in the fossil record, in terms of what is preserved and what isn’t, so that when it comes to studying the composition of fossil communities, we can compare apples with apples across the boundary, rather than apples with blotchy oranges.

Suggestions for further reading:

- Darroch, S. A. F. 2012. Carbonate facies control on the fidelity of surface-subsurface agreement in benthic foraminiferal assemblages: implications for index-based paleoecology. Palaios 27, 137–150. (doi:10.2110/palo.2011.p11-027r)
- Gould, S. J. 1984. The life and work of T. J. M. Schopf (1939–1984). Paleobiology 10, 280–285. (http://www.jstor.org/stable/2400401)
- Kidwell, S. M. 2007. Discordance between living and death assemblages as evidence for anthropogenic ecological change. Proceedings of the National Academy of Sciences of the United States of America 104, 17701–17706. (doi:10.1073/pnas.0707194104)
- Kidwell, S. M. & Bosence, D. W. J. 1991. Taphonomy and time-averaging of marine shelly faunas. In Taphonomy: Releasing the Data Locked in the Fossil Record (eds Allison, P. A. & Briggs, D. E. G.), 115–209. Plenum Press. (ISBN:9780306438769)
- Olszewski, T. D. & Kidwell, S. M. 2007. The preservational fidelity of evenness in molluscan death assemblages. Paleobiology 33, 1–23. (doi:10.1666/05059.1)
- Schopf, T. J. M. 1978. Fossilization potential of an intertidal fauna: Friday Harbor, Washington. Paleobiology 4, 261–270. (http://www.jstor.org/stable/2400205)

1 Department of Geology and Geophysics, Yale University, New Haven, Connecticut 06520-8109, USA.
Even though we don’t know precisely how the future will unfold, we know a few things about it: - Of the 7.5 billion humans on the planet, virtually every individual wants to enjoy a high-energy consumption “middle-class” lifestyle. As a generous estimate, 1.5 billion people enjoy a high-energy consumption lifestyle today; the remaining six billion are aspirants hungry for all the goodies enjoyed by the 1.5 billion—all goodies based on affordable, abundant energy. - Our dependence on debt to fuel growth—more extraction of resources, more energy, more manufacturing, more consumption and more earned income to pay for all this expansion of debt and consumption—has built-in limits: debt accrues interest and principal payments, which reduce the remaining income available to spend on consumption. Our dependence on fast-rising debt just to maintain low rates of growth eventually limits our ability to pay for more consumption/growth. When most income is devoted to servicing debt, there isn’t enough left to buy more stuff or support additional debt. - The debt needed to move the growth needle is expanding at a much higher rate than the growth it generates. While growth is stagnant, debt is expanding by leaps and bounds to unprecedented levels. (Global Debt Hits A New Record High Of $217 Trillion; 327% Of GDP) - Wages are stagnating for the bottom 90% of the workforce. We can quibble about the causes, but there is no plausible evidence to support a belief that this trend will magically reverse. - The cost of the most valuable energy--high-density, easy to transport—will slowly but surely become more expensive as the cheap, easy-to-extract energy sources are depleted, notwithstanding the temporary boost provided by the fast-depleting wells of the fracking “miracle.” - There are limits on our exploitation of resources such as fresh water and wild fisheries. Humans can print currency (money) but we can’t print fresh water, energy, wild fisheries, etc. 
If one unit of currency currently buys one liter of petrol, printing 10 more units of money doesn’t create 10 more liters of fuel. - Creating currency out of thin air isn’t free in our system: all new currency is loaned into existence and accrues interest. As a result, all currency is a claim on future earnings. If we borrow enough from the future, and earnings remain flat or decline, eventually there’s not enough income left to support the debt service and the expanding consumption the status quo needs to keep itself glued together. What’s the result if we add these up? Simply put, debt-dependent consumption in a world in which wages stagnate for the bottom 90% and energy costs increase as demand outstrips supply is a system with only one possible end-point: collapse. The Energy-Debt-Growth Connection If we accept that energy will get increasingly scarce and costly, and real earned income for the vast majority of households is in structural decline, that means the global economy is in terminal trouble. As this chart shows, energy consumption per capita and GDP (gross domestic product, a measure of growth) are in near-perfect correlation: rising energy consumption per person is the foundation of economic expansion. If energy consumption per person declines, so does GDP. If GDP/economic expansion stalls, the global financial system--dependent as it is on the permanent expansion of debt and income to service that debt--has a problem. In other words, energy, growth and debt are intrinsically linked. Analysts Gail Tverberg and Chris Martenson, among others, have been discussing the causal connections between energy, debt and the financial system for years.
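The squeeze described above is easy to illustrate with a toy model. In the sketch below, all figures are invented for illustration (not taken from the analysts cited): income grows slowly, the debt needed to produce that growth compounds faster, and interest is due on the whole debt stock, so debt service eventually swallows the income.

```python
# Toy model of the debt-service squeeze. Rates are illustrative only:
# income grows 2%/yr, debt grows 8%/yr, and 4% interest is due on the stock.
def years_until_squeeze(income, debt, income_growth, debt_growth, rate,
                        horizon=100):
    """Return the first year in which debt service exceeds total income,
    or None if it never does within the horizon."""
    for year in range(1, horizon + 1):
        income *= 1 + income_growth
        debt *= 1 + debt_growth
        if debt * rate > income:  # debt service has swallowed all income
            return year
    return None

print(years_until_squeeze(100.0, 100.0, 0.02, 0.08, 0.04))  # → 57
```

The exact year depends entirely on the assumed rates; the point is qualitative: whenever debt compounds faster than income, the crossover is a matter of when, not if.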
Simply put, the extraction of fossil fuel energy and the development of alt energy on a vast scale both require an equally vast expansion of interest-accruing debt, both to fund the actual extraction, processing and transport of energy and the consumers’ purchases of all the energy-intensive goods and services that keep the economy expanding. Right now, oil and natural gas are relatively inexpensive compared to historical peaks, especially when prices are adjusted for inflation. Broadly speaking, the fracking “miracle” (based on expanding debt) has pushed supply temporarily higher than demand. (By temporary I refer to a timeline of a few years.) The resulting collapse in energy prices, while welcome to consumers, negatively impacts energy companies' ability to seek new reserves (exploration and production), tap existing reserves that cost a lot to extract or build new alternative energy facilities on a large enough scale to matter. As we witnessed in the 2008 spike in oil prices to $140 per barrel, soaring energy prices crush consumer spending, triggering stagflation and recession. The solution is a Goldilocks price structure—energy prices that are not too high (for consumers), and not too low (for producers). The problem is that as energy costs ratchet higher while wages stagnate or decline, the financial capability of households and businesses to pay higher energy and debt-service costs and expand their consumption vanishes. Something has to give: either consumption declines (triggering structural, permanent recession) or the energy sector goes bankrupt as its production costs cannot be covered by the price of energy consumers can afford to pay. Meanwhile, the skyrocketing debt required to keep the entire status quo glued together is sapping income, reducing every participant’s ability to pay for future growth.
These realities leave three possible futures: - Energy prices move beyond what’s affordable, and the system breaks. - Debt service costs rise above what’s affordable, and the system breaks. - Both energy and debt service costs rise in tandem, and the system breaks. Magic Technology and Wishful Thinking to the Rescue The consensus solutions to increasingly unaffordable energy are technological: new technologies are going to make energy abundant and so cheap it’s practically free. While it’s true that there are many alternative energy technologies in development, the reality is few make financial sense and few have the potential to scale up rapidly enough to replace oil/coal/natural gas. Take liquid fluoride thorium reactors. The consensus is that this form of nuclear energy is reliable and safe. Yet not a single working thorium reactor is in operation. (An update on the potential of LFTR power - PeakProsperity.com) How about all those solar power technologies that are going to make electricity abundant and cheap everywhere? Magical thinking is appealing, but the reality is wind and solar make up roughly 2% of all energy consumed globally. These could double, triple, quadruple and then double again, and they wouldn't even begin to replace fossil fuels. Even if wind/solar became dirt-cheap to manufacture, install and maintain (in the real world, we have to measure total life-cycle costs, not just the initial purchase price), these alt energy sources are intermittent, and that's a big problem for two reasons: 1. Batteries are not “free” and current technologies rely on scarce resources (lithium, etc.) 2. Utilities need to maintain significant power generation capacity to replace these sources during night, cloudy days, when the wind decreases, etc. This means the entire infrastructure of fossil-fuel generated electricity must be maintained--a very costly requirement. 
The other problem with the “electricity and storage will be nearly free” line of magical thinking is that much of our transport system can't be switched to electricity--aircraft, container ships, etc. Virtually every optimistic vision of a cheap, abundant energy future overlooks these problems, or assumes each will effortlessly be solved with some new whiz-bang technology that just so happens to be dirt-cheap. But not all technologies that work in the lab are affordable, and not all technologies scale from the lab to production on a global scale. Maybe some lab will invent a battery based on a cheap, abundant resource like silicon, but the process of manufacture may still be horrendously expensive, i.e. require a lot of energy and costly machinery. Even if batteries can be manufactured at a low cost, they’re only serving the 2% of total energy being generated by intermittent sources. Technological solutions are always the "answer," but the actual costs of scaling up new technologies to offset the decline in conventional oil are ignored or glossed over. If scaling up a new energy source bankrupts consumers and producers alike, is it a solution? Magical Thinking: Debt Doesn’t Matter The other line of magical thinking is that debt doesn’t matter, because future growth will always provide us with enough income to service debt. As noted above, the structural stagnation of earned income means this assumption is no longer valid. The next line of defense is that super-low interest rates will make debt practically weightless. But back in the real world, we find even interest rates near zero eventually burden governments and economies. Consider Japan, which has been running a 25+ year experiment in “debt doesn’t matter.” In 2015, the cost of servicing its astronomical debt was the largest single item in the government’s budget. If this is the result of near-zero 0.1% interest rates, imagine the eventual impact of 1% or (gasp) 2% interest rates—never mind 4% or higher.
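The sensitivity alluded to here is straightforward arithmetic: interest cost is the debt stock times the average rate, so on a very large stock a small rate move multiplies the bill. A hedged sketch with a round, illustrative stock figure (not official data):

```python
# Interest cost scales linearly with the average rate on the debt stock.
# The stock figure is a round illustrative number, not official data.
debt = 1000  # trillion yen, rough order of magnitude of Japan's government debt
for rate in (0.001, 0.01, 0.02, 0.04):
    print(f"average rate {rate:.1%}: interest ~ {debt * rate:.0f} trillion yen/year")
```

Going from 0.1% to 2% does not add 1.9 points of cost; it multiplies the annual interest bill twentyfold.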
Let’s also consider the central bank balance sheet and policy that undergirds this hyper-expansion of debt. This is a chart of the Bank of Japan’s balance sheet. If this looks sustainable to you, hmm, you might want to dial back your happy-meds. And what good came of this unprecedented expansion of central bank “monetary easing”? The net result: a near-zero-growth, stagnant economy, burdened with exploding debt, remained glued together, arguably rescued not by the central bank but by the collapse of energy prices and the one-off expansion of China’s economy. These realities force fact-based observers into pondering a future that consumes less energy per person and generates less income and debt per person--a DeGrowth economy. The status quo—highly centralized, dominated by self-serving elites gorging on a highly unequal distribution of wealth and income--cannot survive a structural decline in earned income and the resulting collapse of debt, or a reduction in energy consumption per capita. But humanity could do just fine. In Part 2: A Blueprint For DeGrowth, we provide the blueprint for a DeGrowth economy that’s more sustainable than the status quo, and one that leaves magical thinking at the door. The economic/political paradigm of rising energy consumption and debt required to keep the whole status quo glued together is going away. We can’t retain the existing socio-political-financial structures of this paradigm and expect to get different results; that’s a pretty good definition of insanity. We need new models: not just for energy consumption and distribution, but for the creation and distribution of currency and political power. The good news is: they're out there.
Edmund Davy FRS (1785 – 5 November 1857) was a professor of chemistry at the Royal Cork Institution from 1813 and professor of chemistry at the Royal Dublin Society from 1826. He discovered acetylene, as it was later named by Marcellin Berthelot. He was also an original member of the Chemical Society, and a member of the Royal Irish Academy.

Family and early life

Edmund, the son of William Davy, was born in Penzance, Cornwall, and lived there throughout his teen years. He moved to London in 1804 to spend eight years as operator and assistant to Humphry Davy in the Royal Institution laboratory, which he kept in order. For a large part of that time, Edmund was also superintendent of the Royal Society's mineralogical collection. When, in October 1807, Humphry accomplished the electrolytic preparation of potassium and saw the minute globules of the quicksilver-like metal burst through the crust and take fire, Edmund recalled that his cousin was so delighted with this achievement that he danced about the room in ecstasy. Humphry Davy's younger brother, Dr. John Davy (24 May 1790 – 24 January 1868), was also a chemist, and spent some time (1808–1811) assisting Humphry in his chemistry research at the Royal Institution. John was the first to prepare and name phosgene gas. Edmund William Davy (born in 1826), son of Edmund Davy, became professor of medicine in the Royal College, Dublin, in 1870. That they cooperated in research is shown in a notice to the Royal Irish Academy on the manufacture of sulphuric acid, which Edmund Davy ends with an acknowledgement of the assistance his son, Edmund William Davy, gave in his experiments. Edmund Davy was the first to discover a spongy form of platinum with remarkable gas-absorptive properties. Justus Liebig later prepared this in a purer form able to absorb up to 250 times its volume of oxygen gas.
Further, Edmund Davy discovered that even at room temperature, finely divided platinum would light up from heat in the presence of a mixture of coal gas and air. In another such experiment, in 1820, he found that with the platinum, alcohol vapours were converted to acetic acid. (Humphry Davy had discovered a few years earlier that a hot platinum wire lit up in a mixture of coal gas and air.) This release of energy from oxidation of the compounds, without flame, and without change in the platinum itself, was a sign of the catalytic property of platinum investigated later by Johann Döbereiner and other chemists. In the Report of the British Association for 1835 he was the first to publish a series of experiments investigating the protective power of zinc employed in simple contact and in massive form. Shortly thereafter a French engineer, M. Sorel, secured a patent for a process of coating an iron surface with fluid zinc to protect against rust, and the technique was adopted by manufacturers of galvanized iron. Davy claimed priority of discovery, but it was found that a patent had long before been issued, on 26 September 1791 to Madame Leroi de Jancourt for the protection of metals with a coating of an alloy of zinc, bismuth and tin (though without a knowledge of the chemical principles involved). This is an example of cathodic protection, an electrochemical technique developed in 1824 by Humphry Davy to prevent galvanic corrosion. He had recommended that the Admiralty should attach iron blocks to protect the copper sheathing on the hulls of Navy vessels. (The method was shortly discontinued because of an unfortunate side effect - the speed of the ships was reduced by increased fouling by marine life. The protective method reduced the release of copper ions that had otherwise poisoned the organisms and controlled their growth.) 
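The room-temperature conversion of alcohol vapours that Davy observed over platinum is, in modern terms, the catalytic oxidation of ethanol to acetic acid. A conventional balanced equation (modern notation, not Davy's own) is:

```latex
\mathrm{CH_3CH_2OH} + \mathrm{O_2} \xrightarrow{\ \text{Pt}\ } \mathrm{CH_3COOH} + \mathrm{H_2O}
```

The platinum emerges from the reaction unchanged, which is precisely what marks it as a catalyst, the property later investigated by Döbereiner and others.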
Edmund Davy made a series of experiments to detect the presence of metallic poisons by means of electricity, as a test of the presence of poisonous substances in cases of suspected poisoning. He applied a current of electricity to precipitate the salts of various metallic poisons from a prepared solution. The method was valuable because the result was not affected by the presence of organic matter from the contents of the stomach. When used as a test, Davy claimed that the presence of only 1/2500th part of a grain of arsenic could be discovered. In 1836, Edmund Davy discovered a gas which he recognised as "a new carburet of hydrogen." It was an accidental discovery made while attempting to isolate potassium metal. By heating potassium carbonate with carbon at very high temperatures, he produced a residue of what is now known as potassium carbide (K2C2), which reacted with water to release the new gas. (A similar reaction between calcium carbide and water was subsequently widely used for the manufacture of acetylene.) In the paper he read to the British Association at Bristol, Davy anticipated the value of acetylene as an illuminating gas: "From the brilliance with which the new gas burns in contact with the atmosphere it is, in the opinion of the author, admirably adapted for the purpose of artificial light, if it can be procured at a cheap rate."

Chemistry in agriculture

Davy was active in promoting scientific knowledge, helping to establish popular courses of lectures throughout Ireland. In some of his own lectures at the Royal Dublin Society, Davy showed his special interest in the applications of chemistry in agriculture. He published several papers concerning manures and chemical aids useful to farmers.
These included "An Essay on the Use of Peat or Turf as a Means of Promoting Public Health and the Agriculture of the United Kingdom" (1850), and "An account of some Experiments made to determine the relative deodorizing Powers of Peat-charcoal, Peat, and Lime" (1856). He also studied the uptake of arsenic by crops from artificial manures chemically prepared with sulphuric acid in which it was not usual to have arsenic as an impurity. Testing the growth of plants, he found "that arsenic might be taken up in considerable quantities by plants without destroying their vitality, or appearing even to interfere with their proper functions." He understood that arsenic was a cumulative poison, and that with continued consumption the "substance may collect in the system till its amount may exercise an injurious effect on the health of men and animals."
In this essay I intend to look at two poems: Sparrow by Thom Gunn and Rose by Walter de la Mare. I will analyse each poem in terms of its tone, treatment, subject and verse technique and then compare them to see if there are any significant similarities or differences between them. Both poems are examples of lyric poetry. The main features of lyric poetry are strong emotional feeling and extensive use of imagery. Lyric poetry covers everything from hymns, lullabies, and folk songs to the huge variety of love songs and poems. The content of lyric poetry is as varied as the concerns of people in every period and in every part of the world. Attitude and manner were the distinctive aspects of 20th century poetry. Gunn felt that to sentimentalise was to diminish the meaning within a poem. His poem Sparrow completely exemplifies this opinion and the overall style of 20th century poetry. It is written as a speech from a homeless beggar and it gives the reader an outlook on his life. The title of the poem is the nickname of the beggar, Sparrow. Gunn is using a nature comparison by comparing the beggar to the bird; he perhaps feels that sparrows are scavengers and are as helpless as the beggar is. It could also be because living on the streets means the beggar is close to nature. The poem is structured into seven stanzas, each with four lines. This is quite a simple structure, which could serve to represent the intellect of the beggar. The first stanza stands out on the page, as it is slightly indented and also has a different rhyme scheme to the rest of the poem (a, a, b, b). This has the effect of drawing the reader’s attention to it and making it seem more significant than the other stanzas. Its tone differs from that of the other stanzas in the poem, as it acts as a soliloquy; it is the cry of the beggar. Gunn uses the repetition of “change Sir” to create emphasis and add to the desperation of the tone.
Using ‘Sir’ to address the person he is pleading to suggests that Sparrow is polite and respectful, which allows the reader to sympathise with him more. If he were being aggressive the reader would feel that he deserved to be homeless and ultimately would not be moved by the poem. Moreover, I feel that the poet himself empathises with Sparrow’s situation and is subtly coaxing the reader into sharing his empathy. The following quote backs up my opinion, and outlines the underlying intentions of Gunn’s poetry: “flowing like a vein of lava under the surface is a burning empathy — a ferocious outrage over so much meaningless human pain; the kind that lovers inflict upon each other, society upon the homeless, what is felt by AIDS victims, the lonely, and everyday outcasts.” The style of the first stanza is quite childish as it is in the third person, which could again represent his level of intelligence and make him sound even more pitiful. The use of alliteration in “pity poor” could be a rhythmic device to help the poem flow, and it could also be used to link the two key words to give the overall idea of the poem, i.e. Sparrow is poor, therefore we should pity him. The other six stanzas of the poem form the beggar’s speech and are written in first person. The rhyme scheme is a, b, a, b and is made up of full rhyme, which makes it very simple and clear. A negative lexical set is used throughout the poem: ‘bruised’, ‘dirty’, ‘sour’, ‘stale’. This makes Sparrow a perfect example of the way in which modern, 20th century poetry addressed the ugly and dirty things in life. Gunn appeals to the senses of the reader throughout the poem by describing smells, tastes and appearance. This helps to create another dimension to the poem; a vivid picture of what life is like for Sparrow which helps in the reader’s understanding of his situation.
The beggar is very honest in the way he describes himself: “in a loose old suit bruised and dirty I may look fifty years old but I’m only thirty” His direct and brutal honesty helps in his (and the poet’s) bid for the audience’s sympathies. There is a noticeable and complete lack of punctuation throughout the poem, which creates a ‘rambling’ effect and helps the poem to flow along continuously. This gives an idea of how Sparrow would be speaking. The third stanza is where Gunn uses the most vivid descriptions. He uses consonance where he repeats the harsh, sibilant ‘s’ sound to reflect the harsh and cruel nature of Sparrow’s existence on the street: “My feet smell bad and they ache the wine’s gone sour and stale in my pores my throat is sand I shake and I live out of doors” The poet doesn’t just use a simile to say Sparrow’s throat was like sand, he uses a metaphor, “my throat is sand”, which makes it very powerful and direct. It is also an example of hyperbole, which is deliberate over-exaggeration to create more sympathy. In the fourth stanza it says “in a leaky doorway in leaky shoes”; the poet uses repetition of the word ‘leaky’ to create emphasis and sympathy because even though he has tried to shelter from the rain it is in vain. Sparrow makes a pathetic attempt to deny his alcoholism in the fifth stanza, which makes him sound naïve and innocent like a child: “I need some change for a drink of sweet wine Sir a bottle of sherry it’s the sugar in it I think will make me merry” Sparrow then goes on to fantasize about what he would become once he is drunk. His dreams are hopelessly vague and pathetic, as the reader knows he could never fulfill them. It is significant that longer words are used in this stanza like ‘daredevil’ and ‘millionaire’ because they draw out the length of the lines which represents the beggar not wanting his dreams to end.
There is also a significant use of soft nasal sounds created by the use of the consonants ‘m’, ‘n’ and ‘l’, which represent the change from harsh reality to the calm of dreams. The final stanza of Sparrow is, in my opinion, the most significant: “The bastard passed me by fuck you asshole that’s what I say I hope I see you cry like Sparrow one day” After all of the honesty and pleading, the man passes the beggar by and has no sympathy for him. The poet again uses repeated ‘s’ sounds along with fricatives: ‘ck’ and ‘c’, to create a harsh tone. Shocking expletives are used which help to emphasize the anger and astonishment that Sparrow feels at the way the man treats him. All the way through the poem Sparrow has been polite and harmless, so although he swears and shouts it seems to restore his dignity as a human being and secure the reader’s sympathies. His final hope of seeing the man suffer as he is suffering is the most pitiful of all, as the reader knows that there is no chance he will. I feel that the powerful and explosive nature of this last stanza highlights the strong feelings of Gunn himself about the way in which society treats the homeless. You get the impression that Gunn is using Sparrow as his mouthpiece in the end to express his own anger. I turn now to consider Rose by Walter de la Mare. Walter de la Mare was a poet, novelist, composer and editor of the 20th century. This poem is about memories of the sister of Thomas Campion, whose name was Rose. Thomas Campion was a poet and composer of the 16th century whose poems were mainly about beauty and contained a lot of nature imagery. Examples of these poems are Rose-cheeked Laura and There is a Garden in her Face. I think it is significant that De la Mare mentions Campion in this poem. De la Mare was not a typical 20th century poet, as he was not affected by the Modernist changes in poetry. Instead he wrote traditional, sentimental poems in the style of Thomas Campion’s poetry.
I feel that Campion was a big influence on De la Mare and the first three lines of Rose support my point: “Three centuries now are gone Since Thomas Campion Left me his airs, his verse, his heedful prose.” The fact that De la Mare says Campion left him those things suggests that he felt the need to continue in the style of Campion’s work. The repetition of ‘his’ creates a sense of possession and is also an example of a persuasive technique, the rule of three. The fact that he describes Campion’s prose as ‘heedful’ suggests that he feels Campion’s work is worth paying attention to and remembering. The poem is split into three stanzas with six lines in each. The poem is quite complex in terms of its rhyme scheme: a, a, b, c, c, b, which is made up mainly of half rhyme. The rhyme scheme represents the unusual layout on the page, where the third and sixth line of each stanza (the ‘b’ rhymes) are significantly longer than the rest. This makes you pay attention to the content of those particular lines. Each stanza ends with ‘Rose’, which is a technique used to emphasise the subject of the poem so that you never forget whom De la Mare is talking about. In the second stanza, De la Mare goes into more descriptive detail and appeals to the reader’s senses. He uses romantic, nature imagery to compare Rose to flower fragrance. I don’t feel that using ‘Woodruff’ is very effective because not everyone would have smelt the flowers of the plant and so would not be affected by the description. Soft, nasal sounds are used which create a gentle and peaceful tone, which is a great contrast to the harsh tone of Sparrow. The fricative sounds in “brittle dust” make it more like the sort of bleak description you would find in Gunn’s poetry, but De la Mare dismisses harsh nature and follows it with ‘blossoming’, which contains gentle, nasal sounds.
I feel that “rarest beauties” could be a reference to the beauties of nature in the garden but because he says they ‘meet’, I feel that he is really referring to Campion and his sister. The last stanza of Rose is a reflection of the memories and has a more sombre tone than the rest of the poem. The use of words such as ‘Faded’ and ‘changing’ suggests that memories fade over time, in particular the memories of Rose’s face; “Cheek, mouth, and childish brow”. Describing her face as ‘childish’ also suggests that Rose was young when she died. There seems to be a loss of hope because De la Mare says, “Where, too, her phantom wanders no man knows”, but hope is then renewed in the last three lines: “Yet, when in undertone That eager lute pines on, Pleading of things he loved, it sings of Rose.” De la Mare is saying that when you lower your voice you can hear the “eager lute”. The lute, which is an old stringed instrument, is personified as being ‘eager’ to represent Campion’s eagerness for his sister to return. The lute is significant because Campion wrote both words and music for his many songs with lute accompaniment. De la Mare says that the lute ‘pines on’ and is ‘pleading’ to create the sense of sadness and yearning that Campion felt at his loss of the “things he loved”. I move now to compare the two poems because I feel that they have some significant similarities and differences between them. Although Sparrow and Rose are very different poems, by very different poets, they still have significant similarities between them. Firstly, both poems are about people and have the names of their subjects as the title. Gunn and De la Mare both use the technique of appealing to the reader’s senses in order to draw them into the poem and they both compare their subject to nature (although De la Mare does this a lot more than Gunn). Both poems also end on a more sombre note compared with the rest of the poem.
Although the poems have a few similarities, it is their differences that are more significant. Despite the fact that both poems were written around the same time, on studying their tone and treatment you wouldn’t think this was true. Sparrow is typical of 20th century poetry: an unsentimental, modern poem that describes the dirty and ugly aspects of life. 20th century poetry was opposed to the sentimentality of previous poetry movements, but although De la Mare was a 20th century poet, his outlook on the world was completely different to that of Gunn. He was not affected by (and ignored) the modern changes in poetry. Instead he wrote traditional poetry with sentimental values. An explicit and direct contrast between the two poems is the way in which smell is described. In Sparrow it says “my feet smell bad”, but De la Mare describes a “fragrant smell”. Punctuation is used in Rose, whereas it is non-existent in Sparrow, and the rhyme scheme of Rose is a lot more complex, which could suggest that the memory of Rose is much more complex than the existence of Sparrow. The structure of Rose on the page is also more complex and technical. I feel that it has been worth studying and comparing these two poems as it has allowed me to understand that poems can be written in the same period in history but still be very different. De la Mare was obviously very bold to ignore the poetry ‘trend’ of his time and to write poetry in the style that he enjoyed in Campion’s poetry. De la Mare’s outlook on life was very different to that of Gunn, and he would never be able to (or want to) write a poem like Sparrow. It could be said that De la Mare is more positive and optimistic in his poetry, but on the other hand Sparrow is a much more gritty and realistic poem that deals with real issues in society.
I don’t feel that anything significant is really said in Rose; it is just made up of memories that cannot be De la Mare’s true memories, as he was not alive at the same time as Campion and therefore could not have known Rose. I feel that this makes the poem falsely sentimental. On the other hand, De la Mare could be placing himself in Campion’s position after the death of his sister. Rose is a much more organised poem in terms of its form and diction, but I feel that Sparrow is much more powerful in its meaning.
Issue: January-March 2017 (Volume-6, Number-1) Original Research: Congenital Anomalies in Neonates – A Study at Medical College Hospital in Himachal Pradesh The purpose of the study was to find out the overall incidence of clinically detectable congenital anomalies in newborns in hospital deliveries. All the newborns delivered at Kamla Nehru Hospital, Shimla, were examined for congenital malformations over a period of one year. Five thousand nine hundred and ninety-seven newborn babies of consecutive deliveries were examined at birth for the presence of congenital malformations. The overall incidence of malformations was 1.63%. Musculoskeletal anomalies were the most commonly found, followed by gastrointestinal and cardiovascular anomalies. The present study was carried out with the aim of determining the overall rate of congenital malformations, the incidence in live births, and the incidence affecting various organ systems, at a medical college hospital in Himachal Pradesh. Keywords: congenital anomalies, musculoskeletal anomaly, major anomaly. In a developing country like India, due to the high incidence of infectious diseases, nutritional disorders and social stress, developmental defects are often overshadowed, but the present scenario is changing rapidly. Congenital anomalies represent defective morphogenesis during early foetal life. A broader definition includes metabolic or microscopic defects at a cellular level. A recent study shows that congenital anomalies contribute to 9% of perinatal deaths, as compared to 8% a decade ago. About 2% of newborn infants have major anomalies diagnosed at or soon after birth (1).
Congenital anomalies account for 8 to 15% (2, 3) of perinatal deaths and 13 to 16% of deaths in India (4, 5). Congenital anomalies can result from either the genetic constitution or the antenatal environment, and the definition includes all conditions known to be caused by specific genes, at whatever age they become manifest and whether or not they are associated with a demonstrable abnormality of form (6, 7). A major anomaly is defined as an anatomic abnormality severe enough to reduce normal life expectancy or compromise normal function, e.g. heart defect, spinal defect, intestinal defect (8). Major anomalies have serious medical, surgical and cosmetic consequences. A minor anomaly is a physical feature, often familial, that is present in only a small proportion (1 to 5%) of normal individuals, e.g. simian crease of the palm, epicanthal folds (8). Incidence and geographical distribution of congenital anomalies: The worldwide incidence of congenital disorders is estimated at 3-7%, but the actual number varies between countries. Population and hospital based studies from different parts of India show that 2.5% of newborns have birth defects. Even here the pattern of malformation varies from region to region: e.g. neural tube defects are common in northern India, whereas musculoskeletal defects are more common in the rest of India (9). Aims and Objectives: 1) To study the overall incidence of clinically detectable congenital anomalies in newborns in hospital deliveries. 2) To classify the congenital anomalies into major and minor groups. Material and Methods: This is a prospective observational study. The study population consisted of five thousand nine hundred and ninety-seven babies delivered at the Department of Obstetrics and Gynaecology, Kamla Nehru Hospital, Shimla, who were examined at birth for the presence of congenital malformations. All the newborns were examined for major and/or minor congenital malformations at birth and every day during routine ward rounds.
Relevant information regarding maternal age, gestational age, sex, community, birth weight, birth order and consanguinity was documented. Significant antenatal history such as maternal illness, ingestion of drugs, exposure to radiation and complications during labour was recorded. Antenatal ultrasonography (USG) findings were noted. Relevant radiological and histo-hematological tests were carried out. Each baby's gestational age, birth weight, sex and symptoms in the postnatal period were noted. Detailed general and systemic examinations of the babies were carried out. Thorough physical examinations of the newborn babies were done. All macroscopic anatomical defects were recorded in a pre-designed proforma. A meticulous general and systemic examination was carried out by a consultant at the time of birth to detect any malformations. Ultrasound was employed routinely to detect multiple congenital anomalies and to rule out the majority of internal congenital anomalies. 2D echocardiography was also used for all congenital heart diseases, along with routine chest X-ray and electrocardiogram. Babies born with any external malformations were subjected to relevant investigations to rule out internal anomalies. Malformations were categorized into major and minor defects. The major malformations were divided into central nervous system (CNS), musculoskeletal, gastrointestinal, genitourinary, cardiovascular system (CVS), syndromes and miscellaneous disorders. During the period of one year, a total of 5997 deliveries were conducted; 5867 were live births and 130 were stillbirths. The number of babies with congenital malformations diagnosed at birth or within the first week of life was 77, while the total number of malformations was 96 (1.63%). Tables I and II give the sex distribution and incidence of congenital malformations. The sex-wise distribution was 39 males and 38 females, giving an M:F ratio of 1.02:1, while the p value was 0.6062, which was not statistically significant.
The incidence of malformations in general was found to be apparently higher in females (1.38%) than in males (1.24%). Pattern of congenital anomalies: The musculoskeletal system was the most common system involved, accounting for 23.84% of total congenital anomalies, followed in decreasing frequency as cited in Table III. Of the 96 anomalies, 83 were major anomalies and 13 minor. Among the major anomalies, musculoskeletal anomalies were the most common, followed by gastrointestinal and cardiovascular anomalies. Among the minor anomalies, polydactyly was the most common, followed by syndactyly and low-set ears. A higher frequency of congenital anomalies was seen in babies born with low birth weight, although the P value was not significant, as shown in Table IV. There was a higher number of congenital anomalies in babies born to mothers in the 20-30 year age group (74%); next in order were mothers aged over 30 years, accounting for 18.2%, and the least number of congenital anomalies were in babies born to mothers aged less than 20 years. The P value was not significant. The incidence of congenital anomalies was higher in multigravida as compared to primigravida, and increased as parity increased, as shown in Table V. The P value was significant (<0.05) in multigravida (>gravida 4). Congenital anomalies are important causes of stillbirths and infant mortality, and are contributors to childhood morbidity. The number of birth defects detected in infants antenatally and during the neonatal period is increasing due to advanced diagnostic technology, especially USG. The incidence in the present study is lower than that reported by the studies quoted in the table below; this difference may be because the present study included only live newborns and excluded stillbirths.
Other factors can include different cultures, geographical conditions, inaccurate detection at birth, the period of observation, and the autopsy rate, as in certain centres autopsies have been performed and this leads to a higher incidence in some studies. Other factors which can contribute to the difference in incidence include genetic factors, geographical area of settlement, socioeconomic status, maternal nutrition and habits, prenatal health care services and a large number of environmental factors which could not be measured. The association of low birth weight with increased risk of congenital malformations is well documented (15), and our study was in accordance with this. The incidence of congenital anomalies was significantly higher in term babies as compared to preterm babies. A male preponderance amongst congenitally malformed babies was found in this study, which was statistically insignificant. In the present study no consanguinity was recorded. Previous studies (15) have reported a significantly higher incidence of malformation among mothers of gravida 4 or more, and our results are consistent with this finding. This indicates that as the birth order increases, the incidence of congenital anomalies also increases. Certain maternal diseases may occasionally lead to an increased risk of birth defects. According to Ordóñez et al. (16), diabetes mellitus, arterial hypertension, and hypothyroidism show a positive association with congenital malformation. In our study, the antenatal histories of the mothers showed that 45.5% were anemic, 13% had hypertension and 3% had gestational diabetes mellitus.
With regard to the pattern of congenital malformations in the present study, the systems involved in descending frequency were musculoskeletal (23%), followed by gastrointestinal (21%), cardiovascular (16%), face (12.45%), CNS (10%) and genitourinary (7.2%). Congenital anomalies involving the musculoskeletal system were found in 23.7% of cases, with an incidence of 3.92 per 1000 live births; talipes was the most common anomaly. Musculoskeletal anomalies were the most common in our study, probably because these anomalies are externally visible and hence easily picked up. Anomalies of internal organs (gastrointestinal system, cardiovascular system) were less often detected because of the invisible nature of these systems and also because neonates may be asymptomatic, particularly during the first 24 hours of life. Another reason is lack of follow-up. With regard to the cardiovascular system, patent ductus arteriosus was the most common lesion, followed by atrial septal defect and lastly ventricular septal defect. Among the genitourinary tract anomalies, hypospadias was the most prevalent lesion. Regarding the central nervous system, the most prevalent anomaly encountered was meningomyelocele, seen in 5.2% of cases, with congenital hydrocephalus found in 3.12% of cases. With special reference to neural tube defects (NTD), the incidence of NTD has markedly reduced in the developed countries following mass promotion and mandatory prescription of folic acid for pregnant mothers (17-20). The incidence of facial anomalies was 2.04 per 1000 live births, amounting to 12.4% of all congenital anomalies. Cleft lip was the most common, followed by cleft palate and low-set ears. The incidence of cleft lip and palate in the present study was 1.21 per 1000 live births. V. Dutta (12) (2000) recorded cleft palate as the most common anomaly, although it was classified among gastrointestinal tract anomalies. Congenital anomalies are a major cause of stillbirths and infant mortality.
By thorough clinical examination, life-threatening congenital malformations must be identified, as early diagnosis and surgical correction of malformed babies offer the best chance for survival. Conflict of Interest - None. Source of Funding - Nil. Contribution of authors - Vijay Yadav: conducted the study under supervision; Rakesh Sharma: chief supervisor and guide; Jyotsna Sharma: assisted the study; Pancham Kumar: co-guide; Deepak Sharma: biostatistics and manuscript. 1. World Health Organization. World Health Report 1998. Geneva: WHO; 1998. p. 43-47. 2. Ravikumara M, Bhat BV. Indian J Pediatr 1996;63:785. 3. Kumar MR, Bhat BV, Oumachigui A. Indian J Pediatr 1996;63:357. 4. Chaturvedi P, Banerjee KS. An epidemiological study of congenital malformations in newborn. Indian J Pediatr 1993;60:645-655. 5. Aggarwal SS, Singh V, Singh PS, Singh SS, et al. Prevalence & spectrum of congenital malformations in a prospective study at a teaching hospital. Indian J Med Res 1991;94:413-419. 6. Nelson MM, Forfar JO. Congenital abnormalities at birth: their association in the same patient. Dev Med Child Neurol 1969;11:3-16. 7. Potter EL. The effect on the foetus of viral disease in the mother. Clin Obstet Gynaecol 1961;4:327-340. 8. Maclean DS. Congenital malformation. In: MacDonald MG, Mullet MD, et al. Avery's Neonatology: Pathophysiology & Management of the Newborn. 5th ed. Philadelphia: Lippincott; 1999. p. 839-859. 9. Park K. Congenital malformation. In: Park K (ed). Park's Textbook of Preventive and Social Medicine. 15th ed; 2005. p. 379-80. 10. Chaturvedi P, Banerjee KS. Spectrum of congenital anomalies in the newborn from rural Maharashtra. Indian J Pediatr 1989;56:501-507. 11. Swain S, Agarwal A, Bhatia BD. Congenital malformations at birth. Indian Pediatr 1994;31:1187-1191. 12. Dutta V, Chaturvedi P. Congenital malformations in rural Maharashtra. Indian Pediatr 2000;37:998-1001. 13. Desai AN, Desai A.
Congenital anomalies - a prospective study. BHJ 2006;48(3):442-445. 14. Singh A, Gupta RK. Pattern of congenital anomalies in newborn: a hospital based prospective study. JK Science 2009;11:34-39. 15. Mohanty C, Mishra OP, Das BK, Bhatia BD, Singh G. Congenital malformation in newborn: a study of 10,874 consecutive births. J Anat Soc India 1989;38:101-11. 16. Ordóñez MP, Nazer J, Aguila A, Cifuentes L. [Congenital malformations and chronic diseases of the mother. Latin American Collaborative Study of Congenital Malformations (ECLAMC) 1971-1999]. Rev Med Chil 2003;131:404-11. 17. O'Dowd MJ, Connolly K, Ryan A. Neural tube defect in rural Ireland. Arch Dis Child 1987;62:297-8. 18. Singh R, Al-Sudani O. Major congenital anomalies at birth in Benghazi, Libyan Arab Jamahiriya, 1995. East Mediterr Health J 2000;6:65-75. 19. De Wals P, Trochet C, Pinsonneault L. Prevalence of neural tube defect in the province of Quebec, 1992. Can J Public Health 1999;90:237-9. 20. Martinez-Frias ML, Bermejo E, Frias JL. Analysis of deformations in 26,810 consecutive infants with congenital defects. Am J Med Genet 1999;84:365-8. NIJP: Vol.-6, No.-1
‘Two Look At Two’ by Robert Frost is a forty-two line poem contained within one block of text. The lines do not follow a specific pattern of rhyme. This does not mean that the poem lacks unity, though. Frost utilizes other techniques, such as repetition and alliteration, as well as a structured pattern of meter, to create a unified feeling. In regard to rhythm, the majority of the lines contain five sets of two syllables. There are moments in which the lines don’t reach ten syllables or stretch past ten to eleven or twelve. These moments are few and far between and are usually surprises which often come at turning points in the text. For example, line thirty contains nine syllables and is the moment in which the reader is surprised to see a different deer coming around the corner than the one they expected. Frost makes great use of the technique of anaphora in this text. It is a kind of repetition in which a word or phrase at the beginning of a line is repeated. A reader can look to the first nine lines to see the word “With” used to start four of the lines. Farther along in the poem, “She” is used a few times in succession. Alliteration combines with anaphora in lines 36-40. Here, Frost uses a word beginning with the letter “T” five times in a row. Summary of Two Look At Two ‘Two Look At Two’ by Robert Frost describes an important encounter between two human hikers and two deer, on opposite sides of a barbed-wire wall. The poem begins with two people, likely a man and a woman, who have decided to stop walking for the night. They are in the woods and far from any other human beings. They come across a wall. It is wrapped in barbed-wire. It comes to represent the wilder world of nature that is inaccessible to humans. While they stand and look at the wall they see two different deer, a doe and a buck, come out and analyze them.
There is a connection between these “two” looking at “two.” Analysis of Two Look At Two In the first lines of ‘Two Look At Two’ the speaker begins by placing the two characters, an unnamed man and woman, in the woods. These two have been traveling through the woods, hiking, or seeking out some particular destination all day and now night is falling. The speaker states that, Love and forgetting might have carried them A little further up the mountain side But only if night wasn’t so near. They would not have made it too much farther, leaning on their love for one another, the world, and their desire to push beyond the confines of their everyday life and “forget” their origins and destination. The speaker adds another detail onto their decision to stop walking, that they “must have halted soon in any case.” It is interesting to note the number of lines that the speaker spends describing their decision to stop, almost as if making excuses for them, or trying to anyway. It could be an attempt to keep outsiders, aka, the reader, from questioning the character’s devotion to their task of being outdoors, or their love to one another. In conclusion, he simply wants the reader to know, it was dark, they had to stop, no matter how much they loved one another. One of the things that is starting to weigh on the minds of the walkers is the “path back” and “how rough it was.” There was a lot of “washout.” This means that the rain and the floods wiped away the path. This will make it hard to see and traverse in the daytime, much less at night. If they attempted to make it home in the dark they’d be taking a big risk. In the next set of lines the speaker describes how the two walkers came upon “a tumbled wall.” This is a surprise, as up to this point it seemed as though the two were far from any form of civilization. The wall is there in front of them, physically, but it represents something larger. 
It is a metaphor for the wall that humanity has erected between the wildest parts of nature and curated, safe, human-friendly nature. It is a barrier between connecting with other forms of life and seeing them not as inferior, only different. The barrier is one that is tough to cross. It is covered with “barbed-wire binding.” This is a ubiquitous material. It’s used in equal measure to keep people in and keep animals out. The walkers come upon this barrier when they are out on their own, just when things start to become dangerous. They do not, at first, want to cross over onto the other side where the world is wilder. The two face the wall and halt their own impulses to go onward. They know that they need to stop and turn around and go back “up the failing path.” It is interesting to note the different types of danger prevalent in the text. They think that they should choose the danger of the path, as it is known to them, rather than the danger of something wholly unknown. Line thirteen reveals that the two did not move after all. They continued to stand there, sighing and thinking that this was the end of their journey for the night. They say, “‘This is all”’ and “‘Good-night.”’ Again, they go against what they initially think is right. The two still do not move away from the wall. Something is keeping them there. Perhaps a revived curiosity for the unknown that waits on the other side of the barbed-wire wall. All of a sudden, there is a doe. She stands […] round a spruce…looking at them Across the wall, as near the wall as they. The doe’s actions mirror their own. She is as curious about them as they are of her, but for different reasons. In this scenario she represents an unachievable wildness. The walkers on the other hand represent the exact opposite, as will be exposed in the next set of lines. The final line of this section of ‘Two Look At Two’ emphasizes the fact that two very different worlds exist on either side of the wall. 
The doe is in hers, and the walkers are in theirs. The speaker describes how the doe has trouble seeing the two humans on their side of the wall. The fact that they were not moving, and were of a shape she was unfamiliar with, confused her eyes. She saw them as being like “some up-ended boulder split in two.” Although she doesn’t know what these people are, she can tell they aren’t afraid. She thinks they are safe. The narrator moves away from the doe and back to the larger scene. To the walkers, or perhaps just to the narrator looking down at this scene, the doe seems to pass a judgement on the walkers. They were to her, […] though strange, [Something] She could not trouble her mind with too long, The doe does not want to spend any more time looking at the humans than she has to. They confuse her, but not so much that she wants to stay. Contrastingly, the walkers still have not moved. They were transfixed by the sight of this animal. This uneven consideration is a perfect representation of the divide between human and non-human animals. In the last line of this section, in what seems like a conclusion but isn’t, the doe moves “unscared along the wall.” The use of the word “unscared” is interesting. It is not actually a word, but a turn of phrase chosen by Frost to represent two different words: “scared” and the more commonly used “unscarred.” The deer is not scared of the people, so she moves away “unscared”; she is also not “scarred” by the metaphorical barbed-wire wall (due to the fact that the humans do not harm her) and moves on. Lines 25-30 In line twenty-five of ‘Two Look At Two’ one of the walkers wonders aloud if this is it: is there more they can “ask” for from the woods? There is; they aren’t done yet. There is a “snort,” which seems to come from the doe, that bids the two to “wait” where they are. Rather than the doe reemerging, a “buck,” or male deer, comes, “round the spruce.” He takes the doe’s place, Across the wall as near the wall as they.
The buck feels just as confused about the human onlookers as the doe did. Rather than standing still and thinking, though, he “jerks” his head around. This makes it seem as though he is asking and answering, ‘Why don’t you make some motion? Or give some sign of life? Because you can’t. I doubt if you’re as living as you look.” The deer is passing a judgment on the two humans, just as the humans would if the situation were not special. The buck sees them for what they are on the surface: unmoving, seemingly useless, unintelligent pieces of rock. He dismisses them. The walkers see these motions and remain entranced by the sight. Eventually, they are so taken in, they want to “stretch a proffering hand.” No matter if they did so or not, the buck decides to move on. The spell is once again broken, this time by the idea of the human beings moving and regaining agency over the situation. They have come into unique contact with two creatures on the other side of the ideological wall. At this point the buck moves off, as “unscared” as the doe, “along the wall.” Frost concludes the poem with a variant of the title, “Two had seen two.” The two human walkers had seen two deer and vice versa; varying connections were made across the wall. The most important part was that there was a change. A “wave” came over the two, so impactful were the encounters. It was, As if the earth in one unlooked-for favour Had made them certain earth returned their love. This connection with the two animals, no matter how uneven or strange, showed the two that they have a deeper tie to the world than to one another, or even their larger human cohort. They share love and spiritual life with creatures on the other side of the wall.
Toxic homes, toxic bodies By: Jayne MacAulay Reprinted with permission copyright © May 2007 CARP magazine One woman's illness sounds the alarm bells about toxic substances in homes that have the potential to make us all ill. As soon as Brenda Peck walked into the small two-bedroom house in Goderich, Ont., she knew she'd found her home. Not because it was the gorgeous house of her dreams – it was a simple 1950s-vintage red-brick bungalow her brother called a fixed-up fixer-upper. She knew because she felt normal – no shaking, shortness of breath or weakness. Nothing. Peck, 56, has been dealing with environmental sensitivities since 1992, so her six-month house hunt had occasionally been hazardous. "I had been in a lot of houses, and some of them I had reacted very badly to," she says. "I couldn't tell if it was the materials or what they cleaned it with – or what." Like the majority of people with multiple chemical sensitivity (MCS), Peck is extremely sensitive to scent, but everyone has different symptoms. "I get very shaky and unable to walk," she says. Her illness is linked to environmental conditions indoors and out. And is it any wonder? Since the Second World War, thousands of chemicals have flooded into our world for use in agriculture, construction, interior decor, food preparation and preservation, fashion – virtually every phase of modern life. The health effects of poor indoor air hit the radar screen after energy costs soared in the 1970s, and better insulated buildings were tightly sealed to conserve energy. As trapped contaminants accumulated indoors, people began to feel ill. Symptoms included headaches, tiredness, sinus congestion and difficulty concentrating. Today, new homes and buildings usually have heat recovery ventilators or heating, ventilating and air-conditioning (HVAC) systems to provide fresh air. New homes, however, also have an abundance of chemicals that affect air quality – chemical loads that can tip people over the line into MCS.
Statistics Canada reported in 2003 that some 1.2 million people had been diagnosed with MCS, chronic fatigue syndrome or fibromyalgia – related illnesses with sometimes overlapping symptoms. Peck was one of the 643,000 Canadians (2.4 per cent of the population) known to have MCS. Researchers in Georgia estimated in Environmental Health Perspectives the same year that 12.6 per cent of Americans suffer from the disorder. They also noted a connection between MCS and certain kinds of asthma, including reactive airways dysfunction syndrome (RADS). Peck's symptoms began in 1992, after she moved from a new apartment to a new house. Asthma and food intolerances got worse and she had trouble concentrating. Eventually, at age 50, she had to leave her career as a physiotherapist. MCS is only one problem caused or made worse by polluted indoor air. We're an indoor nation – spending up to 90 per cent of our days inside. Young children, the elderly and people with chronic illnesses get outdoors least, and so are among the most vulnerable to indoor air contamination. Health Canada notes poor indoor air leads to or aggravates asthma, allergies, respiratory illnesses, lung cancer, chronic obstructive pulmonary disease and other serious conditions. What's inside the inside air? Indoor air pollutants include biologicals such as moulds, dust mites, pollens, fibres and animal dander. Mould and dust mites flourish when relative humidity is more than 50 per cent, and both exacerbate asthma and allergies. Mould can also cause eye irritation, headaches, fatigue, runny noses, cough and permanent lung disease, and may cause lung infections in people with immune suppression or chronic lung disease. Volatile organic compounds (VOCs) are chemicals that escape as gases (a process known as off-gassing) from paints, plastics, cleaning products, pesticides and building materials. Off-gassing is highest in new houses and renovation projects, so ventilation is important.
Formaldehyde gas emitted by furniture and insulation, as well as particleboard and plywood, can irritate the eyes and respiratory tract. Radon, a naturally occurring radioactive gas that increases the risk of lung cancer, can seep into basements through cracks and joints. Since levels vary from house to house, testing is the only way to determine the household threat. Carbon monoxide gas, a product of incomplete burning, can kill if present in high concentrations. It can enter a house from a car running in a closed attached garage or when a fireplace or gas stove malfunctions. A cubic foot of air may suspend more than 400 million particles of smoke, dust and pollen. Dust particles can carry pesticides, products of combustion (including candle smoke) and heavy metals – some of which are cancer risks. Visible dust – and the motes floating in a sunbeam – is lint, broken fibres from carpets or clothing or tiny pieces of pet or human hair. It's most hazardous if it contains asbestos fibres, a serious threat to lungs. (Disturbing or removing asbestos should be left to professionals.) Getting back to basics The renovations to Brenda Peck's home had been done about 10 years before she bought it, so chemicals had long ago off-gassed and didn't present a problem. She had the rooms painted with low-VOC paints, the carpeting removed and the maple floors stained with water-based stain and finished with protective seal a month before moving. "If it had been a traditional varnish, it probably would have been a minimum of six months before I could have lived here," she says. Peck has to be constantly vigilant, but she has learned to live with her illness. Ironically, what she has to do to avoid symptoms also turns out to be good for the environment as a whole – using low-VOC products and forgoing harsh cleaning compounds in favour of plain old baking soda and vinegar, for example.
If we all acted as though we had multiple chemical sensitivities, our planet would ultimately be healthier, too. Although Peck tolerates the natural gas heating, she replaced the gas stove with an electric one. A charcoal water filter system and a portable air purifier also help keep her reactions at bay. In fact, it was water that made her move from the apartment she'd been managing in comfortably. She couldn't have chlorine-free laundry appliances there, and when her parents died and she had to sell their farm, she no longer had access to their non-chlorinated well water. Mould hasn't been a problem, as the basement is tight and dry. She's even found she can quilt there for up to two hours in spite of its glued-on indoor-outdoor carpet. Peck is grateful to have found two local painters who understood the seriousness of her problem. She had them remove carpeting on the basement stairs and carefully chip out, not sand, the underlying adhesive. Before that, she says, "I had to be very careful not to linger on the stairs or I would start getting dizzy or too weak and I'd have to pull myself up on the railing to get back up." Characteristically, symptoms of MCS involve many organ systems and occur in response to low levels of chemicals most people tolerate. They recur with each exposure to the offending chemical but improve when it's eliminated. Patients who fit this profile tend to react to odours and may have "brain fog," an unfocused feeling. Women are more often affected. And, as Peck has discovered, MCS is a chronic condition. Symptoms may follow a sudden or heavy exposure (through breathing, eating or absorption through the skin) or a stressful illness or injury, or may occur with chronic exposure or continuing stress. Unlike an allergic reaction in response to a specific chemical, allergic-like reactions occur with many unrelated compounds, possibly through a whole different immunological pathway. University of Toronto research suggests the illness has a genetic component.
Patients have faced skepticism – that it's all in their heads – but studies have also reported that psychological distress most often follows, not precedes, onset. Dr. Riina Bray, director of the Environmental Health Clinic at Toronto's Women's College Hospital, is convinced of its reality. "Patients' stories are much the same when they come in – the template of stressors and triggers. It's very text-bookish," she says. She doesn't think the illness receives the attention it deserves, even though its incidence is significantly higher than that of a disease such as AIDS. "It's on par with other chronic illnesses," she notes. "But this illness is causing people to lose their jobs and have miserable lives. And it's all so preventable." It's as if patients' bodies have been through war and are suffering a physical post-traumatic stress, Bray suggests. But in addition to toxic exposures, patients have also had other significant stresses: emotional, physical exhaustion self-inflicted by Type A personalities or driven athletes, infections or abuse. Their bodies have become hypersensitive; they feel more anxious. "All they need is a tiny whiff," she says. "It's like post-traumatic stress, a flashback and their body gets thrown into that tizzy again."

THE RISK OF RENOVATION

Home renovations have been the trigger for some. "It's the last thing that tips the balance," Bray says. "When they start reacting, the reno has to stop. Then the big process of cleanup and detox, which can take a couple of …" Since so many systems are involved and they're not working harmoniously, Bray says, the triggers have to be removed. Getting rid of toxins in the body is part of the treatment, but any detoxification plan has to be person-specific, she emphasizes. "The best thing to do is to see a physician or naturopath who is skilled in this area and get a plan," she says. "It's not one size fits all." Learning to handle stress on a psychological level is part of treatment as well. "That does not mean taking medications," she says firmly.
"It means biofeedback and other mind-body techniques." Bray says that the most important thing is to make the home chemical- and dust-free, adding that electromagnetic fields may be a problem for some with a lot of electrical equipment. And she adds, "You don't want a lot of artificial stuff – things …"

Working for a cleaner, safer world

Linda Nolan-Leeming, president of the Ottawa branch of the Allergy and Environmental Health Association (AEHA), knows the importance of trigger-free housing from personal experience. She thinks her personal road to MCS began as a child, playing hide-and-seek in the fog behind a tractor spraying pesticide to kill mosquitoes at a Girl Guide camp. Her family's apartment was frequently sprayed with pesticide as well. Later, after living in a series of new homes, she became so ill she had to quit work. "I would get a Parkinson's-like tremor when exposed to perfumes and pesticides," she says. "To this day, if I get a big hit of pesticides, I'll have convulsions, so it's quite …" AEHA plans to build an apartment-condominium building in Ottawa to provide safe housing for people with MCS. Leading-edge technology will ensure the best achievable air quality, even preventing odours from transferring from one unit to another. She vows the project, by a high-profile company in Ottawa dedicated to "green" building, will exceed Leadership in Energy and Environmental Design (LEED) standards. (LEED is a system for evaluating buildings in terms of sustainability, healthful interiors and energy use.) Nolan-Leeming expects the AEHA building will be better than platinum, the highest LEED rating.

Air – the better option

People with environmental sensitivities are affected by extremely low levels of irritants that don't seem to bother others. But notice them or not, chemicals are entering our bodies and there's not much research on what they're doing.
A School of Public Health report concluded that, of the 80,000 to 100,000 chemicals in global use, perhaps 25 per cent could be capable of harming human brains – especially young, developing ones. While debate rages over how dangerous, and at what level, some chemicals are toxic, some governments are endorsing a precautionary principle – taking precautions when human health is threatened, even though scientific proof of harm may not be complete. Going green – reducing chemical exposures and improving air and water quality – seems, at the very least, a sensible precaution. The growing interest in a clean environment is encouraging to Dr. Riina Bray: "I think it will help. There's a movement now for people to stop using so many chemicals in the home. I think the most important thing is getting our air clean, because it doesn't matter how clean your home is – if the air outside isn't great, there goes your immune system."

Copyright © May 2007 CARP magazine. Reprinted with permission.
Air Compressor Cooling System - Cooling Best Practices
The building has a central open-cooling tower with an abundance of tower water available at a maximum temperature of 85 ⁰F. Power is available at 460 Volts, 3 phase, 60 Hz. There is a compressed air dryer in the system that will work properly with 100 ⁰F inlet air from the air compressor.

Parallel Condensing Cooling Technology - Power Engineering
By adding a relatively small cooling tower (and surface condenser), the turbine back pressure with a PAC system can be reduced significantly at hot ambient conditions compared to a 100 percent dry ...

Cooling Towers and Dry Coolers | Surna
With the launch of the Surna Reflector, there will be a few new terms popping up around here. These include Cooling Towers and Dry Coolers. Due to the high heat of the bulb, the Surna Reflector can be effectively cooled using Cooling Towers and Dry Coolers during most of the year in most regions.

Dry Cooling Towers India - Dry Coolers, Dry Cooling Towers ...
Since the Dry Cooling Tower functions with air-cooling technology, no excess water evaporation or make-up water is required to operate the cooling tower. Also, the process/engine water/oil is taken directly from the system to the Dry Cooling Tower, so a mid-heat exchanger is not required.

Power plant and Industry cooling | ENEXIO
Since the takeover by Triton Partners, another new standalone company has been created out of the former Heat Exchanger Segment of the GEA Group AG. The Power Cooling division – which ranges from Air Cooled Condensers, Heller Technology and Wet Cooling Towers, including service for dry and wet cooling systems, to ENEXIO 2H Water Technologies – operates separately and independently under the new name ...

Hybrid cooling tower technology, between wet and dry cooling
A new hybrid cooling tower developed by the University of Queensland Geothermal Center of Excellence could provide an efficient solution combining the advantages of wet and dry cooling towers for geothermal power plants.

Wet Cooling - University of Michigan
Wet cooling towers (also called "evaporative cooling" or "wet re-circulating") are the most common technology in new power plants (Figure 1). Waste heat is dissipated to air via evaporation of cooling water. The greater the difference between the temperature of the cooling liquid and the temperature of ...

SPX Dry Cooling Technologies - YouTube
SPX Dry Cooling showcased its range of air cooled condenser and cooler technologies. It highlighted how the company is responding to the latest trends in Europe's fast-evolving energy mix and in ...

Novel Dry Cooling Technology for Power Plants
• The EERC's DDC system is a novel dry cooling technology currently under development. It is estimated to have a competitive advantage over conventional dry cooling options for large-scale heat dissipation.
• The unique cooling system design requirements and economics of solar thermal power plants may make them a more attractive ...

Dry Cooling - EERC Foundation
... the largest use for water in the thermoelectric power industry is for cooling water to condense steam. Imagine a technology that would 1) eliminate the need for cooling water, 2) be less expensive, and 3) outperform other dry cooling technologies available today.

Hybrid Cooling | ENEXIO
The Deluge Air cooled condenser is the latest technological achievement in hybrid cooling, where the primary interest is in dry cooling, but where limited water resources are available for use during certain periods of the year.

SPG Dry Cooling
SPG Dry Cooling is a global leader in air cooled condensers and coolers with equipment installed all around the globe. Our success comes from our vast offering of dry cooling solutions, which fall under many international patents.

Dry Cooling Technology in Chinese Thermal Power Plants
... advance of the dry cooling technology in China. In this paper a summary of dry cooling technology will be given with a focus on the Chinese practice. Keywords: Geothermal energy, Cooling tower, natural draft cooling technology, Coal-fired power plants. Thermal power plants make use of a steam cycle ...

SPX Cooling
Cooling tower performance: SPX Cooling Technologies, Inc. is a leading global manufacturer of cooling towers, evaporative fluid coolers, evaporative condensers and air cooled heat exchangers. For nearly a century, we have provided exceptional quality equipment and service to the HVAC, process cooling, industrial, and refrigeration markets.

Why JC Dry Cooling Tower? - jcequipments.com
Dry cooling tower design: JC Equipments' towers are designed and developed to work at any size for various industries. Our dry cooling tower design department is routinely updated to world standards, designing according to international standards through CTI membership.

New technology reduces water use by up to 80 percent | GreenBiz
Sponsored: Johnson Controls' BlueStream hybrid cooling technology could end up being a cooling tower's best friend.

About us | SPG Dry Cooling
SPX Dry Cooling becomes SPG Dry Cooling. The new abbreviation reflects the founders, Paharpur, and the fact that we are today a much larger group of companies. We shall continue to emphasize our long and proud history in dry cooling.

How it Works: Water for Power Plant Cooling | Union of Concerned Scientists
Dry-cooling systems use air instead of water to cool the steam exiting a turbine. Dry-cooled systems use no water and can decrease total power plant water consumption by more than 90 percent. The tradeoffs to these water savings are higher costs and lower efficiencies.

Dry Cooling FAQ - July 2012 FINAL - BrightSource Energy

Wet Versus Dry Cooling Towers - Cooling Technology Institute
Wet Versus Dry Cooling Towers, CTI Educational Seminar, February 28, 2001. Jim Baker: We have several speakers today, beginning with Mr. Tom Feely. Tom is the Environmental and Water Resource Product Manager for the U.S. Department of ...

Reducing Cooling Tower Water Consumption through Advanced Water Treatment Technology
In a cooling tower, you reject heat, so you need to add fresh water, or makeup water, back into the tower because a certain volume of water is required.

Dry Technology | EVAPCO
EVAPCO, Inc. is an industry leading manufacturing company with global resources and solutions for worldwide heat transfer applications. We are dedicated to designing and manufacturing the highest quality products for the evaporative cooling and industrial refrigeration markets around the globe.

Cooling tower - Wikipedia
In a wet cooling tower (or open circuit cooling tower), the warm water can be cooled to a temperature lower than the ambient air dry-bulb temperature, if the air is relatively dry (see dew point and psychrometrics). As ambient air is drawn past a flow of water, a small portion of the water evaporates, and the energy required to evaporate that ...

Dry cooling technology - Eskom
Dry cooling technology keeps the cooling water in a separate closed circuit which is cooled through heat transfer rather than evaporation. Thus, the amount of water needed to cool the plant is significantly reduced. As a result, the water usage for cooling of a dry-cooled plant is on average more than 90% lower than that of a wet-cooled plant.

B&W SPIG Successfully Completes Cooling Tower ... - babcock.com
SPIG S.p.A. (B&W SPIG) operates globally supplying an extensive range of turnkey cooling systems. Since 1936, we have designed, engineered and installed many state-of-the-art projects for a wide range of industries, including oil and gas, petrochemical, power generation, cogeneration and combined cycle, and district heating and cooling, to name a few.

Dry Cooling - University of Michigan
For ACC systems, steam from the turbine is routed directly to an array of A-framed tubes and a fan blows air directly across the array, convectively condensing the steam. Dry cooling systems use approximately 95 percent less water than wet systems, and are becoming more common in thermal power ...

CTI Bibliography of Technical Papers - Dry Cooling
Abstract: The eco wet-dry cooler, developed by EVAPCO, conserves water and energy used at power plants by using an innovative wet-dry fluid technology. The cooling tower works in wet-dry mode during the hot summer months and in dry mode other times of the year.

Dry Cooling Towers Manufacturer INDIA Dry Cooling Tower ...
The Dry Cooling Tower is one of the latest models of cooling tower. It is specially designed with copper or aluminium finned tubes to increase the heat transfer area, and these towers are particularly designed for water-scarce areas, since the Dry Cooling Tower functions with air-cooling technology.

Dry Cooling Tower - an overview | ScienceDirect Topics
Dry cooling is technically feasible for all CSP technologies, and is not a technology risk, as the technology has been implemented in conventional power plants over the globe for a long time. The issue with dry cooling is its negative impact on project economics: air as a cooling medium has a lower heat transfer coefficient than water.
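The lower heat transfer coefficient of air noted in the last snippet can be made concrete with Newton's law of cooling, Q = h·A·ΔT: for the same heat duty and temperature difference, the required surface area scales inversely with h, which is why air-cooled condensers need large finned-tube arrays. A rough sketch (the coefficient values are typical textbook ranges I have assumed, not figures from the sources quoted above):

```python
def required_area_m2(duty_w: float, h_w_per_m2k: float, delta_t_k: float) -> float:
    """Surface area needed to reject duty_w watts at a given
    film coefficient and driving temperature difference (Q = h*A*dT)."""
    return duty_w / (h_w_per_m2k * delta_t_k)

duty = 1.0e6      # 1 MW of heat to reject
delta_t = 15.0    # K driving temperature difference
h_air = 50.0      # W/m^2.K, forced-convection air (assumed typical value)
h_water = 2500.0  # W/m^2.K, water-side convection (assumed typical value)

area_air = required_area_m2(duty, h_air, delta_t)      # ~1333 m^2
area_water = required_area_m2(duty, h_water, delta_t)  # ~26.7 m^2
print(area_air / area_water)  # 50x more surface area needed for air
```

The 50:1 ratio follows directly from the assumed h values; real designs recover some of the gap with extended (finned) surfaces on the air side.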
Displaying 1 - 9 of 9 Contributor: MacLaury, Robert E., 1944- Subject: Anthropology | Ethnography | Linguistics | Social life and customs | Clothing and dress | Architecture | Oaxaca (Mexico : State)--History Extent: 230 pages Description: From 1968-1970, the anthropologist Robert E. MacLaury conducted fieldwork on Zapotec (Oto-Manguean) language and ethnography at Santa Maria Ayoquesco de Aldama, Oaxaca. His master's thesis based on that research, "Ayoquesco Zapotec: Ethnography, Phonology, and Lexicon," was accepted in partial fulfillment of the requirements for a master's degree in anthropology at the University of the Americas in 1970. Includes eighty black and white photocopy photographs of Zapotec Indians in Santa Maria Ayoquezco de Aldama, Oaxaca, Mexico from 1968-1970. Taken by MacLaury while conducting fieldwork for his thesis, the images reflect the social life and customs of the people, including clothing, utensils, daily activities and dwellings. See finding aid for related material. Collection: Ayoquesco Zapotec (Mss.497.4.M22) Date: circa 1925-1967 Contributor: Voegelin, C. F. (Charles Frederick), 1906-1986 | Turner, Glen D. | Swadesh, Morris, 1909-1967 | Wonderly, William L. | Lyman, Larry | Croft, Kenneth Extent: 7 folders Description: There are many items relating to Mexican languages in the C. F. Voegelin Papers. This entry is intended as a catch-all for materials that cover Mexican (and to some extent, Central American) languages in general. Researchers should also view the entries for specific languages (i.e., Nahuatl, Zoque, etc.) and for South America, under which Voegelin often filed Mexican and Central American materials. In Subcollection I, there is relevant correspondence with Glen Turner and William L. Wonderly in Series I. Correspondence; William L. Wonderly's "List of Central American Indian Languages" and Larry Lyman's "The Verb Syntagmemes of Choapan Zapotec" in Series IV.
"Works by Others"; and a folder on South American and Other Latin American Languages (which includes Central America and Mexico) in a file in Series V. Research Notes, Subseries V-A: Language Notes. In Subcollection II, there is relevant correspondence with Kenneth Croft (regarding American Indian language work in Mexico and Croft's progress with Nahuatl) and Morris Swadesh (his collection of Uto-Aztecan language materials, including many from Mexico) in Series I. Correspondence. Collection: C. F. Voegelin Papers (Mss.Ms.Coll.68) Culture: Wichí | Tohono O'odham | Tepecano | Nahua | Huastec | Karankawa | Otomi | Mazahua | Matlatzinca | Pame | Chichimeca | Cuitlatec | Mazatec | Popoluca | Cuicatec | Amuzgo | Zapotec | Chatino | Chinantec | Purepecha | Tlapanec Alternate forms: Papago, Tarascan, Tarasco Contributor: Brugge, David M. | Mason, John Alden, 1885-1967 | León, Nicolás, 1859-1929 | Weitlaner, Robert J., 1883-1968 | Howard, Agnes McClain | Kroeber, A. L. (Alfred Louis), 1876-1960 | Vaillant, George Clapp, 1901-1945 Subject: Mexico--History | Archaeology | Mexico--Antiquities | Kinship | Linguistics | Architecture | Politics and government | Material culture | Botany | Migrations | Pottery Genre: Reports | Essays | Notes | Photographs | Correspondence | Grammars | Vocabularies | Field notes Extent: 165 pages; Circa 300 items; Description: The Mexico materials in the John Alden Mason Papers include a log of a trip to Sonora, itinerary of pack trip from Yecora to Maicoba; lists of photographs; journal. Archaeological materials: report on archaeological sites near Rancho Guiracoba, Sonora, Mexico with report on surface collections at six sites in southern Sonora. Notes on the Northern Extension of the Chalchihuites Culture, written for the Mexican Historical Congress, Zacatecas. Slayton Creek Excavation, regarding Mexico; the Papago [Tohono O'odham]; a dig at Slayton Creek, Delaware.
Regarding archaeological, ethnological, and linguistic work in Mexico; genetic classification of languages of Central America and Mexico. Regarding internal strife in local (Durango) Indian tribe (including murders); archaeology in Durango; collection of specimens of material culture; work at Schroeder pyramid; cliff dwellings near Mezquital. Mentions Alex Krieger. Cave investigations in Durango and Coahuila, report on search conducted with Robert H. Merrill for traces of early man, particularly on the Folsom horizon. Written for Weitlaner volume. Includes description of three varieties of Cucurbita moschata; evidence in conflict with the theory that Cucurbita moschata was introduced into southern Arizona in late prehistoric or early historic times from the north and east. Regarding Maya pottery; Piedras Negras, Guatemala; archaeological work in Mexico and Guatemala; the University Museum (University of Pennsylvania); Vaillant's obituary. Includes correspondence between Mason and Sue Vaillant (Mrs. George C.) and between Mason and Charles Marius Barbeau. Linguistic materials: a list entitled, "Familias linguisticas de Mexico-idiomas y dialectos a ellas pertencientes," with the families with subdivisions: for Museo nacional de arqueologia, historia y etnologia, Anales. Includes lexical items in the various languages -- Hokan, Oto-Manguean, Uto-Aztecan, and Maya -- arranged in columns; Spanish glosses. Regarding Mason's Subtiaba-Hokan-Caduveo-Mataco comparative vocabulary. Kroeber is not much impressed with the possible resemblances in Mason's list (included). Mexican linguistics, comparative vocabularies, etc., includes short comparative vocabularies for Comecrudo, Papago-Tepecano, Nahua, Huaxtec, Choctaw, Coahuiltec, Karankawa, Tonkawa, Atakapa, Chitimacha, Tunica; notes on Sapir's classification; other miscellaneous notes.
Comparative vocabulary, includes letter from Frederick Johnson to John Alden Mason; comparative vocabulary which is number-keyed to a list of twenty-two languages and arranged in columns headed by Spanish glosses. Words lacking in some languages for almost all items. Languages include Otomi, Mazahua, Matlatzinca, Ocuiltec, Pame, Chichimeca, Cuitlateco, Mazatec, Popoluca, Chochotec (Tlapanec), Ichcateco, Trique, Chiapanec, Manque, Mixtec, Cuicatec, Amuzgo, Zapotec, Chatino, Chinantec, Tarasco, and Tlapanec. Scholarly materials: two versions of a paper, entitled, "Los Cuatro Grandes Filones Linguisticos de Mexico y Centroamerica," for the International Congress of Americanists, August 1939, Mexico. Photographs: Unidentified photographs showing people, dwellings, terrain, etc. Images of temples, excavations, crypts, jade work, etc. Includes a photograph of John Alden Mason and Burton W. Bascom from Palenque. Entire series of photographs from the Mason papers. The bulk of the images are from Mexico (Chihuahua, Durango, Sonora, etc.). Also 3 contact sheets of images from Peru. From the Durango expedition, a list of photographs; "Informes hacera de la Sierro de la Candela:" notes from Tarayre, pages 184-185; "Ruins of an agricultural colony near Zape"; possible routes of migration into Mexico; Everardo Gamiz "La Raza Pigmea," Durango, April 1934; an incomplete set of numbered photos enumerated in above list (all duplicates from museum set). A linguistic realignment north of Mexico, which gives six phyla, one "broken phylum," and two uncertain languages (for presentation at the meeting of the American Anthropological Association, Chicago, 1940) and a detailed outline of five phyla plus several unaffiliated languages. 
Collection: John Alden Mason Papers (Mss.B.M384) Contributor: Ayer, Edward Everett, 1841-1927 Subject: Archaeology | Art | Dance | Education | Museums | Travel | Oaxaca (Mexico : State)--History | Arizona--History | New Mexico--History | Ohio--History Extent: 19 items Description: Narratives of travels and adventures, 1881-1864, 1881-1916, in the Far West, Southwest, Northwest, northern Mexico, as well as Ohio, New York, and Europe, apparently written from memory about 1916. Mentions hostilities of Pawnee and Apache, describing an Indian attack. Visits Pima Indians, Navajo reservation; sees Taos Indian dance. Observes Mitla ruins; visits Sacaton Pima reservation; visits California Indian schools. Describes music for Indians at mission; visits Ohio mounds; comments on Northwest Art; statues of Indian heroes in Northwest. Two letters relate to Ayer and the Field Museum. See especially #1, 5, 6, 11, 12, 13, 19. Collection: Reminiscences of the Far West, and other trips (Mss.B.Ay2) Alternate forms: Zapoteco Date: 1920-1930, 1940-1947 Contributor: Angulo, Jaime de | Leal, Mary | Leal, Otis | McQuown, Norman A. | Swadesh, Morris, 1909-1967 Extent: Approx. 980 pages Description: The Zapotec materials in the ACLS collection are located primarily in the "Zapotec" section of the finding aid, which includes a detailed listing. The bulk of the materials were recorded and assembled by Jaime de Angulo and Morris Swadesh. De Angulo's materials include texts with Spanish and English translations, with accompanying linguistic notes, and studies proposing relationship among languages of Oaxaca. Swadesh's materials include vocabularies in multiple varieties of Zapotec with accompanying linguistic analyses. Currently only the three Zapotec languages given above in this listing can be specifically identified based upon information on locations where they were recorded. 
There are additional Zapotec languages of an undetermined quantity in this materials that are currently only identified in the cataloging according to regional terms such as Mountain and Valley dialects, Ixtlán, and Villa Alta. Some additional comparative materials utilizing Zapotec data can also be found in the "Mexico" and "Mixe" sections of the finding aid. Collection: ACLS Collection (American Council of Learned Societies Committee on Native American Languages, American Philosophical Society) (Mss.497.3.B63c) Contributor: Ficke, Arthur Davison, 1883-1945 | Merrill, E. D. | Parsons, Elsie Worthington Clews, 1874-1941 | Redfield, Robert Subject: Folklore | Linguistics | Oaxaca (Mexico : State)--History | Religion | Rites and ceremonies | Social life and customs Extent: 6 notebooks, 183 photographs, 100+ negatives, 3 drawings Description: The Zapotec materials in the Elsie Clews Parsons papers consist of materials in multiple sections of the finding aid. In Subcollection I, Series I, "Correspondence," see "Mitla, Town of Souls" and Parsons' "Letters in re. Mitla, Town of the Souls." In Subcollection I, Series II, "Notes, manuscripts, etc." the final notebook in "No. 11 Taos notebooks" is predominantly in Spanish and concerns fieldwork in Oaxaca among the Zapotec and other groups. Item "No. 19. Mitla journals" contains notebooks from Oaxaca, primarily concerning Zapotec matters. Item "No. 28. Mitla songs and photographs (Oaxaca region)" includes 14 songs, 183 photos, ca. 100 negatives of Oaxaca; 3 drawings and an article on Zapotec words; letter from E. D. Merrill to Franz Boas, May 13, 1930. Item "No. 53" contains a Zapotec-related newspaper clipping. In Subcollection II, Series I, "Professional Correspondence," see correspondence with Robert Redfield. 
In Subcollection II, Series III, "Lectures and Manuscripts", see "Addresses - [On Mitla, Oaxaca]," "Mitla: Town of Souls - Correspondence," "Survivals of Indian Culture among Zapoteca-Speaking Mexicans," and "Zapoteca Serpents." In Subcollection II, Series IV, "Research Notes" see "Mexico - Notes" from 1931. Additional relevant material may appear in other notebooks labelled "Mexico" or in other correspondence. Collection: Elsie Clews Parsons papers (Mss.Ms.Coll.29) Date: February 1, 1912 Contributor: Mechling, William Hubbs, 1888-1953 Extent: 2 leaves Description: Letter to Speck discussing his forthcoming article on Oaxaca language map, Mechling (1912). Collection: Frank G. Speck Papers (Mss.Ms.Coll.126) Alternate forms: Zapoteco Language(s): English | Spanish | Zapotec, Aloápam | Zapotec, Cajonos | Zapotec, Chichicapan | Zapotec, Güilá | Zapotec, Isthmus | Zapotec, Mitla | Zapotec, Rincón | Zapotec, Sierra de Juárez | Zapotec, Southeastern Ixtlán | Zapotec, Western Tlacolula Valley | Zapotec, Yalálag | Zapotec, Yareni | Zapotec, Yatee | Zapotec, Zaachila | Zapotec, Zoogocho Extent: Approx. 900 pages; Approx. 20,000 word slips Description: Materials relating to Radin's study of Zapotec languages, located in Series V and Series VIII. Includes a variety of materials, such as word lists, lexical slips, bibliographical notes, grammatical notes, texts (often with interlinear translations), and a Spanish-Zapotec dictionary comprised of about 15,000 slips, as well as materials for a Spanish-Zapotec lexicon and a Spanish-Zapotec vocabulary. Many of the pages are labelled with the name of a town or district in Oaxaca. One informant mentioned: Felipe Castellana, associated with Mitla. 
Place names associated with Radin's manuscripts are: Abejones, Hidalgo Yalálag, Ixtlán de Juarez, Lachatao, Mitla, Nuevo Zoquiapam, San Andres Solaga, San Antonio de la Cal, San Baltazar Chichicapam, San Esteban Atatláhuca, San Francisco Cajonos, San Francisco Telixtlahuaca, San Juan Atepec, San Juan Juquila Mixes, San Mateo Cajonos, San Miguel Aloapam, San Miguel Talea , San Sebastián Tecomaxtlahuaca, Santa Catarina Ixtepeji, Santa Maria de la Chichina, Santa Maria de Tule, Santa María Jaltianguis, Santiago Ixtaltepec, Santiago Jamiltepec, Sawatlan (Magdaglena Zahuatlan?), Serrano, "Serrano" (San Juan Chicomezúchil), Tehuano, Teococuilco, Teotilan del Valle, Villa Alta (district), Yolotepec de la Paz, Zaachila, "Zapotec del Valle" (Santiago Matatlán), Zimatlán de Álvarez. Collection: Paul Radin papers (Mss.497.3.R114) Alternate forms: Zapoteco Contributor: Rosenbaum, Harvey Extent: 24 pages Description: The Zapotec materials in the Phillips Fund collection consist of 1 item. Materials in this collection are listed alphabetically by last name of author. See materials listed under Rosenbaum: "Constraints In Zapotec Questions And Relative Clauses," an article on a movement rule in Valley Zapotec. Collection: Phillips Fund for Native American Research Collection (Mss.497.3.Am4)
Following the development of radar at Orfordness and at the Bawdsey Research Station in Suffolk during the mid-1930s, the Air Ministry established a programme of building radar stations around the British coast to provide warning of air attack on Great Britain. A survey was undertaken in 1938 to assess the suitability of the local terrain for Air Defence Radar operations, with the first of these new stations coming on line by the end of the year. This network formed the basis of a chain of radar stations called CHAIN HOME (CH). These stations consisted of two main types: East Coast stations and West Coast stations. The East Coast stations were similar in design to the experimental station set up at Bawdsey in 1936. In their final form these stations were designed to have equipment housed in protected buildings, with transmitter aerials suspended from 350’ steel towers and receiver aerials mounted on 240’ timber towers. The West Coast stations differed in layout and relied on dispersal instead of protected buildings for defence. Thus the West Coast stations had two transmitter and receiver blocks with duplicate equipment in each. Transmitter aerials were mounted on 325’ guyed steel masts, with the receiver aerial mounted on 240’ timber towers. The majority of Chain Home stations were also provided with reserve equipment, either buried or remote. Buried reserves consisted of underground transmitter and receiver blocks, each with three entrance hatches (two for plant and one for personnel) set on steel rollers. Nearby were the emergency exit hatch, ventilation shafts and a 120’ wooden tower carrying the aerials. On some stations the transmitter and receiver buried reserves were together on an adjoining site (often the next field). At others the two buried reserves were separate but located close to their respective above-ground buildings.
Many of the West Coast stations had remote reserves some distance from the main station but utilising similar above-ground transmitter and receiver blocks. The station at Netherbutton was a standard East Coast style Chain Home radar station with buried reserves.

In January 1939 a radar station was proposed for Orkney as part of the defences for Scapa Flow, the main anchorage for the British Fleet; this was to be an extension of the Chain Home network. The site chosen was Netherbutton, an area of high ground four miles east of Kirkwall. This was not considered to be an ideal location but was the best site available on Orkney’s generally flat terrain. 13 acres of land were acquired, and the first construction on the site was accommodation for the workforce within the compound.

A power house was built at Deepdale Farm to the north-west of the site. As Netherbutton would not be connected to the mains supply, this would provide the main power supply for the station. There were two 60kW generators driven by 175HP Blackstone diesel engines. In case the main power station was knocked out during an enemy attack, a standby power station or ‘set house’ was also provided within the main compound.

Because of the urgency of this new facility, a decision was taken to equip the station from other redundant sites rather than wait for new transmitter and receiver sets to be manufactured. 90-foot guyed wooden towers for the transmitter and receiver aerials came from the radar station at Drone Hill in Berwickshire, and the aerials, transmitter and receiver came from the redundant station at Ravenscar near Whitby in Yorkshire. Work started on the installation on 13th May 1939, and a test flight on 1st June 1939 showed that the station was functioning correctly, with a Blenheim aircraft flying at 8000’ being detected at a range of 60 miles. This temporary Advanced Chain Home (ACH) station was handed over to the RAF the following day.
Because of its poor location, RAF Netherbutton did not prove as reliable as had been hoped, and Bill Hewison describes the station as ‘essentially useless’ in his book ‘This Great Harbour Scapa Flow’. The Air Ministry rejected these suggestions, although the Admiralty claimed that long-range data from the light cruiser HMS Curlew was “worth half a dozen Netherbuttons!”

In October 1939 there was a proposal to improve coverage by replacing the 90-foot towers with 240-foot wooden towers and converting the station to all-round coverage. This work was completed on 29th October, promoting the station from Advanced Chain Home (ACH) to Intermediate Chain Home (ICH), a temporary stage before upgrading the station to a permanent Final East Coast Chain Home. At this time the transmitters and receivers were housed in sandbagged wooden huts, but these were eventually replaced with protected brick transmitter and receiver blocks surrounded by blast walls and an earth traverse. Four new 350’ steel transmitter towers had been erected by February 1940 in an attempt to improve the performance of the station, and final calibration work on the new all-round array was completed in July 1941. With all these modifications the station’s performance was found to be greatly improved.

Initially, Netherbutton was linked to the operations room at Wick, but from October 1940 the station relayed information on approaching enemy aircraft to the combined gunnery and sector operations room at Kirkwall, from where the anti-aircraft guns located around Scapa Flow were controlled.

At the end of the war RAF Netherbutton was placed on care and maintenance but was later selected as one of 15 stations promoted to a ‘readiness’ Chain Home. The station was re-equipped with a Type 1 radar and two channels as part of the first phase of the ROTOR programme (code BNT). In 1954 it was still listed as ‘readiness’, but with the introduction of the Type 80 radar in 1955 RAF Netherbutton was redundant.
Television reception first came to the Orkneys in October 1955 when a new transmitter opened at Meldrum in Aberdeenshire. The Orkneys were never intended to be in the service area for this new transmitter, and reception on the islands was very unreliable, varying in quality according to weather conditions. A year later there were only 36 television licences issued to Orkney residents, and those that did have sets complained of interference from a station in Russia.

In order to improve reception on the islands, the redundant radar station at Netherbutton was selected as a suitable site for a relay station early in 1957, and with the final closure of the radar station in 1958 the site became available. Much of the land was sold back to the original landowners, but the transmitter block and the four transmitter towers were sold to the BBC for use as a relay station for the Orkney Islands. Only two of the steel masts were required; one of these was extended to 411 feet. Radio and television transmitters were installed in the transmitter block, providing the Orkney Islands with 405-line TV reception and better radio reception. The two redundant masts were demolished at this time.

The new relay came on line with limited power in December 1958, and there was a pre-Christmas rush to buy sets. By December 1959 the station was on full power and there were nearly 2000 licensed television sets on the islands, about one in every four households.

In 1986 the relay station became redundant when the BBC moved to a new location at Keelyang. The masts were sold for scrap and the land was auctioned. The transmitter block was later turned into a dwelling house. The two remaining masts were dismantled by J.L. Eve Construction, the same firm that had erected them 47 years earlier.

RAF Netherbutton today

The technical site at Netherbutton is bisected by the A961. The transmitter block still stands on the west side of the road at the end of a short access drive.
There is a derelict picket post at the end of the drive. The transmitter block has been greatly altered, first for its use as a BBC transmitting station and then by its conversion to a dwelling. The earth traverse has been removed, but three sides of the blast wall surrounding the original brick building are still standing. It is difficult to say how much of the current building is original; it would appear that the shell of the building has been incorporated into the new two-storey dwelling. The two warden’s cottages still stand on the A961 and are now in private occupation. In the field behind the cottages there is a blast wall running around three sides of a square; it is assumed a building once stood in the centre.

The receiver block stands on the opposite side of the road at the end of a drive; it too has recently been converted into a dwelling. It would appear that the blast wall itself now forms the building and the internal brick structure has probably been demolished. The bases of the wooden receiver tower can be seen in an adjacent field. The stand-by set house could not be found, so it is assumed this has been demolished.

The buried reserve is located on the south side of Northfield Farm house, 400 yards north-east of the receiver block. Both bunkers can still be seen, together with their adjacent mast bases, but only the stubs of the ventilation shafts are still extant. The transmitter reserve is flooded; the level of the water varies between 2’ and 8’ depending on weather conditions and the time of year. The internal walls are faced with red glazed bricks. Some ventilation trunking can be seen lying on the floor beneath the water level. The main transmitter room has been completely stripped; even the doors into the lobby and toilet have been removed. The receiver block is dry but strewn with rubble, much of it glazed bricks from the demolished internal partition walls. Both the toilet wall and the crew room walls have been demolished.
The ventilation plant room has been stripped, leaving only the concrete plinths where the plant was mounted. Both reserves still retain their three flat reinforced-concrete covers on steel rollers and running rails. The two larger covers were for plant access, and the smaller cover gives access to a steel staircase down 17’ 5” into the bunker. All the hatches are closed but can be opened using farm machinery; the receiver reserve was entered by this method for this report. Both reserves still retain their emergency escape shafts, with their double interlocking waterproof hatches still in good condition. There is an eight-foot vertical shaft giving access to a low passage that runs for 13 feet to a blast door (now removed) half way up the wall at the back of the operations room. A second offset ladder is fixed to the wall.

Sources:
- Bob Jenner
- The Orkney Wireless Museum
- The Orcadian
- PRO Files AIR 25/681 & AVIA 7/308
https://www.subbrit.org.uk/sites/netherbutton-chain-home-radar-station/
The Tomb Guard

Serving at the Tomb of the Unknown Soldier (Tomb) was a defining period in the lives of Tomb Guards. Although Tomb Guards come from every state in the United States of America (U.S.) and every walk of life, they are forever bonded through their shared experience of service at the Tomb. A strong bond was formed through an extremely demanding and humbling experience. Tomb Guards are handpicked and rigorously trained. The duty at the Tomb is not for everyone; the majority of soldiers who begin Tomb Guard training fail. Tomb Guards describe their service as a privilege and an honor, and are undeniably proud of their service. They are part of an unbroken chain of soldiers dating back to 1926. The ideals of the Tomb became the guidepost for their lives, as well as a motivating factor and measuring stick for future endeavors.

The Sentinel’s Creed is the Tomb Guard standard. The 99 words of the creed capture the true meaning of their duty. You will often hear the words “Line-6” proudly uttered by Tomb Guards as they converse with each other or with their chain of command.

The Sentinel’s Creed

My dedication to this sacred duty is total and whole-hearted. In the responsibility bestowed on me never will I falter. And with dignity and perseverance my standard will remain perfection. Through the years of diligence and praise and the discomfort of the elements, I will walk my tour in humble reverence to the best of my ability. It is he who commands the respect I protect, his bravery that made us so proud. Surrounded by well meaning crowds by day, alone in the thoughtful peace of night, this soldier will in honored glory rest under my eternal vigilance.

Tomb Guards are part of the 3rd U.S. Infantry Regiment “The Old Guard”. Serving the U.S. since 1784, The Old Guard is the oldest active infantry unit in the U.S. military.
After a valorous performance in the Mexican War, the Old Guard received its unique name from General Winfield Scott during a victory parade in Mexico City in 1847. The Old Guard has a long history of service to the U.S., from the Revolutionary War to the Iraq War. Since World War II, the Old Guard has served as the official “U.S. Honor Guard” unit and “Escort to the President”, as well as maintaining its certification as an infantry unit for combat roles. In that capacity, Old Guard soldiers are responsible for conducting military ceremonies at the White House, Arlington National Cemetery (ANC), the Pentagon, national memorials and elsewhere in the nation’s capital. In addition, these soldiers support civil authorities in Washington D.C. and support overseas contingency missions.

The Old Guard recruits soldiers based on certain intangible traits, and with requirements for height and weight, physical fitness, aptitude scores, and conduct. These soldiers are considered to be the most suitable to represent the U.S. at home and abroad, and the Tomb Guards are considered the best of this elite unit.

The Old Guard comprises three battalions 1, two of which reside at Ft. Myer. The battalions are organized into several companies to fulfill their mission, along with the following specialty platoons:
- The U.S. Army Drill Team
- The U.S. Army Continental Color Guard
- The U.S. Caisson Platoon
- Presidential Salute Battery
- Pershing's Own
- The Old Guard Fife and Drum Corps

and the most recognized platoon: the Tomb Guard. The Tomb platoon comprises three squads, or “reliefs”: 1st, 2nd and 3rd. The reliefs are organized by height, so that the Tomb Guards are similar in size during the Changing of the Guard, although the Sergeant of the Guard can organize reliefs based on operational needs.
The mission of the Tomb platoon is:
- Responsible for maintaining the highest standards and traditions of the United States Army and this Nation while keeping a constant vigil at this National Shrine; and
- Whose special duty is to prevent any desecration or disrespect directed toward the Tomb.

To become a Tomb Guard, an Old Guard soldier must volunteer by applying for appointment to the Tomb through the Sergeant of the Guard. To be considered for an appointment, the soldier must be highly motivated and disciplined, and possess a strong military bearing and soldierly appearance. If appointed, the soldier is assigned to the Tomb for an initial two-week training period. The period focuses on basic Changing of the Guard sequences, uniform preparation, and memorization of a basic “knowledge” packet about the Tomb and ANC. At the conclusion of the two weeks, the soldiers are tested in these areas. If they pass, they are assigned to one of the three reliefs as trainees for an intense training period. If they fail, they are assigned back to their company.

Upon reporting to a relief, the trainee is assigned a Tomb Guard trainer. The trainer is a mentor who is expected to mold the trainee into a Tomb Guard. The trainer informs the trainee of what is expected of them, including following strict rules, training guidelines, and the need for complete dedication and commitment to the Tomb. The trainer then teaches, monitors, inspects, and tests the trainee during the training cycle.

The training cycle is intense, consisting of a series of five exhaustive tests over six to twelve months. The tests focus on outside performance (the Changing of the Guard and “Walking the Mat” 1), uniform preparation, and knowledge. The outside performance tests cover the weapons manual, ceremonial steps, cadence, military bearing, and orders. The uniform preparation tests cover Tomb uniform standards 2 for the Army Dress Blues, shoes (“spits”), glasses, and brass and medals.
The knowledge tests cover 35 pages of information on the history of the Tomb and ANC, which the trainee must recite verbatim - including punctuation. The tests are progressive, demanding quantifiable improvement and demonstrated performance. If the trainee completes the training cycle and passes the tests, they will be able to flawlessly conduct seven different types of ceremonies, meet the highest standards of uniform preparation, and recite 35 pages of information without error. If the trainee fails any test, they are assigned back to their company.

The successful trainee is awarded the Tomb Guard Identification Badge (Badge), and is referred to thereafter as a Tomb Guard - and affectionately known by their peers as a “Badgeholder”. The Badge is the least awarded badge in the Army, and the second least awarded badge in the U.S. military, trailing only the Astronaut Badge. The Badge is the only military badge that can be revoked, during the lifetime of the Tomb Guard, for any action that brings disrespect to the Tomb.

The relief is led by a Commander of the Relief (Staff Sergeant) who is responsible for the operation, welfare and morale of the relief. Ideally, the relief consists of two teams, each with an Assistant Relief Commander (Sergeant) and four additional Tomb Guards, for a total of nine soldiers. The relief is led and supported by Tomb Headquarters, consisting of the Platoon Leader (Lieutenant), Sergeant of the Guard (Sergeant First Class), Assistant Sergeant of the Guard (Staff Sergeant), the primary trainer and a driver. The Platoon Leader oversees the administrative and operational functions of the Tomb. In addition, they serve in various ceremonial functions on the company level. The Sergeant of the Guard oversees the same day-to-day functions, mentors and develops junior Non-Commissioned Officers, and conducts presidential wreath laying ceremonies. The three reliefs are on duty in 24-hour rotational shifts.
The Tomb Guards’ day begins at 5:00 A.M. with arrival at the Tomb Quarters 1 for duty. The Tomb Guards inspect the quarters, prepare their uniforms, review orders, and receive their duty assignments. At 6:30 A.M., the Tomb Guards inspect the trainees’ readiness and uniforms. If a trainee meets relevant standards, the Tomb Guard may allow them to walk the morning “bolo” 2 at 7:00 A.M. The evening “bolo” is the final change and walk of the day. During the hours of the day ANC is open to the public, the Tomb Guards perform several Changing of the Guard and wreath laying ceremonies, and Walking the Mat. During summer hours, the Changing of the Guard ceremony takes place every half-hour, and during winter hours every hour. Although all walks are sacrosanct, the most coveted walk for a Tomb Guard is the noon “Noon Moon” 3 walk. Tomb Guards also conduct retreat and retire the colors in accordance with military tradition. During the same time, the trainees perform “mirror time” 4, conduct uniform preparation, study knowledge, check in wreaths, and alert the Tomb Guards of the next Changing of the Guard by performing a “quarter till” 5.

The Tomb is guarded 24 hours a day, 365 days a year. So, after the evening “bolo”, non-ceremonial changes and walks in battle dress uniforms are performed until the next morning’s “bolo”. During this time, the Commander of the Relief usually conducts training for the entire relief. With repetition and meticulous attention to detail, the relief works together on the various sequences, emphasizing uniformity and cohesion. These night hours are the time when the trainees hone their skills. The mechanics of guard duty come naturally to very few; trainers spend countless hours providing feedback and teaching the nuances of guard duty.

- The Tomb Quarters is located below the Memorial Amphitheater, and is where the Tomb Guards live and work during their duty time.
- The term “bolo” stands for “be on the look out”, and is the first and last guard change and walk prior to public ANC hours. The Tomb Guard may allow a trainee to walk the mat in full ceremonial uniform as practice.
- The “Noon Moon” walk is coveted because it is the most visited, and therefore highest-profile, Changing of the Guard and walk of the day.
- “Mirror time” is part of Tomb Guard training when the trainee practices the weapons manual and movements in front of several ceiling-to-floor mirrors in the quarters.
- The “quarter till” alerts the Tomb Guards of the next Changing of the Guard, and is also a time to present Tomb Guards with special “high-speed” knowledge or certain motivation for the privilege of Walking the Mat.
https://tombguard.org/tomb-of-the-unknown-soldier/the-tomb-guard/
"The light that burns twice as bright burns for half as long - and you have burned so very, very brightly, Roy. Look at you: you're the Prodigal Son; you're quite a prize!" -Tyrell, from Blade Runner Look up at the night sky. On a clear, dark night with normal vision, you can literally see thousands of stars. Some of them are barely visible, others shine so brightly that they come out when the sky's still blue! Why do some appear brighter than others? Two reasons. Some stars are simply closer to us, but others, intrinsically, shine spectacularly bright. Let's take a look at a small section of the Southern sky. Alpha Centauri (in yellow, above) is one of the brightest stars in the night sky, coming in at #4 on the list. It's similar to our Sun, only a little bit bigger and brighter, and has roughly the same color. The reason it's so bright, though, is that it's so incredibly close to us: only 4.4 light years distant. But take a look at the second brightest star above (the blue one). Known as Beta Centauri, it's the 10th brightest star in the night sky, appearing about 70% as bright as its yellow neighbor. Except Beta Centauri isn't really Alpha Centauri's neighbor. While that yellow star is 4.4 light years away, Beta Centauri, the blue one, is 530 light years away, or over 100 times as far away! Why then, does Beta Centauri appear almost as bright as Alpha Centauri? Well, because it's a different type of star! If we go by color, yellow Alpha Centauri is a "G-type" star, much like our Sun. But Beta Centauri is one of the bluest stars out there, making it a "B-type" star. In fact, if it were just a little bluer, it'd be an "O-type" star, the bluest of them all. Surprisingly, we can learn a lot from a star's color. For the whole time a star burns Hydrogen into Helium -- just like our Sun has been doing since its birth billions of years ago -- its color is indicative of another property. Take a look at the picture below. The bluer ones are bigger, too. 
In fact, the bluer a star is, the larger, brighter, and hotter it is as well! And Tyrell from Blade Runner got it right: the brighter, hotter, bluer stars burn through their fuel faster! A "G-star" like our Sun will live about 10 billion years. A star only 10% as massive, an "M-star", will live many trillions of years! But what if you start looking at stars more massive than our Sun? Well, you need to know where to look, because the very massive ones are rare. We find most of them inside star clusters, like 30 Doradus, below. A "B-type" star, like Beta Centauri, can be up to about 12 times as massive as our Sun, and instead of 10 billion years, only lasts about 10-20 million years before it burns out all of its fuel. Despite being over 100 times farther away and having much more fuel to burn, a B-type star can appear incredibly bright and short-lived, because it burns its fuel over 10,000 times faster than the Sun does! But they're not even the most extreme stars in the Universe. If you can get more than around 12-15 times the mass of the Sun together in one star, you're going to get the brightest, hottest star type in the whole Universe, an "O-type" star, like Alnitak (below). But why muck around with a star only 28 times as massive as our Sun? Even though Alnitak has a total lifetime of just one or two million years or so, we can find even brighter, heavier, shorter-lived ones. If we look near the center of the galaxy, we can find star WR 102ka, located in the Peony Nebula below. WR 102ka is located where the bright white spot near the center of the image is, and it weighs in at a whopping 175 times the mass of the Sun! At around 25,000 light years away, WR 102ka has an incredibly interesting property: it's already dead! A star that massive will live less than 25,000 years, and since the light we're seeing now left that star 25,000 years ago, it's already burned up all of its fuel, and has likely died in a tremendous supernova explosion! 
But WR 102ka, you need to move your skinny butt over. There's a new star in town that's got you beat. It weighs in at 265 times the mass of the Sun. It's so massive that it's probably already blown off something like 60 times the mass of the Sun, meaning it was over 300 times as massive as the Sun when it was born. A star like this is unheard of, and many were dubious that a star like this could have even existed! With a lifetime under 10,000 years, we're lucky to have caught a glimpse of it at all! And since it is the biggest star we've ever found, would you like to see an illustration of just how big it is? In terms of size, this star is to the Sun like Jupiter is to the Earth: huge! In terms of energy output, this one star, R136a1, radiates more than 10 million times faster than our Sun. Imagine that, for a minute. If you replaced the Sun with this star, you'd be able to place Earth nearly an entire light-year away from the Sun, and life would still survive. For comparison, in our solar system, it takes sunlight less than nine minutes to reach us. So be aware that behemoths like this are out there, burning through Hydrogen with the fury of millions of Suns, and be very, very happy about the fact that they're as far away from us as they are!

Second, in your third image above, in addition to the M through O type stars illustrated next there are also two objects labeled T & L. What are they? Thanks!

Well, WR 102ka may be the most massive, and the brightest, but it is far from the biggest. The red Hyper-giant star "Canis Majoris" is waaaay bigger. google it, and see size comparison with the sun.

How do we know R136a isn't a binary of two extremely large, but not model-destroying, stars? I seem to recall that this was an issue for several other hot blue supergiants.

"and be very, very happy about the fact that they're as far away from us as they are!"
we wouldn't be here to be very happy if it wasn't a fact =p i bet you are tired of this kind of thinking =p and thx for this... enlightening post

Aha. This may answer a question I've been wondering about. Since heavier elements (and in particular, Carbon) need to be generated in stars, and then scattered about to be later accumulated on cold wet things like Class M planets, there was no chance of life forming on any planet-like body circling any early stars (because they couldn't have existed) or later in time (because there could not have been much carbon). So if the universe is 14*10^9 years old and a "typical" star cycles at 5*10^9 or whatever, sufficient carbon to be laying around for life to get started would have existed only in the more recent cycles, easily for the last half of the Universe's time, not so likely for the first half. But if there are these big huge stars that cycle quickly, no problem. Lots of carbon and other elements would be around early on. Right? Wrong? Not so simple?

@ Lane: the normal stellar classification system runs O, B, A, F, G, K, M, however there are a few extensions: at the very bright end, Wolf-Rayet stars (types W, or ON, OC, BN, BC) have mostly helium burning rather than hydrogen and often exhibit strong emission lines of nitrogen, carbon, or oxygen. At the cool end, L and T are red and brown dwarfs respectively, the brown dwarfs having their peak emission in the infra-red and displaying methane in their spectra.

@ Greg: supernova explosions are usually viewed as the prime mechanism for distributing large quantities of higher-mass nuclides around a galaxy (especially trans-ferrous elements) since the supergiant stars involved usually have very short life-times, but have nonetheless run all the way up to the iron limit, and the explosions blow off a considerable proportion of the mass in a shockwave expanding in all directions.
So initially heavy elements are confined to regions that are proximate to supernova eruptions, but over a long period of time they would tend to be better scattered throughout a galaxy.

I have to wonder here.. If the lifetime of this thing is only 10,000 years, does it ever really qualify as a star in the sense that most people would recognise? After all, surely it will still be in the process of coalescing when it goes supernova?

Henjak (#2): You probably mean "VY Canis Majoris".

See, this proves there's a creator! So as the sun starts to set on Saturday, October 22, 4004 BC, God starts work and says to his little boy, "Stick with me, kid, and I'll make you a big star...."

Andrew @7: If it has intrinsic luminosity due to continuous fusion reactions, it's a star. You are probably right that it never settles down to an equilibrium size, which is part of the reason it has blown off so much mass. Also, the t=0 point for a star comes after (or near the end of) the coalescence phase, since you have to have compressed a core to the necessary region of temperature-pressure space to ignite fusion.

I think I read that when stars start fusing, the outward radiation pressure prevents the star collecting more hydrogen. So, what are the top theories for how a star gets this big? Colliding stars?

Haven't you got your massive stellar lifetimes a bit off? I read that these heavyweights last a million or two years. If I do a simple computation L/M (luminosity divided by mass) for R136a1 versus the sun the number is 30e3, leading to an expected lifetime of 300,000 years. If I then account for the fact that the variation of luminosity of a solar-type star is maybe 10,000 times (red giant versus main sequence), but O-type stars' luminosity doesn't vary so much, I'd think the heavyweight lifetime would be lengthened more.
Then the heavyweights burn more of their fuel during their lifetime (I'm not sure this is true, perhaps the core explodes too early to allow this), so the heavyweights gain another advantage. So I suspect the press-reported lifetimes of a million or two million years are closer to the truth.

you pretty much summed it up. however, i recently read something someplace (nature? science? boys life?) that suggested supernovas can't account for all the metals* in the universe.

*apparently wacky astro types consider everything but hydrogen and helium a metal.

It seems to me that the numbers in this article must be inaccurate. If the power output of a star is 10 million times that of the sun, then the distance at which we would receive the same power flux is about 3162 times the distance to the sun (3162 is the square root of 10 million). The sun is about 8.3 light minutes from the earth. Hence to get the same radiation from the big star, the distance should be about 18.3 light days, which is way short of the claimed "nearly a light year" (still an impressive distance though).

If nothing (by Einsteinian theory, anyhow) can travel faster than light, and if time and space are interconvertible, how can one say that what we are seeing is the light of a star that "used to be"?

@Steve nobody said that. It's perfectly possible (and likely in the case of a large or distant star) that the stars we're observing now have long since died.

Bah, never mind, I totally misread that comment.

Scientists are not really sure about the brightest stars because we have studied just a bit.

But if we still get the light, gravity and so on of a star, in what sense is it not there?

hi i was on the net looking at the cosmos when i forgot the full name of the biggest star ever found so far. then...while looking for the name i found your article. two questions...1 is R136a1 and vy Canis Majoris the same star?...because i thought vy Canis was the biggest followed by beetle juice second.
and 2 you mentioned that the biggest stars burn blue^^...so then why dose Canis Majoris burn red in every demonstration that I've seen of it?. thnx...and a friendly reminder that am just a novice.

VY Canis Majoris is not fesable it is that big our sun is a like the size of an atom compared it google search it and u will know wat i mean.

too many words! where is the lesson? which are the big stars? where are the big stars? where are the planets?

is r136a1 more powerful than vy canis majoris?

is vy canis majoris is bigger than r136a1??

when it vy canis majoris and r136a1,which one is more bigger blast scientist ??

is VY CANIS MAJORIs IS THE BIGGEST STAR IN THE UNIVERES

Does nobody know how big planets around that big star could be? The bigger the star the bigger the planets in those systems are. That would make planets bigger then our own sun...... we only cant see them because of the star itself....

Does not follow. The star could have gobbled up all the material leaving nothing for planets to form with. A planet bigger than the sun would be a star.

is there a biggest stars doesnt discover yet? i can"t realize there have a biggest star.

What would it take to move planet to the right atmosphere temperature from the sun To hold life

What about vy canis majoris?
That's about 20 light-days, not 1 light-year. Am I missing something?

The graphic is an artist's impression of the differing sizes of those stars. In photographs of real stars, the cross you see from the brighter stars is caused by the mechanical structure of the telescope. Stars do not naturally have anything other than a spherical shape. Hope that helps a little. :)
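The inverse-square arithmetic the commenters are checking can be sketched in a few lines; the luminosity ratio and Sun-Earth distance below are the values quoted in the thread:

```python
import math

# Flux falls off with the square of distance: F = L / (4 * pi * d^2).
# A star with L = 10^7 L_sun therefore delivers the Sun's flux at
# d = sqrt(10^7) AU. Values are the ones quoted in the comments above.
L_RATIO = 1e7               # star's luminosity in units of L_sun
AU_IN_LIGHT_MINUTES = 8.3   # Sun-Earth distance, approx.

d_au = math.sqrt(L_RATIO)                              # distance in AU
d_light_days = d_au * AU_IN_LIGHT_MINUTES / (60 * 24)  # convert to light-days

print(f"{d_au:.0f} AU, or about {d_light_days:.1f} light-days")
# -> 3162 AU, about 18.2 light-days, matching the commenters' estimate
```

This confirms the correction in the thread: the equal-flux distance is on the order of 20 light-days, not "nearly a light year."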
The Barents Sea is a shelf sea of the Arctic Ocean. Being a transition area between the North Atlantic and the Arctic Basin, it plays a key role in water exchange between them. Atlantic waters enter the Arctic Basin through the Barents Sea and the Fram Strait (Figure 3.1.1). Variations in volume flux, temperature and salinity of Atlantic waters affect hydrographic conditions in both the Barents Sea and the Arctic Ocean and are related to large-scale atmospheric pressure systems.

Mesozooplankton play a key role in the Barents Sea ecosystem by transferring energy from primary producers to animals higher in the food web. Geographic distribution patterns of total mesozooplankton biomass show similarities over time, although some inter-annual variability is apparent. Challenges in covering the same area each year are inherent in such large-scale monitoring programs, and inter-annual variation in ice cover is one of several reasons for this. This implies that estimates of average zooplankton biomass for different years might not be directly comparable.

Benthos is an essential component of marine ecosystems. It can be stable in time, characterizing the local situation, and is useful for explaining ecosystem dynamics in retrospect. It is also dynamic and shows pulses of new species distribution, such as the snow crab and the king crab, and changes in migrating benthic species (predatory and scavenger species such as sea stars, amphipods and snails with or without sea anemones). The changes in community structure and composition

Most Barents Sea fish species are demersal (Dolgov et al., 2011); this fish community consists of about 70-90 regularly occurring species which have been classified into zoogeographical groups. About 25% are Arctic or mainly Arctic species.
The commercial species are all boreal or mainly boreal (Andriashev and Chernova, 1995), except for Greenland halibut (Reinhardtius hippoglossoides), which is classified as either Arcto-boreal (Mecklenburg et al., 2013) or mainly Arctic (Andriashev and Chernova, 1995).

Phytoplankton development in the Barents Sea is typical for a high-latitude region, with pronounced maximum biomass and productivity during spring. During winter and early spring (January-March), both phytoplankton biomass and productivity are relatively low. The spring bloom is initiated during mid-April to mid-May and may vary strongly from year to year. Bloom duration is typically about 3-4 weeks and is followed by a reduction in phytoplankton biomass, mainly due to nutrient exhaustion.

During the 20 June to 14 August 2017 period, a sighting survey was conducted in the Barents Sea east of 28°E as part of a six-year mosaic survey of the Northeast Atlantic to estimate the regional abundance of minke whales and other cetaceans during summer. Coverage was adequate, except in the southeastern area, where military restrictions limited survey activity. The most often observed species was the minke whale, followed by white-beaked dolphins, harbour porpoises, humpback whales, and fin whales. A few observations were also made of bowhead whales and beluga whales.

Zoogeographical groups of fish species are associated with specific water masses. Relative distribution and abundance of fish species belonging to different zoogeographic groups are of interest because these fish will respond differently to climate variability and change. Since they are not commercial species, fishing does not directly contribute to changes in their abundance and distribution. Different zoogeographic groups also tend to differ in their trophic ecology: many of the Arctic species are small, resident, and feed mainly on invertebrates, whereas most boreal and mainly boreal species are migratory and piscivorous.
0-group fish are important consumers of plankton and are prey of other predators, and are therefore important for the transfer of energy between trophic levels in the ecosystem. Estimated total biomass of 0-group fish species (cod, haddock, herring, capelin, polar cod, and redfish) was 1.92 million tonnes during August-September 2017, slightly above the long-term mean of 1.76 million tonnes (Fig. 3.5.1). Biomass was dominated by cod and haddock, and mostly distributed in central and northern-central parts of the Barents Sea.

The level of discarding in fisheries is not estimated, and discards are not accounted for in stock assessments. Both undersized fish and by-catch of other species can lead to discarding; fish of legal size but low market value are also subject to discarding to fill the quota with larger and more valuable fish (known as high-grading). Discarding is known to be a (varying) problem, e.g., in haddock fisheries, where discards are highly related to the abundance of haddock close to, but below, the minimum legal catch size.

Fishing activity in the Barents Sea is tracked by the Vessel Monitoring System (VMS). Figures 184.108.40.206-220.127.116.11 show fishing activity in 2017 based on Russian and Norwegian data. VMS data offer valuable information about temporal and spatial changes in fishing activity. The most widespread gear used in the Barents Sea is the bottom trawl, but longlines, gillnets, Danish seines, and handlines are also used in demersal fisheries. Pelagic fisheries use purse seines and pelagic trawls. The shrimp fishery uses special bottom trawls.

Management of the minke whale is based on the Revised Management Procedure (RMP) developed by the Scientific Committee of the International Whaling Commission. Inputs to this procedure are catch statistics and absolute abundance estimates. The present quotas are based on abundance estimates from survey data collected in 1989, 1995, 1996-2001, 2002-2007 and 2008-2013.
The most recent estimates (2008-2013) are 89,600 animals in the Northeastern stock, and 11,000 animals for the Jan Mayen area, which is exploited by Norwegian whalers.

Norwegian and Russian vessels harvest northern shrimp over the stock's entire area of distribution in the Barents Sea. Vessels from other nations are restricted to trawling shrimp only in the Svalbard zone and the Loophole, a pocket of international waters surrounded by the EEZs of Norway and Russia. No overall TAC has been set for northern shrimp, and the fishery is regulated through effort control, licensing, and a partial TAC in the Russian zone only. The regulated minimum mesh size is 35 mm.

Fishing has the largest anthropogenic impact on fish stocks in the Barents Sea and, thereby, on the functioning of the entire ecosystem. However, observed variations in both fish species and the ecosystem are also strongly affected by climate and trophic interactions. During the last decade, catches of the most important commercial species in the Barents Sea and adjacent waters of the Norwegian and Greenland Seas varied around 1.5-3 million tonnes and have decreased in recent years (Fig. 18.104.22.168.).

With retreating sea ice, new areas in the northern Barents Sea become available for fisheries, including bottom trawlers. Of special interest to WGIBAR is therefore the vulnerability analysis. Current knowledge of the response of benthic communities to the impact of trawling is still rudimentary. The benthos data from the ecosystem survey in 2011 have been used to assess the vulnerability of benthic species to trawling, based on the risk of being caught or damaged by a bottom trawl (see WGIBAR report 2016). In order to draw conclusions on the total impact of trawling, an extensive mapping of fishing effort and bottom habitat would be necessary. In general, the response of benthic organisms to disturbance differs with substrate, depth, gear, and type of organism (Collie et al. 2000).
Seabed characteristics of the Barents Sea are only scarcely known (Klages et al. 2004), and the lack of high-resolution (100 m) maps of benthic habitats and biota is currently the most serious impediment to effective protection of vulnerable habitats from fishing activities (Hall 1999). The impact of fisheries on the ecosystem is summarized in the chapter on ecosystem considerations in the AFWG report (ICES 2016c); some of the points are:

In most of the measured years, the biomass in the northeast part of the Barents Sea was above the total Barents Sea mean (see Fig. 3.4.7). But from 2013 onward, the mean biomass declined, reaching a record low (<20 kg/nm) in 2016, below the total Barents Sea mean. This decrease could be explained by the maximum distribution of the snow crab, which preys on the benthos, and by increasing bottom temperatures (chapter 3.1). But in 2017 the biomass increased to 116 kg/nm, the highest value recorded both with and without snow crab biomass.

The interaction between cod, capelin, and polar cod is one of the key factors regulating the state of these stocks. Cod prey on capelin and polar cod, and the availability of these species to cod varies. In years when the temperature was close to the long-term mean, the cod overlap with capelin and polar cod was lower than in the recent warm years. Cod typically consume most capelin during the capelin spawning migration in spring (quarters 1+2), but especially in recent years the consumption has also been high in autumn (quarters 3+4) in the northern areas (Fig. 4.2.3).

The Barents Sea polar cod stock was at a low level in 2017. Norway conducted commercial fisheries on polar cod during the 1970s; Russia has fished this stock on a more-or-less regular basis since 1970. However, the fishery has for many years been so small that it is believed to have very little impact on stock dynamics. Stock size has been measured acoustically since 1986 and has fluctuated between 0.1 and 1.9 million tonnes.
Stock size declined from 2010 to a very low level in 2015, increased to 0.9 million tonnes in 2016, and again declined to 0.4 million tonnes in 2017.

The Barents Sea capelin stock has undergone dramatic changes in size over the last three decades. Three stock collapses (when abundance was low and fishing moratoriums were imposed) occurred during 1985-1989, 1993-1997, and 2003-2006. A sharp reduction in stock size was also observed during 2014-2016, followed by an unexpectedly strong increase during 2016-2017. Observed stock biomass in 2015 and 2016 was below 1 million tonnes, which previously was defined as the threshold of collapse.

Cod are the major predator on capelin, although other fish species, seabirds, and marine mammals are also important predators. In the last 6-7 years, cod stock levels have been extremely high in the Barents Sea. Estimated biomass of capelin consumed by cod in recent years has been close to the biomass of the entire capelin stock (Fig. 4.2.3). Abundance levels of predators other than cod are also high and, to our knowledge, stable.

Eleven years (2006-2016) of capelin diet data were examined from the Barents Sea, where capelin is a key forage species, especially for cod. The PINRO/IMR mesozooplankton distribution shows low plankton biomass in the central Barents Sea, most likely due to predation pressure from capelin and other pelagic fish. This pattern was also observed in 2017. In the Barents Sea, a pronounced shift in the diet from smaller (<14 cm) to larger (≥14 cm) capelin is observed. With increasing size, capelin shift their diet from predominantly copepods to euphausiids (mostly Thysanoessa inermis; not shown), with euphausiids being the largest contributor to diet weight in most years (Figure 4.1.1).

Marine litter is defined as "any persistent, manufactured or processed solid material discarded, disposed or abandoned in the marine and coastal environment".
Large-scale monitoring of marine litter was conducted by the BESS survey during the 2010-2017 period and helped to document the extent of marine litter in the Barents Sea (the BESS survey reports; Grøsvik et al. 2018). Distribution and abundance of marine litter were estimated using data from pelagic trawling in the upper 60 m, trawling close to the sea floor, and visual observations of floating marine debris at the surface.

Oceanic systems have a "longer memory" than atmospheric systems. Thus, a priori, it seems feasible to predict oceanic temperatures realistically and much further ahead than atmospheric weather predictions. However, prediction is complicated because variations are governed by processes originating both externally and locally, which operate at different time scales. Thus, both slow-moving advective propagation and rapid barotropic responses resulting from large-scale changes in air pressure must be considered.

Most of the commercial fish stocks found in the Barents Sea are at or above the long-term level. The exceptions are polar cod and Sebastes norvegicus. The abundance of blue whiting in the Barents Sea is also very low at present, but for this stock only a minor part of the younger age groups and a negligible part of the mature stock are found in the Barents Sea. Concerning shellfish, shrimp abundance is relatively stable and above the long-term mean, while the abundance and distribution area of snow crab are increasing.
The stability and controllability of an airplane are strongly affected by the geometric relationship between the center of gravity (CG) and the aerodynamic center of the machine. It must balance properly in order to fly successfully. As the designer develops the configuration of the airplane, the CG position and its travel are established. Every mass that is part of the airplane or aboard at takeoff affects the position of the CG. In order to get the airplane balanced properly, the designer must first establish the CG of the empty airplane and then define the positions of the variable loads (crew, passengers, payload, fuel, etc.) that the airplane will carry.

No, this Zenith CH 750 Super Duty doesn't have a balance problem. Sebastien Heintz, president of Zenith Aircraft, is simply demonstrating how easy the airplane is to move around on the ground by pushing down on the aft fuselage.

For most airplanes, the weight of the empty airplane is 50% or more of the takeoff gross weight. Accordingly, the position of the CG of the empty airplane is of critical importance in the balance of the machine. The empty CG is determined by the configuration of the airplane, but each type of configuration tends to have the CG in a different place. The designer has surprisingly little freedom to move the CG around once the basic configuration of the airplane and its major components are decided. Each component's individual CG falls in a specific place on the component itself, and the overall configuration of the airplane severely limits how the components can be moved relative to each other to move the CG.

For the moment we will limit our discussion to single-engine propeller-driven airplanes, although the basic principles will apply to other types also. The engine accounts for 20 to 30% of the empty weight of a typical light airplane. It is a concentrated mass and must be able to drive the propeller, which itself weighs 2 to 3% of the total airplane empty weight.
If the prop is driven directly by the engine, as is the case for most airplanes, the engine/prop combination ends up being one-quarter to one-third of the total empty weight of the airplane. Because this is such a large percentage of the total empty weight, the CG of the empty airplane will be pulled toward the CG of the engine.

The CG of the fuselage will tend to be close to its midpoint. While details about the shape of the fuselage and where it is reinforced to take concentrated loads like engine mounts, seats, landing loads, and wing attach loads will move the CG a little, a typical fuselage is a relatively uniform structure with its mass distributed continuously along its length. Its CG will be near its 50% length point, and there is little a designer can do to change this. Once the length of the fuselage is set, its CG position is essentially set also.

The center of gravity of a wing tends to be at about 40% of its mean aerodynamic chord (MAC). This is slightly forward of the centroid of area of the wing (50% MAC) because the primary spar structure or wing box is biased forward in the wing to place the spar at or near the maximum thickness of the airfoil and to align the elastic axis of the wing with the aerodynamic center of the wing so the wing does not twist when it bends under air loads. The forward bias of the spar structure moves the wing CG forward somewhat from the centroid, which is also the CG of the wing skins.

While the designer may move the wing fore and aft to affect the balance of the airplane, this is done for aerodynamic reasons, not to move the CG of the mass of the wing. Accordingly, the wing will be placed after the CG of the fully integrated fuselage, including seats, systems, engine, and tail surfaces, has been located.
While the mass of the wing must then be taken into account in the determination of the CG of the complete airplane, the wing CG itself will be close to the target CG of the complete airplane, and the wing position will not cause the CG to move much. (Note that this is not the case for canard and tandem-wing configurations, where the wing weight is usually significantly separated from the desired flight CG, and changes in wing weight and position can cause significant changes in the airplane's empty CG.)

The designer has even less freedom to make changes to the tail surfaces to affect CG. The size and position of the tail surfaces are dictated almost entirely by the aerodynamic effects required to give the airplane acceptable stability and control characteristics. The mass of the tail surfaces is not variable either. It is dictated by a combination of the need for structural integrity and the mass of the balance weights required to avoid flutter of the control surfaces.

Even though the weight of the tail surfaces cannot be directly varied to alter CG, it is important to keep a close eye on it during preliminary design. Because the tail is mounted at the extreme aft end of the airplane, it is a long way away from the CG of the complete airplane. The same long moment arm that gives the tail the aerodynamic leverage to stabilize and control the airplane also gives the mass of the tail a disproportionately large leverage on the CG position of the airplane.

Because the tail has one of the largest effects per unit of mass on the CG, getting the estimated weight of the tail right is very important in preliminary design. It is very easy to underestimate the weight of any airplane component, partially because it is easy to overlook small items that add up (like hinges, control horns, etc.) and partially due to the natural tendency to be optimistic in our initial design assessments.
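The tail's leverage on CG can be made concrete with a small weight-and-balance calculation. The weights and fuselage stations below are hypothetical illustration values, not from any actual design:

```python
def cg_shift_aft(total_weight, cg, extra_weight, extra_arm):
    """Inches the CG moves when extra_weight (lb) is added at station extra_arm (in).

    The CG of the combination is the total moment divided by the total weight.
    """
    new_cg = (total_weight * cg + extra_weight * extra_arm) / (total_weight + extra_weight)
    return new_cg - cg

# Hypothetical 600 lb airplane with its empty CG at station 60 in;
# the tail sits at station 200 in and comes out 3 lb heavier than estimated.
shift = cg_shift_aft(600.0, 60.0, 3.0, 200.0)
print(f"CG moves aft by {shift:.2f} in")   # about 0.70 in from just 3 lb
```

On a light airplane whose CG range may only span a few inches of chord, an aft shift of that size from a 0.5% weight error illustrates why the tail weight estimate deserves special care.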
The problem with underestimating the weight of the tail is that the extra weight will pull the CG aft significantly, and the airplane will be tail-heavy relative to the initial CG estimate. Because the tail has so much leverage on the overall airplane, the modifications one must make to get the CG back where it should be after the airplane has been built are disproportionately large.

I have had this very experience on one of my personal projects. On my Lightbeam ultralight design, I underestimated the weight of the tail by a few pounds. When the airplane was completed and weighed off, I found that the empty CG was significantly aft of where it was supposed to be. Before the airplane was first flown, I had to completely revamp the installation of my BRS parachute system, moving the 'chute and its can well forward on the airframe and rerouting the parachute risers in order to bring the CG within limits.

Alexander Kartveli, chief engineer at Republic Aircraft, shown with models of many of his designs. (Photo: San Diego Air & Space Museum Archives via Wikimedia Commons)

Ways to Adjust CG

The experience just recounted illustrates how one goes about adjusting the empty CG of an airplane. Ideally, this happens during the "paper" (or digital) phase of the design so that we don't have to modify already-built structure or hardware. As we have seen, the designer has remarkably few options available to alter empty CG. Most of the components of the airplane are either fixed for a good reason, or they produce larger aerodynamic than mass-properties effects if they are moved. This leaves only a few options:

Engine Mount Length

The minimum distance between the firewall and the cockpit seats is determined by the need to accommodate the pilot, and the minimum length of the engine mount is typically determined by clearance between the engine accessories and the firewall. It is, however, relatively straightforward to move the engine forward by lengthening the engine mount.
In the preliminary layout of a tractor-engined airplane, it's a good idea to make the initial engine mount longer than the minimum length dictated by engine clearance from the firewall. This leaves two types of design flexibility: First, if the airplane turns out to be nose-heavy once it is completely defined, the engine can be moved aft to move the CG aft. Second, when the inevitable desire to put a bigger, more powerful engine into the next iteration of the airplane appears, there is room to move the new, heavier engine aft to keep the CG within limits.

Another reason to start out with a slightly longer engine mount and err on the side of nose-heaviness is that it is a lot easier to move the CG of an airplane aft than it is to move it forward. The engine and firewall are much closer to the CG than the aft end of the tail cone, so moving mass aft in the fuselage moves the CG aft quickly, but there is no place that has similar leverage if we need to move mass forward. While it is possible to move the engine forward after the airplane is built, it is a major modification requiring a new engine mount, a modified or new cowling, and replacement of engine control cabling and fuel lines.

The CG of an airplane design tends to creep aft as the design progresses, and lots of little weights aft of the CG grow. It is told that Alexander Kartveli, the chief designer of Republic Aircraft, would mount a block of lead to the tail post of each Republic fighter prototype as it was being built and challenge his engineers to make the airplane balance with the lead in place. The story goes that they could never actually succeed at this, and as the build progressed, they would persuade Mr. Kartveli to remove a little bit of lead at a time as the CG crept aft. By the time the airplane was finished, the CG would be in the right spot, and all or most of the lead would be gone. This was his way of hedging against the tendency of the as-built CG to be aft of the estimated CG.
As designers, the lesson we can take from this is to target a CG forward of the most-aft acceptable CG during the design phase, so we have reserve for when the CG moves aft.

The other mass that is relatively easy to relocate on an airplane is the battery. Moving the battery from just ahead of the firewall to aft of the cockpit, or even farther aft in the fuselage, can move the CG quite significantly. As we have already seen, because of the geometry of the airplane, there is much more room to move the battery aft to move the CG aft than there is to move it forward.

The designer should make an early estimate of how much the battery position can affect the CG by choosing a most-forward and a most-aft possible position for the battery box and then calculating the airplane CG with the battery in each position. This information will be very valuable in deciding where to target the CG initially, and where moving the battery can take it. It's not uncommon for an airplane that can take a variety of engines to have a forward battery mount designed for the initial (smaller, more economical) engine installation and an aft battery box to keep the CG within limits when a builder installs the inevitable larger, heavier engine.

Next month we will continue our discussion of how the configuration of the airplane affects CG.
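The battery trade study described above amounts to recomputing the weighted-average CG with the battery at each candidate station. A minimal sketch, with entirely hypothetical weights and arms:

```python
# Each component: (weight in lb, arm in inches aft of the datum).
# All numbers are hypothetical illustration values, not from any real design.
components = {
    "engine":   (180.0, 20.0),
    "airframe": (350.0, 95.0),
    "tail":     (15.0, 200.0),
}

def empty_cg(parts, battery_arm, battery_weight=15.0):
    """Weighted-average CG with the battery placed at station battery_arm."""
    parts = dict(parts, battery=(battery_weight, battery_arm))
    total_weight = sum(w for w, _ in parts.values())
    total_moment = sum(w * a for w, a in parts.values())
    return total_moment / total_weight

# Compare a forward (firewall) box against an aft (behind-cockpit) box.
for arm in (30.0, 110.0):
    print(f"battery at station {arm:.0f} -> empty CG at {empty_cg(components, arm):.1f} in")
```

With these numbers, the 80-inch battery move shifts the empty CG about 2 inches aft, which is why the battery is such a convenient balancing tool.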
The Spanish encountered the Aztec Empire not as a bunch of lost cities in the jungle but as a living, breathing civilization. When the conquistadors were welcomed into the Aztec capital of Tenochtitlan by the Emperor Montezuma in 1519, the Aztecs controlled most of central Mexico through outright subjugation and various systems of tribute. The Aztec Empire's influence was felt as far away as Central America and the American Southwest. Many living under Aztec control wanted the empire out of their lives, and when the Spanish arrived they welcomed the Europeans who would help them overthrow the empire.

Before the arrival of the Spanish, the Aztecs knew their control over central Mexico was somewhat tenuous and were always aware of the possibility of internal strife causing a political and social collapse. At the beginning of the 16th Century, starting some ten years before the arrival of Cortés and his men, Emperor Montezuma was witness to 8 omens which supposedly foretold the end of the empire and his own death. Because of these omens there was an underlying feeling that the Aztecs were doomed, and when the Spanish arrived those who remembered the omens saw their fates as sealed.

Whether or not these omens actually occurred is a question for historians and folklorists alike. We first see them mentioned in The Florentine Codex, a massive 3-volume illustrated ethnographic compilation put together by the Spanish Franciscan friar Bernardino de Sahagún. The codex has over 2,000 illustrations in its 2,400 pages, and in Book 12 we find the 8 signs that supposedly predicted the doom to befall the Aztecs. Scholars are divided as to whether these omens were made up after the fact, to justify the Spanish Conquest in the eyes of the conquered natives and the rest of the world, or whether they really happened. Myth or real, here are the 8 omens of Montezuma in the order they occurred.
The first omen reportedly occurred a full 10 years before the arrival of the Spaniards, which would put it at around 1509. One day, what has been described as a "fire plume" appeared in the sky. According to the legends, this is commonly referred to as "the sky omen." A great streak of light appeared in the night sky for almost a year, described as narrow at the tip and wide at its base, and so bright that it seemed like daybreak in the middle of the night.

This "fire plume" was most likely a comet, and is corroborated by an Aztec source called The Codex Telleriano-Remensis, which chronicles natural disasters and cosmic events that happened in central Mexico from the 14th Century to the 16th Century. There is an illustration in this codex showing Emperor Montezuma with a comet overhead, and the Aztec calendar date corresponds to the European year of 1509. So this may be an omen that has an actual historical basis. In any event, the sky "fire plume" omen caused great distress among the people of ancient Mexico. According to one source, "As soon as it appeared, men cried out, slapping their mouths with the palms of their hands. Everybody was afraid, everybody wailed." Comets throughout history have been seen as bringers of good luck or bad luck, and in this case the comet was seen as a bad omen.

The second omen had to do with the Aztec god Huitzilopochtli, and it also involved a fire, but of a more terrestrial kind. Huitzilopochtli was not only the god of war, of human sacrifice, and of the sun; he was the patron and protector of the Aztec capital at Tenochtitlan and was seen as a sort of national god of the Aztecs. According to Aztec legends, it was this god who was with them as a protector from Day One. From their wanderings in the desert through their conquest of most of Mexico, Huitzilopochtli was always there watching over and guiding the Aztec people.
It was a national catastrophe when the temple dedicated to this god caught fire in the central ceremonial complex of the Aztec capital. First the wooden pillars of the temple caught fire suddenly, and then the fire spread to the rest of the structure. It seemed that whenever water was poured on the fire, the fire increased. When the fire was finally extinguished, with most of Huitzilopochtli's temple gone, the Aztec priests and astrologers declared what the citizens of the great city had already felt: this was a very bad omen.

The third omen occurred at another sacred place, a building used as both a temple and a monastery called Tzommolco-calmecac, also located in the central part of the Aztec capital of Tenochtitlan. The temple-monastery complex was dedicated to the god Xiuhtecuhtli. This god was symbolized by the North Star and was seen as the lord of fire, patron and keeper of the Mexican volcanoes, and god of the daytime and of heat. Xiuhtecuhtli was also the god of food during famine, warmth during cold, and life after death. He lived in an impenetrable enclosure made of turquoise located somewhere underneath the earth so that no harm would come to him.

The temple at Tzommolco was not as strong as this turquoise enclosure. On a day of misty drizzle, a bolt of lightning came down and struck the temple, and its thatched roof caught fire immediately. Witnesses claimed that there was no sound of thunder accompanying the lightning strike and that the storm was not a severe one. It seemed to occur for no reason, other than to be a "bad sign."

The fourth omen happened much like the first, overhead in the skies. Thousands of people throughout central Mexico looked to the skies, bewildered and afraid, when they saw three large balls of fire emitting sparks streak across the sky from west to east. Some reported a terrible sound accompanying this spectacle, like the deep roar of a wild animal.
Later it was determined that these were most likely meteors entering the earth's atmosphere and heading for somewhere in the Gulf of Mexico. Nevertheless, this heavenly phenomenon was interpreted by all who saw it as a bad sign.

The fifth omen had to do with the lifeblood and the highway of the Aztecs, the very lake on which their capital sat: Lake Texcoco. Fishing boats were out on the water as normal one day, in calm weather, when suddenly the lake welled up. Swirling eddies tossed about the boats and caused a mini tidal wave to hit the settlements on the shore, including the capital city, which was on the island in the middle of the lake. Many buildings flooded and some structures crumbled. While not too disastrous, this event had a more devastating psychological effect. No one could explain why the water in the lake would do that, and it was another to add to the list of bad omens foretelling the great disaster that was to come. Modern-day scientists and researchers theorize that seismic or underground volcanic activity could have been responsible for the strange behavior of Lake Texcoco that day.

The sixth omen concerns the sounds of a weeping woman, which some say may be the basis of the legend of La Llorona. Please see Mexico Unexplained episode Number Three for a detailed description of the Llorona legend. For several nights the citizens of the Aztec capital city had heard the cries of a woman. Some believed that it was the snake-skirted goddess Coatlicue, the mother deity of all the Aztecs, warning her children of the disasters yet to come. On some evenings the female voice was heard to be saying, "My children, it is already too late," and "My children, where can I take you?" The haunting voice filled all who heard it with a deep sense of dread, and news of what was happening quickly spread from the capital city to all the corners of the Aztec Empire. What was behind this wailing woman's message of foreboding? What did it mean?
The seventh omen had to do with a strange bird found by fishermen on Lake Texcoco. When the men saw this unusual gray-colored bird, they captured it in their nets and brought it directly to the imperial palace to present it to Emperor Montezuma, who had an impressive private zoo and collected strange and interesting creatures. Please see Mexico Unexplained episode number 43 for a detailed discussion about Montezuma's private zoo. In his vast collection of animals, the Aztec emperor had never seen such an unusual bird. It appeared to be some sort of crane, but it had a flat, round, black reflective surface on its forehead, almost like a mirror. When he looked at the mirror-like fixture on the bird's head, Montezuma could see the sky and the constellations, and then people came into view. He saw a great army with men riding gigantic deer and carrying weapons unknown to him. When Montezuma called the court priests and astrologers over to see the images in the mirror, the images vanished and the bird died. The last omen occurred just weeks before the Spanish arrived at the Aztec capital. A two-headed man appeared in the streets of Tenochtitlan. Witnesses were alarmed at the sight, and because people knew that the emperor had a human section of his zoo where he housed people with various deformities, the two-headed man was brought directly to Montezuma. According to the legend, when the emperor laid eyes on him, the two-headed man simply disappeared. Another variation of this omen has a number of two-headed men showing up in the streets of the Aztec capital, all of them vanishing when brought to the imperial palace. As it was known that Montezuma took a keen interest in such people, this omen may have some basis in historical fact. According to the legends, Montezuma did not dismiss the omens but meditated on them and took them very seriously.
In spite of having the best astrological and priestly counsel in the Aztec Empire, the emperor had no idea of what the omens meant or what fate would befall him or his realm. As news of the omens spread throughout the empire, perhaps some people were psychologically prepared for what was to come. Given the brutality experienced by some of the peoples subjugated by the Aztecs, perhaps each omen represented hope instead of doom. Whether good or bad, all who had heard of these omens had a feeling that big changes were on the horizon and they were right.
Origins, insertions, innervation and functions of the adductors of the thigh. Hello, hello, everyone! This is Joao from Kenhub, and welcome to another tutorial where, this time, I'm going to be talking about another group of muscles known as the adductors of the thigh. Now, as I mentioned, yes, this is a group of muscles that you find on your thigh. And right now, I have here an image showing you the anterior view of your thigh. And I am going to remove the anterior thigh muscles so you can expose and see the thigh adductors. Now, depending on how you look at these muscles, you can talk about them in terms of function, which we are right now, because we call them thigh adductors, but you can also call them hip adductors depending on whether you're looking at them producing movements at the hip joint. But you can also define this group of muscles in terms of location. And if we were to do so, we would say that these are medial thigh muscles. So these are the medial thigh muscles because, if you remember well from other tutorials that we have here at Kenhub, there are also the anterior thigh muscles, and you have a posterior group, the posterior thigh muscles, and eventually this one that we're going to be talking about today. And as you can clearly see here on this image, the thigh adductors range from the lower pelvic bone to the femur. So they go all the way from the pelvic bone to the femur and also to the knee region. And these muscles shape the surface anatomy of the medial thigh. The first thing that I want to talk about is innervation. As you can clearly see now highlighted in green, this is the nerve that is going to be responsible for most of the innervation of the adductors.
And this one is the obturator nerve, which arises from the lumbar plexus—and you can clearly see here on this image—and reaches the adductors through the obturator canal. There are two muscles that belong to the thigh adductors that have what is called double innervation, and we're going to check that out later on in this tutorial. Another important part of this tutorial, before we go into details, would be to briefly mention the different functions associated with this group of muscles, the thigh adductors. And as the name suggests, these muscles are going to be involved in adduction. They're also going to be contributing to other functions, which include external rotation, internal rotation, flexion, and extension. Now, the hip adductors are particularly used when you cross your legs, and overall, they play an important role in balancing your pelvis during standing and also when you walk. And a very important situation when you need your adductors is when you go ice skating during winter. The slippery ice lets your legs glide laterally, so you need your adductors to pull them back medially so you don't accidentally split your legs. And this might really hurt. So thanks to your adductors, you are covered. Before I go on and talk about the different muscles that belong to the thigh adductors, I want to list them. And these are the pectineus, the adductor magnus, the adductor longus, and if we have a longus, we might as well have a brevis, and the minimus, the adductor minimus, and we're going to finalize this tutorial talking about another muscle that belongs to this group known as the gracilis. Let's start off with the very first one here on our list. Now, seen highlighted in green, this muscle is the pectineus. And the pectineus runs, in terms of origin, from the superior pubic ramus, which you can clearly see here on this image.
This is the origin point for this muscle, along with a point known as the iliopubic eminence. Then this muscle is going to go all the way to the femur to insert on what is known as the pectineal line and the linea aspera of the femur. So right here, right about in this area, this is where the pectineus is going to be inserting. Now, another thing to add here is that I mentioned before there are a couple of muscles that have double innervation, and the pectineus is clearly one such case. So in terms of additional innervation, we're going to see that this muscle is going to be innervated not only by the obturator nerve, as we saw before, but also by this nerve seen here, highlighted on the image on the right side. This is known as the femoral nerve. Also, I would like to add a few words on the different functions associated with this muscle. This muscle is responsible for adduction of the hip, of course, as we mentioned, and also flexion, as you can see here represented by both of these arrows. Now, we're going to move on to the second muscle on our list, this one seen here highlighted in green. This is known as the adductor magnus, and it does deserve its name because it's one of the largest muscles in your body and the largest one of the group that we're talking about in this tutorial. And it's so large that we can actually divide it into two parts. And these parts are what is known as the muscular or adductor part, and the other one is the tendinous or hamstring part. Now, let's have a look at the different origin points for this large muscle, and we need to remember three. One is the inferior pubic ramus. The other one is the ischial ramus, which will serve as the origin point for the muscular part, while the tendinous part is going to come from the ischial tuberosity. And you can see the origin points here combined for the adductor magnus.
Now, this muscle goes all the way from the different hip bones to the femur to then insert at the linea aspera. So there is this fleshy insertion, as you can see here, that is going to be inserting on the linea aspera, and another insertion point that we call the tendinous insertion, as you can see here, that is going to go all the way to the medial condyle of the femur. In terms of innervation, if you remember well, this is one other muscle that is going to have double innervation. Because it's so large, it definitely deserves another nerve. And in terms of innervation, we're going to be seeing the obturator nerve, as we talked about in the beginning, innervating the adductor magnus. But the tendinous part of the adductor magnus is also going to be supplied by this nerve that you see here on the screen, highlighted in green. And if you guessed it, yes, this is the tibial nerve. Now, moving on from the innervation of the adductor magnus, we're going to be talking about the different functions associated with this muscle. There are a few that you need to remember, because since this is a large muscle, it's going to be inserting on different points, as we've seen before, and every time it contracts, it will cause different types of movements. The first one is an obvious one that is associated with the name. This is adduction. But this muscle will also cause external rotation and also flexion of the thigh at the hip joint. And in addition to that, the tendinous insertion of the adductor magnus will also produce internal rotation and extension of the hip joint. That way, depending on which part of the muscle is contracting, it will result in either a flexion or extension of the hip. Now, we're going to move on to the next one on our list, this long muscle seen here, highlighted in green. And yes, I spoiled it already. This is a long muscle, and we're going to be calling it the adductor longus.
And the adductor longus is going to be originating from two parts that we need to remember here. The first one is the superior pubic ramus, and the other part is the pubic symphysis. And you can clearly see here on this image the pubic symphysis, which is partially serving as the origin point for the adductor longus, and also the superior pubic ramus serving as an origin point for this muscle. Now, as a long muscle, it is going to go all the way to find its insertion on the femur at the linea aspera, which, as with other muscles that we've seen before, is also going to be serving as the insertion point for the adductor longus. I wanted to add here an important point for your notes before the next exam, so you have something extra to know about this muscle: distally, the adductor longus is going to form an aponeurosis which then extends to the vastus medialis muscle, and this is called the vastoadductorial membrane. So the next part that we're going to be talking about is related to the different functions of the adductor longus. There are two that you need to remember. One is already shown in the name of the muscle, and that is adduction. And the other one is going to be flexion, so flexion of the thigh at the hip joint. Let's move on to the next muscle on our list, this one seen here, highlighted in green. This is known as the adductor brevis. So if you had a longus, you need to have a short one, the brevis. And in terms of origin points, you need to remember one, and that is the inferior pubic ramus, which you can also see here. This is going to serve as the origin point for the adductor brevis. And in terms of insertion point, we're going to be seeing that this muscle is going all the way to another area that we've been talking about, another insertion point that we talked about for other thigh adductors.
If you remember, yes, on the femur, the linea aspera will serve as the insertion point for this muscle. Next stop is going to be the different functions associated with the adductor brevis. And again, you need to remember just a few. The first one is in the name, adduction. The next one is going to be external rotation of the thigh, and the other one is going to be flexion of the thigh at the hip joint. It's time for us to move on to the next muscle on our list, this one seen here, highlighted in green, known as the adductor minimus. The adductor minimus describes the inconstant cranial separation of the adductor magnus, which is found in many but not all people. So this is an important point to remember. Let's talk about the origin point for the adductor minimus. And you just need to remember one: the inferior pubic ramus, as you can see here in this image, is going to be serving as the origin point for the adductor minimus, while the insertion point for this muscle is going to be one and the same as many others that we talked about: the linea aspera on the femur. Moving on to the different functions associated with the adductor minimus, there are two that you need to remember. One is in the name, adduction, and the other one is flexion of the thigh. The last muscle on our list is this one here, this long, long muscle known as the gracilis. And the gracilis runs from the inferior border of the pubic symphysis and goes all the way, quite long, to insert on the superficial pes anserinus. An important thing to mention about the gracilis is that its tendon is easy to palpate in the inguinal region together with the tendon of another muscle, the adductor longus muscle. The next topic here in this tutorial is going to be the different functions associated with the gracilis.
Now, being the only two-joint adductor, the gracilis muscle moves the knee joint as well, where its contraction causes flexion and internal rotation of the knee, represented by these two arrows as you can see here. The other functions associated with the gracilis would then be adduction and flexion of the hip joint, just like the rest of the thigh adductor muscles. Now, I also wanted to make a point here in this tutorial about an insertion point that we talked about before, the pes anserinus. That is an insertion point for different muscles, and this is a popular exam question, so I wanted to highlight it here on this slide. The pes anserinus is a roughly goose-foot-shaped insertion point for the following three muscles: the gracilis, as we saw before, the semitendinosus muscle, and the sartorius. So all these three muscles are going to go all the way to the medial proximal surface of the tibia to then insert on this insertion point that is called the pes anserinus. Another important fact to mention here is that, sometimes, the insertion of the semimembranosus is referred to as the pes anserinus profundus or the deep pes anserinus. And before I end the tutorial, I wanted to do a review of the different functions of the thigh adductors, starting off with adduction and flexion, which were seen in all thigh adductors—so remember this. Another one was external rotation, which was seen in these three muscles: the pectineus, the adductor brevis, and the adductor magnus. We also saw internal rotation. The tendinous insertion of the adductor magnus and the gracilis are responsible for internal rotation of the thigh. Another function we saw here in our muscles was extension, which was seen in the tendinous insertion of the adductor magnus.
LED Stadium Lighting Frequently Asked Questions 1. What Type of Lights are Frequently Used in Sports Fields? A range of lights can be used to light different kinds of sports fields; however, the most widely used are HID (High Intensity Discharge) lamps such as high-pressure sodium or metal halide. a. LED vs metal halide SSL (Solid State Lighting) like LED (Light Emitting Diode) is also popular because it has lower running and maintenance costs. However, the initial purchase of an LED fixture is significantly more expensive than an HID lamp of the same lumen output. Each lamp offers different characteristics, and selection of any given lamp is often based upon energy consumption, the color of light emitted, and life expectancy. On average, stadiums use higher wattage lamps when compared to other outdoor lighting applications, for example, parking lots, roadways, and billboards. b. Application of HID Most types of lighting can be used in a variety of applications; however, HID lights are largely limited to sports fields, warehouses, and industrial use. This is because they produce high levels of light. For example, most commercial and residential applications require each lamp to produce between 800 and 4,000 lumens of light. A single HID lamp can produce more than 15,000 lumens. To achieve high levels of light output, you need a lot of energy. On a lumens-per-watt basis, HID lamps produce roughly 75 lumens per watt, while modern LED fixtures can exceed 100 lumens per watt. While HID lights are efficient, they take some time to warm up and achieve full brightness after they have been turned off, and restrike times are usually even longer than initial warm-up times. For instance, in the event of a power blackout during a match in a stadium, after power has been restored it may take HID lamps around 5 to 20 minutes to reach 90% brightness.
An important note, though: some types of metal halide HID lights employ a different starting mechanism which can reduce warm-up time to 1 to 4 minutes and restrike time to 2 to 8 minutes. 2. What is the Lumen Output of Football Stadium Lights? Lumens measure the amount of light you get from a lighting device. It is simply the intensity of light—luminous flux—where more lumens mean more light. Stadium lighting requires very high intensity, which is best specified in lumens. Lumen levels for stadia provide the ideal level of brightness to play sport. Probably by now, you have had or heard about the great lighting experience that comes with the crowd and the adrenaline during late-night football. Lumens are part of that experience. LEDs today deliver high lumen output with a very low level of energy consumption. For example, a 320 W LED comes with a 41,600-lumen output—exceptionally high brightness, as is necessary for floodlights. Modern pitches consider lumens a practical way of specifying the brightness of light sources: visible light from bulbs, lamps, and LEDs is measured in lumens, and a higher lumen rating provides brighter light, which is the case with stadium lights, making it hard to differentiate between night and day. 3. How Much Power and How Many Luminaires Do I Need to Light Up a Stadium? Stadiums require good lighting that meets the needs of the audience, TV broadcasts, and participants. But how does one design good stadium lighting? Well, read on to find out more. Good stadium lighting ensures maximum visibility for teams, referees, and spectators. Given that, designers adopt various layouts when installing stadium lights. However, factors such as pole height, size of playing field, and type of matches determine the number and power of luminaires required. a.
Recommended Lux for Stadiums Given the variations in sporting events, the number and power of installed lights vary as well. For instance, cricket matches involve fast action, small balls, and extended viewing distances, so cricket stadiums require high illumination levels. On the other hand, basketball and football are slower sports, with closer viewing distances and larger playing objects. As a result, these stadiums require different illumination (lux) levels based on ball speed, location, uniformity, and desired visibility. b. Lux levels required for different classes Recommended stadium lighting levels have three classes: Class I, Class II, and Class III. The UNE-EN 12193 standard defines the minimum lighting levels for different sporting events. According to this specification, Class I sporting events have an illumination level of 750 lux, Class II events 500 lux, and Class III events 200 lux. Besides, lights for top-flight and intermediate events have a glare rating of 50, except for athletics, which is 55; lamps used for lower-flight events should have a rating of 55. Determining a stadium's optimum lighting level requires understanding several factors, including stadium roof height, ground level, sports type, and uniformity. For this reason, selecting the right number and power of luminaires affects a stadium's overall atmosphere. Adequately lit stadiums are critical to the safety of fans and the success of sporting events. What's more, installing the right number of lights creates an immersive experience in a stadium. 4. Flood Lights vs Spotlights Which one works best for a sports field? With more and more sporting events being organized in the evening, there is a great need for proper lighting on the sports field.
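The lux targets above can be turned into a rough first-pass fixture count with the standard lumen method (target illuminance × area ÷ lumens actually delivered per fixture). This is a minimal sketch, not a substitute for a photometric layout: the pitch dimensions, utilization factor, and maintenance factor below are illustrative assumptions, and the 41,600 lm fixture output is the figure quoted earlier in this FAQ.

```python
import math

def fixtures_needed(target_lux, area_m2, fixture_lumens,
                    utilization=0.6, maintenance=0.8):
    """Rough lumen-method estimate: lux = lumens per square meter,
    derated for optical losses (utilization) and lamp aging (maintenance).
    The derating factors are illustrative assumptions."""
    required_lumens = target_lux * area_m2                    # total flux the turf must receive
    delivered = fixture_lumens * utilization * maintenance    # usable flux per fixture
    return math.ceil(required_lumens / delivered)

# A hypothetical 105 m x 68 m pitch lit to the Class II level (500 lux)
# with the 41,600 lm fixtures mentioned in the text:
print(fixtures_needed(500, 105 * 68, 41600))  # -> 179 fixtures
```

A real design would also account for pole positions, aiming angles, and uniformity requirements, which is why professional layouts use photometric software rather than a single division.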
There has been a lot of research and development in this area, and modern-day LED lighting has replaced the conventional methods; however, outdoor lights are available in multiple forms and designs and should be chosen based on the requirements. Football, cricket, hockey, tennis, rugby, and a lot of other sporting events are often organized in the evening and require proper illumination. A lot of people usually get confused about the type of lighting that should be installed on a sports field. The requirements of each of these sports can differ in the details, but one thing that remains common is that they all require distributed illumination throughout the field. a. What are the critical parameters to consider? Multiple things need to be taken into consideration when selecting lighting for a sports field. Some of these are mentioned below:
- Type of game that is played
- Visual comfort of the players as well as the viewers
- The color of the sports equipment that is being used
- Size of the field
- The length of shadows formed and their impact on the game
- Availability of lights at the entrance and exit spots
b. Floodlights or spotlights – what should you choose? The points listed below will help you make the right choice and plan accordingly. Floodlights can provide broader, more expansive lighting and are hence a great choice when the requirement is to cover a large area of a sports field. They can help in covering and reaching areas that are not easy to illuminate otherwise. Floodlights make use of LEDs that help in providing superior illumination at an affordable cost. These are further subdivided into various categories and designs based on the level of brightness required and the solution used. Floodlights are installed at a suitable height to allow them to cover a wide area. They do not require much effort to set up and can be connected easily to any power source. Spotlights, on the other hand, are installed when there is a requirement to produce direct and focused lighting.
These lights are a great choice to illuminate doors, entrances, and exits. They direct the beam onto a single area and are suitable for attracting the attention of the viewers to a particular section of the sports field. It is essential not to overuse spotlights, as that will end up spoiling their purpose. You need to use spotlights only in areas that you wish to highlight. Installing a spotlight may require a lot more effort and planning. Both floodlights and spotlights are very common in outdoor lighting, but their primary features and usage differ considerably. To be able to make the best use of these lights, you must understand the difference between them. To ensure that the objective of providing an excellent experience for the players and viewers is met, it is recommended to have the right balance of floodlights and spotlights on a sports field. 5. Why are Stadium Lights Always On? How to Reduce Light Pollution? Stadiums used to be situated outside the major cities. Currently, cities have expanded, and many stadiums are situated in the center of residential areas near people's gardens. Many different sports are also very popular, and their number continues to increase. So to cover all these sports, the clubs play even until late at night. That is why you'll find most stadium lights stay on, and there are reasons for that. While this is great for players and cheerleaders, it leads to light pollution, and neighbors complain about the strong lighting from these sports fields. That is why stadiums should use LED lights. a. Switching stadium lights off may lead to power system malfunction A major reason stadium lights are always on is that switching them off and on may cause the power system to malfunction: the power infrastructure cannot cope with the huge power spike that switching causes. b. The cheerleaders and fans need that lighting The evening shadows can make the stadium unusable, so the lights are turned on to compensate for this.
Actually, the cheerleaders practice at least three times a week, and they need those lights. c. For added security While leaving the stadium lights on all night leads to light pollution, it makes some people feel more secure. Undoubtedly, glare caused by unshielded lighting can create shadows where lawbreakers can hide. But while that is true, overly bright lights can also make the work of those lawbreakers easier. d. How to Reduce Light Pollution Light pollution is almost everywhere. It could be originating from a stadium or elsewhere. You can admire it from afar when you gasp at the skyline of your town. It might also be the annoying light on the streets that shines through your bedroom window during bedtime. Light pollution competes with starlight in the sky. Further, it affects astronomical views, disrupts the ecosystem, and may have severe health effects. There are four common kinds of light pollution: clutter, glare, light trespass, and sky glow. All four kinds can be reduced through the use of LED lights. Nowadays, people have accepted LEDs as exterior lighting, and this is how to use them to reduce light pollution. i. Technological advancement Currently, there are 3-part lighting fixtures that emit light rays parallel to each other. Such an arrangement makes sure that the light is emitted only onto the desired area, instead of lighting the entire space. ii. The correct light at the correct angle Lighting experts can now choose LED lights with the right beam angle, enclosed both above and to the sides. This not only channels the light downward but reduces light pollution as well. This approach is referred to as "cutting off light at the horizontal": it makes sure that the light only illuminates the field. iii. Sensor and dimming abilities Using LEDs that have occupancy sensors is another way to reduce light pollution. These sensors switch lights on and off depending on movement, so the ground is lit only when required.
Adaptive lighting is also great for reducing light levels at specific times of night when an area is not in use. iv. Using warmer LEDs Using LED lights with warmer white illumination is another way to reduce light pollution; LEDs with a color temperature of 4000K or below are ideal. Leaving stadium lights on all night is beneficial in different ways, as discussed here. However, stadium managers should be mindful of neighboring residences and businesses, and should understand which fixtures and LED lights are right for their sports fields before making the installations. 6. How Much Does It Cost to Buy & Run Football Field Lights? Installing high-quality lights for your football field will provide numerous benefits to not only the players but also the fans. High-end football field lighting also improves the safety of the athletes, while at the same time making scheduling for practices and games more flexible. Of course, before settling on the best football field lights, there are some things you need to consider. Remember, some technologies are cheaper than others. However, what is crucial is considering the long-term effects and cost of the technology, which means that you shouldn't consider only the initial cost of the lighting equipment. The size of the field also matters: installing a new lighting system for a high school football field costs far less than installing lights in a professional field for premier league or Olympic games. This is also true when it comes to maintenance costs. a. Maintenance costs With a high school field as our base field, we will begin by giving an estimate of maintenance and running costs. A typical high school soccer field has a lighting requirement of about 300–400 lux. LED has better efficiency, which means that the stadium requires about 35,000 W of LED lamps for adequate lighting of both the audience area and the turf. However, if you decide to go with metal halide lamps, you will need about 70,000 W for the same field.
In the US, the average cost of electricity is about $0.12 per kWh. So, running the lights for a medium-sized football field will cost about $4.20 per hour (35,000 × 0.12 / 1,000). Assuming that the lights run for about eight hours daily for 15 days in a month, the monthly cost will be 4.2 × 8 × 15 = $504. On the other hand, if you decide to use metal halide, you are going to spend double that amount ($1,008). b. Cost of the lamps Besides the energy bills, you also need to consider the initial cost of the lamps. Lighting a regular high school football field will cost about $50,000–$120,000; the wide range is due to the brand and origin of the LED lights. 7. Do LED Lights Consume a Lot of Electricity? For years, sports stadiums have used high-pressure sodium and metal halide fixtures to light playing surfaces, but now they are turning to LED lights to improve both the viewing and playing experience for fans and athletes. Professional athletes prefer LED lighting as it replicates natural lighting on playing surfaces. LED lights are better for stadiums and sports fields because they not only brighten the play area but also minimize power consumption. That means LED stadium lights consume less than half the energy of traditional lights and require less repair and maintenance over their useful lifespans. Why LEDs help reduce the electricity cost Here we shall see some reasons LED stadium lights consume less electricity. Traditional high-pressure sodium and metal halide light fixtures require a long warm-up period before coming to full power; during this warm-up process, roughly 40–80 percent of the energy is wasted. LED lights, however, illuminate almost immediately after being powered up, so the loss during the powering process drops to about 10 percent, which means LED-lit stadiums can require 40 to 70 percent less energy compared to traditionally lighted stadiums.
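The running-cost arithmetic above can be reproduced in a few lines. The 35 kW LED load, 70 kW metal halide load, $0.12/kWh rate, and 8 hours × 15 days schedule are the figures from the text; everything else is just kW × hours × rate.

```python
def lighting_cost(load_watts, rate_per_kwh=0.12,
                  hours_per_day=8, days_per_month=15):
    """Hourly and monthly energy cost for a lighting load:
    kW x rate gives the hourly cost, then x hours x days."""
    hourly = load_watts / 1000 * rate_per_kwh
    return hourly, hourly * hours_per_day * days_per_month

led_hour, led_month = lighting_cost(35_000)   # LED load from the text
mh_hour, mh_month = lighting_cost(70_000)     # metal halide load
print(round(led_hour, 2), round(led_month))   # ~4.2 per hour, ~504 per month
print(round(mh_month))                        # ~1008 per month for metal halide
```

Changing `hours_per_day` or `days_per_month` shows why the schedule assumption matters as much as the fixture wattage: a field lit every night doubles these figures.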
The efficacy of a light source is often measured in lumens per watt. LED lights deliver more luminance to the ground than traditional stadium lights: traditional metal halide sports lights need roughly twice the energy of LED lights to produce the same field luminosity. Traditional metal halide lights also have very low-efficiency ballasts compared to LED drivers, meaning only 60 to 80 percent of the energy is used effectively by the ballast. LEDs come with switched-mode power supply technology, which leads to about 95 percent voltage efficiency. Don't forget, efficiency saves energy and money. If you are planning to upgrade your old stadium lights to LED lights, then you should make your decision quickly. LED lights are energy saving and a clear winner in every aspect of lighting. We offer a free lighting design service to better assist your sports lighting projects. Please feel free to contact us by filling in the info in our form.
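The efficacy comparison above is just light output divided by electrical draw. A minimal sketch using figures already quoted in this FAQ (the 320 W / 41,600 lm LED from section 2, and the roughly 75 lm/W HID figure from section 1):

```python
def efficacy(lumens, watts):
    """Luminous efficacy in lumens per watt."""
    return lumens / watts

# The 320 W / 41,600 lm LED fixture quoted earlier:
print(efficacy(41_600, 320))   # 130.0 lm/W

# At ~75 lm/W, an HID lamp needs far more power for the same flux:
print(round(41_600 / 75))      # ~555 W for the same 41,600 lm
```

The ratio of the two wattages (roughly 555 W vs 320 W) is what the text summarizes as metal halide needing about twice the energy for the same field luminosity.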
As the world warms, the dry areas of the globe are growing even drier. In Jordan, some villages are already working on what to do when the rain stops coming. Words by Allison Ford. Photography by Josh Estey/CARE. Originally published in Jo Magazine.

The last time rain fell in Bayoudeh was February 10. The land has only gotten drier since then. People in and out of Jordan like to talk about how water-poor the country really is, but 2008 arrived to prove it with a vengeance.

Um Mubarak increases the yield of her trees with mulch, and grows cactuses with gray water.

Bayoudeh is a small village of about 3,500 people. It sits wedged between the Jordan Valley and its highlands, perched on the slopes among the few remaining wild oak trees and stretches of olives. It's been a dry year, but compared to the over-grazed hills deep in the Valley, Bayoudeh looks positively lush. Most of the modest stone houses are nestled in pockets of vegetation; even dried-out vegetation is better than none, and in this year of almost no rain, the brown patches under the trees are crisp with dried grasses and low-lying, water-hardy plants.

In the last rainy season, the land got only about 60 percent of the average rainfall expected from the wet season, according to Sameeh Nuimat, the permaculture project manager at CARE International, Jordan. The average is about 400 mm of rain, in a season lasting from November to April. This year, the rains came from December to February, and deposited only 302 mm.

Farmers all over Jordan are feeling the water crunch. The Jordan Valley Authority has banned planting during the summer months, and has limited the amount of water available per parcel of crop land. For several years, scientists have been predicting that global climate change might shift rainfall towards the poles and away from some already arid temperate areas. Whether Jordan's recent dry spell is just a fluctuation, or a sign of things to come, is anyone's guess.
But it begs the question: what is the future of Jordan's rural farming villages if the country's already sparse rainfall gets even sparser? In Bayoudeh, eight small family farms are trying to create a model for that future. Orderly and domestic, the farm could be seen as the antithesis of nature. But permaculturists say this is not the case.

A villager holds out handfuls of rich soil, composted from organic waste.

Australian scientist Bill Mollison, credited with coining the term "permaculture," called it "a design system for creating sustainable human environments," by integrating the built environment with natural cycles. So, for example, instead of building a cement house that traps heat and depending on large amounts of energy to cool it electronically, permaculture encourages using local materials that are naturally heat resistant (such as mud) and positioning the house to protect its interior from the day's heat. As a philosophy, permaculture begins by looking at sustainability at a household level and working outwards: household to yard and farm, and farm to community.

The farms in Bayoudeh participating in the project are run by local families, with technical assistance and support from CARE International and funding from HSBC bank.

Nuimat, the project manager, proudly shows off the elements of a holistically designed house and farm system. Rainwater is collected in catchments on the roof of a house, and flows through pipes into a filtration system that uses gravel and sand to collect debris. It is then stored in an underground well, where it can be pumped for use by the family. Leftover food feeds chickens, which root around under the trees, aerating soil. Mobile chicken "tractors" (simple wire and wood structures that can be moved around to contain the birds in different areas) let farmers manage the spread of chicken manure, which acts as fertilizer.
Pests are controlled with natural mixes made out of onion, garlic, tobacco, and neem oil. In addition to being efficient and non-toxic to people or plants, the homemade pesticides are cheap: conventional ones are often petroleum-based and prone to volatile price leaps. Organic waste is composted into nutrient-rich soil, which is applied to plants and covered with mulch, which prevents moisture evaporation. Contours in the land create water catchments that allow plants to drink their fill, and prevent erosion. A compost toilet reduces the need for water. In permaculture, there is a use for just about everything.

One particularly clever system makes beneficial use of the eucalyptus tree's tendency to suck up water, crowding out other plants in the vicinity. Often demonized by ecologists as the quintessential invasive species, at the CARE Visitor's Center the eucalyptus was intentionally planted next to the cesspit, where waste from the latrine is composted underground. "It acts as a water pump," Nuimat explained, "sucking up waste water and purifying it." Furthermore, bees are attracted to the plant; keeping bees could be the next step, allowing people to effectively make honey out of human waste! Indeed, there was nothing around the site but the faint but pleasant scent of eucalyptus.

Gray water systems, which reuse household waste water from sinks, showers, and washing (not toilets), are another important component of coping with scarcity. Getting twice the mileage out of household water by cycling it through the farm means substantial savings for farmers, as well as minimizing the impact of pumping water out of natural systems.

Khadeeja, from Bayoudeh, sits atop sacks of compost waiting for sale.

Mulch is one of the keys to ensuring efficient water use on the farms. It traps the water in the soil, preventing evaporation, which causes a significant percentage of water loss. This is key to maximizing the use a farmer can get out of a drop of water.
According to Nuimat, the high natural evaporation rate of water from the land is the crux of the region's water struggle. The whole region around Bayoudeh gets an average of 350-450 mm of rainfall a year, he says, but evaporation rates are as much as 1,500-1,600 mm a year. "The rainfall levels in London are about that much, 300 mm a year," Nuimat says. "But they have an equal evaporation rate, so it's not the same."

Working in conjunction with the Volunteer Society in Bayoudeh, CARE has helped set up eight demonstration farms and refurbished the Volunteer Society building to use as an educational center. The building itself is a part of the demonstration. Built in a time when energy for heating and air conditioning was not readily available, the 120-year-old mud-and-stone house provides welcome reprieve from the intense summer sun, with walls more than a foot thick.

The project itself came about after CARE participated in another project, EMPOWERS, which asked local people to analyze their water demand, in the hope that that would lead to better conservation practices. "We were looking at water, but not what people were needing that water for," explains Harriet Dodd, country director of CARE. "Unless we looked at land use, it didn't really matter what we found out about water."

Industrialized agriculture is increasingly under fire for being damaging to the environment, using up water resources, depleting soils and pouring pesticides and herbicides into land and water. It's easy enough to demand that agriculture become more sustainable, but at the heart of the problem is how to grow enough food to keep feeding the world.

It is a constant refrain amongst environmentalists that Jordan's agricultural sector uses about 70 percent of the country's available water, while contributing a modest 4-6 percent of its GDP.
Friends of the Earth Middle East (FOEME), a regional environmental non-profit, holds the position that if Jordan is to realistically deal with its water shortage, export agriculture is not a viable economic activity. "We should not export our water outside the region," says Abdel Rahman Sultan, of FOEME. "We export our products at [a] minimum economic yield … Our water is being subsidized to benefit other countries. This is not a wise practice. Agriculture should be limited for domestic use only." He further points out that in spite of unwise water use for export agriculture, Jordan has never reached food-supply independence, "so there is no point in putting more pressure on natural resources."

But doing away with agriculture in a country where just about everyone has some kind of rural roots is not such an easy task. While numbers highlighting the disparity between water use and contribution to GDP are hard to deny, CARE's Jordan director argues that this statistic doesn't paint the whole picture: "How many people are being fed by agriculture that falls under the GDP radar?" Dodd asks. And agriculture is much more than the production of food. "Agriculture contributes hugely to identity, if not GDP," Dodd says. "I don't think that Jordan wants to see its rural societies completely diluted. It goes much deeper than GDP. It's about tribal lands, cultural history."

Tribal elders from Al Rajef, in the eastern Badia, make a visit to Bayoudeh to see the benefits of adopting drought-tolerant crops like pistachio and rain-fed okra. Khadeeja pours excess water into an underground storage system, for later use.

This attachment to rural lands and the water that sustains them could be the crux that eventually pushes people towards finding a solution to the problems. In addition to the cultural value of agricultural lifestyles, Dodd questions the alternative to the employment that agriculture offers. Without farming, people in the villages would have few options.
There would be a huge influx of people from the rural communities into Amman, one that could easily be unsustainable. "Does Amman want to have a sprawling ghetto around its edges?" she asks. The alternative, she says, is finding ways to maintain rural lifestyles. The CARE project is small scale and rural, and that's one of the reasons it seems to work. "We don't come in with a blueprint," Dodd says; the focus is on experimenting with what is right for a particular community.

In just over a year, the demonstration farms are flourishing. Plants grown using permaculture techniques have been shown to have more vegetation and a healthier appearance. Almond trees that were mulched and fertilized with compost had a 30 percent higher yield than those that weren't. The benefits of the gray water system have been evident in the substantial decrease in cost to the families that use them: the water bill for drinking water has decreased by about JD20 per month, per household. The water saved has been used to grow fodder for livestock, saving the families another JD55 a month that they previously spent on animal feed. The compost toilets have done away with the need for septic tank pumping, eliminating another JD30-a-month fee per household.

Permaculture food production is also helping families deal with rising food prices. Nuimat attests to this fact, pointing out that he never used to keep chickens, but since the prices of meat and eggs have increased, he has found it more cost effective to raise his own. Three of the demonstration farms have achieved complete self-sufficiency in chicken products; this means they save an average of JD10 a month in egg costs, as well as making a small profit from selling their excess eggs. Families have also begun raising ducks for eggs and meat. Only eight households are considered demonstration farms, but more than 20 households in the area now use composting.
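The per-household monthly savings quoted in this piece can be tallied in a few lines (all figures come from the article; the labels are paraphrases):

```python
monthly_savings_jd = {
    "drinking-water bill (gray water reuse)": 20,
    "animal fodder grown with saved water": 55,
    "septic tank pumping (compost toilet)": 30,
    "eggs from home-kept chickens": 10,
}

total = sum(monthly_savings_jd.values())
for item, jd in monthly_savings_jd.items():
    print(f"JD{jd:>3}  {item}")
print(f"JD{total}  total per household per month")
```

That comes to JD115 a month per participating household, a substantial sum for a small family farm.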
And the project will be expanding soon, CARE officials said, because the four surrounding villages of Seyhan, Jareesh, Gussyb and Alegoon have asked to be included. American permaculturist Ethan Young points out that in order to create systematic change, such as a wide scale move towards sustainable water practices, a minority movement must bring a society to a tipping point. “Most people just want to try to make do in whatever system exists,” he writes. “But only 15 percent of the population has to be… organized in order to change the system for the rest of the 85 percent … in the end, small scale, personal activism is all there really is.”
In his seminal work, On the Origin of Species, Charles Darwin wrote:

Preservation of favourable individual differences and variations, and the destruction of those which are injurious, I have called natural selection or survival of the fittest.

In the year that we celebrate the bicentenary of Darwin's birth, it is salutary to consider how much chemistry, in particular natural product chemistry, contributes to what he termed 'survival of the fittest'. For lower organisms and plants, the survival of one species over another comes down to chemical warfare: the interactions between species are often mediated by natural products.

At the most fundamental level the success of an organism can be assessed by whether it passes its genes (DNA sequences) on to the next generation. Any genetic changes - and the resultant novel proteins and enzymes - that increase the likelihood of successful reproduction are positive evolutionary events, while genetic alterations that reduce the likelihood of successful reproduction probably move the organism one step closer to extinction. The ultimate physical consequences of such changes (mutations) may include better means of defence or escape from predators, greater success in mating, or better response to climate change or food restrictions. For most of the lower organisms, eg bacteria and fungi, however, it is the production of natural products that enhances their chances of survival and reproductive success.

Adapt or die

Bacteria have been evolving continuously for around 3.5 billion years.
For instance, as the reducing atmosphere of the early Earth (a mixture of ammonia, CO2, methane and H2S) changed to the oxidising atmosphere of the later Earth, bacteria evolved to produce siderophores to help them assimilate iron from the relatively insoluble Fe(III) salts in such an environment.1 Some bacteria have had to adapt to survive hostile habitats, such as the hot sulfurous pools associated with volcanoes, geysers and deep-sea hydrothermal vents. One Thermococcus species, for example, survives in deep-sea hydrothermal vents by producing thermally stable glyceryl ethers as major components of its cell membranes, rather than the less stable but more usual triglycerides (ie esters of fatty acids with glycerol) of terrestrial species. The myriad fungal species - moulds, mushrooms, yeasts and mildews - that have co-evolved with bacteria over the past 1200 million years often compete with bacteria for food or ecological niche, and chemical warfare is sometimes waged when the species come into contact. The complexities of the chemical compounds involved in these interactions can be bizarre in the extreme, and we can only speculate about the number of discrete genetic changes that have been involved in their 'evolution'. For example, the bacterial species Micromonospora echinospora, found in rock samples from Texas, defends itself from predators by producing the natural product calicheamicin (1). If this substance is absorbed by competing bacterial or fungal cells, metabolising enzymes within these cells react with the trisulfide portion, converting it into a thiol. This reacts with the unsaturated ketone part of calicheamicin, which causes a relaxation in the rigid structure, and the ene-diyne part of the structure undergoes a Bergman reaction to produce a benzene diradical (equation (i)). This highly reactive species abstracts hydrogens from the DNA or proteins of the competing bacteria or fungi, resulting in more radicals and resultant chemical damage. 
(Calicheamicin is currently being investigated as an anticancer agent. The prodrug, Mylotarg, is produced by linking this natural product to a monoclonal antibody specific for the CD33 surface protein of human myeloid leukaemia cells.) Although there are several similar ene-diyne natural products, this mode of chemical warfare is relatively rare in Nature, and most bacteria and fungi have evolved to produce antibacterial agents which either interfere with the production of new bacterial cell wall material, or they inhibit the production of bacterial proteins and enzymes. The mould Penicillium notatum, for example, produces penicillins, which inhibit the transpeptidation process by which a two dimensional bacterial cell wall precursor is cross-linked to produce the three dimensional final cell wall (Box 1). The mould Cephalosporium acremonium produces cephalosporins which act in a similar way. The natural penicillins, and their relatives the cephalosporins, were exploited by the pharmaceutical industry to provide thousands of man-made semi-synthetic penicillins. Of all the moulds it is those of the Streptomyces family which have evolved the most ingenious methods for avoiding predation by bacteria, and they produce an arsenal of antibiotics. Among these, the tetracyclines and the aminoglycosides (eg streptomycin, 2) disrupt protein synthesis in the attacking organism by binding to the 30S subunit of ribosomal RNA, while the macrolide antibiotics (eg erythromycin, 3) bind to the 50S subunit and interfere with protein biosynthesis. Throughout the 1950s and 1960s a huge array of natural and synthetic antibiotics based upon the natural products of the Streptomyces family was produced, and these man-made antibiotics together with the penicillins and cephalosporins were thought to be the end for pathogenic bacteria. Unfortunately this was not to be. 
Survival of the fittest

A typical bacterium reproduces itself every 20 minutes or so, with perhaps one DNA base pair error in every 10 million (10⁷) for each replication. In a typical bacterial population (in a patient) of 100 billion (10¹¹), the number of possible mutations quickly becomes vast. Any one of these might confer a slight advantage for the bacterium (or may lead to its demise), and given the pressure that the pathogenic bacteria came under once the man-made antibiotics were introduced, only the fittest, as Darwin predicted, would survive.

Within a few years of their introduction, bacteria developed methods of resistance to each new antibiotic. Some produced enzymes that could destroy or modify the man-made drugs; others developed methods for expulsion of the drugs, eg efflux pumps; while other bacteria developed alternative ways to make cell wall material that circumvented the routes disrupted by the drugs. The bacteria were given every chance to mutate and improve themselves since they not only met these new drugs in humans, but also in countless millions of farm animals which had been treated with antibiotics to improve their health during intensive growth programmes. Probably one quarter, and perhaps as much as one half, of all antibiotic use in the last half of the 20th century was in agriculture, and this was exacerbated by prescribing these valuable drugs to people with colds and other viral infections, for which they are useless. In this way the pathogenic bacteria encountered man-made antibiotics on a huge scale and had to mutate or die.

Within three years of the introduction of the penicillins in 1943, the first resistant strains of bacteria had appeared, and most of these employed enzymes - β-lactamases - to destroy the essential four-membered lactam ring of the penicillins (see Box 2).
It is likely that these enzymes had always existed in certain bacteria as part of the long-standing warfare between bacteria and moulds, but the enzymes now appeared across a wider range of pathogenic bacteria. Very quickly many of the simpler penicillins became useless. At this point another mould metabolite came to the rescue, ie clavulanic acid (4) from Streptomyces clavuligerus. This natural product had evolved over the millennia to combat the effects of the natural β-lactamases since it works as a suicide substrate for this enzyme. In the 1960s Beechams used this natural defence mechanism to counter the effect of the β-lactamases through a strategy that used a combination of clavulanic acid and ampicillin (5), which was called Augmentin. The clavulanic acid serves as a suicide substrate for the β-lactamases, thus allowing the ampicillin free access to the bacterial cross-linking enzymes (transpeptidases) to inhibit cell wall production. While this is a good example of human ingenuity versus bacterial mutation, it is probably only a matter of time before the pathogenic bacteria mutate to find a way of inactivating clavulanic acid. The pathogenic bacteria found an even simpler route for modifying aminoglycosides such as streptomycin and gentamycin. Over a period of about 10 years from their introduction, many bacteria underwent genetic changes that led to the production of acetyl transferase or phosphoryl transferase enzymes which acetylated amino groups and phosphorylated hydroxyl groups of the antibiotics as they were administered. The resultant acetylated and phosphorylated drugs were either inactive or much less active than the parent compounds. What makes the bacteria so effective at overcoming man-made antibiotics is not only the brevity of the replication timescale and their rapid rate of mutation, but also their ability to pass on their resistant genes to other bacteria that they encounter in the mammalian gut and other places. 
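To put the earlier claim that "the number of possible mutations quickly becomes vast" into rough figures: the error rate and population size below come from the text, while the genome size is an assumed, roughly E. coli-sized value that the article does not give.

```python
error_rate_per_bp = 1e-7  # one base-pair error per 10 million, per replication (from the text)
genome_bp = 4.6e6         # ASSUMPTION: roughly an E. coli-sized genome; not in the article
population = 1e11         # bacterial load in a typical patient (from the text)

errors_per_cell_division = error_rate_per_bp * genome_bp     # expected errors per daughter cell
mutations_per_generation = errors_per_cell_division * population

print(f"~{errors_per_cell_division:.2f} errors per cell division")
print(f"~{mutations_per_generation:.1e} new mutations per generation")
```

Even under these assumptions, each generation of an infection explores tens of billions of new mutations, and with a generation every 20 minutes or so, it is little wonder that resistance emerges so quickly.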
This is why it is so important to finish a course of antibiotics: any bacteria left alive at the end of treatment could survive (in the gut) to pass on their resistance genes to bacteria that do not yet have them. In this way gut bacteria can quickly become multiply resistant to many antibiotics. This is the problem that faces us at present: most of the really dangerous bacteria have acquired resistance to many if not all of the current armoury of antibiotics. It has taken barely 60 years for these lowly organisms to overcome the chemical ingenuity of mankind with their own ingenious chemistry.

Plants and insects

Plants and insects have also been continuously co-evolving for the best part of 300 million years, and once again their interactions are often mediated by natural products. The tree family Pinus is among the oldest in evolutionary terms, and the various species are well protected against insect predation through the production of monoterpenes like α- and β-pinenes, (6) and (7), myrcene, and 3-carene, which act as feeding deterrents for most insect species. However, numerous species of the bark beetle family Dendroctonus have evolved during a long period of interaction with pines to produce oxidative enzymes that detoxify the monoterpenes to produce, for example, verbenols (8) and verbenone (9) from α-pinene. These and other metabolites act as aggregation pheromones for several species of Dendroctonus, thus enhancing their ability to breed and survive.

A similar successful subversion is seen in the interactions of the cotton boll weevil with its host plant. The cotton plant produces β-myrcene (10) as a feeding deterrent, but the boll weevil uses this compound as the starting material for the biosynthesis of grandisol (11), which acts as an aggregation pheromone.
At present we do not know what enzymes have been produced through mutation of existing genes, but there is no doubt that this particular species has evolved to occupy a particular ecological niche through the use of some clever synthetic chemistry. One natural product that is almost universally successful as a feeding deterrent is nicotine, from wild tobacco Nicotiana sylvestris. This plant species can also increase its production of the alkaloid by >200 per cent if it suffers mechanical damage through insect feeding. A few insect species have evolved to produce detoxifying enzymes and others have learnt how to feed without producing major mechanical damage, but nicotine remains a successful chemical deterrent strategy for tobacco plants. The Indian neem tree similarly produces a successful deterrent, azadirachtin (12), which will even deter swarms of locusts. The large number of mutations that must have been required to produce this complex natural product provides strong evidence that survival in the face of insect predation is a major driving force in plant evolution. While these are all passive means of defence, plants have also evolved a number of more offensive strategies. Some plants, for example, produce phytoecdysones, ie steroidal compounds which mimic the biological activity of natural insect moulting hormones like ecdysone (13), first isolated from the pupae of the silkworm moth (Bombyx mori) in 1954. Insects feeding on plants producing phytoecdysones like polypodine B (14) suffer disruption of their usual life cycles. A number of plant species also produce insecticides and the pyrethrins from certain chrysanthemum species are the most potent and broad spectrum. Numerous synthetic analogues of the natural pyrethrins (15), like permethrin (16) have been synthesised, and these are used all over the world as potent and environmentally acceptable insecticides. 
Darwin made no comment about natural selection in Man, and the emergence of Homo sapiens as the dominant hominid owed more to the mental powers of a large, well-developed brain than it did to natural product chemistry. However, there are several examples of enhanced survival that can be linked to a particular genetic mutation and the concomitant changes in the chemical properties of a protein or enzyme.

A single change in the DNA sequence of the gene coding for the haemoglobin β-chain (the haemoglobin molecule comprises an aggregate of two α-chains and two β-chains) produces 'sickle-cell haemoglobin', HbS. This aberrant molecule has a hydrophobic valine in place of a hydrophilic glutamate in the β-chain, and is less soluble in the blood than the normal haemoglobin molecule. The 'sickle-cell haemoglobin' (named after the rigid sickle shape of the blood cells) is less efficient at carrying oxygen around the body, and an individual who inherits copies of this gene from both parents is usually afflicted with life-threatening anaemia. Historically, and even today in developing countries, these unfortunate people usually die before adolescence and so do not pass on their aberrant genes to their descendants. However, individuals who carry one gene for the sickle-cell β-chain and one gene for the normal β-chain can lead reasonably normal lives. Interestingly, sickle-cell haemoglobin cannot sustain the parasite Plasmodium falciparum, which causes malaria, so these affected individuals actually have an enhanced chance of surviving in countries where malaria is endemic. In this way the aberrant gene is maintained in the population, and currently an estimated one third of the native population of sub-Saharan Africa carry it. Although natural selection for mutated human genes has occurred in the past, it is unlikely to be a major factor in Man's future development.
Control of our destiny is now determined more by economics than genetics, and if we do ultimately destroy our environment, the bacteria will be waiting in the wings ready to inherit the Earth from us.

John Mann is emeritus professor of chemistry in the school of chemistry and chemical engineering at Queen's University Belfast, Stranmillis Road, Belfast BT9 5AG.

Box 1 - Antibacterial agents

The bacterial cell wall precursor is a polymer comprising a repeating disaccharide unit with attached polypeptide side chains that end with a d-alanyl-d-alanine unit. The transpeptidase enzyme cleaves the terminal d-alanine, and the amino group of the glycine then reacts with the penultimate d-alanine on a neighbouring chain to produce the mature cross-linked matrix of the cell wall. The structural similarity between the penicillins and d-alanyl-d-alanine allows the antibiotics to act as inhibitory substrates for the transpeptidase enzyme.

Box 2 - β-Lactamase attack on penicillin

The bacterial transpeptidases and the various β-lactamases are serine proteases, and the first stage in their mechanism of action involves attack on the β-lactam ring of the penicillin or of clavulanic acid. Augmentin is a combination of ampicillin and clavulanic acid.

- A-K. Duhme-Klair et al, Educ. Chem., 2009, 46 (1), 25.
William Saroyan (August 31, 1908 - May 18, 1981) was an American author who wrote many plays and short stories about growing up impoverished as the son of Armenian immigrants. These stories were popular during the Great Depression and reflected the immigrant experience and the struggles of that time. Saroyan grew up in Fresno, the center of Armenian-Americans in California, which served as the basis for many of his settings. Despite the difficulties of life during the Depression era, Saroyan's work nonetheless contained a ray of hope and optimism about life that expresses the indomitable spirit of those immigrants who helped to build America. Saroyan was born in Fresno, California, the son of an Armenian immigrant. His father moved to New Jersey in 1905. He was a small vineyard owner, who had been educated as a Presbyterian minister. At a certain point his father was forced to take farm-laboring work, and he died in 1911. At the age of four, William was placed in the Fred Finch Orphanage in Oakland, California, together with his brothers—an experience he later described in his writing. Five years later, the family reunited in Fresno, where his mother, Takoohi, had obtained work in a cannery. In 1921, Saroyan attended technical school to learn to type, but left at the age of fifteen; his mother had shown him some of his father's writings, and he decided to become a writer. Saroyan continued his education on his own, supporting himself by taking odd jobs. At the San Francisco Telegraph Company, for example, he worked as an office manager. A few of his early short articles were published in The Overland Monthly. His first stories appeared in the 1930s. Among these was "The Broken Wheel," written under the name Sirak Goryan and published in the Armenian journal Hairenik in 1933. Many of Saroyan's stories were based on his childhood experiences among the Armenian-American fruit growers of the San Joaquin Valley, or dealt with the rootlessness of the immigrant. 
The short story collection My Name is Aram (1940), an international bestseller, was about a young boy and the colorful characters of his immigrant family. It has been translated into many languages. As a writer, Saroyan made his breakthrough in Story magazine with "The Daring Young Man on the Flying Trapeze" (1934), the title taken from the nineteenth-century song of the same name. The protagonist is a young, starving writer who tries to survive in a Depression-ridden society:

Through the air on the flying trapeze, his mind hummed. Amusing it was, astoundingly funny. A trapeze to God, or to nothing, a flying trapeze to some sort of eternity; he prayed objectively for strength to make the flight with grace.

Following the United States' involvement in World War II, Saroyan enlisted in the U.S. Army, and in 1942 he was posted to London as part of a film unit. He narrowly avoided a court martial when his novel, The Adventures of Wesley Jackson, was read as advocating pacifism. In 1943, Saroyan married eighteen-year-old Carol Marcus (1924-2003); they had two children, Aram Saroyan and Lucy Saroyan. By the late 1940s, Saroyan's increasing problems with drinking and gambling had taken a toll on his marriage, and he filed for divorce upon his return from an extended European trip. They remarried and divorced a second time. Lucy later became an actress, and Aram became a writer who published a book about his father. Carol Marcus subsequently married the actor Walter Matthau.

Saroyan's financial situation did not improve after the war, when interest in his novels declined and he was criticized for sentimentalism. Saroyan praised freedom; brotherly love and universal benevolence were for him basic values, but his idealism was considered out of step with the times. However, he still wrote prolifically. While Saroyan's writing in general is particularly renowned among fellow Armenians, The Armenian and the Armenian is an especially stirring declaration of solidarity.
The piece is set during the Armenian Genocide, in which over 1.5 million Armenians were killed or deported. The destruction of Armenian culture occurred under the government of the Young Turks from 1915 to 1917, in the closing years of the Ottoman Empire. The words evoke notes of grief, rage, resilience, and rebirth in relation to Armenian cultural and social life. Above all, it is a tribute to the resilience of the Armenian people. I should like to see any power of the world destroy this race, this small tribe of unimportant people, whose wars have all been fought and lost, whose structures have crumbled, literature is unread, music is unheard, and prayers are no more answered. Go ahead, destroy Armenia. See if you can do it. Send them into the desert without bread and water. Burn their homes and churches. Then see if they will not laugh, sing and pray again. For when two of them meet anywhere in the world, see if they will not create a new Armenia. As a playwright, Saroyan's work was drawn from deeply personal sources. He disregarded the conventional idea of conflict as essential to drama. My Heart's in the Highlands (1939), his first play, was a comedy about a young boy and his Armenian family. It was produced at the Guild Theatre in New York. Among Saroyan's best known plays is The Time of Your Life (1939), set in a waterfront saloon in San Francisco. It won a Pulitzer Prize. Saroyan refused the honor, on the grounds that commerce should not judge the arts, but accepted the New York Drama Critics Circle award. In 1948, the play was adapted into a film starring James Cagney. The Human Comedy (1943) is set in Ithaca, in California's San Joaquin Valley, where young Homer, a telegraph messenger, bears witness to the sorrows and joys of small town people during World War II. "Mrs. Sandoval," Homer said swiftly, "your son is dead. Maybe it's a mistake. Maybe it wasn't your son. Maybe it was somebody else. The telegram says it was Juan Domingo.
But maybe the telegram is wrong…" (Quotation from The Human Comedy). The story was bought by MGM and made Saroyan's shaky financial situation more secure. Louis B. Mayer had purchased the story for $60,000 and gave Saroyan $1,500 a week for his work as producer-director. After seeing Saroyan's short film, Mayer gave the direction to Clarence Brown. The sentimental final sequence of the Oscar-winning film, starring Mickey Rooney and Frank Morgan, was called "the most embarrassing moment in the whole history of movies" by David Shipman in The Story of Cinema (vol. 2, 1984). Before the war, Saroyan had worked on the screenplay of Golden Boy (1939), based on Clifford Odets's play, but he never gained much success in Hollywood. Saroyan also published essays and memoirs, in which he depicted the people he had met on travels in the Soviet Union and Europe, such as the playwright George Bernard Shaw, the Finnish composer Jean Sibelius, and the filmmaker Charlie Chaplin. During his Army service, Saroyan was stationed in Astoria, Queens, but he spent much of his time at the Lombardy Hotel in Manhattan, far from Army personnel. In 1952, Saroyan published the first of several book-length memoirs, The Bicycle Rider in Beverly Hills. In The Assyrian and Other Stories (1950) and The Laughing Matter (1953), Saroyan mixed allegorical elements into realistic narratives. The plays Sam Ego's House (1949) and The Slaughter of the Innocents (1958) examined moral questions, but they did not gain the success of his prewar works. When Saroyan made jokes about Ernest Hemingway's Death in the Afternoon, Hemingway responded: "We've seen them come and go. Good ones too. Better ones than you, Mr. Saroyan." Many of Saroyan's later plays, such as The Paris Comedy (1960), The London Comedy (1960), and Settled Out of Court (1969), premiered in Europe. Manuscripts of a number of his unperformed plays are now at Stanford University with his other papers.
William Saroyan's stories celebrated optimism in the midst of the trials and tribulations of the Depression. Several of Saroyan's works were drawn from his own experiences. Saroyan worked tirelessly to perfect a prose style that was full of zest for life and seemingly impressionistic; the style became known as "Saroyanesque." Saroyan's work has some connections to the penniless writer of Knut Hamsun's novel Hunger (1890), but without the anger and nihilism of Hamsun's narrator. Saroyan worked rapidly, hardly editing his text, and he spent much of his earnings on drinking and gambling. From 1958, the author lived mainly in Paris, where he had an apartment. I am an estranged man, said the liar: Estranged from myself, from my family, my fellow man, my country, my world, my time, and my culture. I am not estranged from God, although I am a disbeliever in everything about God excepting God indefinable, inside all and careless of all (Here Comes There Goes You Know Who, 1961). In the late 1960s and the 1970s, Saroyan managed to write himself out of debt and create substantial income. Saroyan died from cancer, aged 72, on May 18, 1981, in his hometown of Fresno. "Everybody has got to die," he had said, "but I have always believed an exception would be made in my case." Half of his ashes were buried in California, and the rest in Armenia. - "The writer is a spiritual anarchist, as in the depth of his soul every man is. He is discontented with everything and everybody. The writer is everybody's best friend and only true enemy—the good and great enemy. He neither walks with the multitude nor cheers with them. The writer who is a writer is a rebel who never stops" (From The William Saroyan Reader, 1958).
- The Daring Young Man on the Flying Trapeze (1934) - The Trouble With Tigers (1938) - My Name Is Aram (1940) - The Human Comedy (1943) - Tracy's Tiger (1951) - The Summer of the Beautiful White Horse (1938) - Rock Wagram (1951) - Love (1955) - Gaston (1962) - One Day in the Afternoon (1964) - Days of Life and Death and Escape to the Moon (1970) - Obituaries (1979) - My Name Is Saroyan (1983) - An Armenian Trilogy (1986) - Madness in the Family (1988) - The Time of Your Life (1939)—winner of the New York Drama Critics' Award and the Pulitzer Prize - My Heart's in the Highlands (1939) - Elmer and Lily (1939) - The Agony of Little Nations (1940) - Hello Out There (1941) - Across the Board on Tomorrow Morning (1941) - The Beautiful People (1941) - Bad Men in the West (1942) - Talking to You (1942) - Don't Go Away Mad (1947) - The Slaughter of the Innocents (1952) - The Stolen Secret (1954) - Hanging Around the Wabash (1961) - The Dogs, or the Paris Comedy (1969) - Armenians (1971) - Assassinations (1974) - Tales from the Vienna Streets (1980) - "Third Day after Christmas" (1926) - "Come On-a My House" was a hit for Rosemary Clooney. Based on an Armenian folk song, it was written with his cousin, Ross Bagdasarian, later the impresario of Alvin and the Chipmunks. - ↑ New York Times Dispatch, Lord Bryce's report on Armenian atrocities an appalling catalogue of outrage and massacre. Retrieved May 5, 2008. - ↑ Robert Bevan, The Destruction of Memory (London: Reaktion Books, 2006), pp. 25–60. - Balakian, N. The World of William Saroyan. Bucknell University Press, 1998. ISBN 9780838753682 - Floan, H.R. William Saroyan. New College & University Press, 1966. ISBN 9780808403296 - Foster, E.H. William Saroyan. Boise State University Press, 1984. ISBN 9780884300359 - Foster, E.H. William Saroyan: A Study in the Shorter Fiction. Twayne Publishers, 1991. ISBN 9780805783353 - Gifford, Barry and Lawrence Lee. Saroyan. Thunder's Mouth Press, 1984.
ISBN 9781560257615 - Hamalian, Leo, ed. William Saroyan. Fairleigh Dickinson University Press. ISBN 9780838633083 - Keyishian, H., ed. Critical Essays on William Saroyan. Twayne Publishers, 1995. ISBN 9780783800189 - Leggett, John. A Daring Young Man: A Biography of William Saroyan. 2002. ISBN 9780375413018 - Samuelian, Varaz. Willie & Varaz: Memories of My Friend William Saroyan. Ag Access Corporation, 1985. ISBN 9780914330738 - Saroyan, A. William Saroyan. William Morrow, 1983. ISBN 9780688021467 - Whitmore, Jon. William Saroyan. Greenwood Press, 1995. ISBN 9780313292507 All links retrieved October 21, 2016. New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which may reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
The cargo shipping industry is massive. With the direct gross output of the industry totaling $183.3 billion, and with $27.2 billion in compensation paid to the 4.2 million individuals directly employed by the industry, there are definite benefits to our global marketplace. Cargo transported through this industry equates to about two-thirds of total global trade, totaling $4 trillion worth of goods. Container shipping has made shipping cheap, as it lowered freight bills and saved time. However, this massive industry comes at a cost. The environmental impact of container shipping calls for innovative solutions to the most pressing environmental problems associated with the shipping industry.

Emissions and IMO 2020

Due to the fuel used to power these cargo vessels, ships emit sulfur and nitrogen oxides, particulate matter, and carbon dioxide. Even though cargo ships are fuel-efficient, 80% of ships use heavy fuel oil, which is a more carbon-intensive type of fuel. These carbon emissions add up to approximately 3% of total global greenhouse gas emissions, and are projected to increase to 20% of total global emissions by 2050 if we do not intervene. Nitrogen and sulfur oxide emissions from 2007–2012 represented 15% and 13% respectively of the global man-made emissions of these oxides. Although those statistics may seem daunting, there have been many policies to avoid a greater negative impact. First, a recent meeting of the U.N. International Maritime Organization (IMO), with a total of 173 countries, agreed to cut shipping emissions by 50% by 2050. But how are they planning on doing this? The first step to reduce emissions is a sulfur cap. On January 1, 2020, a cap on the sulfur content of cargo ship fuel will go into effect (known as IMO 2020). This will decrease the sulfur limit from 3.5% to 0.5%. Although compliance may cost up to $30 billion, it will also significantly reduce the sulfur, particulate matter, and carbon emitted by cargo shipping.
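The effect of the cap per tonne of fuel can be estimated with basic stoichiometry: sulfur (32 g/mol) burns to sulfur dioxide (64 g/mol), so each kilogram of fuel sulfur yields roughly two kilograms of SO2. The sketch below is a back-of-the-envelope illustration, not industry data; real emissions also depend on engine and scrubber configuration.

```python
# Rough estimate of SO2 emitted per tonne of fuel burned, before and
# after the IMO 2020 sulfur cap. The 2:1 mass ratio follows from
# stoichiometry: S (32 g/mol) oxidizes to SO2 (64 g/mol).

SO2_PER_KG_SULFUR = 64.0 / 32.0  # kg of SO2 produced per kg of sulfur burned

def so2_per_tonne_fuel(sulfur_fraction: float) -> float:
    """kg of SO2 emitted per metric tonne of fuel at the given sulfur content."""
    return 1000.0 * sulfur_fraction * SO2_PER_KG_SULFUR

pre_cap = so2_per_tonne_fuel(0.035)   # old 3.5% sulfur limit -> 70 kg
post_cap = so2_per_tonne_fuel(0.005)  # IMO 2020 0.5% limit   -> 10 kg

print(f"pre-cap:   {pre_cap:.0f} kg SO2 per tonne of fuel")
print(f"post-cap:  {post_cap:.0f} kg SO2 per tonne of fuel")
print(f"reduction: {1 - post_cap / pre_cap:.0%}")  # about 86%
```

Applied fleet-wide, this factor-of-seven cut in allowable sulfur content is what drives the large projected reductions in sulfur and particulate emissions.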
Next, making ships increasingly energy efficient will help to decrease air emissions from cargo shipping. Since 2011, the Energy Efficiency Design Index has led to the adoption of energy efficiency standards for newly built ships. Any ships built in 2013 or later are required to meet increasingly stringent fuel standards. These standards help ships reduce the amount of fuel they consume through new designs, reduced vessel speeds, and larger ships that can carry more cargo, making ships more energy efficient per unit of cargo carried. It has also been estimated that through the use of new technology, alternative fuel, and renewable energy sources, the industry could be running on carbon-free fuel by 2035, according to the Organization for Economic Cooperation and Development. Below are some of the companies that are moving in this direction: Methanex produces marine-grade methanol, a cost-effective alternative fuel that reduces emissions of sulfur oxides (SOx) by 99%, nitrogen oxides (NOx) by 60%, and particulate matter by 95%. These drastic decreases in emissions come from the fact that methanol occurs naturally and can be produced from renewable sources such as biomass and recycled carbon dioxide, as well as other plant-derived feedstocks. Unlike most alternative fuels, methanol is one of the top five chemical commodities shipped around the world annually, meaning it is readily available at most ports and typically priced lower than other fuels. IPCO Power has designed an FID Injector that creates a stable water-in-fuel emulsion. This process adds distilled water to fuel onboard the vessel and creates a new fuel that combusts more effectively. This higher-efficiency fuel means lower fuel consumption, reduced NOx, HC, and PM pollutants, and will keep engine systems and scrubbers cleaner for longer.
Eco Marine Power is a Japan-based company looking for innovative ways to completely remove fuel-consuming engines from vessels. Over the last few years, the company has been experimenting with solar panels that capture the power of the sun as well as energy from the wind. These panels are as thin as cardboard and flexible like plastic. This new design is expected to cut emissions by up to 10%, which is around four tons of fuel saved every day on a large cargo ship. Through researching alternative fuels and propulsion systems, developing energy efficiency standards, creating energy-efficient designs for new ships, and working on innovative technology to reduce emissions, the shipping industry is continuously working to decrease its negative impact on the environment. Large cargo ships discharge ballast water, bilge water, gray water, and black water, all of which can decrease water quality, negatively impact aquatic environments, and increase risks to public health. Although ships discharge these at rates that fit both international and national standards, they still pose an environmental threat. Gray water is water from the accommodation areas of ships, including the shower and sink, laundry, and galley, whereas black water is sewage that contains feces and urine. Gray water can only be released from ships at least 1 mile from land or people in the water, and 3 miles from an aquaculture lease. Bilge water is water that contains oil, and must be properly separated before being discharged. International agreements require ships to have an oil water separator to limit bilge water oil content to less than 15 ppm. These discharges are all regulated through the EPA's Clean Water Act National Pollutant Discharge Elimination System Program, which created a vessel permitting program containing federal standards for managing vessel discharges.
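As a minimal sketch of how the 15 ppm bilge-water limit might be applied in onboard monitoring software (the function name, sample readings, and the 5 ppm target for newer separators are illustrative assumptions, not a real API):

```python
# Toy compliance check against the oil-in-water discharge limits
# discussed above. 15 ppm is the international bilge-water limit;
# 5 ppm is the level newer separators are said to achieve.

IMO_LIMIT_PPM = 15.0        # international oil-content limit for bilge water
SEPARATOR_TARGET_PPM = 5.0  # level modern separators reportedly reach

def discharge_allowed(oil_ppm: float, limit: float = IMO_LIMIT_PPM) -> bool:
    """True if the measured oil content is below the discharge limit."""
    return oil_ppm < limit

# Hypothetical oil-content sensor readings, in parts per million:
for reading in [3.2, 4.8, 14.9, 16.5]:
    verdict = "discharge OK" if discharge_allowed(reading) else "hold for treatment"
    print(f"{reading:5.1f} ppm -> {verdict}")
```

A real separator control system would of course also log readings and alarm conditions for port-state inspection, but the threshold logic is this simple.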
Below are a few of the companies that are helping carriers meet EPA standards and clean up the discharge their vessels produce. Alfa Laval's "PureBilge" is a high-speed centrifugal oily-water separator that is able to reduce oil-in-water content to less than 5 ppm. The system is smaller than most and can fit into almost any engine room. The separator incorporates a fully automated, computer-based platform that constantly monitors oil-in-water levels to help carriers avoid any EPA or IMO fines. Genoil is another company focused on helping vessels stay up to code with environmental laws and standards. Their "Crystal Sea Separator" is an automatic system that requires no filter or maintenance. It is also able to meet the gold standard of less than 5 ppm discharge levels, and can be used for a myriad of different applications, such as fuel spills. Genoil has also established service centers around the world, so vessels can make sure that their systems are constantly operational. Wartsila's "Super Trident" sewage treatment system offers a cost-effective and safe answer to the disposal of waste at sea. The system is optimized for black and gray wastewater flows, and can be integrated into a vessel's preexisting water treatment infrastructure. These compact systems can be installed between decks, and make outgoing water safe for release into the ocean. Aquatic invasive species are plants, animals, and pathogens that live in the water. They are not native to the area, and may flourish in the new environment they have been introduced into. They have been transplanted to these new environments in a variety of ways, and shipping is one of the causes. Invasive species can cause ecosystem damage. The damage done by invasive species, together with the cost of controlling them, exceeds $9 billion annually. These organisms can also cause public health issues through toxic dinoflagellates and cholera bacteria.
Invasive species are transferred in two main ways: through the discharge of ballast water and through hull fouling. Ballast water is water carried in the ballast tanks of ships to improve stability and balance. It is taken into the ship and released as cargo is loaded and unloaded, and during poor weather to gain increased stability. When water is taken up in one area and discharged in another, the transfer of nonnative species can easily occur. Every hour more than 2 million gallons of ballast water are released into U.S. waters, and over 3,000 marine species travel around the world in ballast water daily. The IMO, U.S. Coast Guard, and EPA regulate the discharge of ballast water in order to decrease the environmental impact caused by invasive species. The IMO has set an environmentally protective numeric standard requiring that a ship's ballast water be treated before it can be discharged, along with an implementation schedule for treatment systems in ships, ballast water management plans, requirements for sediment removal, and guidelines for new treatment technology. The Coast Guard created a federal treatment standard for cargo ships within U.S. waters that follows the standards presented by the IMO, which the EPA also adopted. The second way invasive species are transplanted is through hull fouling. Hull fouling occurs when organisms attach to the outside of ships and are thereby transplanted from one habitat to another. Species that can attach themselves to the outside of ships include barnacles, mussels, sponges, algae, and more. These species can then come into contact with structures at the next port, or release larvae into the water. The surface area of ships arriving in the U.S. annually is 2.5 times the area of Washington, D.C., meaning there is a great deal of surface on which organisms can attach. Ships can carry up to 90 tons of hull fouling.
Without hull fouling control systems, the fuel efficiency of cargo ships can be reduced by up to 40% due to an increase in drag, creating greater air emissions as well. Innovative hull coatings and cleaning solutions that deliver the greatest reduction in drag and prevent hull fouling will decrease both air emissions and the transplant of invasive species. LightTech LightSources is a water treatment technology company that has built one of the preeminent ballast water treatment systems. Alongside their partners in the field, they have created a UV radiation system that has proven highly effective at eradicating threatening invasive species. The system does not require any chemical treatments, and is a relatively low-cost solution with a small footprint inside a vessel's hull. BIO-SEA has combined two technologies into a single automated unit for the treatment of ballast water. Their system implements physical filtration, such as screens and filters, plus UV treatment to ensure compliance with IMO standards. Not only does it disinfect the water it is treating, but it eliminates even the smallest of microorganisms. GEA Group is one of the largest manufacturers of ballast water filtration systems in the world. Their "BallastMaster ultraV" is a highly efficient mechanical and physical system for treating ballast water capacities of up to 1000 cubic meters. Like the aforementioned systems, this platform does not use any harmful chemicals, and is still capable of exceeding the IMO standard for disinfection.

Marine Life and Habitats

Marine life is affected not only by invasive species but also by increasing cargo ship traffic, which can injure mammals spending time near the surface of the water. This includes 78 species of whales, dolphins, and porpoises that spend much of their time near the ocean surface, where vessel traffic leaves them vulnerable to injury.
In order to avoid injuring these animals, the IMO has developed guidelines for operators to follow in hopes of avoiding waters that are home to these animals. However, injury is not the only risk marine life is facing. An increase in shipping traffic as the cargo shipping industry continues to expand causes an increase in the ambient level of marine noise. Researchers have grown concerned about how this increase in noise may affect mammal communication, breeding, and behavior. In order to combat these problems, the IMO has created marine protected areas that limit the activities allowed within these designated areas. ClearSeas is an organization that was established to monitor the effects global shipping has on the environment. They work closely with carriers, ports, agencies, and startups to establish innovative ways of keeping our seas clean and wildlife safe. Through their research, they have produced numerous heatmaps that detail marine life and habitat locations, and give clear instructions on how to preserve these protected areas. They are also a great resource for smaller vessels to get up to speed with current EPA and IMO regulations. Ocean Tracking Network is a global aquatic animal tracking, data management, and partnership platform that uses electronic tags to track every kind of marine animal. Their data warehouses allow agencies and carriers to plan trade routes in a way that is safer for marine life and habitats. These up-to-date findings have helped reduce the number of annual boat strikes globally, and have identified better ways to keep the animals that inhabit port areas safe. The ECHO Program is an initiative by the Port of Vancouver to help eliminate the sound pollution coming from large vessels. The noise generated by ships can disrupt the travel routes and habitats of larger animals such as whales.
The Port of Vancouver and its partners have started using underwater listening devices to monitor ambient and underwater ship noise, as well as the presence of certain marine life. Through this research, the ECHO Program is hoping to work with carriers to make voluntary changes to shipping operations across the globe. These changes will allow vessels to run their routes more efficiently, and keep marine life out of harm's way. The industry is taking the environmental impacts of cargo shipping seriously. Through initiatives like IMO's 2020 and 2050 climate strategies, it is expected that greenhouse gas levels will decline drastically over the next 30 years, and shipping will be more cost-effective across the board. There have also been improvements in both regulations and technology to help decrease vessel discharges, invasive species, and the negative impacts the industry can have on marine life. Shipping companies across the world are beginning to implement changes in order to mitigate the effects of shipping on the environment. This massive industry is working alongside chemical companies, agencies, and startups to develop innovative technologies in order to keep our seas and our air clean.
Northern Command (United Kingdom)

Garrison/HQ: Newcastle upon Tyne

Great Britain was divided into military districts on the outbreak of war with France in 1793. The formation in the North, which included Northumberland, Cumberland, Westmorland and Durham, was originally based at Fenham Barracks in Newcastle upon Tyne until other districts were merged in after the Napoleonic Wars. In 1840 Northern Command was held by Major-General Sir Charles James Napier, appointed in 1838. During his time the troops stationed within Northern Command were frequently deployed in support of the civil authorities during the Chartist unrest in the northern industrial cities. Napier was succeeded in 1841 by Major-General Sir William Gomm, when the command included the counties of Northumberland, Cumberland, Westmorland, Durham, Yorkshire, Cheshire, Derbyshire, Lancashire, Nottinghamshire, Flintshire, Denbighshire and the Isle of Man, with HQ at Manchester. Later the Midland Counties of Shropshire, Lincolnshire, Leicestershire, Rutland, Warwickshire, Staffordshire and Northamptonshire were added and from 1850 to 1854 the Command included three sub-commands: NW Counties (HQ Manchester), NE Counties (HQ York) and Midlands (HQ Birmingham). From 1854 to 1857 there were two sub-commands, Northern Counties and Midland Counties, each with a brigade staff, but after that they disappeared and Northern Command remained a unitary command. In 1876 a Mobilisation Scheme for the forces in Great Britain and Ireland was published, with the 'Active Army' divided into eight army corps based on the District Commands. 6th Corps and 7th Corps were to be formed within Northern Command, based at Chester and York respectively. The Northern Command Headquarters itself moved from Manchester to Tower House in Fishergate in York in 1878. The corps scheme disappeared in 1881, when the districts were retitled 'District Commands'.
Northern Command continued to be an important administrative organisation until 1 July 1889, when it was divided into two separate Commands: North Eastern, under Major-General Nathaniel Stevenson (HQ York), and North Western, under Major-General William Goodenough (HQ Chester). The 1901 Army Estimates introduced by St John Brodrick allowed for six army corps based on six regional commands. As outlined in a paper published in 1903, V Corps was to be formed in a reconstituted Northern Command, with HQ at York. Major-General Sir Leslie Rundle was appointed acting General Officer Commanding-in-Chief (GOCinC) of Northern Command on 10 October 1903, and it reappears in the Army List in 1905, with the boundaries defined as 'Berwick-on-Tweed (so far as regards the Militia, Yeomanry and Volunteers) and the Counties of Northumberland, Cumberland, Westmoreland, Durham, Lancashire, Yorkshire and the Isle of Man. The defences on the southern shores of the estuaries of the Humber and Mersey are included in the Northern Command'. By 1908 the Midland Counties of Lincolnshire, Nottinghamshire, Derbyshire, Staffordshire, Leicestershire and Rutland had been added, but Westmoreland, Cumberland and Lancashire had been moved into Western Command. The Command HQ was established at Tower House in Fishergate in York in 1905.

First World War

Army Order No 324, issued on 21 August 1914, authorised the formation of a 'New Army' of six divisions, manned by volunteers who had responded to Earl Kitchener's appeal (hence the First New Army was known as 'K1'). Each division was to be under the administration of one of the Home Commands, and Northern Command formed what became the 11th (Northern) Division. It was followed by the 17th (Northern) Division of K2 in September 1914.
At the end of 1914, Lieutenant General Sir Herbert Plumer, the GOCinC, left Northern Command to form V Corps in France, and Major-General Henry Lawson was placed in temporary command, followed by Lieutenant General Sir John Maxwell after he had suppressed the Easter Rising in Ireland. Maxwell was formally appointed GOCinC in November 1916.

Second World War

- 15th/19th The King's Royal Hussars - 7th Royal Tank Regiment - 7th Field Regiment, Royal Artillery - 9th/17th, 16th/43rd Field Batteries, Royal Artillery - 20th Anti-Tank Regiment, Royal Artillery

Territorial Army troops included 25th Army Tank Brigade. On 20 December 1942, the 77th Infantry (Reserve) Division was assigned to the command to act as its training formation. On 1 September 1944, the 77th was replaced by the 45th (Holding) Division.

Command Training Centres

Between 1941 and 1943, each regional command of the British Army formed at least one training centre to train recruits preparing to move overseas. The centres based in the area were: - Durham Light Infantry Training Centre, Brancepeth Castle, became No.4 Training Centre on 14 August 1941 — affiliated with The Duke of Wellington's (West Riding) Regiment and Durham Light Infantry - From 4 July 1941 included No.54 Physical Training Wing - Green Howards Training Centre, Richmond Barracks, became No.5 Training Centre on 14 August 1941 — affiliated with The Duke of York's Own (East Yorkshire) Regiment, (Alexandra, Princess of Wales's Own) The North Yorkshire Regiment (Green Howards), and Manchester Regiment (infantry battalions only) - From 4 July 1941 included No.55 Physical Training Wing - The King's Own Yorkshire Light Infantry Training Centre, Queen Elizabeth Barracks, became No.6 Training Centre on 14 August 1941 — affiliated with The Prince of Wales's Own (West Yorkshire) Regiment, Lancashire Fusiliers, and The King's Own (South) Yorkshire Light Infantry - From 4 July 1941 included No.56 Physical Training Wing - 
Lincolnshire Infantry Training Centre, Sobraon Barracks, became No.7 Training Centre on 14 August 1941 — affiliated with Royal Lincolnshire Regiment, Nottinghamshire and Derbyshire Regiment (Sherwood Foresters), and York and Lancaster Regiment - From 4 July 1941 included No.57 Physical Training Wing

The Fishergate site was named Imphal Barracks in 1951, but closed in 1958, when Northern Command HQ moved to a new Imphal Barracks on Fulford Road, York. Portions of the former headquarters at Fishergate are now serviced accommodation. The Command was merged into HQ UK Land Forces (HQ UKLF) in 1972.

General Officers Commanding-in-Chief

- 1793–1795: General Sir William Howe - 1796–1802: General the Duke of Gloucester and Edinburgh - 1802–1806: Lieutenant-General Sir Hew Dalrymple - 1807–1809: Lieutenant-General Sir David Dundas

Note: between 1810 and 1812 England was divided into 15 Districts

- 1812–1814: Lieutenant-General Sir Charles Green - 1814–1815: Lieutenant-General William Wynyard - 1815–1816: Lieutenant-General Sir Lowry Cole - 1816–1828: Lieutenant-General Sir John Byng - 1828–1836: Major-General Sir Henry Bouverie - 1836–1839: Major-General Sir Richard Jackson - 1839–1841: Major-General Sir Charles Napier - 1842: Major-General Sir William Gomm - 1843–1849: Lieutenant General Sir Thomas Arbuthnot - 1850–1855: Lieutenant General Lord Cathcart - 1856–1859: Lieutenant General Sir Harry Smith - 1859–1860: Lieutenant General Sir John Pennefather - 1860–1865: Lieutenant General Sir George Weatherall (1 July 1860) - 1865–1866: Lieutenant General Sir Sydney Cotton - 1866–1871: Major-General Sir John Garvock (10 October 1866) - 1871–1872: Major-General George Carey - 1872–1874: Major-General Daniel Lysons - 1874–1878: Lieutenant General Sir Henry Percival de Bathe (1 July 1874) - 1878–1881: Major-General George Willis (1 April 1878) - 1881–1884: Major-General William Cameron - 1884–1886: Lieutenant General Frederick Willis - 1886–1889: Major-General Charles Daniell
- 1889: Major-General Nathaniel Stevenson

In 1889 Northern District was divided into North Eastern District and North Western District.

General Officer Commanding North Eastern District

- 1889–1891: Major-General Nathaniel Stevenson - 1891–1894: Lieutenant-General Henry Wilkinson - 1894–1902: Major-General Sir Reginald Thynne - 1902–1903: Brigadier-General Edward Stevenson Browne - 1903–1905: Major-General Sir Leslie Rundle

General Officer Commanding-in-Chief Northern Command

- 1905 - 1907 Lieutenant General Sir Leslie Rundle (acting 10 November 1903) - 1907 - 1911 Lieutenant General Sir Laurence Oliphant (10 November 1907) - 1911 - 1914 Lieutenant General Sir Herbert Plumer (10 November 1911) - 1915 - 1916 Lieutenant General Sir Henry Lawson (temporary 1 January 1915) - 1916 - 1919 Lieutenant General Sir John Maxwell (temporary 27 April 1916; substantive 1 November 1916) - 1919 - 1923 Lieutenant General Sir Ivor Maxse (1 June 1919) - 1923 - 1927 Lieutenant General Sir Charles Harington (1 November 1923) - 1927 - 1931 Lieutenant General Sir Cameron Shute (15 May 1927) - 1931 - 1933 Lieutenant General Sir Francis Gathorne-Hardy (15 May 1931) - 1933 - 1937 Lieutenant General Sir Alexander Wardrop (17 October 1933) - 1937 - 1940 Lieutenant General Sir William Bartholomew (12 October 1937) - 1940 - 1941 Lieutenant General Sir Ronald Adam (8 June 1940) - 1941 - 1944 Lieutenant General Sir Ralph Eastwood (3 June 1941) - 1944 - 1946 Lieutenant General Sir Edwin Morris (7 June 1944) - 1946 - 1947 Lieutenant General Sir Philip Christison (27 February 1946) - 1947 - 1949 Lieutenant General Sir Montagu Stopford (6 March 1947) - 1949 - 1953 Lieutenant General Sir Philip Balfour (21 March 1949) - 1953 - 1957 Lieutenant General Sir Geoffrey Evans (7 May 1953) - 1957 - 1960 Lieutenant General Sir Richard Goodbody (8 May 1957) - 1960 - 1962 Lieutenant General Sir Michael West (11 May 1960) - 1962 - 1963 Lieutenant General Sir Charles Jones (1 June 1962) - 1963 - 1964 Lieutenant
General Sir Charles Richardson - 1964 - 1967 Lieutenant General Sir Geoffrey Musson (1 December 1964) - 1967 - 1969 Lieutenant General Sir Walter Walker (3 October 1967) - 1969 - 1970 Lieutenant General Sir Cecil Blacker (1 June 1969) - 1970 - 1972 Lieutenant General Sir William Jackson (10 October 1970) - Robert Burnham and Ron McGuigan, The British Army Against Napoleon: Facts, Lists and Trivia, 1805–1815 (2010) p. 7. - Adolphus, p. 353 - Hart's Army List 1840. - Priscilla Napier, I Have Sind: Charles Napier in India 1841-1844, Salisbury: Michael Russell, 1990. - Hart's Army Lists. - "'The barracks', in A History of the County of York: the City of York, ed. P M Tillott". London. 1961. p. 541-542. Retrieved 19 December 2015. - Army List 1876–1881. - Col John K. Dunlop, The Development of the British Army 1899–1914, London: Methuen, 1938. - Quarterly Army List April 1905. - Army List 1908. - British History on line: Imphal Barracks - "11th Division". The long, long trail. Retrieved 14 December 2015. - "17th Division". The long, long trail. Retrieved 14 December 2015. - Army Lists. - Patriot Files - Forty 2013, Reserve Divisions. - Joslen 2003, p. 73. Harv error: no target: CITEREFJoslen2003 (help) - Frederick, pp. 115–6. - "Badges & Insignia". The Prince Albert's Own Yeomanry. Retrieved 18 November 2016. - Subterranea Britannica - Fishergate: Serviced Offices - "Army Command Structure (United Kingdom)". Hansard. 17 December 1970. Retrieved 15 December 2015. - Whitaker's Almanacks 1905 - 1972 - Northern Command at Regiments.org - Army Commands - "William Howe, 5th Viscount Howe". Oxford Dictionary of National Biography. Retrieved 19 December 2015. - Mackenzie, Eneas (1827). "Historical events: 1783 - 1825, in Historical Account of Newcastle-Upon-Tyne Including the Borough of Gateshead". Newcastle-upon-Tyne. p. 66-88. Retrieved 18 December 2015. - "Dalrymple, Sir Hew Whitefoord". Oxford Dictionary of National Biography. Retrieved 19 December 2015. - Fewster, p. 
215 - Urban, Sylvanus (1831). "Gentleman's Magazine and Historical Review, Volume 101, Part 2". J. B. Nichols & Son. - "William Wynyard". Gregory Don Cooke. Retrieved 6 February 2016. - Cole, John William (1856). "Memoirs of British Generals distinguished during the Peninsular War". London, R. Bentley. Retrieved 19 December 2015. - Bentham, Jeremy (2015). "The Book of Fallacies". Oxford University Pres. p. 327. ISBN 978-0198719816. - John Sweetman, Bouverie, Sir Henry Frederick (1783–1852), Oxford Dictionary of National Biography, Oxford University Press, 2004 - Norman Hillmer and O. A. Cooke, JACKSON, Sir RICHARD DOWNES, Dictionary of Canadian Biography, vol. 7, University of Toronto/Université Laval, 1988 - Ainslie T. Embree, Napier, Sir Charles James (1782–1853), Oxford Dictionary of National Biography, Oxford University Press, 2004 - "No. 27474". The London Gazette. 16 September 1902. p. 5964. - Adolphus, John (1818). The political state of the British empire, containing a general view of the domestic and foreign possessions of the crown. 2. T. Cadell and W. Davies. - Forty, George (2013) . Companion to the British Army 1939–1945 (ePub ed.). New York: Spellmount. ISBN 978-0-7509-5139-5. - Fewster, Joseph (2011). The Keelmen of Tyneside: Labour Organisation and Conflict in the North-East Coal Industry, 1600-1830. Boydell Press. ISBN 9781843836322. - Joslen, H. F. (2003) . Orders of Battle: Second World War, 1939–1945. Uckfield: Naval and Military Press. ISBN 978-1-84342-474-1.
Crickets are insects related to grasshoppers. They have mainly cylindrical bodies, round heads and long antennae. Behind the head is a smooth, robust pronotum. The abdomen ends in a pair of long cerci; females have a long cylindrical ovipositor. The hind legs have enlarged femora, providing power for jumping. The front wings are tough, leathery elytra, and it is by rubbing parts of these together that some crickets chirp. The hind wings are membranous and folded when not in use for flight; many species, however, are flightless. The largest members of the family are the bull crickets, Brachytrupes, which are up to 5 cm (2 in) long.

There are more than 900 species of crickets; the Gryllidae are distributed all around the world except at latitudes 55° or higher, with the greatest diversity being in the tropics. They occur in varied habitats from grassland, bushes and forest to marshes, beaches and caves. Crickets are mainly nocturnal, and are best known for the loud, persistent chirping song of males trying to attract females, although some species are mute. The singing species have good hearing, via the tympani on the tibiae of the front legs.

Crickets are small to medium-sized insects with mostly cylindrical, somewhat vertically flattened bodies. The head is spherical with long filiform antennae arising from cone-shaped scapes, and just behind these are two large compound eyes. On the forehead are three ocelli (simple eyes). The pronotum is trapezoidal in shape, robust and well-sclerotized. It is smooth and has neither dorsal nor lateral keels. At the tip of the abdomen is a pair of long cerci, and in females the ovipositor is cylindrical, long and narrow, smooth and shiny. The femora of the back pair of legs are greatly enlarged for jumping. The tibiae of the hind legs are armed with a number of movable spurs, the arrangement of which is characteristic of each species. The tibiae of the front legs bear one or more tympani which are used for the reception of sound.
The wings lie flat on the body and are very variable in size between species, being reduced in size in some crickets and missing in others. The forewings are elytra made of tough chitin, acting as a protective shield for the soft parts of the body; in males, they bear the stridulatory organs for the production of sound. The hind pair is membranous, folding fan-wise under the forewings. In many species the wings are not adapted for flight. The largest members of the family are the 5 cm (2 in)-long bull crickets (Brachytrupes), which excavate burrows a metre or more deep. The tree crickets (Oecanthinae) are delicate white or pale green insects with transparent forewings, while the field crickets (Gryllinae) are robust brown or black insects.

Crickets have a cosmopolitan distribution, being found in all parts of the world with the exception of cold regions at latitudes higher than about 55° North and South. They have colonised many large and small islands, sometimes flying over the sea to reach these locations, or perhaps conveyed on floating timber or by human activity. The greatest diversity occurs in tropical locations, such as in Malaysia, where 88 species were heard chirping from a single location near Kuala Lumpur. The true number present could have been greater, because some species are mute.

Crickets are found in many habitats. Members of several subfamilies are found in the upper tree canopy, in bushes, and among grasses and herbs. They also occur on the ground and in caves, and some are subterranean, excavating shallow or deep burrows. Some make galleries in rotting wood, and certain beach-dwelling species can run and jump over the surface of pools.

Crickets are relatively defenceless, soft-bodied insects. Most species are nocturnal and spend the day hidden in cracks, under bark, inside curling leaves, under stones or fallen logs, in leaf litter or in the cracks in the ground that develop in dry weather.
Some excavate their own shallow holes in rotting wood or underground and fold in their antennae to conceal their presence. Some of these burrows are temporary shelters, used for a single day, but others serve as more permanent residences and places for mating and laying eggs. Burrowing is performed by loosening the soil with the mandibles and then carrying it with the limbs, flicking it backwards with the hind legs or pushing it with the head. Other defensive strategies are the use of camouflage, fleeing and aggression. Some species have adopted colourings, shapes and patterns that make it difficult for predators that hunt by sight to detect them. They tend to be dull shades of brown, grey and green that blend into their background, and desert species tend to be pale. Some species can fly but the mode of flight tends to be clumsy, so the most usual response to danger is to scuttle away to find a hiding place. Captive crickets are omnivorous: when deprived of their natural diet, they will accept a wide range of different organic foodstuffs. Some species are completely herbivorous, feeding on flowers, fruit and leaves, with ground-based species consuming seedlings, grasses, pieces of leaf and the shoots of young plants. Others are more predatory and include in their diet invertebrate eggs, larvae, pupae, moulting insects, scale insects and aphids. Many are scavengers and consume various organic remains, decaying plants, seedlings and fungi. In captivity, many species have been successfully reared on a diet of ground up, commercial dry dog food, supplemented with lettuce and aphids. Crickets have relatively powerful jaws, and several species have been known to bite humans. Male crickets establish their dominance over each other by aggression. They start by lashing each other with their antennae and flaring their mandibles. 
Unless one retreats at this stage, they resort to grappling, at the same time each emitting calls that are quite unlike those uttered in other circumstances. When one achieves dominance, it sings loudly while the loser remains silent.

Females are generally attracted to males by their calls, though in non-stridulatory species some other mechanism must be involved. After the pair have made antennal contact, there may be a courtship period during which the character of the call changes. The female mounts the male and a single spermatophore is transferred to the external genitalia of the female. Sperm flows from this into the female's oviduct over a period of a few minutes or up to an hour, depending on the species. After copulation the female may remove or eat the spermatophore; males may attempt to prevent this with various ritualised behaviours. The female may mate on several occasions with different males.

Most crickets lay their eggs in the soil or inside the stems of plants, and to do this, female crickets have a long needle-like or sabre-like egg-laying organ called an ovipositor. Some ground-dwelling species have dispensed with this, either depositing their eggs in an underground chamber or pushing them into the wall of a burrow. The short-tailed cricket (Anurogryllus) excavates a burrow with chambers and a defecating area, lays its eggs in a pile on a chamber floor, and after the eggs have hatched, feeds the juveniles for about a month.

Crickets are hemimetabolous insects, whose life cycle consists of an egg stage, a larval or nymph stage that increasingly resembles the adult form as the nymph grows, and an adult stage. The egg hatches into a nymph about the size of a fruit fly. This passes through about ten larval stages, and with each successive moult it becomes more like an adult. After the final moult, the genitalia and wings are fully developed, but a period of maturation is needed before the cricket is ready to breed.
Crickets have many natural enemies and are subject to various pathogens and parasites. They are eaten by large numbers of vertebrate and invertebrate predators, and their hard parts are often found when the contents of animals' guts are examined. Mediterranean house geckos (Hemidactylus turcicus) have learned that although a calling decorated cricket (Gryllodes supplicans) may be safely positioned out of reach in a burrow, female crickets attracted to the call can be intercepted and eaten. Crickets are simple to breed and maintain in captivity and are reared on a large scale as food for zoo and laboratory animals.

The entomopathogenic fungus Metarhizium anisopliae attacks and kills crickets and has been used as the basis of control in pest populations. The insects are also affected by the cricket paralysis virus, which has caused high levels of fatalities in cricket-rearing facilities. Other fatal diseases that have been identified in mass-rearing establishments include Rickettsia and three further viruses. The diseases may spread more rapidly if the crickets become cannibalistic and eat the corpses.

Red parasitic mites sometimes attach themselves to the dorsal region of crickets and may greatly affect them. The horsehair worm Paragordius varius is an internal parasite that can control the behaviour of its cricket host, causing it to enter water, where the parasite continues its life cycle and the cricket likely drowns. The larvae of the sarcophagid fly Sarcophaga kellyi develop inside the body cavity of field crickets. Female parasitic wasps of the genus Rhopalosoma lay their eggs on crickets, and their developing larvae gradually devour their hosts. Other wasps in the family Scelionidae are egg parasitoids, seeking out batches of eggs laid by crickets in plant tissues in which to insert their eggs. The fly Ormia ochracea has very acute hearing and targets calling male crickets. It locates its prey by ear and then lays its eggs nearby.
The developing larvae burrow inside any crickets with which they come in contact and in the course of a week or so, devour what remains of the host before pupating. In Florida it was found that the parasitic flies were only present in the autumn and that at that time of year the males sang less but for longer periods. There is a trade-off for the male between attracting females and being parasitized.
library(knitr)
opts_chunk$set(fig.width = 8, fig.height = 4)

Here we will describe how to use the treeDA package. The package provides functions to perform sparse discriminant analysis informed by the tree. The method was developed for microbiome data, but it could in principle be applied to any data with the same tree structure.

The idea behind the package is that when we have predictor variables which are structured according to a tree, the mean values of the predictor variables at each node in the tree are natural predictor variables, and can be used in addition to the initial predictors defined at the leaves. For microbiome data, this means using both the abundances of the initial set of taxa as well as the abundances of "pseudo-taxa", which correspond to nodes in the tree and are the agglomeration of all the taxa which descend from that node. Without regularization, using both sets of predictors would yield an ill-defined problem because the node predictors are linear combinations of the leaf predictors. However, when we add regularization, the problem becomes well-posed and we can obtain a unique solution. Intuitively, the regularization allows us to incorporate the intuition that a solution where one node is selected is more parsimonious than one in which all the leaves descending from that node are selected.

This package is based on the implementation of sparse discriminant analysis in the sparseLDA package. The main function, treeda, creates the node and leaf predictors, performs sparse discriminant analysis on the combination of node and leaf predictors, and then translates the results back in terms of leaf predictors only. The package also includes functions to perform cross-validation and plotting, which will be demonstrated below.

Our first step is to load the required packages and data. We will illustrate the method on an antibiotic dataset (AntibioticPhyloseq) provided by the package adaptiveGPCA.
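Before turning to the data, the node-predictor ("pseudo-taxon") construction described above can be sketched in a few lines. This is an illustration of the idea only, not treeDA's implementation (treeDA works in R and also handles centering and scaling); the tree encoding via a child-to-parent map and the function names are ours.

```python
from collections import defaultdict

def descendant_leaves(parent):
    """Map each internal node to the set of leaves below it.
    `parent` maps every non-root node to its parent."""
    leaves = set(parent) - set(parent.values())  # nodes that are never a parent
    below = defaultdict(set)
    for leaf in leaves:
        node = leaf
        while node in parent:  # climb from the leaf to the root
            node = parent[node]
            below[node].add(leaf)
    return below

def node_predictors(sample, parent):
    """For one sample (dict leaf -> abundance), return the agglomerated
    'pseudo-taxon' value at each internal node: the mean abundance of the
    leaves descending from that node."""
    below = descendant_leaves(parent)
    return {node: sum(sample[leaf] for leaf in leaves) / len(leaves)
            for node, leaves in below.items()}
```

For example, with parent = {"A": "n1", "B": "n1", "n1": "root", "C": "root"} and abundances {"A": 2, "B": 4, "C": 6}, node n1 gets (2 + 4) / 2 = 3 and the root gets (2 + 4 + 6) / 3 = 4. The node values are exactly linear combinations of the leaf values, which is why regularization is needed to make the combined problem well-posed.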
Note that no other elements of the adaptiveGPCA package are used in this tutorial. The antibiotic dataset consists of measurements taken from three subjects before, during, and after taking each of two courses of an antibiotic. The major groupings in the data are by subject (called ind in the phyloseq object) and by the antibiotic condition. The antibiotic treatment is discretized into abx/no abx in a variable called type, where abx corresponds to samples taken when the subject was taking the antibiotic and the week following, and no abx corresponds to all the other samples.

library(treeDA)
library(ggplot2)
library(phyloseq)
library(adaptiveGPCA)
library(Matrix)
data(AntibioticPhyloseq)
theme_set(theme_bw())

The main function in the package is called treeda. It takes a response vector giving the classes to be separated, a matrix of predictor variables which are related to each other by a tree, the tree which describes the relationships between the predictor variables, and the sparsity (p, the number of predictors to use). In the antibiotic dataset, we have several potential discriminatory variables. One of these describes whether the sample was taken during or immediately after the subject was subjected to antibiotics, and we can try to find taxa which discriminate between these two groups using the following call:

out.treeda = treeda(response = sample_data(AntibioticPhyloseq)$type,
                    predictors = otu_table(AntibioticPhyloseq),
                    tree = phy_tree(AntibioticPhyloseq),
                    p = 15)

Here the output of the model is stored in an object called out.treeda. The print function will give an overview of the fitted model, including the number of predictors used and the confusion matrix for the training data. From this, we see that 15 predictors were used (since this was what we specified in the initial call to the function). These predictors potentially include nodes in the tree (corresponding to taxonomic clades) and leaves on the tree (corresponding to individual species).
The combination of nodes and leaves can be written purely in terms of the leaves (or species, or OTUs), in which case the model is using 903 of the leaves. This indicates that some of the nodes which were selected as predictive were quite deep in the tree and corresponded to large groups of taxa. Finally, the confusion matrix shows us how well the model does on the training data: we see that a total of 16 cases were classified incorrectly, split approximately evenly between cases which were actually from the abx condition and those which were actually from the no abx condition.

The object containing the output from the fit also contains other information. These are:

- means: The mean value of each predictor. This is only included if the call to treeda had center = TRUE, in which case the means are stored so that new data can be centered using the mean values from the training data.
- sds: The standard deviation of each predictor. As with the means, this is only included if the call to treeda had scale = TRUE, in which case the standard deviations are stored so that new data can be scaled using the standard deviations from the training data.
- leafCoefficients: A representation of the discriminating axis using only the leaves. This is a list containing beta, which are the coefficients, and intercept, which is the constant term.
- input: A list containing the response, predictors, and tree used to fit the model.
- nPredictors: The number of predictors (in the node + leaf space) used in the model.
- nLeafPredictors: The number of predictors in the leaf space used in the model.
- sda: The sda object used in fitting the model.
- class.names: The names of the classes to be discriminated between.
- projections: The projections of the observations on the discriminating axes.
- classProperties: The prior probabilities, mean in discriminating space, and variance in the discriminating space of the classes.
- predictedClasses: Predicted classes for each observation.
- rss: Residual sum of squares: the sum of squared distances between each observation and its class mean in the discriminating space.

Once we have fit the model, we can look at the samples projected onto the discriminating axis. These projections are found in out.treeda$projections, and we can see them plotted for the antibiotic data below. In the figure below we also separate out the samples by individual to see whether the model works better for some individuals than others. We see that positive scores along the discriminating axis correspond to the no abx condition, and that there is some difference between the individuals but that the quality of the model is approximately the same across the three subjects.

ggplot(data.frame(sample_data(AntibioticPhyloseq),
                  projections = out.treeda$projections)) +
    geom_point(aes(x = ind, y = projections, color = type))

We can also look at the coefficient vector describing the discriminating axis using the plot_coefficients function. This gives a plot of the tree with the leaf coefficients aligned underneath.

For comparison, we can look at the results when we try to discriminate between individuals instead of between the abx/no abx conditions. We try this with the same amount of sparsity, p = 15.

out.treeda.ind = treeda(response = sample_data(AntibioticPhyloseq)$ind,
                        predictors = otu_table(AntibioticPhyloseq),
                        tree = phy_tree(AntibioticPhyloseq),
                        p = 15)
out.treeda.ind

In this case, since we have three classes we obtain two discriminating axes, each of which uses 15 node or leaf predictors for a total of 30 predictors. This corresponds to only 85 leaves on the tree, indicating that the nodes which were chosen corresponded to individual leaves or to much smaller clades than when our aim was to discriminate between the abx and no abx conditions.
We can see this more clearly when we look at the coefficient plot, where there are many more singleton leaves with non-zero coefficients than we saw in the corresponding plot for the abx/no abx model. Note that this model contains two discriminating axes because we have three classes, while the abx/no abx model had only one discriminating axis because there were two classes.

We would often like to choose the sparsity level automatically instead of manually. A common way of doing this is by cross-validation, which we have implemented in the function treedacv. It takes most of the same arguments as treeda: a vector containing the response, or the classes for each of the observations, a matrix of predictors which are related to each other by a tree, and the tree. In addition, the number of folds for the cross-validation needs to be specified (the folds argument), along with a vector giving the levels of sparsity to be compared (the pvec argument). The folds argument can be given either as a single number, in which case the observations will be partitioned into that number of folds, or as a vector assigning each observation to a fold. In this case, the vector should have length equal to the number of observations, and its elements should be integers between 1 and the number of desired folds.

Here we are using four-fold cross-validation, discriminating between the abx and no abx conditions, and comparing levels of sparsity between 1 and 15. When we print the output from treedacv, it tells us both which value of p (amount of sparsity) corresponded to the minimum CV error, and what the smallest value of p was which was within one standard error of the minimum CV error. (The intuition behind using this value of p instead of the one with the minimum CV error is that we would like the most parsimonious model which is statistically indistinguishable from the one with the minimum CV error.)
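The two selection rules just described — take the p with the smallest mean CV error, or the sparsest p within one standard error of it — can be sketched independently of treedacv. This is a minimal illustration in Python, assuming we already have the mean CV error and its standard error for each candidate p (treedacv computes these for you and stores them in loss.df):

```python
def min_cv_p(pvec, mean_err):
    """The p with the smallest mean cross-validation error."""
    return pvec[mean_err.index(min(mean_err))]

def one_se_p(pvec, mean_err, se_err):
    """The smallest (sparsest) p whose mean CV error is within one
    standard error of the minimum mean CV error."""
    i_best = mean_err.index(min(mean_err))
    cutoff = mean_err[i_best] + se_err[i_best]
    return min(p for p, e in zip(pvec, mean_err) if e <= cutoff)
```

For instance, with candidate sparsities 1 through 5, mean errors [0.50, 0.30, 0.20, 0.19, 0.20] and a standard error of 0.05 everywhere, min_cv_p picks 4, while one_se_p picks 3, since 0.20 is within 0.05 of the minimum 0.19.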
For us, the minimum CV error is at 11, but if we were following the one-standard-error rule we would use 7.

set.seed(0)
out.treedacv = treedacv(response = sample_data(AntibioticPhyloseq)$type,
                        predictors = otu_table(AntibioticPhyloseq),
                        tree = phy_tree(AntibioticPhyloseq),
                        folds = 4,
                        pvec = 1:15)
out.treedacv

The results from the cross-validation are stored in out.treedacv$loss.df. This data frame contains the CV error for each fold, the mean CV error, and the standard error of the CV error for each value of p. We can use it to plot the CV error as a function of the sparsity, or we can use the plotting function defined by the package, as shown below. This plot confirms what we said earlier: 11 predictors corresponds to the minimum cross-validation error, and 7 predictors corresponds to the sparsest solution which is within one standard error of the minimum cross-validation error.

We can then fit the model with 11 predictors to all the data and look at the plot of the coefficients along the discriminating axis.

out.treeda.11 = treeda(response = sample_data(AntibioticPhyloseq)$type,
                       predictors = otu_table(AntibioticPhyloseq),
                       tree = phy_tree(AntibioticPhyloseq),
                       p = 11)
out.treeda.11

From the coefficient plot above, we might be interested in the relatively large group of taxa with the largest positive coefficients. Since the samples in the abx condition have positive scores on the discriminating axis, taxa with positive coefficients are over-represented in the abx condition. We can find out what these are by examining the leaf coefficient vector. We first convert the Matrix object containing the leaf coefficients into a vector, then find all the taxa which have the maximum positive coefficient, and then print out the unique elements of the taxonomy table corresponding to those taxa. We see that this is a group of 74 Lachnospiraceae. They are mostly not annotated beyond the family level, but one is annotated as being from the genus Moryella.
coef = as.vector(out.treeda.11$leafCoefficients$beta)
taxa.max = which(coef == max(coef))
length(taxa.max)
unique(tax_table(AntibioticPhyloseq)[taxa.max,])
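As a closing note on the leafCoefficients idea used throughout: assuming each node predictor is the mean of its descendant leaves, a coefficient placed on a node can be re-expressed purely at the leaves by spreading it equally over those leaves, while coefficients on leaves pass through unchanged. The sketch below illustrates that back-translation; it is a hypothetical Python illustration, not treeDA's code, which also accounts for centering and scaling when it does this.

```python
def leaf_coefficients(beta, leaves_below):
    """Fold coefficients on nodes down onto leaves.

    beta: dict mapping each selected feature (leaf or internal node) to
    its coefficient. leaves_below: dict mapping each internal node to the
    list of leaves descending from it. Because a node predictor is the
    mean of its leaves, its coefficient spreads equally across them."""
    out = {}
    for feature, b in beta.items():
        group = leaves_below.get(feature, [feature])  # a leaf is its own group
        for leaf in group:
            out[leaf] = out.get(leaf, 0.0) + b / len(group)
    return out
```

For example, a coefficient of 2.0 on a node with leaves A and B, plus a coefficient of 1.0 directly on A, yields leaf coefficients A = 2.0 and B = 1.0. This is why a single selected node deep in the tree can correspond to hundreds of non-zero leaf coefficients, as in the abx/no abx model above.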
Steam Train at South Yarra railway station c1883. At the end of 1854 the only railway line in the colony of Victoria was from Sandridge (Port Melbourne) to the town of Melbourne. It was constructed by a private company and opened to traffic on September 13 of that year. Later other companies were given approval to construct lines but by 1860 with the rapid increase in railway coverage, the government decided to step in and take responsibility from private companies for the development and operation of railways. On July 9, 1881 Thomas Bent was appointed Minister of Railways in the O’Loghlen government and over the twenty one months he was in that office Victoria experienced a huge railway building program, while significantly increasing its levels of debt. It was only five months after he was appointed to that portfolio that Bent officiated at the opening of the Caulfield to Mordialloc line on December 19, 1881. The following year saw the line’s extension to Frankston. From the inauguration of the Government railways, and particularly after the establishment of the Railway Department, there was considerable political interference in management. This, Harrigan suggests, was because of the many Ministry changes that occurred. From 1857 to 1883 there were 32 Ministers holding various titles, including Vice President of the Board of Land and Works and Commissioner of Public Works, or Commissioner of Railways, or Commissioner of Railways and Roads, and each dabbled in the management of the Railway Department. James Patterson, the Minister of Railways said he was tired of the political wrangling about the nature of the railway extensions and would have preferred that the routes be described on the basis of parishes traversed, rather than detailing them from allotment to allotment. 
On November 16, 1876 the Railway Construction Bill introduced into the Legislative Assembly made provision for the construction of thirty two new lines including one from Caulfield to Mordialloc, and authorized expenditure for the necessary survey work to be undertaken. Three reasons were accepted by Parliament to justify the construction expenditure of £40,916 on the nineteen miles fifteen chains track to Frankston. The first was defence, the second the existence of potential customers and the third was the need to provide rail transport to a proposed cemetery at Frankston. The view on the defence issue was that if Melbourne were attacked from Western Port Bay she would have to quickly assemble troops on the eastern shore of the bay to face the threat, and trains were the most efficient way to get troops to the site of the action. As far as customers were concerned, it was recognised that the area was rapidly developing with market gardens, orchards and small centres of population. The people living there required an efficient and cheap way of transporting themselves and their produce to the city. Associated with the population argument, although not stated, was the fact that this line would open up new areas of land for sub-division and create new profit making opportunities for the astute investor. Several politicians and future politicians already had large land holdings in the area the line was to traverse or were exploring the possibility of purchase. Thomas Bent held a significant amount of land as did Charles Henry James, Benjamin Fink and George Taylor. At the time Matthew Davies was acquiring a taste for land speculation. One of his investment companies subsequently moved quickly, shortly after the line was constructed to Mordialloc, to acquire 255 acres from Herbert Balcombe in a swampy area south of Cheltenham. Later this was called Mentone. 
During the first reading of the Railway Construction Bill in October 1880, attention was drawn to costs of construction and the desire of the politicians to avoid the earlier situation where landowners extracted exorbitant prices for land resumed by the government for the rail lines. An answer to this, provided in the Bill, was to allow deviation of the surveyed route. However this proposal was met with reservation by some members of parliament who, while recognising mistakes in the past, pointed to instances where the livelihood of small land owners was destroyed when the government acquired a significant part of their property. Mr Woods, a former Minister of Railways, said, “The House has the right to resume land for public purposes, on paying the owner the original cost and a fair amount for interest, but that it should not have to pay him the unearned increment, which belongs to the nation.” He drew attention to claims of £4000 and £5000 for a strip of land through which a line was to pass. Through arbitration £400 to £500 was proposed but the owner laughed at the suggestion. The matter went to court and the jury awarded him £150, and he had to pay costs. A K Smith MLA suggested that it was a wise policy to keep the public ignorant of the exact route along which it was intended to construct a line, as, by that means, the operations of land speculators would be checked, and the land required for the line could be got at a more reasonable price than might otherwise be the case. He went on to inform his parliamentary colleagues that some of the early lines were not properly surveyed. Only trial surveys were made, he suggested. It was three months later, in February 1881, that the Southern Cross reported that the route of the Caulfield Frankston line was being resurveyed, with the expectation that the line would take a more westerly route than originally suggested.
By May of 1881 tenders were called for the project and eleven were submitted, with Faulkingham and Bunn submitting the lowest cost estimate of £40,916/1/2. By August 1881 it was reported that the laying of rails had been completed as far as the Shire Hall in Brighton South (Moorabbin) and the contractor was pushing on vigorously with construction to Mordialloc. Although the construction of the bridge over the Mordialloc creek had not commenced, it was expected at that time to have the line open to traffic as far as Mordialloc in the course of a month. The railway line, as constructed to Mordialloc, proceeded in a reasonably straight line south from Caulfield until it reached South Brighton (Moorabbin), where it darted across Point Nepean Road and tracked on the western side of that road until it reached close to Mordialloc, when it resumed its eastern orientation. The route traversed was a controversial issue for many local residents as well as officers of the Railway Department. Route of the Caulfield-Mordialloc railway line. Courtesy Kingston Collection. The original surveyed route of the line detailed by J P Madden, the engineer in charge, was not followed in the construction, although what he suggested was the less expensive option. The report in the Argus indicated that the original route was very much straighter, traversed cheaper land, and was further away from a line that Bent proposed to run from Brighton along the coast. Six road crossings with gate keepers at a cost of £80 per annum each would have been required, in contrast to the twenty two on the revised route. The Age noted that no one seemed to know why the original surveyed route was abandoned or who was responsible for the decision. Martin, an engineer involved in the construction, denied rumours that he was responsible for varying the route at Cheltenham so that it came nearer to land in which he held an interest.
In a letter to Mr Watson he indicated the decision was made by the former Engineer in Chief, Mr Higginbotham, the Engineer from the Shire of Frankston and Mr Walton, the then Engineer of Surveys. While he acknowledged he was present at the meeting, he stressed he had no involvement either directly or indirectly with the change of route. People from Friendship Square in Cheltenham were irate about the change of route and expressed their displeasure at the official opening of the line at Mordialloc by the Minister of Railways, Thomas Bent. They had expected the line to traverse Holloway's original purchase of Two Acre Village, bringing the line closer to their properties and thereby improving their value and saleability. Despite the anger directed at the Minister by individuals, the Brighton Southern Cross reporter relieved Bent of responsibility for the changed route, pointing out that he was not accountable for the apparent blunder because the construction of the line was almost finished at the time of his appointment. Nevertheless, this does not exclude the possibility that Bent used his influence amongst his parliamentary colleagues to gain an outcome he desired. His hands may not have been totally clean. After all, his Brighton constituents, wishing to maintain their exclusiveness, were not in favour of the cheaper option of extending the Brighton line around the coastline to Mordialloc. While Bent had purchased several allotments in Friendship Square in 1879, after the line was first mooted, he listed them for sale a few months later. Perhaps Bent got wind of a possible change of route for the railway line and decided the opportunity for making large profits on this particular venture had dissipated. Besides the fuss caused by the adopted route, there was also trouble about the ballast used on this and other lines. 
A Select Committee was appointed by the parliament to establish whether the Commissioner of Railways, Thomas Bent, was guilty of corrupt conduct regarding several matters, including the supply of ballast for the Caulfield-Mordialloc line. Three thousand six hundred cubic yards of gravel had been obtained from land owned by Mrs Bent without either her or her husband's knowledge, or so it was claimed. As soon as Thomas Bent was aware the gravel was being used for railway purposes, he ordered the cessation of any further supply, according to evidence he gave to the Committee. One of the conclusions of the Select Committee was that the available evidence did not disclose any improper conduct whatever on the part of the Commissioner of Railways. Margaret Glass in her book, Bent by Name and Bent by Nature, queried this decision. While Mrs Bent had bought the land at a land auction in May, in the year before her husband was appointed Minister of Railways, Glass claims Thomas Bent failed to explain all the circumstances surrounding this so-called 'purchase', for the quarry paddock had in fact belonged to Bent himself. Prior to the construction of the line, on May 26, 1881, notices in the Government Gazette called for tenders for the building of stations on the route from Caulfield to Frankston. In those notices the stations were named Weritmuir (probably South Brighton or later Moorabbin), Warren Road (became North Road and later Ormond) and Hythe Street (later Highett Road). Subsequent Gazettes listed East Brighton, Beaumaris (renamed Balcombe Road and later Mentone), Mordialloc, Cheltenham and Glen Huntly Road, as well as listing Hythe Street, Warren Road and Weritmuir with spelling changed to Whitmuir. The following tenders were received: Glenhuntly, E Cholerton, £750; North Road, E Cholerton, £750; East Brighton Road, J Shimmin, £830; South Brighton, Davies and Batty, £729 10s.; Cheltenham, Davies and Batty, £729 10s.; Frankston, D Spence, £776 10s.; Mordialloc, Wm. 
Chaffer, £749 15s. At the time of the opening of the line in December 1881 no station buildings had been constructed. This was because Thomas Bent refused to accept the tenders for the wooden buildings that were similar in design to those on the Shepparton line. However, platforms and sidings had been formed. Although some of the houses for gatekeepers were still in the process of being constructed, most had been built. A twenty-thousand-gallon water tank to supply the steam engines had been erected at both Caulfield and South Brighton, and two bridges with three openings of ten feet each constructed between these two stations. It took some time before stations deemed satisfactory by members of the travelling public were built. Moorabbin Railway Station c1900. Courtesy Public Transport Corporation. Mr Gossoon of Bay Road, South Brighton, wrote a letter to the editor of the Brighton Southern Cross in which he asked "our kind, good natured railway king" to erect a temporary roof or shed at Highett Road so as to afford some shade for those waiting for a train. He said he was aware that Mr Bent had ordered the erection of a building at Highett but so far no builders or material had appeared. He could cope, the scorching sun would have no effect on him, but he was pleading on behalf of the "fair daughters of Eve." Almost seven years later Cr Vail expressed concern about the lack of suitable shelter at Cheltenham. His worry was not lack of protection from the sun but from the rain. He told fellow councillors he had witnessed several people standing in the rain on the Cheltenham platform awaiting the arrival of a train. There were several ladies amongst the thirty people waiting. The only shelter on the platform was a small box-like office where passengers were refused admittance by a railway official. 
He urged his council colleagues to write a strongly worded letter censuring the Commissioners for not providing accommodation and requesting that the necessary shelter be provided without delay. The following year a deputation waited upon the Railway Commissioners urging them to provide better accommodation at Mentone. It was claimed that the buildings provided were of a temporary nature and very unsightly. The deputation pointed out that £2,800 had been provided in the estimates for station requirements at Mentone, so lack of money was not a hindrance. Nevertheless, it took some time before the deputation's goal was achieved. The first issue of the Moorabbin News reported in April 1900 that Young Bros of Moonee Ponds had been granted the tender to build the new station, replacing the old ramshackle buildings that had been in operation for twenty years. The main building on the Melbourne side was to consist of a booking lobby; station-master's, parcels and telegraph offices; a general waiting room and ladies' retiring room with marble mantelpiece; and lavatories. A verandah, extending 66 feet in front, was also to be built on the same principle as the new one at Cheltenham. There was also to be a building erected on the Mordialloc side consisting of a booking office, ladies' rooms and a verandah to provide ample shelter. All were expected to be completed by the first week of June of that year. Mentone Railway Station c1910. Courtesy Leo Gamble. An earlier deputation to the Railway Commissioners, prior to that noted above, occurred in May 1887. On that occasion the members of the deputation were urging the duplication of the Mordialloc line. They claimed it was important from a military point of view, for the safety of the passengers and for an enhanced financial return. Mr Speight, speaking for the Commissioners, said they would like to see duplication of all lines but it was a question of expense. If parliament provided the money they would not hesitate in laying down the rails. 
While agreeing that traffic might be heavy in certain seasons, he maintained that under the existing system no mishaps could occur unless a mistake was made. Speight commented that there was not much ground for complaint when eleven trains ran per day to Mordialloc, and when responding to the request to increase the speed of the trains, he pointed out that the trains ran at 30 to 40 miles per hour and no train service in the world kept better time. He added that the number of places at which the train had to stop to collect passengers influenced the total time of the journey to Melbourne. During November 1887 it was announced that the government had allocated £36,000 to duplicate the line from Caulfield to Mordialloc. The community welcomed this information as it removed the apprehension of collisions and opened the possibility of a more frequent service; after the opening of the railway line to Mordialloc no Sunday service had been provided. For many years public meetings were called in an effort to convince the Railway Commissioners that a more frequent train timetable was required. In addition, they hoped that the fare structure could be improved. Success on these issues proved elusive. Two years later Cr Ward of the Shire of Moorabbin said the timetable for trains was a farce. Some passengers, he claimed, had often to wait two or three hours to catch a train. Oakleigh had four trains to Cheltenham's one. Often trains ran to Caulfield but no further, and after 9.30 pm no train left from Cheltenham to Melbourne. The expectation that the duplication of the line would result in improved service was not met. In fact, he believed the service was worse. Community agitation and the efforts of Moorabbin councillors continued over decades for a more frequent train service on the Caulfield-Mordialloc line and the provision of better facilities. 
Over time, parts of the line were regraded, new track laid, rolling stock improved and station buildings completed or replaced, but the operation of the line continued to be a political issue with various governments attempting to make it an efficient and cost effective system. Members of Chelsea Historical Society meet Centenary Train at Carrum 1982. Courtesy Leader Collection.
On a recent visit to Athens I chanced upon the supposed tomb of Socrates near the Acropolis. Socrates chose to remain in the city after being found guilty on trumped-up charges of corrupting youth. For this he was handed the ultimate sanction of a death sentence, to be self-inflicted with hemlock. By accepting his punishment he was making a statement to posterity to the effect that the Rule of Law was of greater importance than the individual injustice being inflicted on him. The operation of the law would just have to improve, the alternative being anarchic barbarity. Nearby, somewhat hidden and a tad derelict, is perhaps the most historically significant structure in Athens, the birthplace and site of Athenian democracy, and thus the birthplace of democracy itself, where the impassioned speeches of the great orator Pericles (died 429 BCE) set the small polity on the destructive course of the Peloponnesian War. More recently, Randy Newman's song about America, 'A Few Words in Defense of Our Country' from Harps and Angels (2008), expresses a cautious, pre-Trumpian optimism that the political leaders of a decade ago were 'hardly the worst / This poor world has seen.' But presciently he references Caligula, the emperor whom President Donald Trump best resembles at the fag end of American empire. But Trump actually won the Presidential election democratically, at least in the electoral college, just as Hitler achieved power through elections, before dismantling the Rule of Law. I have expressed reservations in the past about democracy, and I despise demagoguery. But let me construct a few words in its defence. I – 'Benevolent Authoritarianism' A comment often attributed to Churchill is that democracy is the least worst form of government, which I consider trite, and perhaps untrue. The enlightened despot may prove more effective, as the great Franklin Delano Roosevelt showed. 
Similarly, David Runciman in How Democracy Ends (Profile Books, 2018) endorses the concept of benevolent authoritarianism. Such is the luck of the draw, however, that a benevolent oligarchy almost invariably leads to despotism of the Right or Left, and utter disaster. Let us nonetheless lay out the positives of what Pericles effectively pioneered. First, in the immaculate expression of honest Abraham Lincoln, 'the rail splitter', in his Gettysburg Address of 1863, it is governance by the people. On the scene of the Civil War battlefield, in a war that would eventually end slavery, he resolved: 'these dead shall not have died in vain, that this nation, under God, shall have a new birth of freedom, and that government of the people, by the people, for the people, shall not perish from the earth.' So it is that "we the people" are sovereign, as opposed to governance by faceless corporations, multinational banks and nefarious corporate law firms purchasing our political class. We also find governance by the people for the people in the U.S. Declaration of Independence (1776), the first modern constitutional statement of democracy. At least in theory. The problem is that our public representatives are beholden to the crypto-fascist advocates of neo-liberalism. The Irish state, for example, is effectively run by Goldman Sachs, corporate law firms, vulture funds and banks for their own enrichment. The people are irrelevant, and many among the judiciary, mired in debt, seem to be in on the act. The people are drip-fed justifications by the establishment media for austerity, on behalf of these global parasites, and conditioned to accept inflated house prices, robber-baron banks, and substandard and ludicrously expensive rental accommodation. The abolition of pensions, and death on a hospital corridor, are the new reality. 
Our Brave New World of the Internet is incubating a dangerously compliant and accepting population, reflected in Trump's ability to win over the American people, whom he persuaded to consent to their own demise. This, what Timothy Snyder called 'anticipatory obedience' (Snyder, 2017), involves going with the flow of home seizures and deportation of untermenschen migrants, until at last they come for you, at which point there is no one left to protect you. As Pastor Niemöller put it under the Nazis: First they came for the socialists, and I did not speak out because I was not a socialist. Then they came for the trade unionists, and I did not speak out because I was not a trade unionist. Then they came for the Jews, and I did not speak out because I was not a Jew. Then they came for me, and there was no one left to speak for me. So stand up and be counted. Hopefully it won't require you to walk out in front of a tank, but be prepared. II – A Final Solution At the Wannsee Conference of 1942 the Nazis under Reinhard Heydrich decided on the Final Solution, or genocide, of the Jewish people. The transcript is available, and captured on celluloid in the film Conspiracy (2001), with Kenneth Branagh as Heydrich. A modern incarnation of this is the secretive and monastic meetings of the Bilderberg Group – once chaired by our own late unlamented Peter Sutherland – where the spoils of an utterly unsustainable and unequal economic system are divided. The modern Wannsee meetings are no doubt attended by a phalanx of pseudo-experts, or even genuine experts, working out what to do with the troublesome poor of the Earth. I suspect their plan is to undermine democracy on behalf of the world's corporate elite. People are commodified by banks and financial institutions: there are far too many of them, and their number needs to be reduced. Liquidation can occur by degrees: beginning with the withdrawal of social support and evictions, which leads to suicide, addiction, health collapse and early death. 
In the Third World it will be far worse for those in coastal regions when the storms hit. Meanwhile, the good ol' boys of Steve Bannon et al. will continue to reap the harvest. People are often ill-informed and vote stupidly. Trump was elected on a ballyhoo of promising the disenfranchised working and middle class social protection and job creation, after stoking fears about a foreign Other. What happened both with the election and since is the most nefarious snow job since the Nuremberg rallies. Trump appointed to his cabinet three Goldman Sachs officials, who were responsible for much of the mess in which people find themselves in the first place. He has also appointed mad-dog generals, and cosies up to vile dictators. The spectre is truly frightening. Trump immediately set about dismantling Obamacare and tore up the Paris Climate Change Agreement. With two strokes of the pen much of the Obama legacy was lost. The smooth-talking Obama is now a political eunuch. The elite are intent on making 'difficult' decisions, which will reduce the population of the world. This will require 'strong' government and the maintenance of 'public order' when disobedience appears. Neo-liberal policies will certainly not be in the interest of the people who voted Trump in. As the former Greek Finance Minister Yanis Varoufakis put it, 'And the weak suffer what they must.' The democratic problem is that 'we the people' did vote for neo-liberals in Ireland and for a long time in the U.S. Even Viktor Orban in Hungary has a democratic mandate, and Brazilians have voted for a New Age conquistador in Jair Bolsonaro. Meanwhile the National Front is on the threshold of power in France. Democracy is electing fascists. Why? Well, genuine democracy requires mass literacy and proper education, which is diminishing, as is access to accurate information. Bannon and Cambridge Analytica have used artificial intelligence to influence voting patterns, and warp the human mind. 
We are witnessing the dissemination of disinformation, and what Zizek calls 'Ideological Misidentification'. People are buying the bullshit, even though, at heart, they know it is untrue. Nonetheless, declining adult literacy and the use of sophisticated triggers have conditioned people into buying advertising as argument and substituting soundbites for subtlety and nuance. Hysteria, semi-baked nonsense and shrillness are replacing rational discourse. In the Post-Truth zeitgeist appeals to emotion have replaced the importance of facts, and fascists have always enjoyed rituals and symbols. Whenever anyone talks of nationalism or the national interest I am reminded of the adage that 'patriotism is the last refuge of the scoundrel.' The Left are nostalgic and see opportunity in Austerity but, lest we forget, after the Wall Street Crash the Weimar Republic did not witness a populist socialist insurgency but Nazism. Our present economic collapse is ineluctably leading towards a new form of corporate fascism. If the Left is to salvage democracy it must borrow the approach of Antonio Gramsci, the leader of the Communist Party of Italy in the 1920s, which is to construct a cultural hegemony with a receptive middle class (especially now as the distinction between working and middle class is being obliterated). This will involve an expansion of state institutions and husbandry of natural resources to bring an electable and progressive broad social democratic front to power. I do not think this is impossible, 'Hope springs eternal in the human breast' as Alexander Pope put it, but democracy needs leadership of a kind that is not apparent at this juncture. III – A Lost Leader On my plane journey to Athens I read an extract from a speech by Mr Obama about visiting the same birthplace of Periclean democracy I had visited. 
He expresses himself beautifully: precise, as is his wont; erudite (something he is given too little credit for); and with pristine social-democratic convictions. But he is now disempowered, and his legacy is being dismantled by Trump. This brings us back to Roosevelt, and one major problem with U.S. democracy, at least. Obama was prevented from seeking a third term by rules introduced in the wake of Roosevelt's becoming electorally unassailable, primarily because he was obviously acting in the interests of the people. If the rules had not been changed, the American public would not have had to face the unenviable choice of Hillary Clinton or Donald Trump, with the former the lesser of two evils. We need a new Obama, or better still a new Roosevelt, a leader with vision and with purpose. We may need many of them, but few are apparent. Direct democracy and referenda by the people are also required. Further, we need to steel ourselves for civil disobedience to aid in the vitalisation of our democracy. Instead we have a spectator democracy, or passive democracy, controlled by vested interests. When the institutions of state and the state itself act criminally, the obligation for citizens is to fight back in proportion to the force they are confronted with. We also need proper information, and since it is not coming through the mainstream media, which has been bullied into submission, the new radical press is the only drip feed available for the vitalisation of the body politic, alongside similarly motivated NGOs. The truth is indeed in some respects the only weapon, as Havel put it while imprisoned under another dictatorship: 'If the main pillar of the system is living a lie, then it is not surprising that the fundamental threat to it is living in truth' (Havel, 1991). IV – A False Dawn Ethical decisions are indeed complex: the suppression of fearless criticism is a negation of ethics. The obligation of professional ethics should be fearless truth-telling. 
Standing up to power. Democracy dies when it denies the legitimacy of the opposition, when the Rule of Law is set aside, and when authoritarian politicians act subversively, and in a concerted fashion, to undermine civil liberties and human rights by criminalising and prosecuting dissent or opposition. Using the excuse of such shibboleths as national security, public order and the common good, rogue state institutions classify their enemies as criminals and subversives. Other characteristics of failing democracy include a breakdown in forbearance and the utilisation of constitutional hardball, such as Trump stacking and weaponising the Supreme Court. Democracy is dying because our elected leaders, rather than distancing themselves from extremists, are embracing them. In fact they are the extremists. Let us be clear about this: we are seeing state fascism. There are insidious forms of subversion: a coup can really be governance by the grey, for the grey, where small but influential think tanks and special interests pull the strings. If it inconveniences these elites, the democratic will of the people is ignored, as in Greece, where Alexis Tsipras twice received a mandate to counter austerity but was ignored. Greeks must honour their debts even if they were induced into them by Goldman Sachs and its acolytes. The banal refrain is that Greeks do not pay their debts, but the same could be said of all the banks that have had their debts written off. While the Greek electorate recognised where their true interest lay, by electing a radical socialist, in most countries passivity has created a consumer model of democracy that has lost any bite. The real source of a failing democracy is found in vacuous digital communication, and the passivity wrought by blanket advertising. The false dawn of online democracy through social media is proving to be a chimera. 
The sharing of inconsequential thoughts in organisations that purport to be democratic produces echo chambers that operate like cults, as Dave Eggers' splendid novel The Circle (Eggers, 2013) documents. A cult of mindless belonging to nothing is manifest, and it is not the only mindless cult around. We also have Scientology, our esteemed religious traditions, and of course the neo-liberal cult itself. I fear that humans are becoming increasingly robotic, technical machines. Altruism, compassion and a concern for the plight of others are being eliminated. So leadership is what is needed, but the Leader must, like Churchill, have 'nothing to offer but blood, toil, tears and sweat.' And yet I retain faith that we will fight back against the fascism which Madeleine Albright, no less, believes has returned (Albright, 2018). We are drifting towards this precipice incrementally, led by a coalition of interests inculcating robotic consumerism, passivity, environmental destruction and widening inequality. The democratic order has been subverted by rogue states and the corporatocracy. The barbarian hordes are at the gates and a new Roosevelt must emerge to save democracy. Madeleine Albright, Fascism: A Warning, Collins, New York, 2018. Dave Eggers, The Circle, Knopf, New York, 2013. Vaclav Havel, Open Letters: Selected Writings 1965-1990, Faber and Faber, 1991. Timothy Snyder, On Tyranny: Twenty Lessons from the Twentieth Century, Tim Duggan Books, New York, 2017.
A mythological legend, based on oral traditions, states that Lahore was named after Lava, son of the Hindu god Rama, who supposedly founded the city. Lahore Fort has a vacant temple dedicated in honour of Lava. Likewise, the Ravi River that flows through northern Lahore was said to be named in honour of the Hindu goddess Durga. Ptolemy, the celebrated astronomer and geographer, mentions in his Geographia a city called Labokla situated on the route between the Indus river in a region described as extending along the rivers Bidastes or Vitasta (Jhelum), Sandabal or Chandra Bhaga (Chenab), and Adris or Iravati (Ravi). The oldest authentic document about Lahore was written anonymously in 982 and is called Hudud-i-Alam. It was translated into English by Vladimir Fedorovich Minorsky and published in Lahore in 1927. In this document, Lahore is referred to as a small shahr (city) with "impressive temples, large markets and huge orchards." It refers to "two major markets around which dwellings exist," and it also mentions "the mud walls that enclose these two dwellings to make it one." The original document is currently held in the British Museum. Lahore has been the capital of Punjab for a little over one thousand years; first from 1021 to 1186 under the Ghaznavid Dynasty, founded by Mahmud of Ghazni, then under Muhammad of Ghor, followed by various Sultans of Delhi. It reached its full glory under Mughal rule from 1524 to 1752. The third Mughal emperor, Akbar, held his court in Lahore for 14 years from 1584 to 1598. In the 18th and 19th centuries the Sikhs also had their capital at Lahore. When the British took over in 1849, they erected splendid Victorian public buildings in the style that has come to be called Mughal-Gothic. Lahore is undoubtedly ancient. Legend has it that it was founded by Loh, son of Rama, the hero of the Hindu epic, the Ramayana. Some others think that the name means Loh-awar, meaning a "Fort as strong as Iron". 
Within the walled city you may come across old havelis, the spacious houses of the rich, which give you an inkling of the style of the rich and notable in the Mughal reign. The British during their reign (1849-1947) contributed towards the beautification of Lahore by harmoniously combining Mughal, Gothic and Victorian styles of architecture. They built some important buildings, like the High Court, Government College, the Museums, the National College of Arts, Montgomery Hall, Tollinton Market, the Punjab University (Old Campus) and the Provincial Assembly. Lahore is Pakistan's cultural, intellectual and artistic center. Its faded elegance, busy streets and bazaars, and wide variety of Islamic and British architecture make it a city full of atmosphere, contrast and surprise. Being the center of cultural and literary activities, it may rightly be called the cultural capital of Pakistan. With the advent of spring, the Basant Festival is celebrated with pomp and show in mid February every year in Lahore. In other words, this is the spring festival. The entire population participates in kite-flying matches to herald the coming of spring. That is why this festival is also known as "Jashn-e-Baharan". This festival is at its peak in the spirited city of Lahore. Lahoris enthusiastically participate in various fun activities, with kite flying being the main attraction. Basant is not only a kite-flying event, but also a cultural festival of traditional food, dresses, dances and music. Night-time kite flying is another spectacular sight to witness. The entire sky is lit with heavy duty lights and in this illuminated sky one can see hundreds of white colored kites dancing and competing for supremacy over the other. This atmosphere is further enlivened with barbecues and loud tempting music coming from all corners of the city. There are many hotels and restaurants in Lahore. 
Here is a listing of hotels in Lahore, including hotel names, addresses, and phone and fax numbers where available.

| Hotel Name | Address | Phone & Fax |
|---|---|---|
| Adnan Hotel | Main Boulevard, Defence, Lahore | Phone: 92-42-6663142 |
| Ali Continental | 1-Mozang Road, Behind Lahore High Court, Lahore-54000 | Phone: 92-42-7351421 |
| Amer Hotel | 46-Lower Mall, Lahore | Phone: 92-42-7115015 |
| Amin Sara Hotel | Main Boulevard, Defence Society, Opp. Adil Hospital, Lahore Cantt. | Phone: 92-42-5724601 |
| Avari Hotel | 87-Shahrah-e-Quaid-e-Azam, Lahore | Phone: 92-42-6375805 |
| Baadees Hotel | 35-Empress Road, Opp. Radio Station, Lahore | Phone: 92-42-6365378 |
| Bakhtawar Hotel & Restaurant | 11-Abbot Road, Lahore | Phone: 92-42-6316763 |
| Best Eastern Hotel | 50-52, E III, Commercial Zone, Liberty Market, Gulberg III, Lahore-54660 | Phone: 92-42-5751081 |
| CC Motel | 105A Shahrah-e-Quaid-e-Azam, Opp. Chief College, Lahore | Phone: 92-42-6360346 |
| Canal View Motel | 2 Upper Mall, Canal Bank, Lahore | Phone: 92-42-877153 |
| Citytrac Holiday Hotel | Room #6, Naqi Market, 75-The Mall, Lahore | Phone: 92-42-6303990 |
| Country Comfort Motel | 105A Mall Road, Lahore | Phone: 92-42-6360346 |
| Davis Hotel | 8-Davis Road, Lahore | Phone: 92-42-6364150 |
| Emperor’s Inn | 32A Zafar Ali Road, Behind State Guest House, Lahore | Phone: 92-42-875577 |
| Executives Inn | 7A Upper Mall, Canal Bank, Lahore | Phone: 92-42-5753253 |
| Faletti’s Hotel | 3-Egerton Road, Lahore | Phone: 92-42-6363955 |
| Gino’s Pizza (Subs & Sandwiches) | Ijaz Centre, Main Boulevard, Gulberg III, Lahore | Phone: 92-42-5762971 |
| Herfa Inn Hotel | 23/3, Race Course Road, Near Circuit House, China Chowk, Lahore | Phone: 92-42-6376101 |
| Holiday Time Resorts | #41, 42 and 43, 3rd Floor, Land Mark, Jail Road, Lahore | Phone: 5711997 |
| Hotel Alpine | 38M Model Town (Ext.), Lahore | Phone: 92-42-5168401 |
| Hotel Ambassador | 7 Davis Road, Lahore | Phone: 92-42-6316830 |
| Hotel Dubai International | 53-Shadman Market, Shadman, Lahore | Phone: 92-42-7576772 or 7591961, Fax: 7581003 |
| Hotel Indus | 56-Shahrah-e-Quaid-e-Azam, Lahore-54000 | Phone: 92-42-6302858 |
| Hotel Kashmir Palace (Pvt.) Ltd | 14-Empress Road, Lahore-54000 | Phone: 92-42-6316703 |
| Hotel Liberty & Restaurant | 44-Commercial Zone, Liberty Market, Lahore | Phone: 92-42-875233 |
| Hotel Rise | 22 Liberty Market, Gulberg III, Lahore | Phone: 92-42-870338 |
| Hotel Serene House | 3S LCCHS, Defence, Lahore Cantt. | Phone: 92-42-5725565 |
| Hotel Services International | Shahrah-e-Quaid-e-Azam, Lahore | Phone: 92-42-5750598 |
| Hotel Shine Shadman | 56 Shadman 1, Lahore | Phone: 92-42-7570600 |
| Hotel Sunfort | 72-D/1, Liberty Commercial Zone, Gulberg III, Lahore | Phone: 92-42-5763810 |
| Kija Royal Suites | 8J Gulberg III, Lahore | Phone: 92-42-851532 |
| Lahore Continental | 70 Cavalry Ground, Lahore Cantt. | Phone: 92-42-6670830 |
| Lahore Hotel | Farooq Centre, McLeod Road, Lahore-54000 | Phone: 92-42-7235964 |
| Magnum Hotel & Restaurant Co. | 60 Street No. 2, Main Cavalry Ground, Lahore Cantt. | Phone: 92-42-6664665 |
| National Hotel | 1 Abbot Road, Lahore | Phone: 92-42-6363013 |
| Oriental Palace Hotel | 104B-1 MM Alam Road, Gulberg III, Lahore | Phone: 92-42-5755617 |
| Panache Motel | 6 Allaudin Road, Bridge Colony #2, Next to Sherpao Bridge, Lahore Cantt. | Phone: 92-42-6672814 |
| Pearl Continental Hotel | Shahrah-e-Quaid-e-Azam, Lahore-54000 | Phone: 92-42-6360560 |
| Ravi Lodge | 12 Tufail Road, Lahore | Phone: 92-42-6663968 |
| Regency Inn Hotel | 5/G-1 Block H, Jail Road, Gulberg II, Near Sherpao Bridge, Lahore | Phone: 92-42-5716030 |
| Safari Motel | 98C Anand Road, Upper Mall, Near Gymkhana, Lahore | Phone: 92-42-5750841 |
| Sara Continental Hotel | Khayaban-e-Jinnah, Cantt., Main Boulevard, Defence, Lahore | Phone: 92-42-5732223 |
| Serenity Hotel | 50B, Nagi Road, Lahore Cantt. | Phone: 6661238 |
| Seven Stars Motel | 323 Upper Mall, Lahore-54000 | Phone: 92-42-5711098 |
| Shalimar Hotel & Restaurant | 36-Liberty Market, Gulberg III, Lahore | Phone: 92-42-5758815 |
| Sheraton Hotel & Towers | 172 Tufail Road, Lahore | Phone: 92-42-6670400 |
| Shezan Hotels & Restaurants (Pvt.) Ltd. | 7 Dyal Singh Mansion, Shahrah-e-Quaid-e-Azam, Lahore | Phone: 92-42-7244106 |
| Shobra Hotel | 55 Nicholson Road, Lahore | Phone: 92-42-6364961 |

History of Lahore
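Purely as an illustration of how a directory like this might be handled programmatically, the entries can be held as structured records and filtered by neighbourhood. The records below are a small sample taken from the listing; the `hotels_in` helper and the field names are my own invention, not part of any existing system.

```python
# Illustrative sketch only: a few entries from the directory as structured
# records, with a hypothetical helper to filter hotels by area name.
HOTELS = [
    {"name": "Adnan Hotel", "address": "Main Boulevard, Defence, Lahore", "phone": "92-42-6663142"},
    {"name": "Avari Hotel", "address": "87-Shahrah-e-Quaid-e-Azam, Lahore", "phone": "92-42-6375805"},
    {"name": "Pearl Continental Hotel", "address": "Shahrah-e-Quaid-e-Azam, Lahore-54000", "phone": "92-42-6360560"},
    {"name": "Hotel Sunfort", "address": "72-D/1, Liberty Commercial Zone, Gulberg III, Lahore", "phone": "92-42-5763810"},
]

def hotels_in(area: str) -> list:
    """Return the names of hotels whose address mentions the given area."""
    return [h["name"] for h in HOTELS if area.lower() in h["address"].lower()]

print(hotels_in("Gulberg"))
```

A full version would simply load all rows of the table into the same list-of-dicts shape.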
<urn:uuid:33947622-5091-40d2-9857-1a561be9610e>
CC-MAIN-2020-16
http://www.pakscvts.org/history-of-lahore/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371818008.97/warc/CC-MAIN-20200408135412-20200408165912-00153.warc.gz
en
0.826053
2,459
3.453125
3
Rabbits are noisy. It’s shocking to hear your pet rabbit scream, growl and hiss, so I suggest you learn all the sounds your pet rabbit makes and what they mean. In fact, people ask me all the time what noises a rabbit makes. So, do rabbits make noise when they sleep, get hurt, mate, give birth, get attacked or are happy?

What Noises Does A Rabbit Make?

Although your pet rabbit can’t bark like a dog or meow like a cat, she can make a variety of noises. These noises communicate her happiness, displeasure or even her fear. Rabbit owners are often surprised at how vocal their pet rabbit can be. It might be disconcerting to hear your cute bunny honk or growl, but it’s normal behavior for a rabbit. So, what noises can your rabbit make and what do they mean? Here is a list of common rabbit noises and what your pet rabbit is communicating.

Grunting or honking

This is a common sound for an unaltered male rabbit; it’s a sign that he’s looking to mate. While grunting or honking, the male will circle around a female, and sometimes males grunt while mating. Spayed females and neutered males also make these noises, but because they’re happy or eager. Rabbit owners will hear this sound when feeding or playing with their rabbit. Sometimes rabbits honk to get attention from you if you’re not giving them enough.

Purring

Rabbits purr happily when you pet them. They make this noise by softly grinding their teeth together. Louder grinding of their teeth is not a happy sound, but a sign of pain.

Growling

If your pet rabbit growls at you, beware: she’s about to lunge at you and maybe even bite you. This noise means your rabbit is protecting her territory, i.e. her cage. Unspayed females are more prone to making this noise, though sometimes rabbits growl because they’re afraid. Whatever the reason, steer clear if your pet rabbit growls. She needs some space, and you don’t want to get scratched or bitten.

Thumping

Your pet rabbit thumps instinctively, like his wild rabbit relatives.
Thumping is a signal rabbits make with their hind feet to warn of danger; in domestic rabbits it’s more of a warning to keep a distance. Your rabbit thumps when a dog or cat gets too close to her cage, or when she doesn’t like the sound of the vacuum cleaner nearby. She’s letting you know she’s displeased and wants it to stop.

Sighing

A rabbit’s sighs are soft and not always heard by their owners. If your rabbit likes having her fur brushed or enjoys the special treat you just gave her, she’ll make a sighing sound. It means she’s happy and contented.

Clucking

Female rabbits make clucking sounds while feeding their babies. Spayed females cluck to show appreciation.

Sneezing or snorting

If your rabbit smells a strong odor, she might sneeze or snort to show you her displeasure at the scent; it’s almost a sigh of disgust. If your rabbit continuously sneezes and has a runny nose, she may be getting sick and should be checked out by your vet.

Squealing

Rabbits squeal during a fight with another rabbit. Your pet rabbit will also squeal if you handle her too roughly or hurt her.

Hissing

Rabbits make a hissing sound when they want to scare off an enemy, or a perceived enemy. It’s an unhappy sound for a rabbit to make, so it could lead to lunging or scratching.

Whining

If your pet rabbit whines as you’re about to pick her up, it means she doesn’t want to be picked up. If a pregnant doe is put into a cage with a male rabbit, whining means she’s not happy about being with him: he will try to mate with her and, even though she is already pregnant, could get her pregnant with a second litter.

Why Do Rabbits Scream When They Die?

A rabbit screams, sometimes called a “death scream,” when she is dying or thinks she’s dying. It’s a panicky, hysterical-sounding scream that’s often heard in the wild when a predator grabs a rabbit. Those who have heard it say it’s a terrible sound to hear. Some rabbit owners say that their rabbit screams for no apparent reason, or because she’s afraid but not necessarily dying.
So, although rabbits have screamed when dying or about to die, they can also scream when frightened or for reasons unknown.

Do Rabbits Cry?

Rabbits cry when in pain, frightened or very sad. Rabbits cry tears along with whimpering or screaming noises. Some rabbit owners are unaware that rabbits have such emotions. Here’s a list of reasons why your rabbit may act sad or cry.

- Illness - Rabbits are prone to illness, some of which can cause sudden death. If your rabbit seems sad or depressed, or she lets out a pitiful cry, have her checked at your vet immediately. Because rabbits are prey animals, when ill they kick into an instinctive survival mode which makes them hide their illness. Some rabbit owners say their rabbit was on death’s door but her only symptom was that she looked a bit sad. So, when your rabbit shows pain, it’s serious.
- Pain - If your rabbit is in pain, she might let out a cry. If she’s handled too roughly or injured, she’ll cry out.
- Fear - Rabbits scare easily. As prey animals, they’re always on the alert for a predator. Even domestic rabbits perceive predators around them if the household dog or cat comes too close to their cage, or a child is too loud near it. Rabbits who are afraid cry or whimper.

Why Do Rabbits Whimper?

Rabbits whimper, and a rabbit’s whimper sounds like a nasal dog whimper. So, if you hear your pet rabbit whimper, here are some possible reasons. Your rabbit…

Doesn’t want to be held - This could be because your rabbit had a bad experience recently or she just doesn’t feel in the mood to be held. As a rabbit owner, you’ll get to know your own rabbit’s personality and preferences.

Doesn’t feel safe - If your rabbit doesn’t feel safe with you or someone else, she might whimper to let you know she needs help.

Doesn’t want males around while pregnant - When a rabbit is pregnant, she doesn’t like to be around unneutered males because they might want to mate. Even while pregnant, a female can conceive a second litter.
Thus the phrase, ”breed like rabbits.”

Doesn’t feel safe in her environment - Whether your rabbit is inside or outside, if she doesn’t feel safe she may whimper. Rabbits feel unsafe if they think there’s a predator nearby: a dog, a cat or even a squirrel if she’s outside.

Why Is My Rabbit Making A Buzzing Noise?

Rabbits make buzzing noises for the same reason they make honking noises. It’s usually a sign of happiness and pleasure. They often buzz or honk while circling either you or another rabbit. In unaltered rabbits, it’s a sign of sexual excitement or interest in mating.

What Does It Mean When My Rabbit Squeaks?

If your rabbit is squeaking, she’s a happy bunny. Her high-pitched squeak shows she’s really excited to eat a special treat or is enjoying her new toy. But if your rabbit makes a deeper-pitched squeaking sound, she’s not so happy: she might feel scared or unhappy, or she doesn’t want to be held. As a rabbit owner, you’ll learn what your rabbit is communicating with her noises and the tone of each noise.

What Does Bunny Oinking Mean?

A female rabbit’s soft oinking is sometimes called honking. The oinking sound is associated with mating: she could be calling a male rabbit to mate with her or telling a male rabbit to stay away from her. It’s not an aggressive, angry sound. Sometimes she’ll circle the male rabbit, and rabbit oinking is sometimes called a “courting sound.” Although rabbits circle when interested in mating, circling sometimes just means your rabbit wants your attention, food, water or affection.

Is My Rabbit Depressed?

Rabbits are happy, social animals, but sometimes they get depressed if they’re sick, stressed out or their environment isn’t the best. A rabbit that is depressed for a long time needs medical attention; she might be sick with very few symptoms. As a rabbit owner, you will become familiar enough with your rabbit to know when something is wrong. Here are some symptoms of depression to look for in your rabbit.
Lethargy - Your rabbit should be energetic, running around inside your house or in her outside play area. When your pet is depressed, she will often act listless and tired and won’t want to play with you.

Hiding - If your rabbit hides, she’s not happy or not well.

Being unsocial - Rabbits are social animals by nature. If your rabbit doesn’t want to interact with you or other rabbits, this is a bad sign.

Loss of appetite - Rabbits need to eat their weight in hay every day, along with plenty of fresh vegetables, herbs and water. If your rabbit isn’t eating these things, it could signal depression or cause loss of appetite. Not eating will affect her digestive system, which can cause illness through diarrhea or other digestive problems.

Pacing - When your rabbit paces, she’s under stress or anxious. Rabbits are sensitive to being moved; if you’ve moved her cage, she could feel unsafe in the new place. Rabbits are also clean animals, and a dirty cage or hutch can cause anxiety for your rabbit. Keep fresh hay in the hay rack for her to eat, replace old straw with fresh straw daily, clean out your rabbit’s litter box, and always change her water bowl daily.

Biting - Rabbits need to chew to keep their teeth trimmed down, since their teeth grow for their entire lives. If your rabbit stops chewing hay or the wooden toys you’ve included in the cage, but bites things like the bars of the cage instead, it could mean she’s depressed or sick.

Over-grooming - Over-grooming is a sign that something is wrong with your rabbit; fear or anxiety often causes it. If your rabbit keeps over-grooming, she’ll wear bald spots into her fur. She might also eat a lot of fur, causing hairballs to build up in her stomach, and hairballs can cause obstructions in her digestive tract. If you notice these symptoms, contact your vet right away so your rabbit can be checked to make sure she’s healthy.

Rabbits are good communicators. They squeak, buzz, honk, scream and grunt.
Their sounds vary depending upon what they’re trying to communicate to their owners, and rabbit owners soon learn to interpret their pet rabbit’s noises. Because every rabbit has her own personality and preferences, learning what her noises mean will help her owner take care of her in the best way.
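As a purely illustrative summary, the noise-to-meaning pairs described in this article can be gathered into a simple lookup table. The wording of each entry condenses the descriptions above, and the `RABBIT_NOISES` table and `what_does_it_mean` helper are hypothetical names of my own:

```python
# Illustrative summary of the article's noise meanings as a lookup table.
# This condenses the descriptions above; it is not an exhaustive or
# authoritative guide to rabbit behavior.
RABBIT_NOISES = {
    "honking/grunting": "mating interest in unaltered males; happiness or eagerness otherwise",
    "purring": "contentment (soft tooth-grinding); loud grinding signals pain",
    "growling": "territorial defence or fear -- give her space",
    "thumping": "warning of perceived danger or displeasure",
    "sighing": "happy and contented",
    "clucking": "contentment; does cluck while feeding their babies",
    "sneezing/snorting": "displeasure at a strong smell; persistent sneezing may mean illness",
    "squealing": "pain or rough handling",
    "hissing": "trying to scare off an enemy",
    "whining": "doesn't want to be held or feels unsafe",
    "screaming": "extreme fear, or dying",
}

def what_does_it_mean(noise: str) -> str:
    """Look up a noise; fall back to a generic hint for unknown sounds."""
    return RABBIT_NOISES.get(noise.lower(), "unknown sound -- watch her body language")

print(what_does_it_mean("purring"))
```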
<urn:uuid:8ba05247-c937-4875-b03e-111314f2a09d>
CC-MAIN-2020-16
https://www.petsial.com/rabbits-noise-die-sleep-get-hurt-mate-give-birth-get-attacked-are-happy/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370492125.18/warc/CC-MAIN-20200328164156-20200328194156-00075.warc.gz
en
0.948966
2,471
2.515625
3
FAQ – Legal Professionals

What is judiciary interpretation?

There are several different branches of interpretation: (1) legal, (2) conference, (3) medical/mental health, (4) escort, (5) seminar, and (6) business. Legal interpretation is divided into two main categories: judicial (commonly known as court interpreting) and quasi-judicial (interpreting that takes place in other legal settings). Judiciary interpreters work in courtrooms and in out-of-court settings, in any matter related to law or a legal case. Judiciary interpreters are highly skilled professionals who fulfill an essential role in the administration of justice by providing complete, unbiased, and accurate interpretation between English speakers and non-English or limited-English-proficient (LEP) defendants, litigants, victims, or witnesses. They are impartial officers of the court, with a duty to serve the judicial process. The judiciary interpreter’s role is to help remove the linguistic barriers that impede an LEP individual from full and equal access to justice under the law.

Where do judiciary interpreters work?

Assignments may take place in juvenile, municipal, state, or federal court, or in out-of-court settings such as attorneys’ offices, jails, law enforcement facilities, or other locations. In some states, such as California, certified court interpreters are also qualified to interpret in medical settings, by virtue of their certification, qualification, training, and experience.

What kinds of cases do court interpreters work in?

Judiciary interpreters cover virtually every kind of case (civil and criminal) in municipal, state, and federal courts. State court interpreters cover matters including personal injury, small claims, landlord/tenant disputes, traffic violations, domestic violence, child support, sexual assault, rape, homicides, drug offenses, arson, and illegal gambling, to name a few.
Legal proceedings may include initial appearances, bail applications, pretrial conferences, pleas, evidentiary hearings, trials, sentencings, or post-sentencing hearings. Judiciary interpreters also work outside the courtroom in other legal or quasi-legal settings such as attorney-client interviews, prosecutor and victim or witness interviews, proffer sessions with prosecutors, grand jury proceedings, or law enforcement interviews and interrogations. In addition, they may interpret for court support personnel or other justice services (e.g., probation officers, medical personnel conducting psychiatric evaluations, law enforcement personnel conducting polygraph examinations); probation and parole interviews; administrative hearings; depositions; immigration hearings; and workers’ compensation hearings.

What is the difference between interpretation and translation?

In popular usage, the terms “translator” and “translation” are frequently used for conversion of either oral or written communications. Within the language professions, however, translation is distinguished from interpreting according to whether the message is produced orally (interpreting) or in writing (translation).

What is the difference between a trained interpreter and a bilingual speaker?

While a trained interpreter is bilingual, a bilingual person is not necessarily a competent interpreter in the legal environment. A legal interpreter is trained in professional conduct, is knowledgeable about legal terminology and procedures, follows the ethical procedures required in courts and other legal settings, and allows both parties to communicate as directly as possible. While the legal interpreter is not allowed to address possible cultural issues in the court setting, he or she may be able to identify and address these cultural barriers during legal consultations in the lawyer’s office and other non-court settings.

What are the techniques of interpreting? How is it done?
In legal settings, only three modes of interpretation are permitted by federal or state statute, court rule, or case law: simultaneous interpretation, consecutive interpretation, and sight translation. All three modes require skills beyond near-native proficiency in both languages. (See the Position Paper on Modes of Interpreting.) The main technique in judiciary interpretation is that the interpreter uses the same grammatical person as each speaker, without ever lapsing into the third person. This is called direct speech, and it permits people to communicate with each other directly. The interpreter’s task is to interpret everything from one language into the other, while preserving the tone and register of the original discourse. An interpreter is not permitted to give a summary (also known as “occasional” interpretation) of a speech or text. Some judges and attorneys mistakenly believe that an interpreter renders court proceedings word for word, but this is impossible, since there is no one-to-one correspondence between words or concepts in different languages. For example, sometimes one word in English requires more than one word in another language to get the same idea across, and vice versa. Rather than word for word, then, interpreters render meaning by reproducing the full content of the ideas being expressed. Interpreters do not interpret words; they interpret concepts.

What does a legal translator do?

A legal translator prepares written translations of documents related to criminal and/or civil matters, such as medical or psychological evaluations; forensic reports (drug analyses, DNA reports, or medical reports); divorce decrees; foreign judgments; extradition documents; statutes and contracts; or other relevant documents. The translation may be from the foreign language into English or from English into the foreign language.
Tape transcription and translation of audio or video recordings are also needed for legal and quasi-legal proceedings. Transcription is an area of legal interpretation that requires additional training and expertise.

What are the job requirements for becoming a court interpreter?

In addition to near-native fluency in English and another language, and specialized skill in the required modes of interpretation, a judiciary interpreter and/or translator must be knowledgeable about the structure of the court system and the terms used in criminal and civil justice settings. A judiciary interpreter must have wide general knowledge (equivalent to at least two years of college-level education) and an extensive vocabulary ranging from formal discourse to colloquialisms and slang. Competence also requires a cooperative and flexible attitude: an interpreter deals with people from many walks of life and must remain professional, unbiased, and neutral towards all. Lastly, a judiciary interpreter must have a good understanding of the protocol applicable to each distinct venue and be familiar with the interpreters’ code of ethics and the laws that govern it. An interpreter must possess good short-term memory skills, must be able to multi-task while taking notes, and must process and reproduce meaning quickly and accurately in another language. These skills are acquired over considerable time and constantly polished to improve speed, accuracy and delivery. A translator must possess excellent writing, research and analytic skills.

Is there a code of ethics that judiciary interpreters and translators must follow?

NAJIT has a Code of Ethics and Professional Responsibilities which is binding on all its members. The Administrative Office of the United States Courts has developed standards for performance and professional responsibility.
The states that belong to the Consortium for Language Access in the Courts have also developed codes of ethics for judiciary interpreters and translators; these may vary in format, but they are based on the National Center for State Courts’ Model Code and cover the same ethical principles. CourtEthics.org has a compendium of interpreter codes in the states.

Are interpreters subject to background checks and security clearances?

Yes. Each court system may conduct a criminal record check on the interpreters it employs part or full time.

Is there a constitutional right to an interpreter?

Although the United States Constitution does not explicitly provide for the right to an interpreter, the individual rights and liberties afforded to all individuals under the Fourth, Fifth, Sixth, Eighth and Fourteenth Amendments are meaningless for non- or limited-English speakers unless they are provided with complete, competent, and accurate interpreting services.

Do states have statutes that require interpreters for court proceedings?

Yes. Many states have statutes or court rules that provide for the appointment of interpreters in court proceedings. An internet search will lead you to the relevant statutes.

Is there a federal statute that governs the appointment of interpreters?

Nationally, the Court Interpreters Act was enacted in 1978. Title 28 USC §1827 is the federal law that establishes appointment and qualification procedures for interpreters in judicial proceedings instituted by the United States. In addition, the Civil Rights Act of 1964 and Executive Order 13166, issued in 2000, require all recipients of federal assistance, including state courts, to implement plans to ensure that limited-English-proficient individuals have access to services. In recent years, the U.S. Department of Justice has been reviewing state courts’ compliance with this executive order.
In 2010, Assistant Attorney General Perez wrote a letter to the state courts outlining the Department of Justice’s concerns about access to courts for limited-English-proficient individuals.

What is the relevant case law regarding interpretation?

United States ex rel. Negrón v. New York (1970) is among the most important cases related to judiciary interpretation. In Negrón, a Spanish-speaking defendant’s New York state murder conviction was overturned on constitutional grounds. Negrón had been provided only periodic summaries during breaks, rather than a complete, ongoing interpretation of his trial proceedings. His limited comprehension of the proceedings was found to be a violation of his due process rights.

What happens if an interpreter makes a mistake?

Poor interpretation may cause injustices; that is why standards, training, and certification are so vitally important. However, interpreters are human, and humans are fallible, so mistakes do occasionally occur. When an interpreter becomes aware of an error in interpretation, the interpreter is ethically required to correct the mistake immediately. In court, the interpreter should address the judge, acknowledge the error, and request that the record reflect the correction. Outside of court, an interpreter should address the legal authority in the specific setting in which the interpreter is working. For example, if an interpreter has been contracted by an attorney to interpret in an attorney-client interview or witness interview, the interpreter should address the attorney to acknowledge an error. If contracted by law enforcement, the interpreter should address the interviewing or interrogating officer; if contracted by a social services agency, the social worker; and so forth. Complex and sensitive issues of protocol are involved when correcting a mistake for the record.
For example, the interpreter might need to request permission to approach the bench during a jury trial. Interpreters should become familiar with the procedures and protocols for resolving specific problems or managing specific situations. Ideally, interpreters should work in teams of two for trials and longer proceedings. This helps avoid interpreter fatigue and provides mutual assistance when omissions or other errors occur. Federal statute, as well as some state statutes or court rules, contains provisions on the use of team interpreting, and NAJIT strongly recommends this standard. (See NAJIT’s position paper on Team Interpreting.)

What happens if an interpreter doesn’t know how to interpret a word or phrase?

The answer depends on where this occurs; knowledge of ethics and technique comes into play when an interpreter is confronted with an unknown word or expression.

- If a witness says something that the interpreter does not understand, the interpreter must seek clarification of the problem word or expression, after requesting permission from the judge to inquire of the witness.
- In situations outside the courtroom, the interpreter must request permission from the attorney or other judicial or law enforcement officer to seek further clarification from the speaker.

While simultaneously interpreting court proceedings, an interpreter may have to interrupt the speakers in order to request a repetition or clarification. There are several other ways of making corrections or compensating for gaps, depending on the situation. In general, an interpreter uses finely tuned analytic and cognitive skills to derive meaning from context. Electronic or other dictionary resources may be consulted quickly, colleagues may be consulted, or further clarification may be requested of the original speaker. An interpreter should not hesitate to request clarification immediately if a witness uses an unfamiliar expression.

Are cases ever appealed because of an interpretation issue?

Yes.
For a thorough discussion of interpretation-related issues on appeal, see “Interpreter Issues on Appeal” by Dr. Virginia Benmaman (Proteus, Fall 2000). See also “Interpreters and Their Impact on the Criminal Justice System: The Alejandro Ramirez Case” by Isabel Framer (Proteus, Winter–Spring 2000). Finally, you might want to read “Interpreters As Officers of the Court: Scope and Limitations of Practice” (Proteus, Summer 2005).
<urn:uuid:739ad06f-6225-4639-9117-0c0b8b1fc0b8>
CC-MAIN-2020-16
http://mamiinterpreters.org/faq-for-legal-professionals/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371810807.81/warc/CC-MAIN-20200408072713-20200408103213-00273.warc.gz
en
0.912057
2,590
2.578125
3
When a group of influential Leeds merchants petitioned King Charles I for a Royal Charter in 1626, one of the wrongs they wanted correcting was Leeds’s lack of representation in Parliament. The King granted the Charter but Leeds failed to secure an MP. Although Leeds’s interests were represented in Cromwell’s Parliaments in the 1650s by Adam Baynes, a civil war officer from Knostrop, the town would have to wait until the Reform Act 1832 before it became a Parliamentary constituency. The battle for Parliamentary reform was fought bitterly in Leeds. The Tories, who controlled the corporation of Leeds, resisted the calls for wider representation and were supported by the Tory-sympathising ‘Leeds Intelligencer’ newspaper. Edward Baines, of the Whiggish ‘Leeds Mercury’, led the calls for reform. On 14th May 1832, a crowd of 20,000 attended a public meeting at the Leeds Mixed Cloth Hall (the site of modern-day City Square) in support of the Reform Bill and its demand for greater representation in Parliament. At the end of the meeting, according to a report in the ‘Leeds Intelligencer’, various prominent backers and opponents of the Bill were given a chorus of cheers and groans: “Three cheers for Earl Grey [the Prime Minister]; three cheers for reform; three groans for the Duke of Wellington and three groans for the Queen; three cheers for Lord Morpeth; three cheers for Lord Brougham; three cheers for the majority of the House of Commons; and three groans for the bishops.” Above: a depiction of the Reform Bill meeting at the Mixed Cloth Hall, 1832. Picture credit: Leeds Library and Information Service. The Reform Act 1832 received Royal Assent later that year and, as a result of the reforms, Leeds elected its first members of Parliament in that year’s general election. Campaigning for the 1832 election was accompanied by much violence: Whig-supporting Orangeists fought with Tory Blue Bludgeon Men.
The town’s special constables were sent to the Mixed Cloth Hall to quell a riot caused when the Tories marched from Briggate carrying a banner depicting children working as slaves in Marshall’s Mill, Holbeck (John Marshall junior was standing as a Whig candidate along with historian Thomas Babington Macaulay; the Tory candidate, Michael Sadler, was a vocal campaigner for better conditions for children working in mills and factories). Sadler grew a beard during the campaign and his opponents claimed that he had done so to curry favour with Jewish voters, speculating whether he might also have been circumcised. In response, Sadler said that his detractors had brought voters’ attention to “a certain part of the human body to which it had hitherto been thought in a civilized society indecent to allude”. In the 1832 election, only households worth £10 or more were eligible to vote. This counted against Sadler, who enjoyed much support among the working classes, but they were ineligible to vote. Both Whig candidates, Marshall and Macaulay, were duly elected as Leeds’s first two MPs. Leeds remained a Liberal stronghold for the next 50 years or so. On 3rd May 1842, a petition in support of the People’s Charter was presented to Parliament. It had been signed by 3 ¼ million people, more than 40,000 of whom came from Leeds. The Chartists, as they were known, were asking for the right to vote to be extended to all men over the age of 21 and for certain other reforms to make Parliamentary elections fairer and more representative. Leeds was at the centre of the Chartist movement and was the home of the ‘Northern Star & Leeds General Advertiser’, founded by one of the heroes of the movement, Feargus O’Connor. At its peak, the ‘Northern Star’ had the second-highest newspaper circulation in the country.
When Parliament rejected the petition, frustration boiled over into anger, exacerbated by a worsening economic climate, increased unemployment and widespread pay cuts for those in work. The summer of 1842 saw a general strike and a series of sometimes violent protests, in which protestors sabotaged mills by removing plugs from the boilers to prevent steam from being raised – these became known as the ‘Plug Riots’. Such was the tension in Leeds that Prince George, Duke of Cambridge, was despatched to the town at the head of the 17th Lancers to help keep order. More than 30,000 staves were prepared and sharpened to use against the mob. On 17th August 1842, an immense crowd set out from Bradford towards Leeds. By the time they had reached Calverley, their numbers had swelled to more than 6,000. At every mill they passed, they demanded that the plugs be pulled and, if their demands were not met, they were prepared to use violence. At Bramley, some of the protestors were so racked with hunger that they looted butchers’ shops and ate the meat raw. After stopping all the mills in Bramley, the protest rolled on to Pudsey. By now, several thousand more had joined the throng, which had grown to more than 10,000-strong. The manager of Bank’s Mill in Pudsey sought to defy the plug-pullers and violence soon erupted, with the protestors intent on destroying the mill. A detachment of the 17th Lancers was sent to read the Riot Act and put down the protest. Defiantly, the crowd surged towards the mounted soldiers who, numbering only 14, turned their horses and fled back to Leeds. Once Bank’s Mill, like all the others, had been stopped, the crowd began to disperse. Meanwhile, in Leeds, there were several clashes between protestors, the Lancers and the Leeds City Police Force. Thirty-eight men were arrested and subsequently charged.
Despite popular support for Chartism, it was not until 1867 that the right to vote was extended to working-class (male) heads of households, and it would be another half century before all men obtained suffrage. Leeds was also central to the women’s suffrage movement and I have written about that here. Leeds was granted a third MP in 1867 and was subdivided into five constituencies in 1885. In the late 19th and early 20th centuries, Leeds also played a pivotal role in the creation of the modern Labour Party. In 1884, Tom Maguire established a Leeds branch of the Social Democratic Federation, which later disbanded and became part of the Socialist League. Maguire organised the Leeds gasworkers’ strike in 1890 before joining the Independent Labour Party in 1893. Maguire’s contemporary, Isabella Ford, founded the Leeds Independent Labour Party and the Leeds Tailoresses’ Union. In 1903, she became the first woman to address the annual conference of the Labour Representation Committee (which later became the Labour Party).

Above: Isabella Ford

Some of Leeds’s notable MPs include the following:

Liberal Prime Minister William Ewart Gladstone was elected as one of Leeds’s MPs in the 1880 general election but had also campaigned and been elected in Midlothian. He took his seat there instead.

Above: cartoon depicting Gladstone’s twin campaigns. Picture credit: Leeds Library and Information Service.

Conservative Gerald Balfour, brother of former Prime Minister Arthur Balfour, represented Leeds Central from 1885 to 1906. He was interested in the paranormal and was “a firm believer in the reality of communication between the living and the dead”. He served as President of the Society for Psychical Research after losing his seat.

Herbert Gladstone, son of Prime Minister Gladstone, was re-elected as MP for Leeds West in 1906 with an increased majority as a result of a tactical voting pact which saw the Labour Party candidate stand aside.
At the same election, James O’Grady became Leeds’s first Labour MP. Prominent local solicitor Arthur Willey was elected as Conservative MP for Leeds Central in 1922 after tipping his racehorse ‘Leeds United’ to win at Leicester. Its subsequent win netted Leeds voters combined winnings of £190,000. Willey’s tragic death (about which I have written here) resulted in a by-election in which fellow Conservative Sir Charles Wilson was elected in his place. Wilson was a larger-than-life character who reputedly once wrestled naked with the Mayor of Paris. He wanted Leeds’s territory to extend from the Pennines to the North Sea. During his leadership of Leeds City Council, various suburbs were brought within expanded Leeds boundaries, including Cookridge, Alwoodley, Roundhay and Seacroft. He once declared that “I am Leeds” and wanted “to make Leeds the hub of the universe”.

Above: cartoon depicting Sir Charles Wilson

Leeds West Conservative MP (1931-45) Vyvyan Adams was one of only two Conservative MPs opposed to Chamberlain’s Munich Agreement with Adolf Hitler. When the Nazi zeppelin Hindenburg flew over Leeds in 1936, Adams raised concerns in the House of Commons. He served in the Second World War with the Duke of Cornwall’s Light Infantry. Labour’s landslide election victory in 1945 saw Leeds return its first female MP. Alice Bacon served Leeds North East and then Leeds South East from 1945 to 1970. Rachel Reeves (current MP for Leeds West) is so far the only other woman to have represented Leeds in Parliament. Sir Donald Kaberry (Conservative, Leeds North West 1950-83) commanded a battery at Dunkirk and was injured when the IRA bombed the Carlton Club in 1990. He died the following year, aged 83. Denis Healey served as Labour MP for Leeds South East and then Leeds East from 1952 to 1992.
Healey, famed for his bushy eyebrows and supposed catchphrase ‘silly billy’ (attributed to him by impressionist Mike Yarwood), once famously likened being questioned in the House of Commons by Geoffrey Howe to being “savaged by a dead sheep”. Labour leader and Leeds South MP Hugh Gaitskell was widely expected to become Prime Minister at the 1959 election but the Conservatives under Harold Macmillan ran a highly effective campaign and increased their majority. Gaitskell died prematurely aged 56 following a visit to the Soviet Union. Conspiracy theorists claimed that he had been the victim of a KGB plot to install Harold Wilson as Labour leader and, subsequently, Prime Minister. These claims have never been substantiated. Roman Catholic John Battle (Labour MP for Leeds West from 1987-2010) was made a Knight Commander with Star of the Order of St Gregory the Great by Pope Benedict XVI for his contributions to the church and to Parliament. Hilary Benn (Labour MP for Leeds Central since 1999) has the dubious distinction of being elected on a turnout of just 19.6% which, at the time, was the lowest ever turnout at a by-election.

My book ‘On This Day In Leeds’ is available to purchase here
By John P. Walsh

A closed-down, weather-beaten replica of the very first McDonald’s franchise restaurant, opened by Ray Kroc (1902-1984) on April 15, 1955, stands on its original site in Des Plaines, Illinois, and is slated to be demolished by McDonald’s Corporation, with its land donated or possibly sold. It was not long ago that McDonald’s touted that approximately one in every eight American workers had been employed by the company (Source: McDonald’s estimate in 1996), and even today McDonald’s hires around 1 million workers in the U.S. every year. By 1961 there were 230 McDonald’s franchises in the United States. In 2017 there were 37,241 McDonald’s restaurants worldwide. Historians and historic preservationists are not the only ones to decry the imminent demolition of the first McDonald’s restaurant in Des Plaines, Illinois, just west of Chicago; others are struck by its direct significance to U.S. labor history, the American restaurant industry and American automotive culture in the post-World War II era. Further, McDonald’s restaurants today reach into 121 other countries around the world, influencing and being influenced by global cuisine. That all of this cultural and business import was born on a now-threatened patch of land on Lee Street in Des Plaines, Illinois, is impressive. It appears that if and when McDonald’s follows through on its November 2017 decision to raze the building and give up the site, this originally-designed McDonald’s restaurant on Ray Kroc’s original site in Des Plaines will be forever lost. The story of how the planned demolition of this unique piece of Americana came to be began 35 years ago. It was on March 3, 1984 that, after 29 years of continual operation, the original franchise restaurant on the original site was permanently closed and demolished. Founder and former McDonald’s Corporation chairman Ray Kroc had died less than six weeks before, in January 1984, at 81 years old in San Diego, California.
The McDonald’s restaurant brand opened its first burger bar, called McDonald’s Bar-B-Q, in California in 1940 – and, by 1953, brothers Maurice and Richard McDonald had started a small franchise business in Phoenix, Arizona and Downey, California. Today’s nationwide and global franchise empire that serves 75 burgers every second (Source: McDonald’s Operations and Training Manual) began when Oak Park, Illinois-born Ray Kroc, a paper-cup-turned-milkshake-machine salesman, convinced the McDonald brothers to let him franchise their business nationwide. Kroc offered to manage the franchises in the U.S., excepting the brothers’ first franchises in Arizona and California, and the pair were to receive a tiny percentage of gross sales nationwide in return. Kroc’s first walk-up franchise McDonald’s restaurant at the “Five Corners” intersection in Des Plaines, Illinois, served an assembly-line format menu of hamburgers, cheeseburgers, french fries and a selection of drinks. In 1955, he founded McDonald’s System, Inc., a predecessor of the McDonald’s Corporation, and six years later bought the exclusive rights to the McDonald’s name and operating system. By 1961, Ray Kroc’s vision had clearly paid off for the now 59-year-old former paper cup salesman. That same year, Kroc bought out the McDonald brothers for $2.7 million and launched his strict training program, later called “Hamburger University,” in nearby Elk Grove Village, Illinois, at another of his 230 new McDonald’s restaurants. Ray Kroc’s original vision was that there should be 1,000 McDonald’s restaurants in the United States. When Kroc died in January 1984, his goal had been exceeded sixfold: there were already 6,000 McDonald’s restaurants in the U.S. and internationally by 1980.
The Des Plaines suburban location of Ray Kroc’s very first McDonald’s franchise retains its relatively humble setting even as the McDonald’s Corporation it spawned earns $27 billion in annual sales, making it the 90th-largest economy in the world (Source: SEC). Kroc, the milkshake machine salesman who convinced the McDonald brothers to let him franchise their fast-food operation nationwide, saw his original McDonald’s franchise at 400 Lee St. in Des Plaines open for business until, shortly after his death, it closed on Saturday, March 3, 1984. In 1984 there were no plans to preserve the site – its golden arches and road sign had been carted away – but a public outcry prompted McDonald’s in 1985 to return the restaurant’s restored original sign, designed by Andrew Bork and Joe Sicuro of Laco Signs of Libertyville, Illinois, and dedicate a restaurant replica that still exists today on the original site, though it is now slated for demolition. The historic red neon-lettered sign turned on for the opening of Kroc’s first store on April 15, 1955 – there is one similar to it preserved in The Henry Ford museum in Dearborn, Michigan, dating from 1960 – proclaimed “McDonald’s Hamburgers” and “We Have Sold Over 1 Million” and, intersecting with an iconic golden arch, displayed a neon-animated “Speedee” chef, the fast food chain’s original mascot. (The clown figure of Ronald McDonald first appeared in 1963.) The day after the original restaurant closed – Sunday, March 4, 1984 – a McDonald’s restaurant franchise moved across the street into a state-of-the-art new building on a site that once accommodated a Howard Johnson’s and, after that, a Ground Round. The full-service McDonald’s in Des Plaines, Illinois, today continues to operate out of that 1984 building.
It may confuse the visitor which exactly is the original site of the first McDonald’s, as the newer 1984 building, not on the first site, displays inside a high-relief metal sign that reads: “The national chain of McDonald’s was born on this spot with the opening of this restaurant.” Though undated, it is signed by Ray Kroc, which suggests it was brought over from the original restaurant when it was closed. At the replica restaurant on the original site, two metal plaques (dated April 15, 1985) properly proclaim: “Ray A. Kroc, founder of McDonald’s Corporation, opened his first McDonald’s franchise (the ninth McDonald’s drive-in in the U.S.) on this site, April 15, 1955.” A few months after the first franchise restaurant was closed and demolished in 1984, the parcel of land on which it sat – it had only ever been leased since 1955 – was purchased by McDonald’s at the same time they announced plans for the replica landmark restaurant. The original architectural plans by architect Robert Stauber from the mid-1950s were lost, so 1980s planners used architectural drawings of McDonald’s restaurants built in the late 1950s for the replica. Its kitchen included refurbished equipment brought out of storage, including the restaurant’s original six-foot grill. It also displayed one of Ray Kroc’s original multimixers like the ones he sold to Maurice and Richard McDonald, starting a fast-food partnership in the 1950s which by the mid-1960s inspired many well-known copycats of the McDonald’s model, including Burger King, Burger Chef, Arby’s, KFC, and Hardee’s. The original restaurant had been remodeled several times during its almost 30 years of operation but never had much in the way of indoor seating or a drive-through. It did feature a basement and furnace, built for Chicago’s four seasons, which the replica museum used to exhibit items.
The McDonald’s Museum was open for tours until September 2008, when the site experienced record-setting flooding from the nearby Des Plaines River. In April 2013 another record flood in Des Plaines submerged the McDonald’s Museum and produced serious speculation that the site would be moved or permanently closed. In mid-July 2017, only four years after the last significant flood, the area experienced its worst flooding on record. In November 2017 McDonald’s announced it would raze the replica restaurant structure, and by May 2018 the site had had its utilities disconnected and its golden arches, Speedee sign, and main entrance McDonald’s sign dismantled and removed. These historically valuable items were taken by McDonald’s out of public view to an undisclosed location. Once again, and this time more seriously it appears, the prospect that pleas by Des Plaines municipal authorities, historic preservationists, social media and others will persuade McDonald’s Corporation to preserve the site intact is murky at best.

Sources:
Number of franchises in U.S., 1961 – http://sterlingmulti.com/multimixer_history.html# – retrieved May 8, 2018
Number of restaurants, 2017 – https://www.statista.com/statistics/219454/mcdonalds-restaurants-worldwide/ – retrieved May 8, 2018
121 countries – https://en.wikipedia.org/wiki/List_of_countries_with_McDonald%27s_restaurants – retrieved May 8, 2018
McDonald’s System, Inc.; McDonald brothers for $2.7 million; Hamburger University; Kroc’s 1,000-restaurant vision – https://www.mcdonalds.com/us/en-us/about-us/our-history.html – retrieved May 8, 2018
6,000 McDonald’s restaurants by 1980 – https://en.wikipedia.org/wiki/History_of_McDonald%27s#1980s – retrieved May 8, 2018
Original architectural plans lost – http://www.dailyherald.com/news/20171120/mcdonalds-plans-to-tear-down-des-plaines-replica – retrieved May 6, 2018
2008 Des Plaines River flood – http://articles.chicagotribune.com/2013-04-18/news/chi-des-plaines-roads-flooded-after-storm-20130418_1_des-plaines-river-big-bend-lake-water-levels – retrieved May 8, 2018
2013 Des Plaines River flood – https://patch.com/illinois/desplaines/bp–des-plaines-river-flood-information-03bfa82b – retrieved May 8, 2018
2017 Des Plaines River flood

©John P. Walsh. All rights reserved. No part of this material may be reproduced or transmitted in any form or by any means, electronic or mechanical, which includes but is not limited to facsimile transmission, photocopying, recording, rekeying, or using any information storage or retrieval system.
Bjarne Stroustrup talks with Bill Venners about raising the level of abstraction, why programming is understanding, how "oops happens," and the difference between premature and prudent optimization. Bjarne Stroustrup is the designer and original implementer of C++. He is the author of numerous papers and several books, including The C++ Programming Language (Addison-Wesley, 1985-2000) and The Design and Evolution of C++ (Addison-Wesley, 1994). He took an active role in the creation of the ANSI/ISO standard for C++ and continues to work on the maintenance and revision of that standard. He currently holds the College of Engineering Chair in Computer Science at Texas A&M University. On September 22, 2003, Bill Venners met with Bjarne Stroustrup at the JAOO conference in Aarhus, Denmark. In this interview, which is being published in multiple installments on Artima.com, Stroustrup gives insights into C++ best practice. Bill Venners: I originally learned C++ from Borland's "World of C++" video. At the beginning of that video, you have a brief cameo appearance in which you state that what you were trying to do in C++ is raise the level of abstraction for programming. Bjarne Stroustrup: That's right. Bill Venners: What does raising the level of abstraction mean, and why is a high level of abstraction good? Bjarne Stroustrup: A high level of abstraction is good, not just in C++, but in general. We want to deal with problems at the level we are thinking about those problems. When we do that, we have no gap between the way we understand problems and the way we implement their solutions. We can understand the next guy's code. We don't have to be the compiler. Abstraction is a mechanism by which we understand things. Expressing a solution in terms of math, for instance, means we really did understand the problem. We didn't just hack a bunch of loops to try out special cases.
There is always the temptation to provide just the solution to a particular problem. However, unless we try to generalize and see the problem as an example of a general class of problems, we may miss important parts of the solution to our particular problems and fail to find concepts and general solutions that could help us in the future. If somebody has a theory, such as a theory for matrix manipulation, you can just work at the level of those concepts and your code will become shorter, clearer, and more likely to be correct. There's less code to write, and it's easier to maintain. I believe raising the level of abstraction is fundamental in all practical intellectual endeavors. I don't consider that a controversial statement, but people sometimes consider it controversial because they think code at a higher level of abstraction is necessarily less efficient. For example, I got an email two days ago from somebody who had heard me give a talk in which I had been arguing for using a matrix library with proper linear algebra support. He said, "How much does using the matrix library cost more than using arrays directly, because I'm not sure I can afford it." To his great surprise, my answer was, "If you want the efficiency I pointed to, you cannot use the arrays directly." The only code faster than the fastest code is no code. By abstracting to matrix manipulation operations, you give the compiler enough type information to enable it to eliminate many operations. If you were writing the code at the level of arrays, you would not eliminate those operations unless you were smarter than just about everybody. So you'd not only have to write ten times as much code if you used arrays instead of the matrix library, but you'd also have to accept a program that runs more slowly. By operating at the level where we can understand things, sometimes you can also operate at the level where we can analyze the code (we being compilers in the second case) and get better code.
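The matrix-library point above can be sketched in C++. This is a minimal illustration rather than the library Stroustrup had in mind; the Matrix type and its operator* are invented for the example. The key idea is that the call site expresses linear algebra (`y = a * x`) while the loop structure lives in one audited place:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative matrix type: row-major storage, indexed via operator().
struct Matrix {
    std::size_t rows, cols;
    std::vector<double> data;  // rows * cols elements, row-major
    double operator()(std::size_t r, std::size_t c) const {
        return data[r * cols + c];
    }
};

// y = a * x expressed once, at the level of linear algebra.
std::vector<double> operator*(const Matrix& a, const std::vector<double>& x) {
    assert(x.size() == a.cols);
    std::vector<double> y(a.rows, 0.0);
    for (std::size_t r = 0; r < a.rows; ++r)
        for (std::size_t c = 0; c < a.cols; ++c)
            y[r] += a(r, c) * x[c];
    return y;
}
```

Every caller now writes `a * x` instead of re-hacking index loops, and the type information (a Matrix times a vector) is exactly what a compiler or a richer library can exploit.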
My two favorite examples of this phenomenon are matrix times vector operations, where C++ on a good day can beat Fortran, and simple sorts, where C++ on a good day can beat C. The reason in both cases is you've expressed the program so directly, so cleanly, that the type system can help in generating better code, and you can't do that unless you have a suitable level of abstraction. You get this beautiful case where your code gets clearer, shorter, and faster. It doesn't happen all the time, of course, but it is so beautiful when it does. Bill Venners: In the static versus dynamic typing debate, the proponents of strong typing often claim that although a dynamically typed language can help you whip up a prototype very quickly, to build a robust system you need a statically typed language. By contrast, the main message about static typing that I've gotten from you in your talks and writings has been that static typing can help an optimizer work more effectively. In your view, what are the benefits of static typing, both in C++ and in general? Bjarne Stroustrup: There are a couple of benefits. First, I think you can understand things better in a statically typed program. If we can say there are certain operations you can do on an integer, and this is an integer, then we can know exactly what's going on. Bill Venners: When you say we know what's going on, do you mean programmers or compilers? Bjarne Stroustrup: Programmers. I do tend to anthropomorphize, though. Bill Venners: Anthropomorphize programmers? Bjarne Stroustrup: Anthropomorphize compilers. I tend to do that partly because it's tempting, and partly because I've written compilers. So as programmers, I feel we can better understand what goes on with a statically typed language. In a dynamically typed language, you do an operation and basically hope the object is of the type where the operation makes some sense, otherwise you have to deal with the problem at runtime.
Now, that may be a very good way to find out if your program works if you are sitting at a terminal debugging your code. There are nice quick response times, and if you do an operation that doesn't work, you find yourself in the debugger. That's fine. If you can find all the bugs, that's fine when it's just the programmer working, but for a lot of real programs, you can't find all the bugs that way. If bugs show up when no programmer is present, then you have a problem. I've done a lot of work with programs that should run in places like telephone switches. In such environments, it's very important that unexpected things don't happen. The same is true in most embedded systems. In these environments, there's nobody who can understand what to do if a bug sends them into a debugger. With static typing, I find it easier to write the code. I find it easier to understand the code. I find it easier to understand other people's code, because the things they tried to say are expressed in something with a well-defined semantics in the language. For example, if I specify my function takes an argument Temperature_reading, then a user does not have to look at my code to determine what kind of object I need; looking at the interface will do. I don't need to check if the user gave me the wrong kind of object, because the compiler will reject any argument that is not a Temperature_reading. I can directly use my argument Temperature_reading without applying any kind of cast. I also find that developing those statically typed interfaces is a good exercise. It forces me to think about what is essential, rather than just letting anything remotely plausible through as arguments and return values, hoping that the caller and the callee will agree and that both will write the necessary runtime checks. To quote Kristen Nygaard, programming is understanding. The meaning is: if you don't understand something, you can't code it, and you gain understanding by trying to code it.
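The Temperature_reading example above might look like the following minimal sketch. The struct's layout and the is_freezing function are illustrative assumptions, not code from the interview; the point is only that a distinct type in the signature documents the interface and lets the compiler reject wrong arguments:

```cpp
#include <cassert>

// Illustrative: a distinct type instead of a bare double. A caller cannot
// accidentally pass a pressure or a plain number; the compiler rejects it.
struct Temperature_reading {
    double celsius;
};

// The signature alone tells the user what kind of object is needed;
// no runtime type check and no cast is required inside.
bool is_freezing(Temperature_reading t) {
    return t.celsius <= 0.0;
}
```

With a bare `double` parameter, `is_freezing(42.0)` would compile but say nothing about units or meaning; with the wrapper type, the interface carries that information itself.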
That quote is the foreword vignette in my third edition of The C++ Programming Language. That is pretty fundamental, and I think it's much easier to read a piece of code where you know you have a vector of integers rather than a pointer to an object. Sure, you can ask whether the object is a vector, and if so you can ask if it holds integers. Or perhaps it holds some integers, some strings, and some shapes. If you want such containers you can build them, but I think you should prefer homogeneous vectors that hold a specific type as opposed to a generic collection of generic objects. Why? It's really a variant of the argument for preferring statically checked interfaces. If I have a vector<Apple>, then I know that it holds Apples. I don't have to cast an Object to an Apple to use it, and I don't have to fear that you have treated my vector<Apple> as a vector<Fruit> and snuck a Pear into it, or treated it as a vector<Object> and stuck a HydraulicPumpInterface in there. I thought that was pretty well understood by now. Even Java and C# are about to provide generic mechanisms to support that. On the other hand, you can't build a system that is completely statically typed, because you would have to deploy the whole system compiled as one unit that never changes. The benefits of more dynamic techniques like virtual functions are that you can connect to something you don't quite know enough about to do complete static type checking. Then, you can check what interfaces it has using whatever initial interfaces you know. You can ask an object a few questions and then start using it based on the answers. The question is along the lines of, "Are you something that obeys the Shape interface?" If you get yes, you start applying Shape operations to it. If you get no, you say, "Oops," and you deal with it. The C++ mechanism for that is dynamic_cast. That contrasts with dynamically typed languages, where you tend to just start applying the operations. If it doesn't work, you say, "Oops."
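The "are you something that obeys the Shape interface?" question above is a dynamic_cast. A hedged sketch follows; the Entity, Square, and Sound types are invented for illustration, but the pattern of asking once and dealing with "no" at that point is the one described:

```cpp
#include <cassert>

// Illustrative hierarchy: Entity is whatever base type the system hands us.
struct Entity { virtual ~Entity() = default; };

struct Shape : Entity { virtual double area() const = 0; };

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

struct Sound : Entity {};  // an Entity that does not obey Shape

double area_or_zero(const Entity& e) {
    // Ask the question once, up front: "are you a Shape?"
    if (const Shape* s = dynamic_cast<const Shape*>(&e))
        return s->area();  // yes: safe to apply Shape operations
    return 0.0;            // no: the "oops" is handled here, not mid-computation
}
```

The cast either yields a usable Shape pointer or a null pointer at the point where the object first becomes known, rather than a failure deep inside a later computation.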
Often, that oops happens in the middle of a computation as opposed to the point when the object becomes known to you. It's harder to deal with a later oops. Also, the benefits to the compiler in terms of optimization can be huge. The difference between a dynamically and a statically typed and resolved operation can easily be a factor of 50. When I talk about efficiencies, I like to talk about factors, because that's where you can really see a difference. Bill Venners: Factors? Bjarne Stroustrup: When you get to percents, 10%, 50%, and such, you start arguing whether efficiency matters, whether next year's machine will be the right solution rather than optimization. But in terms of dynamic versus static, we're talking factors: times 3, times 5, times 10, times 50. I think a fair bit about real-time problems that have to be done on big computers, where a factor of 10 or even a factor of 2 is the difference between success and failure. Bill Venners: You're not just talking about dynamic versus static method invocation. You're talking about optimization, right? The optimizer has more information and can do a better job. Bjarne Stroustrup: Yes. Bill Venners: How does that work? How does an optimizer use type information to do a better job of optimizing? Bjarne Stroustrup: Let's take a very simple case. C++ has both statically and dynamically bound member functions. If you do a virtual function call, it's an indirect function call. If it's statically bound, it's a perfectly ordinary function call. An indirect function call is probably 25% more expensive these days. That's not such a big deal. But if it's a really small function that does something like a less-than operation on an integer, the relative cost of a function call is huge, because there's more code to be executed. You have to do the function preamble. You have to do the operation. You have to do the postamble, if there is such a thing.
In the process of doing all that, you have to get more instructions loaded into the machine. You break the pipelines, especially if it's an indirect function call. So you get one of these factors of 10 to 30 just for doing a less-than. If such a difference occurs in a critical inner loop, the difference becomes significant. That was how the C++ sort beat the C sort. The C sort passed a function to be called indirectly. The C++ version passed a function object, where you had a statically bound inline function that degenerated into a less-than. Bill Venners: C++ culture is concerned with efficiency. Is there a lot of premature optimization going on? And how do we know the difference between early optimization that's premature versus early optimization that's prudent? Bjarne Stroustrup: Some parts of the C++ community are concerned with efficiency. Some of them, I think, are concerned for good reasons, others just because they don't know any better. They have a fear of inefficiency that's not quite appropriate. But certainly there's an efficiency concern, and I think there are two ways of looking at it. The way I would look at efficiency is this: I would like to know that my abstractions could map in a reasonable way to the machine, and I would like to have abstractions that I can understand. If I want to do linear algebra, I want a matrix class. If I want to do graphics, I want a graphics class. If I want to do string manipulation, I want a string class. The first thing I do is raise the level of abstraction to a suitable level. I'm using these fairly simple examples, because they're the most common and the easiest to talk about. The next thing I look out for is not to have an N^2 or N^3 algorithm where I don't need it. I don't go to the web for information if I have the information locally. I don't go to the disk if I have a cached version in memory. I've seen people using modeling tools that ended up writing to the disk twice to write two fields into a record.
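The C-sort-versus-C++-sort contrast discussed earlier can be sketched as follows. The helper names are illustrative; the substance is that qsort receives a function pointer and must call the comparison indirectly, while std::sort is instantiated with a function object whose less-than can be inlined into the generated loop:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <vector>

// C style: comparison passed as a function pointer, called indirectly
// for every element comparison.
int less_int(const void* a, const void* b) {
    return *static_cast<const int*>(a) - *static_cast<const int*>(b);
}

// C++ style: a function object whose operator() is statically bound
// and trivially inlined, degenerating into a bare less-than.
struct Less {
    bool operator()(int a, int b) const { return a < b; }
};

void sort_c(std::vector<int>& v) {
    std::qsort(v.data(), v.size(), sizeof(int), less_int);
}

void sort_cpp(std::vector<int>& v) {
    std::sort(v.begin(), v.end(), Less{});
}
```

Both produce the same sorted result; the difference is that in `sort_cpp` the compiler sees the comparison's body at the call site, so there is no indirect call in the inner loop.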
Avoid such wasteful algorithms. I think this is prudent up-front design-level optimization, which is the kind of thing you should be concerned with. Now, once you have a reasonably modeled world, with a reasonably high level of abstraction, you start optimizing, and that sort of late optimization is reasonable. What I don't like is when people, out of fear of high level features and fear of abstraction, start using a very restricted subset of the language or avoid good libraries in favor of their own hand-crafted code. They deal with bytes where they could just as well deal with objects. They deal with arrays because they fear that a vector or a map class will be too expensive for them. Then, they end up writing more code, code that can't be understood later. That's a problem, because in any big system you'll have to analyze it later and figure out where you got it wrong. You also try to have higher abstractions so you can measure something concrete. If you use a map, you may find that it's too expensive. That's quite possible. If you have a map with a million elements, there's a good chance it could be slow. It's a red-black tree. In many cases, you can replace a map with a hashtable if you need to optimize. If you only have 100 elements, it won't make any difference. But with a million elements, it can make a big difference. Now, if you've hacked at all at the lowest level, even once, you won't really know what you have. Maybe you knew your data structure was a map, but more likely it was an ad hoc map-like data structure. Once you realize that the ad hoc data structure didn't behave correctly, how do you know which one you can replace it with? You're working at such a low level that it's hard to get ideas. And then finally, if you've written an ad hoc data structure, you may have operations scattered all over your program. That's not uncommon with a random data structure.
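The map-to-hashtable replacement described above is cheapest when the container type is named in exactly one place. A sketch, under the assumption that a C++11-era standard library is available (std::unordered_map, the standard hashtable, was standardized after this interview); the Index alias and lookup function are invented for the example:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <unordered_map>

// One alias names the concept. Swapping the red-black tree for a hashtable
// is a one-line change here, instead of a hunt through the whole program.
// using Index = std::map<std::string, int>;        // tree: ordered, O(log n)
using Index = std::unordered_map<std::string, int>; // hashtable: O(1) average

// All access goes through named operations on the concept, not ad hoc code,
// so a profiler can attribute cost to the right place.
int lookup(const Index& idx, const std::string& key) {
    auto it = idx.find(key);
    return it == idx.end() ? -1 : it->second;
}
```

Because callers only see `Index` and `lookup`, measuring and then replacing the data structure does not disturb the rest of the program.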
There's not a fixed set of operations you use to manipulate such an ad hoc structure; sometimes data is accessed directly from user code "for efficiency". In that case, your profiler won't show you where the bottleneck is, because you have scattered the code across the program. Conceptually the bottleneck belongs to something, but you didn't have the concept, or you didn't represent the concept directly. Your tools therefore cannot show you that this concept is what caused your problem. If something isn't in the code directly, no tool can tell you about that something by its proper name.

Come back Monday, February 23 for the next installment of this conversation with Bjarne Stroustrup.
A friend recently asked me what Latino literature I would recommend that would broaden her understanding of Latino experiences in the US. She enjoyed the now-classic The Brief Wondrous Life of Oscar Wao by Junot Diaz, but what should she pick up next? A few older classics (The House on Mango Street by Sandra Cisneros and Esmeralda Santiago’s books) immediately came to mind, as well as exciting work in community theatre (Luis Valdez and El Teatro Campesino) and seminal, more theoretical texts of Chicana Feminism (Gloria Anzaldúa’s Borderlands/La Frontera: The New Mestiza and the works of Cherríe Moraga). After a bit more brainstorming, the following list evolved. Also, there are many existing lists of Latino books out there that I haven’t gotten to yet, but they might be of interest. The House on Mango Street by Sandra Cisneros (1984)–the NYTimes bestseller is a slim, easy-to-read volume of poetic vignettes that tell a coming-of-age story about a Chicana girl growing up in Chicago. It’s so easy to read that it’s often assigned reading in middle and high school (which surprises me given some of the intense experiences described by the protagonist). Cisneros uses some code-switching, humor, and incredible details that make the vignettes memorable and illuminate experiences of Chicano youth as they establish their identity within U.S. society. Down These Mean Streets by Piri Thomas (1967)–in one of the earlier novel-memoirs by a Latino, like Cisneros, Thomas tells a coming-of-age story about growing up in Spanish Harlem as an Afro-Latino of Puerto Rican and Cuban descent. In contrast to The House on Mango Street, this text uses harsher language and is grittier, more explicitly considering violent and criminal themes–it was actually banned in some states and schools because of this content. Additionally, the identity struggles differ in how Thomas navigates prejudice from both a Latin-Caribbean heritage and an African-American physical appearance.
Besides this well-known book, Thomas was also a poet and is considered one of the pioneers of spoken word poetry (more about this on the PBS Independent Lens accompanying webpage). …And The Earth Did Not Devour Him by Tomas Rivera (1971)–also composed of vignettes told from multiple perspectives, this book focuses on the experiences of migrant farm workers in the 1940s and 1950s in the border region. Few in the U.S. are familiar with the history of the bracero program, which imported Mexican workers to U.S. farms mere decades after a significant repatriation (aka deportation) program during the Great Depression. This text sheds light on the miserable working and living conditions, the extreme vulnerability of such workers, and the impact of those circumstances on younger generations. I didn’t find the work as satisfying a read as some others on this list, but it is a great way to learn about Chicano social justice issues and the types of problems Cesar Chavez was working so furiously to address. How the García Girls Lost Their Accents by Julia Álvarez (1991)–like Diaz, Alvarez tells the stories of individuals with Dominican ancestry, but this novel centers on female characters. It traces the differing experiences of assimilation among four sisters while engaging a narrative structure inspired by Aristotle. For me, the most interesting aspects of the work are how it engages with Dominican history and presents the story in three parts, reverse chronologically. Contemporary Novels & Short Story Collections Drown (1996), This is How You Lose Her (2012) by Junot Diaz–these two collections of short stories blew my mind the first time I read them, not just for the Latino-specific experiences of Diaz’s Dominican subjects but also for his narrative flair. Diaz channels his protagonists perfectly, has a unique voice as an author, and is unafraid to lay out the harsh realities immigrants face.
The stories in This is How You Lose Her told in second person made me want to try writing a story in that point of view, so compelling was the strategy in the context of the stories. Very highly recommended, especially for fans of The Brief Wondrous Life of Oscar Wao. The Meaning of Consuelo by Judith Ortiz-Cofer (2004)–I picked up a copy of this to read during my trip to Puerto Rico since I had heard it was a coming-of-age story about a bookish girl living in San Juan. While set in San Juan, the text as a whole pushes beyond the boundaries of the Caribbean U.S. territory to consider ways in which the U.S. became involved in the island’s economy and society during the 1950s. The book tackles complex themes, considering mental illness through the character of Consuelo’s sister, who has schizophrenia and whom the family tries to protect, and characterizing the stigma of homosexuality through a close relative of Consuelo who is exploring his sexuality. War by Candlelight: Stories by Daniel Alarcón (2005)–when I spent a summer in Lima, Perú, I was desperately looking for literature that would help me understand the city. I eventually came across Alarcón, featured as one of The New Yorker’s “20 under 40” fiction writers. Some of the stories in this collection are set in Peru, others in Manhattan, but they all have the flavor of a Latino author, Alarcón having attended UC Berkeley and anticipated a US audience for the book. Many of the stories deal with war, as the title suggests, but the “wars” lived through by the characters are wide-ranging.
Beyond this collection, he has also published two acclaimed novels and produces a fantastic radio program named Radio Ambulante that reports on Latino and Latin American topics in Spanish and English (English programs are referred to as Ambulante: Unscripted). Vida by Patricia Engel (2010)–I picked up this collection of short stories on a whim from the remainders section of the Harvard Book Store (I should admit, the paratext of a Junot Diaz endorsement on the book cover swayed me). The stories focus on a young Colombian woman living in Miami, trying to understand her heritage and her identity in the context of living in the US. I enjoyed reading the stories, but they didn’t leave a strong impression beyond being pleasant quick reads. The Book of Unknown Americans by Cristina Henríquez (2014)–this novel tells the stories of various tenants occupying a shabby apartment building in Delaware. The residents all have Latin American ancestry but hail from distinct countries and immigrated for a variety of reasons. The work is ambitious in its attempt to represent such a plurality of voices and backgrounds, but it is not without direction; there is clear emphasis on the story of the most recent arrivals at the building: a family from Mexico who relocated to Delaware so their daughter could attend a school for students with special needs after an accident left her with brain damage. While this family resides legally in the US, they still suffer from marginalization, injustice, and the language barrier, and the parents struggle to protect their daughter in the new environment while managing family dynamics that have evolved since her injury. Down the Rabbit Hole by Juan Pablo Villalobos (2011)–the short novel (really more of a longer short story) is a great introduction to the genre of narcoliterature.
The story is told from the perspective of a young boy trying to make sense of the shady characters he comes into contact with, who are abusing corrupt systems for their own gain. While he may be an unreliable narrator and try to impress with overly complicated vocabulary, the son of a drug kingpin provides enough clues for the reader to piece together the extent of violence and crime surrounding his upbringing. The reader is left both laughing and cringing at the dark humor and grim world of the drug gangs. The Story of My Teeth by Valeria Luiselli (2015)–I recently picked up this book by the young Mexican author when I saw it on display at McNally Jackson in Manhattan and was intrigued to learn that Luiselli wrote it in collaboration with workers at a factory of Jumex (a large Mexico-based juice company). I had heard of Luiselli after reading praise from Mario Bellatin (whom I love but didn’t include on this list since he is a Latin American, not Latino, author, focusing on Latin American rather than Latino subjects). The book as a material object is a work of art, with designed page inserts and parts (particularly the latter sections) having a scrapbook vibe, presenting relevant photographs, quotations, and other references that complement the main narrative. The afterword explains the process of working with the Jumex factory employees: Luiselli shared her work in installments with workers and received input, which she incorporated into the final product. Guilty by Juan Villoro (2015)–while browsing at McNally Jackson, I noticed this small book of short stories, also by a contemporary Mexican author. I wish I could say I don’t judge books by their covers, but I was certainly drawn to this one after reading the quotation from Roberto Bolaño on the cover (another favorite author, but, again, a Latin American who does not deal mainly with Latino subjects, thus I didn’t include him on this list), celebrating Villoro’s writing.
Then standing in the store and reading the first story, about a disillusioned mariachi superstar and the projects he becomes involved with after his prime, I was immediately enchanted. I am eager to read the others but am pacing myself to savor the dark humor and mockery of US-Mexico cultural exchange and appropriation. So far, I am also impressed by the meditations on contradictions comprising Mexican identity, struggles to find authenticity, and nationalist culture as represented in Mexican arts production. Image source: northcountrypublicradio.org
German Conceptual and Multimedia Artist Summary of Hans Haacke Hans Haacke largely invented modern 'artivism' as a political strategy for conceptual artists. His work intervenes through the space of the museum or gallery to decry the influence of corporations on society and reveal the hypocrisy of liberal institutions accepting sponsorship from aggressive and conservative capitalists. This work has been immensely significant in prefiguring the modern challenge to 'artwashing', the attempted diversion from harmful business practices through philanthropic engagement with the arts. Haacke's politics extend to his artistic career, providing a principled example to artists and audiences. He still maintains partial ownership over his artworks after sale, for example, allowing him a measure of control over the extent to which his protest can be coopted by the art market. As a teacher and writer Haacke's influence is not only in the work he directly produced himself, but in the dissemination of his political strategies through later generations of artists. Haacke's fearlessness and refusal to bend in relation to institutional pressure has had an enduring legacy that persists to this day. - Haacke's work often shows a lack of respect or reverence towards institutions and convention. His curation pieces, for example, lay bare the inner workings of a gallery or museum for the public to see, questioning conventions of behavior towards art objects. He highlights simple or everyday materials (water, grass, a potted plant) as worthy of serious observation, whilst placing historical artifacts on the floor or in rough piles. His work also invites participation, asking that audiences read, absorb and act on the things it reveals. This has contributed to contemporary conversations about access and political responsibility still going on in museums and galleries today. 
- Despite his resistance to the financial and corporate structures of the art market, Haacke's work has grown in profile to the point where it is now recognized and pursued by museums as work that is highly significant in the development of political visual art practices. After the censure, denial and scandal, his work is now invited into institutions rather than kept out. - Haacke 'lives' his politics even through his interactions with the art world - a market-driven international network of capital. By not relying on the sale of artworks to support himself or his family he is able to decide when and how to exhibit and create, and he maintains an unprecedented level of control over the pieces that he does sell to collectors. This provides a model for artists who wish to critique the art world without being wholly subsumed within its inherently capitalist framework. - Formally, Haacke's work shares characteristics of Land Art and Minimalism but maintains a far sharper political edge than the archetypal examples of those practices. Drawing on highly symbolic processes and materials, his sculptures and installations highlight the same relationships in the gallery space as more conventional minimalist sculpture, but also make more direct allusions to history, politics and the world in which the sculptures are made. His work offers a challenge to the supposed detachment of minimalism or the monumentalism of Land Art, demonstrating to audiences and artists that the same techniques have potential as tools of direct political critique. Biography of Hans Haacke Hans Christoph Carl Haacke was born in Cologne in 1936, during the period of extreme social change that saw the rise of the Nazi Government in Germany. By the time he was three years old WWII had begun, and by the age of six bombs regularly fell on the street he lived on. In his own words, "I remember walking by a still smoking ruin on my way to school." 
His father was affiliated with the Social Democratic party and refused to join the Nazis, costing him his job with the city of Cologne. Such traumatic episodes led the Haacke family to move from Cologne to a small rural town in the southern district of Bad Godesberg. Important Art by Hans Haacke Condensation Cube is a transparent acrylic box containing a few inches of water. The work was first created in 1963, but has been recreated many times. Although it is tempting to compare Haacke's cube with the works by Minimalist artists like Donald Judd or Robert Morris, and with the lightheartedness of group ZERO, Condensation Cube goes beyond this as it incorporates the water cycle, animating the ready-made object. The work changes depending on the temperature in a constant cycle of evaporation, precipitation and condensation. The artist notes that "the conditions are comparable to a living organism which reacts in a flexible manner to its surroundings. The image of condensation cannot be precisely predicted. It is changing freely, bound only by statistical limits. I like this freedom." The work represents the rise of interest in biology, ecology, and cybernetics in the 1960s. Such a seemingly simple work is actually rather complex, revealing one of the most fundamental aspects of nature. As noted by architectural historian Mark Jarzombek, "by confining a natural phenomenon inside the culturally proscribed space of the art gallery or museum, Haacke invites the viewer in as an observer and participant in both natural and cultural phenomena." Another groundbreaking aspect of the work is that it was created at the same time that museums started incorporating moisture engineering. 
This new technology, which includes humidifiers, dehumidifiers and thermohygrometers, affects and is affected by the Condensation Cube, questioning the relationship between humans, nature and the institution by highlighting the lack of attention usually afforded to these natural processes, and the artificiality of the space of the institution, which operates by constraining ideas into preservable and regulated spaces. Grass Grows consists of a cone-shaped pile of soil sprinkled with grass seeds that sprout over the course of the exhibition thanks to the light flooding the space from its large windows. Audience members arrive and observe the piece at different moments of its development, challenging the notion of a piece being 'finished' or able to be seen in its entirety. Grass Grows is a work that highlights biological systems, which Haacke describes as "a grouping of elements subject to a common plan and purpose that interact so as to arrive at a joint goal." As it is constantly changing, Grass Grows is a work that occurs independently of its audience. A trivial occurrence, grass sprouting, becomes almost magical simply as a result of being displaced from the outdoors to an institutional context. Systems theory, the study of the organization of phenomena, also influenced the artist, who saw it as a way to explain life. The system which constitutes the artwork here only ceases to exist when life does. Grass Grows is significant as an incorporation of living organisms into a highly conceptual framework, and an early challenge to the idea of the gallery as a place where static objects are on display in a neutral space. The work was part of the exhibition "Earth Art" at Cornell University's Johnson Museum of Art, curated by Willoughby Sharp, which was decisive in shaping the public perception of Land Art as it included the works of Robert Smithson and Richard Long.
Important names of a newer generation of artists, such as Gordon Matta-Clark and Louise Lawler, were amongst the students who helped install the show. Haacke was not only working with plants at this time but animals too, a period that he refers to as his 'Franciscan phase' – referring to Saint Francis, known as the protector of animals. Haacke's work soon moved in a different direction, away from the grand landscapes of the other artists included and towards the more self-contained political gallery pieces that he is best known for. Shapolsky et al. Manhattan Real Estate Holdings, A Real Time Social System, as of May 1, 1971 (1971) Shapolsky et al. Manhattan Real Estate Holdings is a political work comprising photographs and photocopied documents displaying slumlord Harry Shapolsky's real estate holdings. The work includes over 140 photographs of buildings in Harlem and the Lower East Side, alongside text detailing how Shapolsky obscured his ownership through dummy corporations and companies 'owned' by family members. The piece culminated in two maps showing the extent of his property empire across New York. Remarkably, the work was entirely based on content open to the public, with the data collected by the artist from the public record. Formally, Shapolsky et al. Manhattan Real Estate Holdings is innovative and engaging in its presentation of this data. The immensity of its collection of texts, diagrams, and photographs, all equally framed and displayed side by side, resembles works by Joseph Kosuth and Hanne Darboven. At first glance the work is monumental in scale and arrangement, but it begs close reading of the information displayed. Like Minimalist works that succeed through the relation of the object to the beholder, Haacke invites and provokes a changing relationship between the reader and what is being read. The viewers move close, step back to take it all in, and crane to read individual lines of text.
Haacke used this engagement politically, aiming at an increase in political awareness and attempting to provoke social change. As stated by scholar Rosalyn Deutsche, Haacke challenged "the prevailing dogma that works of art are self-contained entities." In this way, Shapolsky et al. Manhattan Real Estate Holdings blends art with life and social justice. Some critics have argued that Shapolsky et al. Manhattan Real Estate Holdings is more investigative journalism than art, but this ambiguity is what makes the work unique and noteworthy. The work led to the cancellation of Haacke's show at the Guggenheim, as well as the dismissal of its curator. Art world rumors suggested that Shapolsky was related to one of the Guggenheim's board members, although this was never proved. Regarding the episode, museum director Thomas Messer wrote in a letter to the artist that the institution's policies "exclude active engagement towards social and political ends." In a newspaper interview Messer similarly defended himself by saying: "I'm all for exposing slumlords, but I don't believe the museum is the proper place to do it." Haacke spent the next 12 years without selling or showing his work in American museums.
Influences and Connections
Useful Resources on Hans Haacke
- Hans Haacke, by Walter Grasskamp
- Hans Haacke: Unfinished Business, edited by Brian Wallis
- Hans Haacke: October Files, by Rachel Churner
- Hans Haacke: For Real, by Hans Haacke, Benjamin Buchloh, and Rosalyn Deutsche
- "Contrarian Stays True to his Creed," by Randy Kennedy, The New York Times, October 23, 2014
- "The Art of Good Business: Hans Haacke Goes After a Koch, Readies London Plinth," by Andrew Russeth, Art News, December 9, 2014
- "Hans Haacke: In conversation with Terry Cohn," by Terry Cohn, SFAQ, June 6, 2016
- "At the MET with: Hans Haacke; Peering at a Wide World Beyond Works on a Wall," by Michael Kimmelman, December 9, 1994
- Hans Haacke: 4 Decades (Our Pick) – trailer for the documentary directed by Michael Blackwood
- Hans Haacke, Seurat's 'Les Poseuses' – Smarthistory: Beth Harris, Sal Khan and Steven Zucker discuss art and institutional critique in relation to Hans Haacke's Seurat's 'Les Poseuses' (small version), 1884-1975, from 1975
- Okwui Enwezor discusses Hans Haacke at MoMA New York: in this video, Nigerian curator Okwui Enwezor discusses Hans Haacke's work as part of a Phaidon-hosted conversation on defining contemporary art
- Hans Haacke: An Interview: a 1980s interview with Hans Haacke in which he discusses the cancellation of his Guggenheim show, its consequences and the reasons behind it
This guide tracks privacy issues with antivirus software and is periodically updated with new information. (First published on February 4, 2019, last updated on July 15, 2019.) It goes without saying that reliable antivirus software plays a crucial role in IT security. As malware continues to become more sophisticated and prolific (more than 350,000 malware samples are released every single day), home users and business owners alike need to have protection in place to stop these modern digital threats. However, antivirus products are not immune to privacy problems. While the antivirus industry is ostensibly on the side of good, many antivirus products behave in a way that infringes on users’ privacy. Whether they intercept web traffic, sell browser history data, or allow backdoor access to government agencies, many antivirus products are guilty of jeopardizing the very thing they are designed to protect: your data. Here are five ways antivirus software may interfere with your privacy. 1. Selling your data to third-party advertisers To provide you with the protection you need to keep your system safe, your antivirus software needs to know a lot about you. It keeps an eye on the programs you open to ensure you’re not accidentally executing malicious software, and it monitors your web traffic to stop you accessing dodgy websites that might try to steal your login credentials. It might even automatically take suspicious files it finds on your computer and upload them to a database for further analysis. This means your antivirus software could collect and process an awful lot of your personal data if it wanted to. With great power comes great responsibility. While some antivirus providers are quite conscientious with their users’ data and only use it when absolutely necessary, others are much less scrupulous. 
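Much of that file monitoring comes down to fingerprinting: hashing each file and looking the digest up in a database of known-bad signatures. Here is a minimal sketch in Python – the "known bad" set is purely illustrative (it contains only the SHA-256 of the bytes b"hello" for demonstration), not any vendor's real data:

```python
import hashlib

# Illustrative "known bad" set. Real products check against databases of
# millions of signatures, often on the vendor's servers. This entry is
# just the SHA-256 of the bytes b"hello", for demonstration.
KNOWN_BAD_SHA256 = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def sha256_of_file(path, chunk_size=65536):
    """Hash a file in fixed-size chunks so large files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()

def is_known_malware(path):
    """True if the file's digest appears in the signature set."""
    return sha256_of_file(path) in KNOWN_BAD_SHA256
```

Note that this same mechanism is what can hand a vendor a detailed inventory of your disk when the lookups happen on the vendor's servers rather than locally.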
Avast – Avast’s popular free Android app sends personally identifiable information such as your age, gender and other apps installed on your device to third-party advertisers. As an AVG spokesperson explained to Wired, “Many companies do this type of collection every day and do not tell their users.” From free VPN services to free antivirus, the old adage rings true: if you’re not paying for the service, you’re probably the product. 2. Decrypting encrypted web traffic Most modern antivirus products include some sort of browser protection that prevents you from accessing known phishing and malware-hosting websites. However, doing so is easier said than done because so much data is now transferred via Hypertext Transfer Protocol Secure (HTTPS). HTTPS is the protocol your web browser uses when communicating with websites. The “S” in HTTPS stands for “secure” and indicates that the data being sent over your connection is encrypted, which protects you against man-in-the-middle attacks and spoofing attempts. Today, 93 percent of all websites opened in Google Chrome are loaded over HTTPS, up from 65 percent in 2015. If you want to know whether a website uses HTTPS, simply check the URL or look for a padlock icon in the address bar. The rapid adoption of HTTPS has helped to make the web a more secure place, but it has also introduced an interesting problem for antivirus companies. Normally when you visit an HTTPS website, your browser checks the website’s SSL certificate to verify its authenticity. If everything checks out, a secure connection is established, your website loads, and you can browse away to your heart’s content, secure in the knowledge that the website is legitimate. But there’s just one problem. Because the connection is encrypted, there’s ultimately no way for antivirus software to know if the website you are trying to visit is safe or malicious. Most antivirus products use HTTPS interception to overcome this issue.
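That certificate check is easy to reproduce yourself, and it is also a practical way to tell whether something on your machine is tampering with HTTPS: if the certificate you receive was issued by your antivirus vendor rather than a public certificate authority, a local proxy is rewriting your TLS connections. A rough sketch using only Python's standard library (the vendor-name list is illustrative, and real issuer strings vary by product and version):

```python
import socket
import ssl

# Names of vendors known to ship HTTPS-scanning proxies.
# Illustrative only -- real certificate issuer strings vary.
AV_PROXY_VENDORS = {"avast", "avg", "bitdefender", "eset", "kaspersky"}

def issuer_organization(hostname, port=443, timeout=5):
    """Connect to `hostname` and return the organizationName of whoever
    issued the certificate this machine actually receives."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # `issuer` is a tuple of RDNs, e.g. ((("organizationName", "DigiCert Inc"),),)
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return None

def looks_intercepted(issuer_org):
    """Heuristic: True if the issuer organization names an antivirus proxy."""
    if not issuer_org:
        return False
    return any(vendor in issuer_org.lower() for vendor in AV_PROXY_VENDORS)
```

On a clean machine, issuer_organization("example.com") should name a public CA; seeing your antivirus vendor there instead means your encrypted traffic is being decrypted locally.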
HTTPS interception involves installing a local proxy server that creates fake SSL certificates. When you visit an HTTPS website, your connection is routed through your antivirus’ proxy server, which creates a new SSL certificate and checks the safety of the site you’re trying to access. If your antivirus software judges the website to be safe, the site loads as normal. If the website is unsafe, the proxy will display a warning in your browser. By redirecting your data through a proxy, your antivirus is decrypting the data you send on encrypted connections – data that is only meant to be visible to you and the HTTPS website. There are a few ramifications here:
- Because your antivirus is faking SSL certificates, there’s no way to be 100 percent certain that the website displayed in your browser is the real deal. In late 2017, Google Project Zero researcher Tavis Ormandy discovered a major bug in Kaspersky’s software. In order to decrypt traffic for inspection, Kaspersky was presenting its own security certificates as a trusted authority, despite the fact that the certificates were only protected with a 32-bit key and could be brute-forced within seconds. This meant that all 400 million Kaspersky users were critically vulnerable to attack until the company patched the flaw.
- Most antivirus products query the safety of a URL server-side, which means the company could potentially track your browsing habits if they wanted to.
- It increases the risk of phishing attacks and man-in-the-middle exploits.
A team of researchers even published a paper on the troubling security implications of HTTPS interception by popular antivirus companies, where they noted: As a class, interception products [antivirus solutions that intercept HTTPS] drastically reduce connection security. Most concerningly, 62% of traffic that traverses a network middlebox has reduced security and 58% of middlebox connections have severe vulnerabilities.
We investigated popular antivirus and corporate proxies, finding that nearly all reduce connection security and that many introduce vulnerabilities (e.g., fail to validate certificates). While the security community has long known that security products intercept connections, we have largely ignored the issue, believing that only a small fraction of connections are affected. However, we find that interception has become startlingly widespread and with worrying consequences. HPKP (HTTP Public Key Pinning) is a technology enabling website operators to “remember” the public keys of SSL certificates in browsers, enforcing the use of specific public keys for specific websites. This reduces the risk of MiTM attacks using rogue/unauthorized SSL certificates. But HTTPS scanning and HPKP can’t work together: if a website has HPKP enabled, support for HPKP for that site will be disabled in the browser when you access it. VPN.ac found this to be the case with ESET, Kaspersky, and Bitdefender. Tip: Avoid antivirus software that utilizes HTTPS interception/scanning, or just disable this “feature” within your antivirus. 3. Installing potentially unwanted programs on your computer Even if your antivirus doesn’t pose a direct threat to your privacy, it may come bundled with software that does. As the name suggests, potentially unwanted programs, or PUPs for short, are applications that you may not want on your computer for various reasons. While they’re technically not malicious, they usually change the user experience in some way that is undesirable, whether that’s displaying advertisements, switching your default search engine, or hogging system resources. Many free antivirus products come with PUPs such as browser toolbars, adware, and plugins that you may inadvertently allow to be installed while quickly clicking through the installation process.
For example, free versions of Avast and Comodo try to install their own Chromium-based web browsers, which you may or may not want on your computer. Meanwhile, AVG AntiVirus Free automatically installs SafePrice, a browser extension that claims to be able to help you find the best prices while shopping online. Unfortunately, it can also read and change all your data on the websites you visit. A few years back Emsisoft found that most free antivirus suites were bundled with PUPs. Here were the culprits:
- Comodo AV Free
- Avast Free
- Panda AV Free
- AdAware Free
- Avira Free
- ZoneAlarm Free Antivirus + Firewall
- AVG Free
PUPs aren’t inherently malicious, but they can seriously encroach on your privacy. Some PUPs will track your search history or browser behavior and sell the data to third parties, while others may compromise your system’s security, affect system performance, and hinder productivity. Keep unwanted applications off your computer by carefully reading installation options during the setup process and only installing the software and features that you need. 4. Cooperating with governments It’s theoretically possible that antivirus software could be leveraged to help government agencies collect information on users. Most security software has very high access privileges and can see everything that’s stored on a computer, which is necessary in order for the software to keep the system safe. It’s easy to see how this power could be used by nefarious parties to spy on individuals, businesses, and governments. Kaspersky Lab, a Russia-based cybersecurity company whose products account for about 5.5 percent of antivirus software products worldwide, was embroiled in a major privacy scandal a couple of years ago. According to the Washington Post, Kaspersky software used a tool that was primarily for protecting users’ computers, but also could be manipulated to collect information not related to malware.
Kaspersky is the only major antivirus company that routes its data through Russian Internet service providers, which are subject to Russia’s surveillance system. In September 2017, the U.S. government banned federal agencies from using Kaspersky Labs software following allegations about cooperation between Kaspersky and Russian intelligence agencies. Shortly after, the FBI began pressuring retailers in the private sector to stop selling Kaspersky products, and the British government issued a warning to government departments about the security risks of using Kaspersky software. Of course, it would be naive to think this issue is limited to Russian software. Similar concerns have been raised recently about Huawei equipment with “hidden backdoors” installed. “Antivirus is the ultimate back door,” explained Blake Darché, a former N.S.A. operator and co-founder of Area 1 Security, as quoted by The New York Times. “It provides consistent, reliable and remote access that can be used for any purpose, from launching a destructive attack to conducting espionage on thousands or even millions of users.” 5. Undermining security and giving hackers access to private data Sometimes, security software does the opposite of its desired intent by undermining your security. One such case occurred with the Royal Bank of Scotland (RBS), which was offering Thor Foresight Enterprise to its business banking customers. In March 2019, Pen Test Partners discovered an “extremely serious” security flaw with the software that left RBS customers vulnerable: Security Researcher Ken Munro told the BBC: “We were able to gain access to a victim’s computer very easily. Attackers could have had complete control of that person’s emails, internet history and bank details.” “To do this we had to intercept the user’s internet traffic but that is quite simple to do when you consider the unsecured public wi-fi out there, and it’s often all too easy to compromise home wi-fi set ups. 
“Heimdal Thor is security software that runs at a high level of privilege on a user’s machine. It’s essential that it is held to the highest possible standards. We feel they have fallen far short.” While Heimdal was quick to patch the vulnerability within a few days, the episode raises an uncomfortable point: sometimes your security software is the very thing undermining your security. Choose your antivirus software wisely In the best case scenario, antivirus companies use your data responsibly to refine their products and provide you with the best malware protection possible. In the worst case scenario, they sell your data to third-party advertisers, install annoying software on your system, and cooperate with government agencies to spy on your personal information. So, how do you sort the best from the rest?
- Pay for your antivirus software: most free antivirus products will be far more liberal with your data than premium software, as the company ultimately needs to monetize its services in some way.
- Read installation options: it’s easy to blindly click “Next” when installing new software. This can result in the installation of browser toolbars, adware, and all sorts of other PUPs, which can encroach on your privacy in various ways.
- Customize privacy settings: some antivirus software will allow you to customize privacy settings such as usage statistics, browsing behavior, and whether to upload malicious files for analysis. Adjust these settings to maximize your privacy.
- Read AV reports: some independent analysts release reports on how antivirus companies handle your data. Take the time to read these reports and reviews to get a better understanding of a company’s reputation and how it handles privacy matters.
It’s important to note that this article isn’t a rallying call to abandon all antivirus software in the name of privacy, because there are some good players out there.
Antivirus software is an essential part of modern IT security and plays a critical role in protecting your data against malware, phishing, and a plethora of other digital attacks that pose a real threat to everyday users. While some antivirus providers are invasive and should be avoided, there are still some companies that strive to protect their users’ privacy. Emsisoft, for example, has earned itself a reputation for providing reliable protection without compromising its users’ privacy. ClamAV is another privacy-friendly option that is completely open source. So do your homework, weigh up your options carefully and remember that not all antivirus solutions are created equal when it comes to respecting your privacy. Last updated on July 15, 2019.
Let us start, in the spirit of steampunk, by imagining a new and different past. One that is just a little different to that which we currently have. So welcome to the year 1867. The Victorian age is at its zenith and a new, powerful and monied middle class is looking for things to do with their cash. Towns and cities seem to be growing bigger with each passing day, and horizons are transformed as new buildings appear everywhere. One aspect of the urban landscape never changes though. Everywhere you look you will see one of the huge gasometers that have been a constant feature of the cityscape for almost 20 years now. They are filled with the hydrogen gas essential to run the fuel cells – or gas batteries, as the Victorians call them – that are so vital for the economy and for powering everyday life. In both this imagined and the real past, the gas battery was invented in 1842 by a young Welshman from the then town of Swansea, William Robert Grove. It was a revolutionary device because rather than using expensive chemicals to produce electricity like ordinary batteries, it used common gases – oxygen and hydrogen – instead. However in this timeline, unlike our own, within 20 years the Welsh man of science’s amazing invention had ushered in a new industrial and cultural revolution. Our imagined scene is the British Empire’s new electrical age. The horseless carriages that run along roads and railways are all powered by electricity from banks of gas batteries. So is the machinery in the factories and cotton mills that produce the cheap goods which are the source of Britain’s growing wealth. The demand for coal to produce the hydrogen needed to run gas batteries has transformed places such as Grove’s own south Wales, where coalfields are expanded to meet the insatiable need for more power. Middle-class homes are connected to those gasometers through networks of pipes supplying the hydrogen needed as fuel to run all kinds of handy electrical devices. 
Machines for washing clothes – and dishes – have trebled the workload of domestic servants by transforming their employers’ expectations concerning daily hygiene. There are machines for cleaning floors and furniture. Electric ovens are fast replacing the traditional kitchen range in the more fashionable houses. Gas batteries also run the magic lanterns that provide entertainment for middle-class families every evening after dinner. Of course, none of this actually happened. The true history of energy, and the culture that depends on that energy, over the past 150 years or so has been rather different.
A curious voltaic pile
The gas battery’s real history begins in October 1842, when Grove, newly appointed professor of experimental philosophy at the London Institution, penned a brief note to chemist and physicist Michael Faraday at the Royal Institution. “I have just completed a curious voltaic pile which I think you would like to see,” he wrote. The instrument was “composed of alternate tubs of oxygen and hydrogen through each of which passes platina foil so as to dip into separate vessels of water acidulated with sulphuric acid.” The effect, as Grove described it to Faraday, was startling: “with 60 of these alternations I get an unpleasant shock and decompose not only iodide of potassium but water so plainly that a continuous stream of thin bubbles ascends from each electrode”. Grove had invented a battery which turned hydrogen and oxygen into electricity and water. In 1842 Grove was busily making a name for himself in metropolitan scientific circles. He had been born in 1811 into a leading family in the commercial and public life of Swansea, and grew up in a world where the importance and utility of science was commonly understood.
The Groves’ neighbours included prominent industrialists including pottery manufacturer and botanist Lewis Weston Dillwyn and John Henry Vivian – an industrialist and politician – who were also fellows at the Royal Society. Grove studied at Brasenose College Oxford before going to London to prepare for a career in the law. While there he became a member of the Royal Institution and it is clear that from around this time he started to become an active electrical experimenter. This is when some of Grove’s earliest forays into scientific work began to appear. In 1838 he gave a lecture to the society describing a new battery he had invented: “an economical battery of Mr Grove’s invention, made of alternate plates of iron and thin wood, such as that used by hatters”. This emphasis on economy was a theme that would recur in his work on the powerful nitric acid battery that he developed a year later – and which led to his aforementioned appointment as professor, and fellowship of the Royal Society – as well as in his work on the gas battery. Grove described in a letter to Philosophical magazine how the battery “with proper arrangements liberates six cubic inches of mixed gases per minute, heats to a bright red seven inches of platinum wire 1/40th of an inch in diameter, burns with beautiful scintillations needles of a similar diameter, and affects proportionally the magnet”. This is typical of the way battery power was demonstrated. Scientists would show how it could break down water into its constituent gases, make wires glow, or work an electromagnet. Significantly, Grove also went on to say that as “it seems probable that at no very distant period voltaic electricity may become a useful means of locomotion, the arrangement of batteries so as to produce the greatest power in the smallest space becomes important”. 
Indeed, shortly after Grove announced his invention, the German-born engineer Moritz Hermann von Jacobi used a bank of Grove’s batteries to power an electromagnetic motor boat on the river Neva in Saint Petersburg. And the technology later went on to be used extensively by the American telegraph industry.
Born of necessity
It was Grove’s continuing work on making batteries more efficient and economic that led directly to the gas battery which was to be the forebear of the modern fuel cell. He wanted to find out just what happened in the process of generating electricity from chemical reactions. It showed how “gases, in combining and acquiring a liquid form, evolve sufficient force to decompose a similar liquid and cause it to acquire a gaseous form”. To Grove, this was “the most interesting effect of the battery; it exhibits such a beautiful instance of the correlation of natural forces”. The gas battery provided powerful evidence in favour of the theory Grove had developed regarding the inter-relationship of forces, which he described a few years later in his essay, On the Correlation of Physical Forces. There he argued “that the various imponderable agencies, or the affections of matter, which constitute the main objects of experimental physics, viz. heat, light, electricity, magnetism, chemical affinity, and motion, are all correlative, or have a reciprocal dependence. That neither taken abstractedly can be said to be the essential or proximate cause of the others, but that either may, as a force, produce or be convertible into the other; thus heat may mediately or immediately produce electricity, electricity may produce heat; and so of the rest.” In other words, forces were interchangeable and any one of them could be manipulated to generate the others. But what about utility and practical power? Grove clearly believed, as did many of his contemporaries – including the electro-magnet’s inventor, William Sturgeon – that the future was electrical.
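The chemistry Grove demonstrated can be put in modern terms: combining hydrogen and oxygen into water releases a fixed amount of free energy per mole, which sets the cell’s theoretical voltage via E = −ΔG/(nF). A minimal sketch using standard textbook thermodynamic values (the figures are reference data, not from the article itself):

```python
# Reversible (theoretical) voltage of a hydrogen-oxygen cell,
# E = -dG / (n * F), using standard textbook values.

F = 96485.0           # Faraday constant, coulombs per mole of electrons
delta_g = -237_130.0  # Gibbs free energy of H2 + 1/2 O2 -> H2O(liquid), J/mol
n = 2                 # electrons transferred per molecule of hydrogen

e_cell = -delta_g / (n * F)
print(f"Theoretical cell voltage: {e_cell:.2f} V")  # ~1.23 V
```

Real cells, Grove’s included, deliver less than this because of internal losses, which is one reason he stacked 60 of them in series to get his “unpleasant shock”.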
It would not be long before electromagnetic engines like the one that Jacobi had used for his boat on the Neva would replace the steam engine. It was just a matter of finding the right and most economic way of producing electricity for the purpose. As Grove put it to a meeting of the British Association for the Advancement of Science in 1866, if “instead of employing manufactured products or educts, such as zinc and acids, we could realise as electricity the whole of the chemical force which is active in the combustion of cheap and abundant raw materials … we should obtain one of the greatest practical desiderata, and have at our command a mechanical power in every respect superior in its applicability to the steam-engine. We are at present far from seeing a practical mode of replacing that granary of force, the coal-fields; but we may with confidence rely on invention being in this case, as in others, born of necessity, when the necessity arises.” He was clear that realising this particular dream was not his problem, however: “it seems an over-refined sensibility to occupy ourselves with providing means for our descendants in the tenth generation to warm their dwellings or propel their locomotives”.
A new past
Grove certainly made no attempt to turn his gas battery into an economic device, but like many Victorians he was fond of looking into the future and putting his technologies there. In many ways it was Victorians such as Grove who invented the view of the future as a different country that we are so familiar with now. Their future was going to be a country full of new technologies – and electrical technologies in particular. By the time Grove died in 1896 commentators were prophesying a future where electricity did everything. Electricity would power transport systems. Electricity would grow crops. Electricity would provide entertainment. Electricity would win wars.
It seemed almost impossible to talk about electricity at all without invoking the future it would deliver. All this brings us neatly back to the new past for Grove and the gas battery that our future technologies may deliver. If the future of new and clean electrical technology – that contemporary promoters of the fuel cell are today offering us – really happens, then the obscure story about a curious little invention by a largely forgotten Welsh man of science will become an epic piece of technological history. That future, if it happens, will change our past. It will change the ways we understand the history of Victorian technology and the ways in which the Victorians used those technologies to tell stories about their future selves. We should not forget that we still pattern our own projected futures in the same way as they did. We extrapolate bits of our contemporary technologies into the future in the same sort of way. It is interesting to speculate in that case why particular sorts of technologies make for good futures and others apparently do not. At the end of the 19th century the gas battery clearly did not look like a good piece of future making technology to many people. It does now.
SEN Information - Annual report 2019
What kinds of special educational needs are provided for at Urchfont Church of England Primary School?
Urchfont is proud to be able to provide an inclusive education for children who may have:
- Cognition and learning needs
- Communication and interaction needs
- Physical and medical needs
- Behavioural, social and emotional needs
- Sensory needs
How do we identify children and young people with SEN and their needs?
Prior to starting school, a child may have already been identified with a Special Educational Need or Disability. Where this is the case, we work closely with parents and supporting agencies to ensure transition into school is as smooth as possible and to plan so that good achievement is made by that child. In school we make regular assessments of children and record their progress against the statutory requirements of the Early Years Foundation Stage (EYFS) in reception, or against the National Curriculum for children from Year 1 to Year 6. We also closely monitor children’s emotional and behavioural wellbeing. Where assessments show a child is not operating at age-related expectations, or they are making less than expected progress, or they are finding it difficult to make friends or behave appropriately, we will use our professional judgement to ascertain whether the child may have a Special Educational Need. Once we have considered the possibility that a child may have a Special Educational Need, we will, in the first instance, approach that child’s parents or carers to discuss our concerns. In agreement with parents, we will then assess the specific issue that is preventing the child from making progress, making friends, or behaving appropriately. To make our assessments, in most cases we use a ‘toolkit’ provided by SEND services at Wiltshire Council called the ‘Wiltshire Indicators and Provision Document (WIPD)’.
Once we have made our assessments we will be able to ascertain whether we can adapt our class based provision to meet a child’s needs or whether we need to involve outside agencies. Again, parents will be kept informed and included in the decision making process. At every stage of the identification process, we will also involve the young person. Of course, if you have concerns that your child may need additional support, you should approach the school at your earliest convenience. Our current SEN policy can be downloaded from the policy section of this website. Who are our SEND team? Our Special Educational Needs Co-ordinator (SENCO) is Mrs Talbot. She has overall responsibility for leading school development in SEND and is contactable in writing or by telephone via the school at: Urchfont Church of England Primary School How do we consult with parents of children who have SEND? Where we have an initial concern it is most likely that your child’s class teacher, or SENCO will approach you either informally if the opportunity presents itself or via telephone to discuss concerns and next steps. If your child has been identified as having SEND and outside agencies are involved, you will be invited to attend a meeting at least once a year to discuss your child’s provision. We welcome contact outside of formal meetings so that we can all work together to find the best provision for children with SEND. How do we consult with children who have SEND? Class teachers will consult regularly with children who have outcomes planned for them individually, and how they feel they are progressing to meeting those outcomes. Children will also be part of any review meetings and will be asked to give their views either on paper (where a child has a difficulty that prevents them from drawing or writing a teaching assistant or teacher might work with that child to complete any written evidence) and / or in person at the review meeting. 
What arrangements are in place for assessing and reviewing children and young people’s progress towards outcomes and what opportunities are available to work with parents and young people as part of the assessment and review? Inclusivity in the classroom means that young people with SEND will receive feedback on progress made in learning at the start of every learning sequence. They will also be invited to participate in and / or submit written evidence to any formal meeting that is organised to review progress to meeting outcomes. Parent evenings are also offered in terms 1, 4 and 6 to discuss progress towards meeting outcomes. All parents are offered support on how to work with their children outside of school through termly class newsletters and other events that are held for parents on how to help their child make progress. What arrangements are in place for supporting children and young people moving between year groups in school and for moving from the primary phase to the secondary phase of education? Towards the end of each school year, class teachers meet to share information to help make transition from one year group into another as seamless as possible for all pupils. Children also experience some time with their new class teachers. For children with SEND, this provision may be increased and a new class teacher might, for example, provide a small book to a child who finds change difficult so that child knows what to expect when they move class. For Year 6 children, the SENCO will arrange to meet with transitions workers and SENCOs from receiving secondary schools and a firm plan for transition will be made. Children will meet these adults and will be involved in the transition process. Parents will also be kept informed about these meetings. How do we approach teaching children and young people with SEND? 
Perhaps the first thing to understand about our approach to teaching children with SEND is that we have the philosophy and expectation that children will reach their full potential – in other words, SEND is not an excuse for not doing well at school. However, because a child has SEND we realise that we have to change our provision so that they can access learning in order to meet their full potential. We do this in a number of ways, including:
- teachers adapting planning so that individuals have specific learning outcomes;
- withdrawing children from class for short periods of time so that any gaps in learning can be closed;
- providing extra adult support in class so that children are focussed on accessing the curriculum;
- meeting regularly in staff teams to discuss provision and whether it needs to be adapted;
- liaising with outside agencies such as Central SEN Services and Behaviour Support to receive the best advice on how to help children learn; and
- adapting buildings and furniture if necessary so that children are not restricted from using the school fully.
It should be remembered that we do not offer a ‘one size fits all’ solution to children with SEND – this is because every child and every need is individual. We will endeavour to always involve parents and children in developing the best provision so that children have the best possible school experience. Children joining our school who currently have a statement (or, over time, a ‘My Plan’) through which additional funding is supplied by the Wiltshire Local Authority will be given priority in admissions if our school is the first choice of parents.
What adaptations are made to the curriculum and learning environment for children with SEND?
In our curriculum, and depending upon the need of the child, we make adaptations so that children can access learning.
These include:
- providing enlarged print for texts;
- breaking curriculum content down into small parts;
- providing visual cues and timetables so children are able to be independent in their learning; and
- providing children with resources that allow independent access to curriculum content, such as acetate overlays for children who have been diagnosed with dyslexia.
It may also be necessary to make physical adaptations to the school building to allow children with SEND to access learning. Adaptations could include:
- specialist furniture;
- installation of induction loops for children with hearing difficulties;
- seating arranged so that there is a line of sight to important resources and the class teacher;
- access to specialist IT equipment;
- modifications to toilet facilities;
- widening of entry and exit points; and
- installation of ramps and removal of stairs to allow wheelchair access.
How do we ensure that our staff are trained to support children and young people with SEND?
The school SENCO attends regular updates on SEND provision and disseminates this to colleagues. We also send other staff on relevant continuing professional development courses so that they are equipped to teach children with SEND.
How do we evaluate the effectiveness of the provision we make for children and young people with SEND?
Class teachers and other adults responsible for the provision of children with SEND meet regularly to discuss progress against planned outcomes. We have a management structure that is focussed on assessment and analysing information about all children in the school, and staff have delegated responsibility for ensuring that children with SEND are making at least expected progress. The SENCO will also monitor planning and other evidence to ensure that children with SEND are receiving a full and inclusive entitlement. The SENCO evaluates the SEND policy annually to ensure that it is fit for purpose.
How do we ensure that young people with SEND are enabled to engage in activities available with children and young people in the school who do not have SEND? At the classroom level, all children are planned for so that they can access the curriculum regardless of their need or the subject being taught. We are committed to adapting physical resources, teaching styles and techniques and following advice from any professional body or recognised advisory service so that children with SEND have equality of access. What support is in place for improving emotional and social development? Staff are on hand to provide pastoral support for all children including those with SEND. We are also part of a cluster arrangement with local schools so that we have access to counselling services should they be required. We do not tolerate bullying and should we have cases reported to us we follow the school’s anti-bullying policy – a copy of which is available on this website under the policy section. Concerns and behaviour issues, including incidents of bullying, are recorded and acted upon as necessary. Above all, children with SEND are encouraged to participate fully in the life of the school. How do we involve other bodies, including health and social care bodies, local authority support services and voluntary sector organisations, in meeting children and young people’s SEN and supporting their families? Where there is an identified need and a multi-agency approach is required, including voluntary agencies, we ask families to participate in the Common Assessment Framework (CAF) process. By engaging with this process we can make referrals to relevant agencies as necessary. What happens when parents and carers are not happy with our provision? Most issues can be sorted out by speaking directly to a child’s class teacher – we pride ourselves in an open door policy and our inclusive atmosphere. 
Where speaking to the class teacher has not resolved an issue, there is a clear complaints procedure that can be downloaded from the policy section of this website. Specifically for SEND, if after the class teacher has been approached a satisfactory conclusion has not been reached, then the Headteacher should be approached. If this still does not resolve the issue, a formal complaint can be made to the Chair of Governors. We want to hear what you think about our SEND provision. If you have any questions about the new Code of Practice or anything else to do with SEND, we would also like to hear from you. Please click on the link which explains the SEND process: Decision Making Flowchart. We have included a link to a brochure which explains additional agencies that can support children and families: Directory of Members.
Employee selection is the process employers use to determine which candidates to choose for particular jobs or roles within the organization. (Some organizations select for a particular job, e.g., customer service representative, whereas others select for a role, e.g., management.) Often, employee selection connotes preemployment selection—that is, determining which external applicants to hire. However, the same term can also apply to a number of situations in which current employees are placed into an organizational role or job, including through promotions and transfers into new positions. Occasionally, the term employee selection is used broadly to refer to the process of selecting individuals to participate in initiatives such as management training programs, high-potential programs, or succession planning programs, in which the individual does not immediately assume a particular role or job but instead participates in some developmental process. Candidates may be external applicants (i.e., applicants with no current association with the hiring organization) or internal candidates (i.e., current employees seeking other positions). However, employers sometimes seek candidates from only one source. For example, in some organizations, candidates for a first-line supervisory job come only from the pool of current employees performing the position to be supervised. In other cases, the candidate pool may be limited to groups of applicants (i.e., nonemployees) because of the nature of the job. For example, employees in a large organization may not desire the lowest entry-level position. Either the individual already holds that position, or the individual perceives that position to be a step backward to be taken only in exceptional circumstances. Thus, the organization selects only from an external pool of candidates. 
Employee Selection Instruments
Most organizations have a goal of identifying the best candidate or a capable candidate and use some sort of tool to help them evaluate a candidate and make decisions about whom to select. These tools may be what industrial psychologists consider a test, an objective and standardized sample of behavior. Generally, these would include traditional standardized paper-and-pencil tests or computer-administered tests, work samples, simulations, interviews, biographical data forms, personality instruments, assessment centers, and individual evaluations. However, many organizations collect information using tools that would not normally be considered tests, because the processes or instruments are either not objective or not standardized. Examples include resume reviews, educational requirements, experience requirements, license or certification requirements, background investigations, physical requirements, assessments of past job performance, and interest inventories. Selection procedures should measure job-related knowledge, skills, abilities, and other characteristics (KSAOs). The KSAOs measured depend on the job requirements and the tasks performed by the job incumbents. Typically, selection procedures used in business settings include measures of cognitive abilities (e.g., math, reading, problem solving, reasoning), noncognitive abilities (e.g., team orientation, service orientation), personality (e.g., conscientiousness, agreeableness), skills (e.g., electrical wiring, business writing), or knowledge (e.g., accounting rules, employment laws). Selection procedures that involve assessments of education and experience are generally used as proxies to assess knowledge and skill in a particular area. For example, a college degree in accounting and 5 years of experience as an accountant may suggest that an individual has a particular level of knowledge and skill in the accounting field.
Employee Selection Objectives
When using employee selection procedures, employers have a number of objectives. Perhaps the most prevalent reason for employee selection is to ensure a capable workforce. Employers simply want to measure the job-related skills of the candidates to identify the most able or those who meet some minimum standard. In some cases, employers focus on other criteria, such as turnover, instead of or in addition to job performance. Higher levels of job performance lead in turn to organizational benefits such as higher productivity and fewer errors. When recruiting, hiring, and training costs are high, the advantages of lowering turnover are obvious. Employers may use formal, standardized selection procedures to facilitate meeting other important organizational goals in addition to the enhancement of job performance. An organization may use these selection procedures to ensure a process that treats all candidates consistently. Other organizations may use these procedures because employee selection procedures incorporating objectively scored instruments are a cost-effective method of evaluating large numbers of people when compared with more labor-intensive selection methods such as interviews, job tryouts, work simulations, and assessment centers. Employee selection in the United States is heavily influenced by the legal environment. Federal laws, guidelines, and court cases have established requirements for employee selection and made certain practices unlawful. State and local legal environments are generally similar to the federal one, although in some states or localities, the requirements and prohibitions may be extended. Typically, the definitions of protected classes (i.e., subgroups of people protected by equal employment opportunity laws, such as racial and ethnic minorities and women) are broader at the state and local levels than at the federal level.
The federal requirements for employee selection are complex; however, the key elements include the following: (a) The selection procedure must be job-related and consistent with business necessity; (b) the selection procedure used does not discriminate because of race, color, sex, religion, or national origin; (c) equally useful alternate selection procedures with lower adverse impact are not available; (d) the selection procedure should not exclude an individual with a disability unless the procedure is job-related and consistent with business necessity; and (e) adverse impact statistics must be kept. (Adverse impact is operationally defined as different selection ratios in two groups.) In addition, (f) where there is adverse impact, evidence of the validity of the selection procedure must be documented. The laws for employee selection are enforced through two primary processes: (a) the judicial system and (b) enforcement agencies such as the Equal Employment Opportunity Commission, the Office of Federal Contract Compliance Programs, and a local human rights commission. It merits noting that the legal definition of an employee selection procedure encompasses all forms of such procedures, regardless of the extent to which they are objective or standardized. Choosing the Type of Employee Selection Procedures The kind of employee selection procedure used in a particular situation depends on many factors. Perhaps the most important consideration is the kind of knowledge, skill, ability, or other characteristic (KSAO) being measured. Some instruments are better for measuring some skills than others. For example, an interview is a good way to assess a person’s oral communications skills, but it is not a particularly efficient means of determining a person’s quantitative skills. 
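The parenthetical definition of adverse impact above, different selection ratios in two groups, is commonly operationalised with the four-fifths rule of thumb from the federal Uniform Guidelines: a selection ratio below 80% of the highest group's ratio is treated as evidence of adverse impact. A minimal sketch; the applicant and hire counts are hypothetical:

```python
def selection_ratio(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(ratio_a, ratio_b):
    """Return True if the lower selection ratio is at least 80% of the
    higher one, i.e. no adverse impact under the four-fifths rule."""
    lower, higher = sorted([ratio_a, ratio_b])
    return lower / higher >= 0.8

# Hypothetical counts for two applicant groups
group1 = selection_ratio(selected=48, applicants=80)  # 0.60
group2 = selection_ratio(selected=12, applicants=40)  # 0.30

print(four_fifths_check(group1, group2))  # False: 0.30 / 0.60 = 0.5 < 0.8
```

Note that the four-fifths rule is a screening heuristic, not a legal verdict; where impact appears, the employer must then document the validity of the procedure as described above.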
Industrial and organizational psychologists often consider the known characteristics of a particular type of employee selection procedure when choosing among various types of selection procedures. The typical levels of validity and adverse impact for an instrument may affect the choice of instrument. Most organizations want to maximize the validity and minimize the adverse impact. Many organizational factors also influence the choice of selection procedure. Sometimes an organization will consider the consequences of failure on the job and design a selection process accordingly. When the repercussions of an error are high (e.g., death or bodily injury), the organization may use lengthy selection procedures that extensively measure many different KSAOs with a high level of accuracy. When the repercussions of an error are minor (e.g., wrong size soft drink in a fast food order), the organization may opt for a less comprehensive process. Some organizations consider other factors that are related to a high need for success in selection. Often, the cost of hiring and training and the time required to replace an individual who cannot perform the job at the level required influence the choice of selection instruments. Some instruments may not be workable in the context of the organization’s staffing process. A 2-day assessment center composed of work sample exercises and requiring two assessors for each candidate is not often practical when the hiring volumes are high. A test requiring the test taker to listen to an audiotape will not work if the equipment is unavailable. Some instruments may not be feasible with certain candidate groups. Candidates who are current employees may resist extensive personality assessments. Measures of past performance may not be feasible if the applicant pool contains external applicants. 
Some organizations attempt to minimize the need for reasonable accommodations under the Americans With Disabilities Act and avoid selection procedures, such as highly speeded tests, that often generate accommodation requests. Most organizations consider the costs of selection instruments in their selection process. Often, selection procedures such as assessment centers, which typically cover many job-relevant KSAOs, have a great deal of face validity (i.e., the extent to which the measure looks like it would measure job-related KSAOs), and predict job performance well with moderate adverse impact, are rejected because of their costs. Some organizations systematically consider costs and benefits when choosing selection instruments and choose instruments that provide more benefits than costs. These organizations may be willing to spend a lot on selection procedures if the value of the resulting candidate pool is commensurately higher.

Validating the Employee Selection Procedure

Ideally, an employer uses a systematic process to demonstrate that a selection procedure meets the legal and professional requirements. Employers often avoid a systematic process when using less formal selection procedures, because they believe that such procedures are not required or fear the outcomes. However, compliance with current legal requirements generally demands some demonstration of job relevance and business necessity. The validation process for a selection instrument typically involves determining job requirements, assessing the relationship of the selection process to those requirements, demonstrating that the selection process is nondiscriminatory, and documenting the results of the research. Job analysis or work analysis is the process for determining what KSAOs are required to perform the job. The purpose of a job analysis is to define which KSAOs should be measured and to define an appropriate criterion.
A variety of techniques, ranging from interviews and observations to job analysis questionnaires, can be used to determine what tasks incumbents perform and what KSAOs are necessary to perform the tasks. The extent and formality of the job analysis varies with the particular situation. When tests that are known to be effective predictors of performance in a wide range of positions, such as measures of cognitive ability, are used, the job analysis may be less detailed than in cases in which a job knowledge test is being developed and information sufficient to support the correspondence between the job content and test content is necessary. Often when a test purports to predict criteria such as turnover, the need to analyze the job and demonstrate the relevance of turnover is obviated. A validation study establishes a relationship between performance on the predictor and some relevant criterion. Validity refers to the strength of the inference that can be made about a person’s standing on the criterion from performance on the predictor. Many ways exist to establish the validity of an inference, and current professional standards encourage an accumulation of validity evidence from multiple sources and studies. Perhaps the three most common approaches are content-oriented strategies, criterion-oriented strategies, and validity generalization strategies. Content-oriented strategies involve establishing the relationship between the selection procedure and the KSAOs required in the job, and criterion-oriented approaches involve establishing a statistical relationship between scores on the predictors and a measure on some criterion, often job performance. Validity generalization strategies (e.g., synthetic validity, job component validity, transportability) usually involve inferring validity from one situation in which formal studies have been done to another situation, based on demonstration of common KSAOs or tasks. 
Demonstrating that selection procedures are nondiscriminatory is usually accomplished through a bias study. Accepted procedures involve a comparison of the slopes and intercepts of the regression lines for the protected and nonprotected classes. Often psychologists will evaluate mean group differences and adverse impact; however, it is important to note that neither of these statistics indicates bias. Government standards require documentation of research related to the job analysis, validity, and bias research. Careful industrial and organizational psychologists who want a successful implementation of an employee selection procedure will also provide detailed user's guides that explain how to use the selection procedure and interpret scores, in addition to the technical report documenting the validity studies.

Using and Interpreting the Results of the Employee Selection Procedure

Multiple ways to combine, use, and interpret employee selection procedures exist. Like the choice of a particular format for the employee selection procedure, the choice of how to use scores on employee selection procedures depends on scientific and practical considerations. Each approach has its advantages and disadvantages. One of the first questions in using the results of a selection procedure is how to combine data from multiple procedures. Two approaches are frequently used. In a multiple hurdles approach, the standard on one selection procedure must be met before the next is administered. In a slight variation, all selection procedures are given, but the standard on each procedure must be met for a candidate to be qualified. Another approach frequently used is the compensatory model, in which all components of a selection battery are administered and a standard for a total score on the battery must be achieved.
Multiple hurdles approaches have the advantage of ensuring that a candidate possesses the minimal level of the KSAOs being measured to perform the job at the level specified by the organization, whereas a compensatory model allows for a higher level of skill in one area to compensate for lower skill in another. For example, strong mental abilities may compensate for low levels of job knowledge: a candidate may not have all the job knowledge needed but may possess the ability to acquire additional knowledge quickly and efficiently. There are situations in which skills are not compensatory, and minimal levels of both are required. For example, in a customer service position, an employer may expect both problem-solving skills and interpersonal skills. Some organizations prefer multiple hurdles because this approach allows them to spend their staffing resources wisely. For example, an organization may use a short test of basic reading skills to eliminate candidates who do not read well enough to complete a lengthy job knowledge test. In other situations, such as competitive labor markets, asking candidates to return for multiple testing events is not feasible. Consequently, organizations prefer a compensatory approach, or a modified multiple hurdles approach in which all instruments are given to all candidates. Whether looking at one selection procedure or a combination of procedures, there are several ways to use the scores. In top-down hiring, the employer chooses the top scorer, then the next highest scorer, and so on, until the positions are filled. Top-down hiring maximizes performance on the criterion and frequently results in the highest levels of adverse impact, particularly when the employee selection procedure contains a significant cognitive component. Top-down selection procedures work well in batch selection, that is, when the selection procedure is administered to a large number of individuals at one time and all selections are made from that pool.
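The contrast between the multiple hurdles and compensatory models described above can be sketched in a few lines of code. The component names, cutoffs, weights, and candidate scores below are hypothetical illustrations, not values from the article.

```python
def multiple_hurdles(scores, cutoffs):
    """Qualified only if every component meets its own minimum standard."""
    return all(scores[k] >= cutoffs[k] for k in cutoffs)

def compensatory(scores, weights, total_cutoff):
    """Qualified if the weighted total meets the battery standard;
    strength in one area can offset weakness in another."""
    total = sum(weights[k] * scores[k] for k in weights)
    return total >= total_cutoff

# Hypothetical candidate: weak job knowledge, strong cognitive ability.
candidate = {"job_knowledge": 55, "cognitive": 90}
cutoffs = {"job_knowledge": 60, "cognitive": 60}
weights = {"job_knowledge": 0.5, "cognitive": 0.5}

multiple_hurdles(candidate, cutoffs)  # False: fails the job-knowledge hurdle
compensatory(candidate, weights, 70)  # True: total 72.5, ability compensates
```

The same candidate passes under one model and fails under the other, which is exactly the trade-off the text describes: hurdles guarantee minimum levels on every KSAO, while the compensatory model lets one strength offset a weakness.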
When selection is continuous, meaning the selection procedure is administered frequently, the top of the list changes often, making it difficult to provide information to candidates about their relative status on a list of qualified individuals. In addition, if the skill level of the candidates tested is low, the employer runs the risk of selecting individuals who lack sufficient skill to perform the job at the level required by the organization. Cutoff scores are often used to solve the problem of identifying who is able to perform the job at the level specified by the organization, specifying the pool of qualified people, and reducing the level of adverse impact. Using a cutoff score, the employer establishes a minimum score on the procedure that each candidate must achieve. Typically, all candidates who score above that point are considered equally qualified. Although this approach solves several problems, it reduces the effectiveness of the selection procedure by treating people with different skill levels the same. The extent to which adverse impact is affected depends on where the cutoff score is set. Both top-down selection and cutoff scores can result in individuals with very similar scores being treated differently. Sometimes banding is used to overcome this problem by grouping individuals with statistically equivalent scores. Yet the bounds of the band have to be set at some point, so the problem is rarely surmounted completely. Another approach often used by employers is the piece-of-information approach. The staffing organization provides the score on the employee selection procedure and interpretive information such as expectancy tables, and allows the hiring manager to determine how to use the selection procedure information and combine it with other information.
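Banding groups scores that are statistically indistinguishable; one common way to set the band width uses the standard error of the difference between two scores (SED = SEM x sqrt(2), where SEM = SD x sqrt(1 - reliability)). The sketch below is a simplified fixed-band version of this idea; the reliability, standard deviation, and scores are hypothetical, and operational banding procedures (e.g., sliding bands) are more elaborate.

```python
import math

def band_width(sd, reliability, z=1.96):
    """Band width based on the standard error of the difference
    between two scores: z * SEM * sqrt(2)."""
    sem = sd * math.sqrt(1 - reliability)
    return z * sem * math.sqrt(2)

def top_down_bands(scores, width):
    """Group scores within one band width of the current band's top
    score; scores in the same band are treated as equivalent."""
    ordered = sorted(scores, reverse=True)
    bands, current, top = [], [], None
    for s in ordered:
        if top is None or top - s <= width:
            current.append(s)
            top = top if top is not None else s
        else:
            bands.append(current)
            current, top = [s], s
    bands.append(current)
    return bands

w = band_width(sd=10, reliability=0.90)  # about 8.8 score points
top_down_bands([95, 92, 88, 79, 78], w)  # [[95, 92, 88], [79, 78]]
```

This illustrates the text's caveat: 88 and 79 differ by only 9 points yet land in different bands, so the boundary problem is reduced rather than eliminated.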
For example, a hiring manager might compare test information with other indicators of achievement, such as college grade point average, and make a judgment about the mental abilities of a candidate. The advantage of this approach is its recognition that no test provides perfect information about an individual. The problem with the approach is that it opens the door to inconsistent treatment across candidates.

Usefulness of an Employee Selection Procedure

The usefulness of a selection procedure can be assessed in several ways. One approach determines the extent to which the number of successful performers will increase as a result of using the employee selection procedure by considering three factors: (a) the validity of the instrument; (b) the selection ratio (i.e., the percentage of candidates to be selected); and (c) the base rate for performance (i.e., the percentage of employees whose performance is considered acceptable). Another approach is to calculate the dollar value of using the selection procedure by applying utility formulas that take into account the research and operational costs of the tests, the dollar value of better performers, the number of employees hired per year, the average tenure, and the validity of the test.
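The dollar-value approach mentioned above is usually formalized with the Brogden-Cronbach-Gleser utility model: expected gain equals the number hired times average tenure times test validity times the dollar standard deviation of performance times the mean standardized score of those hired, minus total testing costs. A minimal sketch follows; every numeric value in the example is a hypothetical illustration, not data from the article.

```python
def bcg_utility(n_hired, tenure_years, validity, sd_y,
                mean_z, cost_per_applicant, n_applicants):
    """Brogden-Cronbach-Gleser utility estimate in dollars:
    gain  = n_hired * tenure * r_xy * SD_y * mean z of hires,
    minus the cost of testing every applicant."""
    gain = n_hired * tenure_years * validity * sd_y * mean_z
    cost = cost_per_applicant * n_applicants
    return gain - cost

# Hypothetical values: 10 hires from 100 applicants, 3-year average
# tenure, validity .40, SD of performance in dollars $20,000, mean z
# of hires 1.0, $50 per test administered.
bcg_utility(10, 3, 0.40, 20_000, 1.0, 50, 100)  # roughly $235,000
```

Even with modest validity, the estimated gain dwarfs the testing cost here, which is why organizations that run this kind of analysis may be willing to spend heavily on selection procedures.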
Scientific research is steadily widening our knowledge of nutritional needs during pregnancy. One of the most recent developments in this arena is the prominence of omega-3 fatty acids in the growth of a healthy baby and in the health of the mother.

Omega-3 Fish Oil during Pregnancy: Benefits & Dosage

Omega-3s are a family of long-chain polyunsaturated fatty acids that are crucial nutrients for the health and development of the baby. Unfortunately, they are not synthesized by the human body and therefore must be obtained from diet or supplementation. However, the typical diet is greatly deficient in omega-3s. Research shows that the two most beneficial omega-3s are EPA (eicosapentaenoic acid) and DHA (docosahexaenoic acid). Although EPA and DHA naturally occur together and work together in the body, studies show that each fatty acid has unique benefits. EPA supports the heart, immune system, and inflammatory response. DHA supports the brain, eyes, and central nervous system, which is why it is especially important for pregnant and lactating women.

Why is Omega-3 vital?

Sufficient intake of omega-3 fats is vital to maintaining the balanced synthesis of the hormone-like substances called prostaglandins. Prostaglandins help control many important physiological functions, including blood pressure, blood clotting, nerve transmission, the inflammatory and allergic responses, the function of the kidneys and gastrointestinal tract, and the synthesis of other hormones. Depending on the type of fatty acids in the diet, certain kinds of prostaglandins may be synthesized in large quantities, while others may not be synthesized at all. This prostaglandin imbalance may lead to disease. The role of omega-3s in synthesizing beneficial prostaglandins may explain why they have been shown to have so many health benefits, including the prevention of heart-related diseases, improved cognitive function, and the regulation of inflammation.
High doses of omega-3s have also been used to treat and prevent mood disorders, and new studies are finding potential benefits for a wider range of conditions, including cancer, inflammatory bowel disease, and various autoimmune diseases such as lupus and rheumatoid arthritis.

The Benefits of Omega-3 Fish Oil

Omega-3s have been found to be essential for both the neurological and early visual development of the baby. However, the standard diet is severely lacking in these critical nutrients. This dietary shortfall is compounded by the fact that pregnant women become depleted of omega-3s, because the foetus uses them for its nervous system development. Omega-3s are also used after birth to make breast milk, so with each successive pregnancy, mothers become more depleted. Research has confirmed that adding EPA and DHA to the diet of pregnant women has a positive effect on the visual and cognitive development of the baby. Studies have further shown that higher intake of omega-3s may lessen the risk of allergies in infants. Omega-3 fatty acids also have positive effects on the pregnancy itself. Increased consumption of EPA and DHA has been shown to help prevent pre-term labour and delivery, lower the risk of preeclampsia, and possibly increase birth weight. Omega-3 deficiency also raises the mother's risk of depression, which may explain why postpartum mood disorders can become worse and begin earlier with subsequent pregnancies.

Which Foods Contain Omega-3 Fish Oil?

The best sources of EPA and DHA are cold-water fish such as salmon, tuna, sardines, anchovies, and herring. Many people are rightly concerned about mercury and other toxins in fish, especially during pregnancy. For this reason, purified fish oil is often the best source of EPA and DHA during pregnancy. A high-quality fish oil supplement from a trustworthy manufacturer delivers the health benefits of EPA and DHA without the risk of toxicity.
Many people think that flaxseed or flaxseed oil contains omega-3s. But flaxseed contains the shorter-chain omega-3, ALA (alpha-linolenic acid), which is distinct from the long-chain EPA and DHA. EPA and DHA are the omega-3s that our body requires for optimal health and development. While it was once believed that the human body could convert ALA to EPA and DHA, recent research shows that such conversion occurs only rarely and inefficiently. Fish oil consumed during pregnancy is a more reliable source of EPA and DHA.

High-quality fish oil is safe to take during pregnancy. Fresh fish can often contain environmental toxins like mercury that accumulate over its life span. These toxins can be almost entirely eliminated during the production and processing of fish oil, through the use of high-quality raw materials and a state-of-the-art refining process. Some varieties of fish oil are of higher quality than others. A trustworthy fish oil manufacturer should be able to provide documentation of third-party lab results confirming the purity of its fish oil, down to the parts-per-trillion level.

Consumption of Fish Oil during Pregnancy: Recommendations

- Investigate the production process. How is the fish oil manufactured, and what quality standards does the producer use? The quality standards that exist for fish oil, including the Norwegian Medicinal Standard, the European Pharmacopoeia Standard, and the voluntary U.S. standard set out in the Council for Responsible Nutrition's 2006 monograph, guarantee quality by setting maximum allowances for toxins.
- Smell. Does the fish oil have a fishy odour?
Research has shown that fish oil only smells bad when it has begun to degrade and turn rancid. A high-quality fish oil supplement won't have a fishy odour.

- Taste. Does the fish oil taste fishy? The freshest, highest-quality fish oils should not taste fishy. Avoid fish oils that have unusually strong or artificial flavours added to them, because these are most likely masking the flavour of rancid oil.

Asthma is a chronic disease that involves inflammation of the lungs. Airways swell and restrict airflow in and out of the lungs, making it difficult to breathe. The word asthma is derived from the Greek word for "panting." People with asthma pant and wheeze because they cannot get enough air into their lungs. Normally, when you breathe in something irritating, or do something that causes you to need more air, like exercise, your airways relax and open. But with asthma, the muscles in the airways tighten and the lining of the air passages swells. Asthma is the most common chronic childhood illness. About half of all cases develop before the age of 10, and many children with asthma also have other respiratory allergies. Asthma can be allergic or non-allergic. With allergic asthma, an allergic response to an inhaled irritant, such as pet dander, pollen, or dust mites, triggers an attack. The immune system jumps into action, but instead of helping, it causes swelling. This is the most common type of asthma. Non-allergic asthma does not involve the immune system. Attacks can be triggered by stress, anxiety, cold air, smoke, or a virus. Some people have symptoms only when they exercise, a condition known as exercise-induced asthma. While there is no cure for asthma, it can be controlled. Patients with moderate-to-severe asthma should use conventional medications to help control symptoms. Complementary and alternative therapies, used under your doctor's supervision, may also help, but should not replace conventional treatment.
Signs and Symptoms

Most people with asthma go for periods of time without any symptoms and then experience an asthma attack. Some people have chronic shortness of breath that tends to get worse during an attack. Asthma attacks can last from a few minutes to several days, and can become dangerous if airflow to the lungs becomes severely restricted. Primary symptoms include:

- Shortness of breath.
- Wheezing, which generally begins suddenly. It may be worse at night or during the early morning. It can be made worse by cold air, exercise, and heartburn. Wheezing is relieved by using bronchodilators, which are medicines that open the airways.
- Chest tightness.
- Cough (dry or with phlegm). In cough-variant asthma, this may be the only symptom.

If you have any of these signs, seek emergency treatment:

- Extreme difficulty breathing or breathing that stops.
- Lips and face turning bluish in colour, called cyanosis.
- Severe anxiety.
- Rapid pulse.
- Excessive sweating.
- Decreased level of consciousness, such as drowsiness or confusion.

Asthma is most likely caused by numerous factors. Genes play a vital part: you are more likely to get asthma if others in your family have it. Among those who are susceptible, exposure to environmental factors, such as allergens (substances that cause an allergic reaction) or infections, may increase the chance of developing asthma. The following factors may increase the risk of developing asthma:

- Having allergies.
- Family history of asthma or allergies.
- Reduced lung function at birth.
- Exposure to secondhand smoke (passive smoking).
- Having upper respiratory infections as a child.
- Living in a large city.
- Sex. Among children, asthma develops twice as often in boys as in girls, but after puberty it is more common in girls.
- Gastroesophageal reflux (heartburn).

There are some important asthma tests your doctor may order when diagnosing asthma. Some asthma tests, such as lung (or pulmonary) function tests (LFTs), measure lung function.
Other asthma tests can help determine whether you are allergic to specific foods, pollen, or other particles. Blood tests give a clear picture of your general health; specific tests also measure levels of immunoglobulin E (IgE), an important antibody that is released during an allergic reaction. While everyone produces IgE, people who have allergies produce larger quantities of this protective protein.

Lung Function Tests

Lung function tests (LFTs) are asthma tests that measure lung function. The two most common lung function tests used in diagnosing asthma are spirometry and methacholine challenge tests. Spirometry is a simple breathing test that assesses how much and how fast you can exhale air from your lungs. It is often used to determine the degree of airway obstruction. The methacholine challenge test may be done if your symptoms and screening spirometry do not clearly or convincingly establish a diagnosis of asthma. Your doctor will know which test is best for you. While a chest X-ray is not considered an asthma test, it can be used to make sure nothing else is causing your asthma symptoms. An X-ray is an image of the body produced by using low doses of radiation to view internal structures. X-rays are used to diagnose a wide range of conditions, from bronchitis to a fractured bone. Your doctor might perform an X-ray to see the structures inside your chest and lungs, including the heart and bones. By looking at your lungs, your doctor can see whether asthma is likely to be the cause of your symptoms.

Evaluation for Heartburn and GERD

Gastroesophageal reflux disease, generally known as GERD, is another condition that may worsen asthma. If your doctor suspects this problem, he or she may recommend specific tests to look for it.

Fish Oil in Asthma

Additional consumption of fish oil during pregnancy may considerably reduce the risk of asthma in children. Omega-3 fatty acids are one of the two major types of polyunsaturated fat.
They can be found in certain foods, such as flaxseed and fish, as well as in dietary supplements such as fish oil. The reason fish consumption may help prevent asthma attacks has to do with its effect on inflammation: fish contains fatty acids that help the body regulate inflammation. If you have asthma, your body has a chronically unstable immune response to the outside environment.
How many times have you heard somebody say, "My dog is aggressive"? Most likely, if you are a dog owner or spend time around people with dogs, you have heard somebody make this statement at some point. Perhaps you have even made it yourself to warn others about your own dog. Depending on who you talk to, the label "aggressive dog" may conjure mental images of a dangerous, snarling animal, or perhaps thoughts of legal liabilities. Today, we'll be exploring how labeling dogs as aggressive is not only harmful to the dog itself but also inaccurate, if we take a closer look into the dynamics taking place behind those "language barriers" between humans and dogs.

Aggression in Humans

What exactly is aggression? Psychology expert Kendra Cherry defines aggression as "a range of behaviors that can result in both physical and psychological harm to oneself, others or objects in the environment." This definition is quite clear and easy to understand for humans overall, but when it comes to dogs, the problem with this definition is in its interpretation. Many people interpret it differently, depending on who you ask. What behaviors in dogs are really meant to harm? Sure, we may list lunging, barking, growling, and snarling as behaviors that could potentially harm a person or other dog, but is the dog intentionally wishing to harm when he engages in such behaviors? As humans, we have complex minds and often engage in sophisticated thought processes. We plan attacks, go to war, behave out of spite, take revenge, and are even able to harm others emotionally, but what about dogs? Are our dogs really "aggressive"?

"In the end, we may rightly call much human behaviour aggressive. However, dogs are not human, and it's not fair to project human qualities onto them." ~Alexandra Semyonova

Aggression in Dogs

When it comes to dogs, things are quite different than in humans.
Dogs don’t act out of spite, they do not plot revenge, they don’t strategically plan a war or look for ways to hurt others emotionally. In dogs, “aggressive” behaviors are often adaptive, meaning that they have a survival purpose and the purpose in this case is attaining a certain level of control over their environment and its associated events. This doesn’t mean that dogs are taking every chance they can get to take control over us, “dominating” us as some television show may portray. It simply means that dogs may engage in aggressive behaviors so they can avoid certain things and attain others that make them feel safer. There is often an element of reinforcement playing a part in the background of dogs who are engaging in aggressive displays. For example, if a dog is fearful of men wearing hats, his barking and lunging keeps men with hats away and the dog soon learns that his behavior works so he’ll be likely to engage in the same behavior next time. Same goes with dogs who “hate” the mailman or a dog who growls when in possession of something. This latter dog is likely telling the person or other dog something along the terms of “I don’t trust you near my resource, now please back off!” Obtaining distance can be highly reinforcing to a dog who feels threatened by someone who risks taking his resource away. “Survival itself is the ultimate goal of adaptive behavior. In order to achieve survival, an animal must adapt and control events that impact upon its needs. Aggression is one behavioral response towards that goal.” James O’ Heare Aggression to Avoid Aggression Dogs often engage in natural behaviors that are actually meant to avoid aggression in the first place. In other words their “aggressive” displays are meant to actually avoid causing harm. The barking, growling and tooth displays are ways dogs are trying to inform other people or dogs about how they feel. 
They're a dog's plea to please listen to his feelings so he doesn't have to escalate his behavior to a potential bite. It's the canine version of a child "using his words," if you will. How many times do we tell children who resort to hair pulling or pushing, "Use your words!"? If you understand a dog's language, you may see that dogs generally try to do "everything in their power to avoid aggressive encounters," as Alexandra Semyonova points out in her book "The 100 Silliest Things People Say about Dogs." Dogs therefore tend to engage in what biologists refer to as "ritualized aggression." While barking, growling, and teeth displays are common, straightforward ritualistic displays, there are several more subtle ways dogs attempt to manifest their discomfort in a situation. Whale eyes, lip licks, head turns, and yawns are all part of a dog's extended early warning system. Too bad these subtle warning signs of increasing stress are often missed by many dog owners. If these signs aren't noticed, it doesn't mean the dog didn't send them; it's likely they simply weren't recognized by the owner, or worse, were suppressed using punishment (never punish a dog for growling! Avoid punishment-based techniques, because they do more harm than good, leading to more defensive behaviors down the road). Then dogs are blamed for suddenly "lashing out" when they instead tried really, really hard to communicate with us, but we didn't give them a chance. Talk about language barriers!

"Hard stares, growling, snarling, snapping and biting without maiming force are the 'legal' conflict resolution behaviors in dog society." ~Jean Donaldson

The Problems With Labels

What happens when dogs are labeled as aggressive? This "umbrella term" gives the impression that dogs are dangerous, unpredictable, and untrustworthy all of the time. Instead, most dogs who are labeled as aggressive are only acting "aggressively" in specific contexts and situations.
Dogs may therefore act "aggressively" when they feel threatened, when people or other dogs come near their bone, or when people come near their perceived properties. Just because a dog acts aggressively in a certain context doesn't make him aggressive all the time! The same goes for humans. If you get angry at a person who cuts in front of you when you are in line or tries to steal your wallet, does that mean you are "aggressive?" Certainly not!

It's human nature to over-generalize behaviors. We therefore end up with dog owners making absolute statements such as "my dog hikes his leg ALL the time" or "my dog is NEVER listening." And then comes the labeling cliché with its associated statements: "my dog is stubborn," "my dog is hyper" or "my dog is aggressive," when in reality the dog is acting this way only at certain times.

"There are very few dogs who are prone to aggression regardless of the situation. That's why it's helpful to think in terms of aggressive behaviors rather than aggressive dogs when trying to reduce your dog's tendencies to growl or bite. Usually these behaviors are related to specific events, relationships or environments." ~Dog Time

Aggression Isn't Descriptive

When we label a dog or a specific dog breed as aggressive, we perpetuate a belief that the behavior reflects the dog's essence. This can be harmful to both dog and owner because it often implies the belief that that specific dog cannot change. And every time the dog behaves in a negative manner, it's taken as evidence that the dog is bad, and thus "aggressive." We therefore miss the important fact that the dog is most likely just a dog who behaves normally most of the time, but happened to react aggressively in a particular context. Also, labeling a dog as "aggressive" gives little information about what is really happening, and it doesn't help much with arranging a plan to tackle the issue.
"Aggression, as it is used to describe a dog's behavior, is not an adjective, it's a verb." ~Sarah Hodgson

What happens, though, when we replace the term aggressive with something else? It makes us see things from a whole different perspective. So instead of saying "my dog is aggressive," using the word aggressive as an adjective, we would perhaps say "my dog acts aggressively," "uses aggression" or "behaves aggressively" when he has a bone. This description can be further broken down by removing the term aggressive altogether and describing the aggressive behavior instead: "my dog growls when he has a bone," or even better, "my dog growls when he has a bone and I come close to him." We now have a clearer picture of what the dog is doing and in what circumstance the behavior is taking place. This can be very helpful when we consult with a professional and describe the issue, and it also helps us see the behavior from a more positive perspective.

"Actions can be changed, DNA cannot. If you believe your dog IS shy, scared, soft, aggressive, etc., you will become crippled in your training of him by his personality. However, if you believe your dog is acting in a certain way, you will treat him very differently because you will believe you can change his behavior." ~Connie Cleveland

References:
- Dog Time, Understanding canine aggression, retrieved from the web on August 13th, 2016
- Alexandra Semyonova, The 100 Silliest Things People Say about Dogs, Hastings Press, 2009
- Jean Donaldson, Mine! A Practical Guide to Resource Guarding in Dogs
- James O'Heare, Aggressive Behavior in Dogs, 2014, distributed by Dogwise Publishing

Share Your Comments
Most of us have a kitchen of some sort, right? Well, you can use your kitchen to teach your kids science every day. I don't mean lecturing them, making notes and doing experiments that have to be written up. Just talk to them, constantly, about what is going on right in front of them, all the time. Sometimes maybe buy them a few props and science kits (you can find some great home science resources in this post), but much learning comes from every-day chatter. I've found that it is the quickest and best way to get all sorts of useful knowledge cemented in those little heads.

The Chemistry of Cooking Eggs and Other Cool Kitchen Science

So, arm yourself with a bit of background and a few cool tricks and let's get the science started. Here are a few ideas:

Physical and Chemical Change

Physical changes are concerned with energy and states of matter; no new substance is produced. Physical changes can be caused by forces, motion, temperature or pressure. Water may become ice, but it is still H2O even though it looks different; it has simply changed states. Physical changes can be reversed (in theory).

Chemical changes take place on the molecular level. A chemical change produces a new substance that wasn't there before. Examples of chemical changes include combustion (burning toast), cooking an egg, rusting of an iron pan and decalcifying a bone (see below). A chemical change may produce light, heat, colour change, gas, odour, or sound. Chemical changes cannot be reversed.

Let's give you a few kitchen science examples of these changes. Turn it into a game: "What sort of change is happening here, kids? Can it be reversed?" Keep it light and fun and the kids will love it and think they are really clever; that's exactly what we want.

Pop some water in a plastic cup in the freezer. It becomes ice; it has changed state from liquid to solid. This is a physical change. It can be reversed by giving those water molecules some of the energy you took away from them by cooling, i.e.
warm the ice, increase the energy, and you are back to water. This is a fully reversible physical change.

Chop some vegetables. They are certainly changed, but the change is physical; the carrot is still a carrot, it hasn't become a cucumber.

Dissolve some salt in water. It may look like the salt has disappeared and a new substance has formed, but it hasn't. Stand your salt solution in the sun so that some of the water evaporates (the heat from the sun gives the water molecules extra energy to change state from liquid to gas) and the salt crystals will be forced out of solution and reappear. They were there all the time. It's a physical change.

Now cook an egg. What has happened? Could you reverse the process? No: the protein molecules have changed structure, become denatured. It's a chemical change.

The Science of Frying an Egg

An egg is made up of the yolk and the white, the albumen. The albumen is a solution of proteins in water; proteins are made up of long chains of amino acids. On heating, the protein chains unfold and recombine in a different form. This stiffens and whitens the albumen; the process is called denaturation. When you cook any protein so that it stiffens, you are denaturing it, changing its structure. Different methods of cooking (boiling, poaching, frying) give basically the same result: the proteins are denatured through heat energy.

But why does a fried egg taste different to a boiled egg? Frying is cooking at a very high temperature, above boiling point, without the presence of water. Water in the food quickly gains enough energy to evaporate and leave the food drier and crisper. At the same time a tasty crust is formed by proteins (Maillard reactions) and sugars (caramelisation) being heated to high temperatures. This is why fried food is crispy and brown. The inside of the food can stay moist, as water is trapped and the food cooks more quickly at the high temperatures produced in frying.

Decalcifying an Egg:
The Rubber Egg Trick

Take a raw egg, in its shell, and place it in a glass jar filled with white vinegar. Watch and wait! Almost immediately bubbles will start forming on the outside of the egg shell. As more gas bubbles appear, the egg's buoyancy increases and it will float. Within 24 hours the egg shell will have completely dissolved, revealing the pale, translucent membranes that are now the only thing protecting the egg. The egg will feel soft and squidgy; you may even be able to bounce it if you do so very gently. The egg will be fractionally bigger, as some of the liquid passes through the membrane, causing it to increase in size.

No need to worry about the heavy scientific equations for little kids. It's just fun for children to see the change in the egg and the gas bubbles given off (carbon dioxide, CO2), and to understand that a chemical reaction involving an acid is taking place.

The soft egg kitchen science trick is a demonstration, but you could easily turn it into an experiment: we just have to introduce one test parameter, keeping all other parameters constant. For example, you could use two eggs, as close to identical as possible, identical glass jars and identical volumes of vinegar. You could introduce the variable factor of temperature: put one jar in the fridge, one in a warm spot. I would predict that increased temperature would make the reaction progress more quickly; we can test this prediction using an experiment. Record the temperatures and ensure they are constant. For the test to be valid, all external factors should be the same: neither jar should be covered or stirred, and both must be in the dark if the fridge is dark. Use your observations to show whether this chemical reaction proceeds more quickly with heat. Does it? This is the scientific method, the basis of all good science.

Or How About a Rubber Bone?

This is basically the same demonstration, but using a bone instead. A raw chicken thigh bone is ideal.
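For older kids (or curious parents), the reaction behind the bubbles can be written out. Eggshell is mostly calcium carbonate, and vinegar is dilute acetic acid; the calcium minerals in bone are chiefly calcium phosphate rather than carbonate, but they are attacked by the acid in a similar way:

```latex
\[
\mathrm{CaCO_3} + 2\,\mathrm{CH_3COOH} \;\rightarrow\; \mathrm{Ca(CH_3COO)_2} + \mathrm{H_2O} + \mathrm{CO_2}\uparrow
\]
```

The CO2 on the right is the gas you see bubbling off the shell, and the calcium acetate stays dissolved in the vinegar, which is why the shell (and the stiffness of the bone) disappears.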
I used to work in a pathology laboratory; this is what real scientists do to make bones soft enough to cut thin sections from, so that we can make microscope slides. The bone is decalcified by the acetic acid; that means the calcium is taken out. The bone will become progressively more soft and rubbery. Now is a good time to explain to the children that they need to eat foods with plenty of calcium or their bones won't grow strong and hard. Who needs bendy legs!

Acids and Bases in the Kitchen

No need to buy chemicals; you have plenty of acids and bases sitting on your shelves already. If you can't get your hands on some pH (litmus) paper or indicator solution, there are natural kitchen indicators of pH. Turmeric changes from yellow to red at pH 8.6; if you don't have turmeric handy, there should be enough in your curry powder. Beetroot changes from red to purple, and red cabbage from blue to red. You have strong bases in soaps and bleach, strong acids in vinegar and citrus fruits.

Fun With Yeast

You can use this experiment to demonstrate that ordinary baker's yeast is a living thing. Half fill a plastic bottle with warm water and add your yeast, a sachet or a heaped teaspoon. Now add a couple of teaspoons of sugar. The sugar is food for the yeast; all living things need some sort of food. The dry yeast will begin to become active on contact with water and a food source. Stretch a balloon a bit to make it soft and pop it over the top of the bottle. Watch and you will see that the balloon begins to inflate. The yeast, a tiny microorganism and a member of the fungus kingdom, is producing a gas, carbon dioxide (CO2). Each molecule of carbon dioxide is made of one carbon (C) atom and two oxygen (O) atoms.
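If the kids ask where the gas comes from, the overall fermentation reaction is simple enough to write down: the yeast turns each glucose (sugar) molecule into ethanol and carbon dioxide:

```latex
\[
\mathrm{C_6H_{12}O_6} \;\rightarrow\; 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2}
\]
```

In bread, the tiny amount of alcohol evaporates during baking; it's the carbon dioxide that does the work of raising the dough.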
It is this process that gives us light, bubbly bread: the yeast mixed into the flour produces little gas bubbles, the heat of the oven kills off the tiny organisms, but the gas bubbles remain, trapped in the hardened, cooked bread dough.

If you can keep this sort of patter up every day, your children will quickly turn into expert kitchen scientists and the knowledge will stand them in good stead for all their future scientific endeavours. I'm sure you can think of even more ideas; these are just some of my favourites.

Items You Need For Kitchen Science

I'm sure we all have the basics: vinegar, bleach, baking soda and glass jars (recycled, hopefully!). The other items that are very helpful for demonstrating scientific ideas in your kitchen are these. Just click through to buy and get them delivered. Most are very cheap.
- pH paper, litmus paper or indicator paper.
- a pH meter is cheap and useful in the garden too (another good place to learn science).

It can be exhausting sometimes, to focus on explaining all day, every day, but kids are naturally curious; it's how they learn. Maybe arm yourself with some great science books; you can look at them together if you don't know the answers to any of their questions. Or invest in some science toys, kits and games if they are keen and want to take their science further. You don't have to do formal experiments; just let the kids play around with whatever they have, nothing is going to explode, much. With very young kids you can also introduce some messy play and sensory activities in your kitchen; see our homemade snow made from regular household items here.

Good luck making scientists in your kitchen!
The brain evolved to give special representation to the space immediately around the body. One of the most obvious adaptive uses of that peripersonal space is self-protection. It is a safety buffer zone, and intrusions can trigger a suite of protective behaviors. Perhaps less obvious is the possible relationship between that complex protective mechanism and social signaling. Standing tall, cringing, power poses and hand shakes, even coquettish tilts of the head that expose the neck, may all relate in some manner to that safety buffer, signaling to others that one's protective mechanisms are heightened (when anxious) or reduced (when confident). Here I propose that some of our most fundamental human emotional expressions such as smiling, laughing, and crying may also have a specific evolutionary relationship to the buffer zone around the body, deriving ultimately from the reflexive actions that protect us.

The attention schema theory provides a single coherent framework for understanding three seemingly unrelated phenomena. The first is our ability to control our own attention through predictive modeling. The second is a fundamental part of social cognition, or theory of mind – our ability to reconstruct the attention of others, and to use that model of attention to help make behavioral predictions about others. The third is our claim to have a subjective consciousness – not merely information inside us, but something else in addition that is non-physical – and to believe that others have the same property. In the attention schema theory, all three phenomena stem from the same source. The brain constructs a useful internal model of attention. This article summarizes the theory and discusses one aspect of it in greater detail: how an attention schema may be useful for predicting the behavior of others.
The article outlines a hypothetical, artificial system that can make time-varying behavioral predictions about other people, and concludes that attributing some form of awareness to others is a useful computational part of the prediction engine.

As a part of social cognition, people automatically construct rich models of other people's vision. Here we show that when people judge the mechanical forces acting on an object, their judgments are biased by another person gazing at the object. The bias is consistent with an implicit perception that gaze adds a gentle force, pushing on the object. The bias was present even though the participants were not explicitly aware of it and claimed that they did not believe in an extramission view of vision (a common folk view of vision in which the eyes emit an invisible energy). A similar result was not obtained on control trials when participants saw a blindfolded face turned toward the object, or a face with open eyes turned away from the object. The findings suggest that people automatically and implicitly generate a model of other people's vision that uses the simplifying construct of beams coming out of the eyes. This implicit model of active gaze may be a hidden, yet fundamental, part of the rich process of social cognition, contributing to how we perceive visual agency. It may also help explain the extraordinary cultural persistence of the extramission myth of vision.

In the attention schema theory, awareness is an impossible, physically incoherent property that is described by a packet of information in the brain. That packet of information is an internal model and its function is to provide a continuously updated account of attention. It describes attention in a manner that is accurate enough to be useful but not so accurate or detailed as to waste time or resources. In effect, subjective awareness is a caricature of attention. One advantage of this theory of awareness is that it is buildable.
No part of it requires a metaphysical leap from chemistry to qualia. In this article we consider how to build a conscious machine as a way to introduce the attention schema theory.

Many people show a left-right bias in visual processing. We measured spatial bias in neurotypical participants using a variant of the line bisection task. In the same participants, we measured performance in a social cognition task. This theory-of-mind task measured whether each participant had a processing-speed bias toward the right of, or left of, a cartoon agent about which the participant was thinking. Crucially, the cartoon was rotated such that what was left and right with respect to the cartoon was up and down with respect to the participant. Thus, a person's own left-right bias could not align directly onto left and right with respect to the cartoon head. Performance on the two tasks was significantly correlated. People who had a natural bias toward processing their own left side of space were quicker to process how the cartoon might think about objects to the left side of its face, and likewise for a rightward bias. One possible interpretation of these results is that the act of processing one's own personal space shares some of the same underlying mechanisms as the social cognitive act of reconstructing someone else's processing of their space.

Visual attention and awareness can be experimentally separated. In a recent study (Webb et al., Cortical networks involved in visual awareness independently of visual attention, 2016a;113:13923-8), we suggested that awareness was associated with activity in a set of cortical networks that overlap the temporoparietal junction. In a comment, Morales (Measuring away an attentional confound? 2017;3:doi:10.1093/nc/nix018) suggested that we had imperfectly controlled attention, thereby jeopardizing the experimental logic.
Though we agree that attention behaves differently in the presence and absence of awareness, we argue it is still possible to roughly equate the level of attention between aware and unaware conditions, and that an imbalance in attention probably does not explain our experimental results.

The purpose of the attention schema theory is to explain how an information-processing device, the brain, arrives at the claim that it possesses a non-physical, subjective awareness, and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is offered as a possible starting point for building artificial consciousness. Given current technology, it should be possible to build a machine that contains a rich internal model of what consciousness is, attributes that property of consciousness to itself and to the people it interacts with, and uses that attribution to make predictions about human behavior. Such a machine would "believe" it is conscious and act like it is conscious, in the same sense that the human machine believes and acts.

The attention schema theory of consciousness describes how an information-processing machine can make the claim that it has a consciousness of something. In the theory, the brain is an information processor that is captive to the information constructed within it. The challenge of explaining consciousness is not, "How does the brain produce an ineffable internal experience," but rather, "How does the brain construct a quirky self description, and what is the useful cognitive role of that self model?"

The neural basis of autism spectrum disorder (ASD) is not yet understood. ASD is marked by social deficits and is strongly associated with cerebellar abnormalities.
We studied the organization and cerebellar connectivity of the temporoparietal junction (TPJ), an area that plays a crucial role in social cognition. We applied localized independent component analysis to resting-state fMRI data from autistic and neurotypical adolescents to yield an unbiased parcellation of the bilateral TPJ into 11 independent components (ICs). A comparison between neurotypical and autistic adolescents showed that the organization of the TPJ was not significantly altered in ASD. Second, we used the time courses of the TPJ ICs as spatially unbiased "seeds" for a functional connectivity analysis applied to voxels within the cerebellum. We found that the cerebellum contained a fine-grained, lateralized map of the TPJ. The connectivity of the TPJ subdivisions with cerebellar zones showed one striking difference in ASD. The right dorsal TPJ showed markedly less connectivity with the left Crus II. Disturbed cerebellar input to this key region for cognition and multimodal integration may contribute to social deficits in ASD. The findings might also suggest that the right TPJ and/or left Crus II are potential targets for noninvasive brain stimulation therapies.

Information processing in specialized, spatially distributed brain networks underlies the diversity and complexity of our cognitive and behavioral repertoire. Networks converge at a small number of hubs - highly connected regions that are central for multimodal integration and higher-order cognition. We review one major network hub of the human brain: the inferior parietal lobule and the overlapping temporoparietal junction (IPL/TPJ). The IPL is greatly expanded in humans compared to other primates and matures late in human development, consistent with its importance in higher-order functions. Evidence from neuroimaging studies suggests that the IPL/TPJ participates in a broad range of behaviors and functions, from bottom-up perception to cognitive capacities that are uniquely human.
The organization of the IPL/TPJ is challenging to study due to the complex anatomy and high inter-individual variability of this cortical region. In this review we aimed to synthesize findings from anatomical and functional studies of the IPL/TPJ that used neuroimaging at rest and during a wide range of tasks. The first half of the review describes subdivisions of the IPL/TPJ identified using cytoarchitectonics, resting-state functional connectivity analysis and structural connectivity methods. The second half of the article reviews IPL/TPJ activations and network participation in bottom-up attention, lower-order self-perception, undirected thinking, episodic memory and social cognition. The central theme of this review is to discuss how network nodes within the IPL/TPJ are organized and how they participate in human perception and cognition.

The attention schema theory offers one possible account for how we claim to have consciousness. The theory begins with attention, a mechanistic method of handling data in which some signals are enhanced at the expense of other signals and are more deeply processed. In the theory, the brain does more than just use attention. It also constructs an internal model, or representation, of attention. That internal model contains incomplete, schematic information about what attention is, what the consequences of attention are, and what its own attention is doing at any moment. This "attention schema" is used to help control attention, much like the "body schema," the brain's internal simulation of the body, is used to help control the body. Subjective awareness – consciousness – is the caricature of attention depicted by that internal model. This article summarizes the theory and discusses its relationship to the approach to consciousness that is called "illusionism."

The attention schema theory is a proposed explanation for the brain basis of conscious experience.
The theory is mechanistic, testable, and supported by at least some preliminary experiments. In the theory, subjective awareness is an internal model of attention that serves several adaptive functions. This chapter discusses the evolution of consciousness in the context of the attention schema theory, beginning with the evolution of attentional mechanisms that emerged more than half a billion years ago and extending to human consciousness and the social attribution of conscious states to others.
It merely takes a brief stroll through an art museum to realise that art changes constantly. There is a shocking contrast between medieval paintings and those produced during the Renaissance era. However, there is an even more shocking contrast between art created in the late XIXth century and art in the first decades of the XXth century. What happened?

This essay will seek to understand the changes that occurred in the art world at the beginning of the XXth century from a philosophical perspective. By doing so, the intention is to achieve a deeper knowledge of how art evolved from being primarily an experience of the senses into another of much more philosophical nature: this transformation can be observed, for instance, in the constant use of philosophical terminology in contemporary art, with topics such as existence and death being present. This connection between art and philosophy can also be seen in the fact that many philosophers were interested in modern art, such as Heidegger, and also in artists interested in philosophy, for example, Rothko.

Abstract art, first theorised by Wassily Kandinsky, was one of the most important pillars of this new metaphysical art, as it separated from the tangible world and directed its attention towards an inner reality. Therefore, this paper will seek to explain and explore his writings, mainly Concerning the Spiritual in Art and Point and Line to Plane, in order to better grasp this revolution in art that eventually changed the artistic world forever.

"Colour cannot stand alone; it cannot dispense with boundaries of some kind. A never-ending extent of red can only be seen in the mind; when the word red is heard, the colour is evoked without definite boundaries."

"It follows that each period of culture produces an art of its own." The XXth century was a time of great changes, and naturally this was reflected in art. Ideas that originated during the XIXth century were tested and became a reality.
Communications also became easier thanks to better railroad structures and the appearance of the automobile, as well as the more generalised use of the telephone and telegraph. Distances shortened. In brief, life became faster and easier. Evidently, material changes and alterations do not come without powerful ideological changes. In fact, cultural and philosophical ideas contributed to the transformation of the world. And from this transformation, new ideas emerged.

At the turn of the century, two concepts which originated in the XIXth century collided: the materialistic obsession with dominating new territories in order to acquire glory and success, inherited from imperialistic ideals; and the quest for spiritual fulfilment focused on exploring non-western ideologies. Overall, it was a quest for novelty and a desire to explore the world, for one reason or another. Art absorbed all these new concepts and, by doing so, started its own revolution.

Nowadays, looking at a contemporary piece in a gallery or museum, one often feels ridiculous and insecure, wishing there were an easy technique or system for understanding and valuing art; and one wonders if it is more absurd to praise or to criticise what one does not understand. But how did the art world reach this point? There is a process which art went through, where it completely reinvented itself. Likewise, society experienced changes that allowed it to become more tolerant towards what it did not comprehend. First, art took a step forward as nature became less important and the artist and his or her vision became more so. Then came the appearance of abstract art, the main focus of this essay, where it was decided that art could express and analyse topics without using figurative language. From this separation from nature, the path was free for art to experiment with different ways of communicating a message, for instance, conceptual art and performance art.
This essay aims to establish the importance of abstraction in art, as it opened the door to a new, more intellectual way of understanding art, and in this sense, helped transform the nature of art.

(Figure: Wassily Kandinsky, Improvisation Sintflut)

Wassily Kandinsky was one of the first to paint abstract pieces and also to theorise about them. He constitutes a true pioneer of art theory in the XXth century, and without him, art would be very different from what it is today. This is the reason why this essay is based on his work and theories. He was recognised by many of his contemporaries as one of the greatest artistic minds of his time. Diego Rivera, in San Francisco after attending an exhibition, stated: "I know of nothing more real than the painting of Kandinsky - nor anything more true and nothing more beautiful. A painting by Kandinsky gives no image of earthly life - it is life itself. He organises matter as matter was organised, otherwise the Universe would not exist. He opened a window to look inside the All. Someday, Kandinsky will be the best known and best loved of men."

The richness of his artwork and theories and the way they evolved is the result of this. Being a sensitive child, he tried to evade harsh reality by immersing himself in stories and fantasies, most of which he learned from the Russian and German tales his aunt used to read to him. These tales would mark him and would be present in many of his paintings. The influence of folk tales and traditions made a big impact on him. His love for art was present from the start, especially music. He returned to his dear Moscow in order to attend university, where he studied Law and Economics. However, his creative mind was always filled with doubts as to whether his deep love for art was merely a passion to enjoy in his free time or perhaps something else: his true calling. And suddenly, for the first time, I saw a picture.
That it was a haystack, the catalogue informed me. I found this nonrecognition painful, and thought that the painter had no right to paint so indistinctly. I had a dull feeling that the object was lacking in this picture. And I noticed with surprise and confusion that the picture not only gripped me, but impressed itself ineradicably upon my memory, always hovering quite unexpectedly before my eyes, down to the last detail. It was all unclear to me, and I was not able to draw the simple conclusions from this experience. What was, however, quite clear to me was the unsuspected power of the palette, previously concealed from me, which exceeded all my dreams. Painting took on a fairy-tale power and splendour. And, albeit unconsciously, objects were discredited as an essential element within the picture.

He said: "In Lohengrin, I saw all my colours in my mind; they stood before my eyes. Wild, almost crazy lines were sketched in front of me. It became quite clear to me that art in general was far more powerful than I had thought, and on the other hand, that painting could develop just such power as music possesses." (Reminiscences)

After these two key experiences, he decided to abandon his career teaching Law at the university and instead dedicate himself fully to painting, in order to explore the new ideas he was developing of art as a musical experience for the soul. Nevertheless, before long he had left these classes, as he discovered his style was far from academic and he was not achieving what his spirit was hoping for. During these years, he began experimenting with new techniques and using a very colourful style; his heart was yearning for something he could not yet describe exactly. After realising this group did not prosper, he chose to leave. With her, he travelled around Europe looking for inspiration and knowledge.
They also had a summerhouse in Murnau, a very special place for him, where he often painted the views and where he felt safe and at ease. There, they put together a group of artists who shared interesting ideas and concepts with each other, creating an atmosphere for discussion and the exchange of points of view. This was also the time when he published his first book, Concerning the Spiritual in Art, where he explains his theories of the role of art and how colours and forms affect the soul (thoroughly explained in chapter 3). He arrived back in Russia to a very different country than the one he remembered: it was going through very rapid changes, not only economic and social but also artistic. Although Russian art shared his idea that abstraction was the future of art, it was abstraction understood in very different ways: one that searched for rationality and useful art, such as Constructivism, or one that understood abstraction as a search for the absolute, abandoning any reference and connection to the tangible world, such as Suprematism. Although these ways of envisioning abstract art were very different from the art Kandinsky produced in Germany, he did take them into consideration, evolving his style. As a result, his compositions became much more ordered and more attentive to geometry. At this point, he made the decision to return to Germany. Upon his arrival, he encountered a country wounded by the war but thirsty for renovation and recovery, with a deep desire for greatness. Kandinsky joined the Bauhaus, where he taught several classes and had a space to reflect and develop his theories. Although he was never a fan of Constructivism, he incorporated some of its principles into his own theories.
It has been said that in Russia his style became more organised and simplified, and in the Bauhaus period this became even more so. In his first period in Germany, he had focused on colour and the possibilities it held; in this second stay in the country, he instead dedicated himself to the study of form. He published his new book, Point and Line to Plane, where he goes through this new perspective. Later, he left for Paris, where he found a very different artistic environment from what he was used to: Paris was flourishing with artistic propositions that were not very interested in abstract art, such as Dadaism and Surrealism. As has been said before, his style was constantly evolving, taking into consideration the different artistic proposals of his time and place. Nevertheless, Kandinsky managed to preserve his vision that art was meant for something great: not mere decoration that was aesthetically pleasing, nor a servant to a superficial purpose by transforming into propaganda. For Kandinsky, the twentieth century would be the beginning of a new era, the Spiritual Era, one in which the focus would cease to be on material things and instead shift towards transcendental truths and intangible emotions; in other words, the world would evolve to concentrate on a more abstract reality. Therefore, it would be only logical that abstraction in art was the reasonable conclusion of this new era. According to him, it all started as a human spiritual quest. In other words, abstraction was a logical conclusion; the world needed it. It is quite reasonable to ask oneself whether it was the world that changed art or perhaps art that changed the world. Who changed whom? When pondering these events, a word comes to mind: revolution. A revolution is commonly seen as a political renovation.
However, that would be a very narrow definition, as it fails to consider social and ideological changes that deeply alter the state of things. In art, a revolution means a transformation of the way art is done or understood, which is exactly what happened in the twentieth century. That transformation was not a superficial one, in which new techniques were discovered and used, but a complete rebirth of the concept of art. Art no longer feels the obligation to present what is beautiful; the resources it uses to communicate rarely awaken in the spectator the same response as an encounter with beauty. On the contrary, contemporary art is conceptual even when this is not its intention, as it exemplifies a reflective, rather than intuitive, take on creation and on reality. Art understands itself as thought, not as techne, artisanal labour (Flamarique, 6). Now, without the pressure of a certain style or technique, artists can experiment and create guided only by the need for expression.
How did a Brit of Roman descent find his way to being named the patron saint of Ireland? It seems that every writer and most historians have a different theory on who St. Patrick was and what he did. And they all claim to be right. So let's sort it out a bit. First, the orthodox view, handed down by church and state: Patrick landed in Ireland in 432 AD near the village of Saul. Patrick sought to force a confrontation with Laoghaire, the supposed High King of Tara, by lighting a paschal fire on the Hill of Slane. He died in Saul in 461 AD. He left some writings, the Confessions and the Epistle, and he is a footnote in several historical works of the time. Things get a little more colorful in the legend, some of which actually comes from his own writing. Patrick was the son of a Roman family living in Britain, born about 389 AD, and was kidnapped and taken to Ireland by Celtic pirates as a slave. After six years in captivity he begins to hear voices from God, and he walks away from his captors, halfway across Ireland and onto a ship bound for the mainland. The ship founders and the crew is washed ashore. Patrick leads them across a devastated wasteland, for a real long time (this is legend, remember). Finally, at the last possible minute, just before the crew kill and eat Patrick, God intervenes and leads them to safety and civilization. Patrick spends the next 21 years in various monasteries, till one day he has a vision. In the saint's own words, "I saw a man coming, as it were from Ireland. His name was Victoricus, and he carried many letters, and he gave me one of them. I read the heading: 'The Voice of the Irish'.
As I began the letter, I imagined in that moment that I heard the voice of those very people who were near the wood of Foclut, which is beside the western sea, and they cried out, as with one voice: 'We appeal to you, holy servant boy, to come and walk among us.'" Fate intervened in the form of a heresy, which prompted Pope Celestine to send a missionary, Palladius, to Ireland to quell the pagans. The Celts don't take much of a shine to the bishop and kill him more or less on the spot. Then Patrick is named bishop and sent to Ireland. Patrick seeks spiritual confrontation with the Celtic leaders, and manages to convert at least one long enough to establish a church in Ulster, which spreads like wildfire. In his free time he drove the snakes from Ireland, which according to the saint caused the water to become impure (due to the large population of drowned reptiles) and unfit to drink. Hence the need to drink beer instead of water. This is the legend. The truth probably falls somewhere in between. Most historians now believe that there were two Patricks. The only way you can make all the dates fit is to have Patrick living to a ripe old age of something like 112 years old, which in the dark ages was damned unlikely. There are written, contemporary accounts of Palladius being called Patrick, and early legends that there were two Patricks, and that the first Patrick waited for the death of the second Patrick before ascending to heaven. The idea that Patrick was the first Christian missionary to Ireland is now also discounted. It's thought that Palladius's main mission was to serve and protect the Christian communities already in existence at the time of his arrival, as well as to convert others. Whether there was one Patrick or two, it's hard to deny that Ireland changed under his influence.
Patrick himself claimed to have baptized thousands of people, including royals, converted noble women to nuns and established nunneries and monasteries, and ordained countless priests. According to some legends, Patrick preached and lived a life of peace; according to others, he was almost a warrior for Christ. It's even said that he tried to convert a couple of ancient warriors, members of the Fianna who had managed to avoid extinction, unlike the rest of their clan centuries before. Traveling around Ireland, you could be forgiven for thinking that if half the legends of Patrick and the emerald isle were true, it would have taken two lifetimes for him to have completed the circuit. One story, set at the Grianan of Aileach, an ancient hill fort in County Donegal dating at least in part to the Iron Age, tells how Patrick converted Eoghan, son of Niall Noígiallach and king of this particular region of Ireland. Dating from at least the eighth century, the story tells how Eoghan complains to Patrick about how he feels ugly, which isn't a good characteristic for a king. So Patrick lays Eoghan on the ground, and beside him a fair-haired, good-looking fellow, covers them with a sheet, and when Patrick whisks the sheet away, Eoghan has taken on the good looks of the other. Eoghan then complains of being too short; Patrick tells Eoghan to show how tall he'd like to be, and instantly Eoghan grows to that height. Eoghan, now convinced, returns with Patrick to the Grianan, and there, at a well still called Patrick's Well, Patrick baptizes Eoghan. Patrick, who carried the Bachall Isa, or Staff of Jesus, slammed the iron end down hard at the end of the baptism, unknowingly piercing Eoghan's foot, who bore the pain without a sound. Patrick, finally noticing the pool of blood oozing from around the royal foot, asked why he didn't say something, only to be told that the king thought this was part of the ritual.
Before taking his leave, Patrick consecrates a flagstone as the spot where future kings of Ireland shall be crowned, and gives Eoghan this blessing: "When thou shalt put thy feet out of thy bed to approach, and thy successors after thee, the men of Ireland shall tremble before thee. My blessing on the tribes I give from Bealach Ratha. On you descendants of Eoghan, graces till doomsday. So long as the fields shall be under crops, victory in battle shall be on their men; the head of the men of Ireland's hosts to their place. The seed of Eoghan, son of Niall, provided that they do good, rule shall descend from them for ever." What of the staple of St. Patrick's Day celebrations, the shamrock? According to legend, Patrick used the shamrock to explain the concept of the trinity to the pagans, who no doubt countered his theory with one of their own: the triple goddesses Brigid, Ériu, and the Morrigan. Patrick didn't drive the snakes from Ireland, because there never were snakes in Ireland. The island broke off from the mainland before serpents found their way that far west, and land snakes aren't known as prolific swimmers in oceanic tides. Green beer? Please. Not even the Irish of today express a fondness for green beer. The Irish beer of choice is Guinness, which is about as black as beer can get. The main purpose of beer in Ireland might well be thought to be to wash down Irish whiskey, believed by many to be among the best in the world (editor's note: having researched this subject extensively, I can vouch for the sentiment). The rule, as laid down to me in a pub someplace on the island, is that you must choose a whiskey not coming from the north. According to pub logic, since Northern Ireland is still tied to Britain, and whiskey is taxed, by purchasing, for instance, Bushmills, your money ends up in the British tax system. "And I'm not paying any fucking tax to the fucking queen." Why was Patrick important?
On Patrick's arrival in Ireland, there were small isolated groups of Christians. A hundred years later, Christianity had taken root. This led to a culture which prized the monastic calling and championed literature, copying the writings of the ancient Greeks and Romans. The work of the Irish monks saved the works of these writers as the fires of the dark ages destroyed them elsewhere. Then the Irish went across Europe, teaching their brethren to read and, through the literature which they had saved, how to think for themselves. St. Patrick's Day in Ireland has always been a more somber affair compared to its American counterpart. Whereas the Americans celebrate in bars by donning stupid hats and green carnations while swilling beer out of little buckets, the Irish celebrated with mass, prayers and somber reflection. That is, until a few years back, when Ireland discovered in a big way that St. Patrick's Day sells. Myself, I've grown more tolerant of the day on which the world celebrates the most negative stereotypes of the Irish. It's certainly true that the Irish as a nation are fond of their drink. But it's also true that other nations drink more, and that many in Ireland make the decision to avoid it, knowing from experience the downside of alcohol abuse. The cartoonish leprechaun character belies one of the world's most vibrant and well-documented mythologies, reducing an art form to little more than an icon on a cereal box. It's hard to imagine, for instance, a worldwide holiday celebrating a Hassidic Jew hoarding cash and dancing around a menorah on Yom Kippur without also imagining the charges of anti-Semitism that would arise from it. But St. Patrick's Day is more a celebration of a country than of a man. The diaspora has meant that there are more people of Irish descent living around the world, particularly in America, than in Ireland itself.
So perhaps it's only natural that celebrations of Ireland will eventually be filled with stereotypes based more on films, media and Riverdance than on memories of the home country, which after all are likely several generations removed. Like Patrick, we forgive the Irish their sins, just as they forgive us for not being Irish and accept us as their own. Top Image: Romanesque Doorway Carvings of Faces, 12th century, Dysert O'Dea Monastery, Corofin, County Clare, Ireland
Psoriasis is a disease whose main symptom is gray or silvery flaky patches on the skin which are red and inflamed underneath when scratched. In the United States, it affects 2 to 2.6 percent of the population, or between 5.8 and 7.5 million people. Commonly affected areas include the scalp, elbows, knees, navel, and groin. Psoriasis is autoimmune in origin, and is not contagious. Around a quarter of people with psoriasis also suffer from psoriatic arthritis, which is similar to rheumatoid arthritis in its effects. Psoriasis is driven by the immune system, especially involving a type of white blood cell called a T cell. Normally, T cells help protect the body against infection and disease. In the case of psoriasis, T cells are put into action by mistake and become so active that they trigger other immune responses, which lead to inflammation and to rapid turnover of skin cells. These cells pile up on the surface of the skin, forming itchy patches or plaques. The first outbreak of psoriasis is often triggered by emotional or mental stress or physical skin injury, but heredity is a major factor as well. In about one-third of cases, there is a family history of psoriasis. Researchers have studied a large number of families affected by psoriasis and identified genes linked to the disease. (Genes govern every bodily function and determine the inherited traits passed from parent to child.) People with psoriasis may notice that there are times when their skin worsens, then improves. Conditions that may cause flare-ups include infections, stress, and changes in climate that dry the skin. Also, certain medicines, including lithium and beta blockers, which are prescribed for high blood pressure, may trigger an outbreak or worsen the disease. Types of Psoriasis. Plaque psoriasis: skin lesions are red at the base and covered by silvery scales. Guttate psoriasis: small, drop-shaped lesions appear on the trunk, limbs, and scalp.
Guttate psoriasis is most often triggered by upper respiratory infections (for example, a sore throat caused by streptococcal bacteria). Pustular psoriasis: blisters of noninfectious pus appear on the skin. Attacks of pustular psoriasis may be triggered by medications, infections, stress, or exposure to certain chemicals. Inverse psoriasis: smooth, red patches occur in the folds of the skin near the genitals, under the breasts, or in the armpits. The symptoms may be worsened by friction and sweating. Erythrodermic psoriasis: widespread reddening and scaling of the skin may be a reaction to severe sunburn or to taking corticosteroids (cortisone) or other medications. It can also be caused by a prolonged period of increased activity of psoriasis that is poorly controlled. Psoriatic arthritis: joint inflammation that produces symptoms of arthritis in patients who have or will develop psoriasis. Effect on the Quality of Life. Individuals with psoriasis may experience significant physical discomfort and some disability. Itching and pain can interfere with basic functions, such as self-care, walking, and sleep. Plaques on hands and feet can prevent individuals from working at certain occupations, playing some sports, and caring for family members or a home. The frequency of medical care is costly and can interfere with an employment or school schedule. People with moderate to severe psoriasis may feel self-conscious about their appearance and have a poor self-image that stems from fear of public rejection and psychosexual concerns. Psychological distress can lead to significant depression and social isolation. Doctors generally treat psoriasis in steps based on the severity of the disease, size of the areas involved, type of psoriasis, and the patient's response to initial treatments. This is sometimes called the "1-2-3" approach. In step 1, medicines are applied to the skin (topical treatment). Step 2 uses ultraviolet ("light") treatments (phototherapy).
Step 3 involves taking medicines by mouth or injection that treat the whole immune system (called systemic therapy). Over time, affected skin can become resistant to treatment, especially when topical corticosteroids are used. Also, a treatment that works very well in one person may have little effect in another. Thus, doctors often use a trial-and-error approach to find a treatment that works, and they may switch treatments periodically (for example, every 12 to 24 months) if a treatment does not work or if adverse reactions occur. Treatments applied directly to the skin may improve its condition. Doctors find that some patients respond well to ointment or cream forms of corticosteroids, vitamin D3, retinoids, coal tar, or anthralin. Bath solutions and moisturizers may be soothing, but they are seldom strong enough to improve the condition of the skin. Therefore, they usually are combined with stronger remedies. These drugs reduce inflammation and the turnover of skin cells, and they suppress the immune system. Available in different strengths, topical corticosteroids (cortisone) are usually applied to the skin twice a day. Short-term treatment is often effective in improving, but not completely eliminating, psoriasis. Long-term use or overuse of highly potent (strong) corticosteroids can cause thinning of the skin, internal side effects, and resistance to the treatment's benefits. If less than 10 percent of the skin is involved, some doctors will prescribe a high-potency corticosteroid ointment. High-potency corticosteroids may also be prescribed for plaques that don't improve with other treatment, particularly those on the hands or feet. In situations where the objective of treatment is comfort, medium-potency corticosteroids may be prescribed for the broader skin areas of the torso or limbs. Low-potency preparations are used on delicate skin areas. (Note: Brand names for the different strengths of corticosteroids are too numerous to list.) 
This drug is a synthetic form of vitamin D3 that can be applied to the skin. Applying calcipotriene ointment (for example, Dovonex*) twice a day controls the speed of turnover of skin cells. Because calcipotriene can irritate the skin, however, it is not recommended for use on the face or genitals. It is sometimes combined with topical corticosteroids to reduce irritation. Use of more than 100 grams of calcipotriene per week may raise the amount of calcium in the body to unhealthy levels. Topical retinoids are synthetic forms of vitamin A. The retinoid tazarotene (Tazorac) is available as a gel or cream that is applied to the skin. If used alone, this preparation does not act as quickly as topical corticosteroids, but it does not cause thinning of the skin or other side effects associated with steroids. However, it can irritate the skin, particularly in skin folds and the normal skin surrounding a patch of psoriasis. It is less irritating and sometimes more effective when combined with a corticosteroid. Because of the risk of birth defects, women of childbearing age must take measures to prevent pregnancy when using tazarotene. Preparations containing coal tar (gels and ointments) may be applied directly to the skin, added (as a liquid) to the bath, or used on the scalp as a shampoo. Coal tar products are available in different strengths, and many are sold over the counter (not requiring a prescription). Coal tar is less effective than corticosteroids and many other treatments and, therefore, is sometimes combined with ultraviolet B (UVB) phototherapy for a better result. The most potent form of coal tar may irritate the skin, is messy, has a strong odor, and may stain the skin or clothing. Thus, it is not popular with many patients. Anthralin reduces the increase in skin cells and inflammation. Doctors sometimes prescribe a 15- to 30-minute application of anthralin ointment, cream, or paste once each day to treat chronic psoriasis lesions. 
Afterward, anthralin must be washed off the skin to prevent irritation. This treatment often fails to adequately improve the skin, and it stains skin, bathtub, sink, and clothing brown or purple. In addition, the risk of skin irritation makes anthralin unsuitable for acute or actively inflamed eruptions. Salicylic acid, a peeling agent available in many forms such as ointments, creams, gels, and shampoos, can be applied to reduce scaling of the skin or scalp. Often, it is more effective when combined with topical corticosteroids, anthralin, or coal tar. One foam topical medication (Olux) has been approved for the treatment of scalp and body psoriasis. The foam penetrates the skin very well, is easy to use, and is not as messy as many other topical medications. People with psoriasis may find that adding oil when bathing, then applying a moisturizer, soothes their skin. Also, individuals can remove scales and reduce itching by soaking for 15 minutes in water containing a coal tar solution, oiled oatmeal, Epsom salts, or Dead Sea salts. When applied regularly over a long period, moisturizers have a soothing effect. Preparations that are thick and greasy usually work best because they seal water in the skin, reducing scaling and itching. Natural ultraviolet light from the sun and controlled delivery of artificial ultraviolet light are used in treating psoriasis. Much of sunlight is composed of bands of different wavelengths of ultraviolet (UV) light. When absorbed into the skin, UV light suppresses the process leading to disease, causing activated T cells in the skin to die. This process reduces inflammation and slows the turnover of skin cells that causes scaling. Daily, short, nonburning exposure to sunlight clears or improves psoriasis in many people. Therefore, exposing affected skin to sunlight is one initial treatment for the disease. Ultraviolet B (UVB) phototherapy: UVB is light with a short wavelength that is absorbed in the skin's epidermis.
An artificial source can be used to treat mild and moderate psoriasis. Some physicians will start treating patients with UVB instead of topical agents. A UVB phototherapy, called broadband UVB, can be used for a few small lesions, to treat widespread psoriasis, or for lesions that resist topical treatment. This type of phototherapy is normally given in a doctor's office by using a light panel or light box. Some patients use UVB light boxes at home under a doctor's guidance. A newer type of UVB, called narrowband UVB, emits the part of the ultraviolet light spectrum band that is most helpful for psoriasis. Narrowband UVB treatment is superior to broadband UVB, but it is less effective than PUVA treatment (see next paragraph). It is gaining in popularity because it does help and is more convenient than PUVA. At first, patients may require several treatments of narrowband UVB spaced close together to improve their skin. Once the skin has shown improvement, a maintenance treatment once each week may be all that is necessary. However, narrowband UVB treatment is not without risk. It can cause more severe and longer-lasting burns than broadband treatment. Psoralen and ultraviolet A phototherapy (PUVA): this treatment combines oral or topical administration of a medicine called psoralen with exposure to ultraviolet A (UVA) light. UVA has a long wavelength that penetrates deeper into the skin than UVB. Psoralen makes the skin more sensitive to this light. PUVA is normally used when more than 10 percent of the skin is affected or when the disease interferes with a person's occupation (for example, when a teacher's face or a salesperson's hands are involved). Compared with broadband UVB treatment, PUVA treatment taken two to three times a week clears psoriasis more consistently and in fewer treatments. However, it is associated with more short-term side effects, including nausea, headache, fatigue, burning, and itching.
Care must be taken to avoid sunlight after ingesting psoralen to avoid severe sunburns, and the eyes must be protected for one to two days with UVA-absorbing glasses. Long-term treatment is associated with an increased risk of squamous-cell and, possibly, melanoma skin cancers. Simultaneous use of drugs that suppress the immune system, such as cyclosporine, has little beneficial effect and increases the risk of cancer. Light therapy combined with other therapies: studies have shown that combining ultraviolet light treatment and a retinoid, like acitretin, adds to the effectiveness of UV light for psoriasis. For this reason, if patients are not responding to light therapy, retinoids may be added. UVB phototherapy, for example, may be combined with retinoids and other treatments. One combined therapy program, referred to as the Ingram regime, involves a coal tar bath, UVB phototherapy, and application of an anthralin-salicylic acid paste that is left on the skin for 6 to 24 hours. A similar regime, the Goeckerman treatment, combines coal tar ointment with UVB phototherapy. Also, PUVA can be combined with some oral medications (such as retinoids) to increase its effectiveness. For more severe forms of psoriasis, doctors sometimes prescribe medicines that are taken internally by pill or injection. This is called systemic treatment. Recently, attention has been given to a group of drugs called biologics (for example, alefacept and etanercept), which are made from proteins produced by living cells instead of chemicals. They interfere with specific immune system processes. Like cyclosporine, methotrexate slows cell turnover by suppressing the immune system. It can be taken by pill or injection. Patients taking methotrexate must be closely monitored because it can cause liver damage and/or decrease the production of oxygen-carrying red blood cells, infection-fighting white blood cells, and clot-enhancing platelets.
As a precaution, doctors do not prescribe the drug for people who have had liver disease or anemia (an illness characterized by weakness or tiredness due to a reduction in the number or volume of red blood cells that carry oxygen to the tissues). It is sometimes combined with PUVA or UVB treatments. Methotrexate should not be used by pregnant women, or by women who are planning to get pregnant, because it may cause birth defects. A retinoid, such as acitretin (Soriatane), is a compound with vitamin A-like properties that may be prescribed for severe cases of psoriasis that do not respond to other therapies. Because this treatment also may cause birth defects, women must protect themselves from pregnancy beginning 1 month before through 3 years after treatment with acitretin. Most patients experience a recurrence of psoriasis after these products are discontinued. Taken orally, cyclosporine acts by suppressing the immune system to slow the rapid turnover of skin cells. It may provide quick relief of symptoms, but the improvement stops when treatment is discontinued. The best candidates for this therapy are those with severe psoriasis who have not responded to, or cannot tolerate, other systemic therapies. Its rapid onset of action is helpful in avoiding hospitalization of patients whose psoriasis is rapidly progressing. Cyclosporine may impair kidney function or cause high blood pressure (hypertension). Therefore, patients must be carefully monitored by a doctor. Also, cyclosporine is not recommended for patients who have a weak immune system or those who have had skin cancers as a result of PUVA treatments in the past. It should not be given with phototherapy. This drug is nearly as effective as methotrexate and cyclosporine. It has fewer side effects, but there is a greater likelihood of anemia. This drug must also be avoided by pregnant women and by women who are planning to become pregnant, because it may cause birth defects. 
Compared with methotrexate and cyclosporine, hydroxyurea is somewhat less effective. It is sometimes combined with PUVA or UVB treatments. Possible side effects include anemia and a decrease in white blood cells and platelets. Like methotrexate and retinoids, hydroxyurea must be avoided by pregnant women or those who are planning to become pregnant, because it may cause birth defects. This is the first biologic drug approved specifically to treat moderate to severe plaque psoriasis. It is administered by a doctor, who injects the drug once a week for 12 weeks. The drug is then stopped for a period of time while changes in the skin are observed and a decision is made regarding the need or further treatment. Because alefacept suppresses the immune system, the skin often improves, but there is also an increased risk of infection or other problems, possibly including cancer. Monitoring by a doctor is required, and a patient's blood must be tested weekly around the time of each injection to make certain that T cells and other immune system cells are not overly depressed. This drug is an approved treatment for psoriatic arthritis where the joints swell and become inflamed. Like alefacept, it is a biologic response modifier, which after injection blocks interactions between certain cells in the immune system. Etanercept limits the action of a specific protein that is overproduced in the lubricating fluid of the joints and surrounding tissues, causing inflammation. Because this same protein is overproduced in the skin of people with psoriatic arthritis, patients receiving etanercept also may notice an improvement in their skin. Individuals should not receive etanercept treatment if they have an active infection, a history of recurring infections, or an underlying condition, such as diabetes, that increases their risk of infection. Those who have psoriasis and certain neurological conditions, such as multiple sclerosis, cannot be treated with this drug. 
Added caution is needed for psoriasis patients who have rheumatoid arthritis; these patients should follow the advice of a rheumatologist regarding this treatment. Antibiotics are not indicated in the routine treatment of psoriasis. However, antibiotics may be employed when an infection, such as that caused by the bacteria Streptococcus, triggers an outbreak of psoriasis, as in certain cases of guttate psoriasis.
During the second High Level Forum on Aid Effectiveness in Paris in 2005, the global aid community developed two targets for tracking their commitments to improving the effectiveness of non-financial flows: indicator four (50 percent of technical cooperation flows are implemented through coordinated programs consistent with national development strategies) and indicator six (reduce by two-thirds the stock of parallel project implementation units). While donors concede that they have not achieved the measure dictated under indicator six, the donor community's 2011 Monitoring Survey of the Paris Declaration on Aid Effectiveness reported that their commitment under indicator four met and went beyond the 50 percent requirement. But where is the proof of this claim? Although the donor community claims to have hit its targets, non-financial aid flows have not been effectively linked to the national development priorities of receiving countries, with Barbados standing as a prime example.

Fragmented Aid Picture

The European Union has been among the pioneers in attempting to focus more on improving the effectiveness of non-financial aid flows and has developed a strategy to home in on this issue. The EU's backbone strategy, "Reforming Technical Cooperation and Project Implementation Units for External Aid provided by the European Commission," is the first real attempt at placing the microscope on non-financial aid flows. But what is non-financial aid? Non-financial aid is defined as any assistance that strengthens individual or organizational capacity through the provision of expertise, training and related learning opportunities, and computer software, vehicles, and equipment. Financial aid flows are any monetary grants, debt relief, and concessional loan financing. Foreign aid is an important piece of the growth puzzle in many developing countries.
Financial aid flows to developing countries were approximately $107.2 billion in 2006. Africa and Latin America received the bulk of these flows, while Jamaica, Haiti, and Guyana received the majority of financial aid to the Caribbean. Aid flows began to fall significantly in 2007, at the onset of the global financial and economic crisis; a slight increase was recorded in 2008, but flows flattened out to reach $90 billion at the end of 2010. According to the Organization for Economic Cooperation and Development's (OECD) online database, total financial aid to the health sector from Development Assistance Committee (DAC) donors rose from $4,999.22 million in 2009 to $5,079.98 million in 2010, reaching $6,054.77 million in 2012. Financial flows to support the education sector stood at $9,329.86 million in 2009, rose to $9,431.68 million in 2010, and fell to $8,881.35 million in 2012. The banking and financial sector received $2,670.23 million in financial aid in 2009 and $2,197.19 million at the end of 2012. The financial aid flow dataset is a gold mine for policy makers and researchers alike: its year-to-year data points are mostly complete, and trends can be analyzed as far back as the 1960s. Sadly, the same cannot be said of the non-financial aid flow database. Global non-financial flows (which are captured through the OECD database) are broken down into eight categories: (1) costs of donor experts, (2) experts and other technical assistance, (3) donor personnel, (4) other technical assistance, (5) scholarships and student costs in donor countries, (6) scholarships/training in donor countries, (7) imputed student costs, and (8) free-standing technical cooperation. The OECD dataset shows that non-financial aid flows were trending upward slowly over the period 2003 to 2006, but began to fall in 2007 with the onset of the global financial crisis.
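As a rough illustration (not part of the original analysis), the sector figures quoted above can be turned into simple percent changes between the 2009 and 2012 endpoints:

```python
# Illustrative only: percent change in DAC financial aid by sector,
# 2009 -> 2012, using the OECD figures quoted above (USD millions).
sectors = {
    "health":    (4_999.22, 6_054.77),
    "education": (9_329.86, 8_881.35),
    "banking":   (2_670.23, 2_197.19),
}

changes = {name: (y2012 - y2009) / y2009 for name, (y2009, y2012) in sectors.items()}

for name, change in changes.items():
    print(f"{name}: {change:+.1%}")
# health: +21.1%, education: -4.8%, banking: -17.7%
```

Even this crude calculation shows the divergence the article describes: health aid grew through the crisis period while education and banking flows contracted.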
Global non-financial flows increased marginally in 2008 and continued to trend upward slowly, spiking in 2011 before falling drastically in 2012. The data provided by the OECD (stats.oecd.org) show that non-financial aid flows are significantly less than financial aid flows (which are in the billion-dollar range). According to a representative from the OECD, donors are not mandated to submit information on non-financial flows, as they are too difficult to track. Therefore, the OECD non-financial aid database may be grossly understated. The picture of the global aid system that these statistics paint is fragmented: the aid community has begun to realize the importance of tracking non-financial aid flows, but because donors are not mandated to provide this information, tracking has been muddled. Many may argue that tracking non-financial aid flows is too difficult a task to undertake and is not worth the effort and expense. However, the Barbados case study demonstrates that it can be done. If non-financial flows are not properly reported, tracked, and analyzed, how can the global aid effectiveness agenda be fully measured? Every year, the global aid forum monitors the implementation of the targets set under the Paris Declaration on Aid Effectiveness, but achieving these targets has not been easy. Despite the agreed targets, only 15 percent of donor missions have been undertaken jointly with other donors, well below the 40 percent target. Only 9 percent of partner countries undertook mutual assessments, against a target of 100 percent. Reconciling national development priorities with the taxpayer-approved objectives of donor countries has been difficult. Less than a quarter of aid flows from Development Assistance Committee (DAC) donors are provided in the form of budget support, and only in a few instances is aid part of a multi-year program.
The statistics speak loudly to a seemingly failing global aid system, and many argue that more pressure must be put on donors to achieve the targets they have set. Notwithstanding the positive contributions of donor aid in many countries and contexts, the quality and effectiveness of that aid continue to come under fire. Prominent critics of aid, such as Easterly (2006) and Moyo (2009), attribute a good share of aid's failings to the lack of feedback and accountability. In a report by the Humanitarian Policy Group, the authors stated that donors' approaches to decision-making and resource allocation have been criticized as weakly articulated, ad hoc, and uncoordinated. The report goes further to state that aid is driven by political interests rather than need; funding allocations have often been inequitable, unpredictable, and slow, with weak mechanisms of accountability and transparency. Despite the many critics of the global aid system, the donor community has declared itself vindicated with the announcement that it has not only achieved but surpassed its commitment under indicator four. The current non-financial aid flow database adds no real value to the donor community's assertions of success in linking over 50 percent of non-financial flows to the national priorities of receiving countries. Since submission of this data is not mandatory, there is no real hope that this will change.

Non-Financial Aid Flows, Growth, and Development: The Barbados Case Study

Barbados is a Small Island Developing State (SIDS) classified as an upper-middle-income developing country. Barbados does not attract high levels of Official Development Assistance (ODA) because of its economic and development rating.
The island's national priorities, expressed in its recent Medium Term Growth and Development Strategy (MTGDS, 2013-2020), hinge mainly on building economic growth through the tourism and international business sectors and the development of the green economy. The social sectors are also important to the development path of the country, and heavy investments are placed in education, health, and various poverty reduction and welfare schemes. Barbados has received financial and non-financial aid to assist its efforts to achieve its sustainable development objectives, since it does not have all the necessary resources. Data collected from the OECD aid flow database showed that the majority of financial aid flows Barbados received were to aid the implementation of various bilateral commitments (such as the implementation of the Economic Partnership Agreement). A large amount of financial aid also went toward the provision of humanitarian services to reduce poverty. The development of social infrastructure and social services received a total of $2.1 million for the years 2005, 2008, and 2010, which funded the construction of nurseries and the provision of hospital supplies. In addition to the aid Barbados receives for its social programs, it is also a recipient of aid toward its economic sectors. Financial aid flows to boost the development of the economic sectors, though, were significantly less than what was received for social sector development. Tourism, international business, agriculture, and renewable energy have been touted as the economic development priorities of Barbados in its various development documents (the Medium Term Development Strategy 2010-2014 and the former National Strategic Plan of Barbados). In 2005, the economic infrastructure and services sector received no aid allocation, but in 2008 the sector received $100,000, and in 2010 it received $40,000.
The transport and communications sector received no aid allocation in 2005 or 2010, but $40,000 in 2008. Aid flows to production were approximately $600,000 in 2005 and $100,000 in both 2008 and 2010. Trade and tourism received $10,000 in 2005, $100,000 in 2008, and $30,000 in 2010. Agriculture, forestry, and fishing received approximately $600,000 in 2005 but just $30,000 in both 2008 and 2010. There is clear work to be done in effectively linking financial aid flows to the economic priorities of the country. Barbados has benefited not only from financial aid flows but also from the provision of technical assistance, or non-financial aid. Currently there is no system in place that captures or measures technical cooperation or non-financial flows domestically, so analyzing their actual impact is difficult; however, information collected through independent research from various government departments shows that non-financial aid flows are significant. Barbados received approximately $577,707 in technical cooperation flows over the period 2010 to 2012. Technical cooperation inflows recorded in 2010 were approximately $106,036, while the inflows for 2011 were approximately $142,835. By 2012, technical cooperation aid flows reached $328,836, more than the combined inflows of the previous two years. The analysis shows that non-financial flows increased significantly as financial flows decreased with the onset of the global economic and financial crisis. Therefore, one might hypothesize that non-financial aid flows are more resilient to negative shocks; however, confirming that would require rigorous analysis beyond the scope of this article.

Source: Data collected from various local government agencies

Non-financial flows appear strongly linked to one of Barbados' social development priorities: the provision of educational services. One-third of non-financial aid flows went to Barbados' Ministry of Education.
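The year-by-year technical cooperation figures quoted above can be checked with a few lines of arithmetic (a sketch using only the numbers as quoted; amounts in USD):

```python
# Quick arithmetic check of the Barbados technical cooperation
# figures quoted in the article (USD, 2010-2012).
inflows = {2010: 106_036, 2011: 142_835, 2012: 328_836}

three_year_total = sum(inflows.values())
print(three_year_total)  # 577707, matching the quoted 2010-2012 total

# 2012 alone exceeded the combined inflows of the previous two years.
print(inflows[2012] > inflows[2010] + inflows[2011])  # True
```

The three reported annual figures do sum exactly to the $577,707 total, and 2012's inflows ($328,836) exceed 2010 and 2011 combined ($248,871).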
The provision of free primary and secondary education has stood for years as a major social development goal in Barbados. So has the provision of free health care, yet the link between non-financial flows and all of the country's social development priorities is weak. One of the main economic priorities has been to develop the renewable energy sector in order to reduce the fuel import bill and lower the cost of production for the private sector, increasing its competitiveness. Increased food security and strengthening of the tourism sector were also highlighted as priorities. Funding for renewable energy integration studies, intended to build the skills needed to develop this sector, was sought, and the data collected showed that over the period 2010 to 2012 the country received non-financial assistance of approximately $19,200 for this purpose. Barbados did not have the appropriate programs or available funds necessary to build expertise in the installation of renewable systems across the country; the provision of renewable energy integration studies was a significant step toward filling the skills void. The tourism sector suffered losses at the onset of the economic and financial crisis, and new strategies had to be put in place to combat this trend. A closer look shows that skills in tourism policy and planning were lacking; training in tourism policy and planning was administered at an estimated cost of $11,520, which was covered by various donor partners. Similarly, as a net food-importing country, Barbados set a goal of achieving food security, but training in agricultural studies was needed. The donor community assisted by providing various agricultural assistance and training programs over the period 2010-2012 at a cost of approximately $11,504.
Source: Data collected from various local government agencies

On the surface, the data show that most non-financial flows were linked to the country's economic development priorities, and the donors would conclude that they have successfully achieved the target set under indicator four of the Paris Declaration on Aid Effectiveness. To recap, indicator four calls for ensuring that at least 50 percent of technical assistance flows are implemented through coordinated programs consistent with national development priorities. But further analysis shows that less than 50 percent of non-financial aid flows went to developing the skills necessary to boost the energy, tourism, and agricultural sectors. Instead, the bulk of the non-financial assistance received, $145,000, went to the development of a public awareness program on the advantages of renewable energy. The donor community flagged the provision of the public awareness program as an effective use of non-financial flows; however, the analysis shows that this may not be so. The Barbadian economy would have extracted greater benefits if more non-financial aid resources had been provided to fund studies in the manufacturing, installation, and maintenance of renewable energy systems. One of Barbados' highest-earning sectors, tourism, received only $11,520 in non-financial flows. It would have been more effective if some of the non-financial flows had been extracted from training in poverty reduction policies and allocated to tourism. The government of Barbados should have taken greater ownership of this process by maintaining constant communication with the donors to establish flexibility in the various technical assistance schemes, which would have allowed it to transfer some of the training funds from the poverty reduction training program to the renewable energy studies. The impact on the renewable energy sector would have been greater, as more skills would have been developed.
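A minimal sketch, using only the three skills-related line items quoted in the article (not an official or complete breakdown), shows how far such flows fall below indicator four's 50 percent benchmark:

```python
# Sketch: share of Barbados' 2010-2012 non-financial aid that went to the
# sector-skills line items quoted in the article. Indicator four of the
# Paris Declaration sets a 50% benchmark for alignment with national priorities.
total_flows = 577_707  # total technical cooperation, 2010-2012 (USD)

skills_items = {
    "renewable energy integration studies": 19_200,
    "tourism policy and planning training": 11_520,
    "agricultural assistance and training": 11_504,
}

skills_share = sum(skills_items.values()) / total_flows
print(f"{skills_share:.1%}")  # 7.3% -- far below the 50% benchmark
```

On these quoted items alone, roughly 7 percent of the period's technical cooperation went to the skills the article identifies as priorities, which is the gap the case study is making visible.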
Although the reduction of poverty is an important goal in Barbados' development process, reducing the fuel import bill through the installation of renewable energy systems may be more pressing in the near term. The importing of fuel represents the largest loss of foreign exchange to the country, and the cost of fuel continues to rise, pushing up the cost of doing business, which not only reduces export competitiveness but also acts as a deterrent to investment. Barbados also indicated that increased food security is one of its national priorities in its Medium Term Growth and Development Strategy 2013-2020; however, the Ministry of Agriculture received only 3.5 percent of total non-financial flows and $11,504 in training assistance. The Barbados case study shows that the donor community has not fully achieved its goal under indicator four of the Paris Declaration on Aid Effectiveness of linking 50 percent of non-financial aid resources to the national development priorities of Barbados. A detailed analysis of the global non-financial aid flow system provides evidence to refute the donor community's claims and reveals that it has yet to achieve its stated target in all contexts. The OECD has boasted that over 50 percent of global non-financial aid flows were consistent with the national development priorities of receiving countries; however, the Barbados case study provides evidence that conflicts with this claim. Greater analysis of global non-financial aid flows may reveal that the global aid effectiveness forum has not in fact achieved or surpassed the conditions under indicator four of the Paris Declaration on Aid Effectiveness. The OECD non-financial database needs to be restructured to capture flows by year, donor, receiving country, program, and amount. In the program section, donors and receiving countries should be mandated to provide detailed information on consultancies, training provided, and software and hardware procured.
The development of such a comprehensive non-financial flows database would enhance the transparency of the global non-financial aid flow process and help strengthen the global aid effectiveness agenda. Accounting systems must be developed in both donor and recipient countries to accurately capture non-financial aid flows, in order to analyze their true impact and ensure that the aid flows are effectively linked to the national priorities of the recipient country. The global aid effectiveness forum should desist from claims of achieving non-financial aid targets in the absence of a comprehensive global non-financial aid database, because deeper analysis may reveal that targets have not been reached. Both donor and recipient countries must take greater ownership of the non-financial aid process, as it is a key pillar in strengthening the global aid effectiveness architecture.

Notes
This is where the Paris Declaration on Aid Effectiveness was developed.
OECD, "Aid Effectiveness 2011: Progress in Implementing the Paris Declaration," accessed August 12, 2014, http://www.oecd.org/dac/effectiveness/2011surveyonmonitoringtheparisdeclaration.htm.
European Commission, Reforming Technical Cooperation and Project Implementation Units for External Aid Provided by the European Commission (Brussels: July 2008).
World Bank, Global Development Finance Report (Washington: The World Bank, 2012).
See Figure 1 for a look at global non-financial aid flows.
OECD 2014.
World Bank, Global Monitoring Report (Washington: World Bank, 2006).
Adele Harmer and Deepayan Basu Ray, "Study on the relevance and applicability of the Paris Declaration on Aid Effectiveness in Humanitarian Assistance," Humanitarian Policy Group, Overseas Development Institute (2009), http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/6020.pdf.
Proxies were taken from various universities to provide costing for quantifying the technical assistance where no actual figures were available.
The Economic Partnership Agreement (EPA) is the reciprocal trade agreement between CARIFORUM (of which Barbados is a part) and the European grouping, governing the new trading relations among the parties.
This data was collected directly from various government departments through independent research.
This is based on the fact that while financial flows decreased, non-financial flows rose during the crisis period.
Figure 3 shows the other subject areas funded by donors.
See Figure 3.
OECD, Aid Effectiveness 2005-10: Progress in Implementing the Paris Declaration (Paris: OECD, 2011).
Discuss the influence of engagement on students' academic success.

The present study provides an overview of the influence of self-efficacy, motivation, and engagement on the academic success of students. Academic success is described as the individual's constant development while concurrently striving to achieve educational targets. Academic success directly influences the development of students (Busse & Walter, 2013). Several internal and external factors influence the academic performance of students. Internal factors are those mainly attributable to the characteristics of students, such as learning, expectations, and feelings. External factors, on the other hand, are those which affect students' academic abilities through different circumstances, including social issues, extracurricular activities, and the environment of educational institutions. Three of the most vital elements influencing the academic performance of students are self-efficacy, motivation, and engagement. This research also aims to investigate the relationships among these variables with respect to students' academic success.

Self-efficacy influencing students' academic success

Self-efficacy is defined as the self-evaluation of an individual's competence to effectively execute the set of actions required to reach desired results. Self-efficacy expectations are generally proposed to influence students' initiating behavior and the persistence they apply to a task during phases of difficulty and setbacks. Shkullaku (2013) observes a marked difference in the way students feel and act between those with low self-efficacy and those with high self-efficacy. Yusuf (2011) describes self-efficacy as a motivational orientation that arouses grit during difficulties, motivates a long-term view, and fosters self-regulation whenever required.
Yusuf (2011) found that perceived self-efficacy enhances academic success both directly and indirectly by influencing individuals' goals. Moreover, self-efficacy combined with a specific target affects the academic performance of students. A student with a high degree of self-efficacy usually sets high targets and exerts great effort to accomplish academic goals. According to social cognitive theory, self-efficacy is one of the major variables influencing academic performance and success. Students with a high level of self-efficacy have the ability to cope with challenging situations and perform activities effectively. Yusuf (2011) notes that mastery experience is the interpreted outcome of a student's previous performance. Students engage in different tasks and activities, interpret the outcomes of their own actions, and apply these interpretations to develop beliefs about their ability to carry out such tasks. How students interpret their previous experience can have a dramatic effect on self-efficacy. If they believe that their academic success is the outcome of their developed skills and hard work, they become more confident about their future success. Enactive mastery experiences are authentic successes in dealing with specific circumstances. These experiences are a highly influential source for creating a sense of self-efficacy, as they give students concrete evidence of their ability to succeed at a task. Verbal persuasion is another source that helps students develop self-efficacy beliefs. Persuasive communication is effective when the individuals providing the information are viewed by students as authentic and realistic. While positive feedback heightens self-efficacy, verbal persuasion helps to create a strong sense of self-efficacy.
Figure 1: Framework of self-efficacy. Source: (Shkullaku, 2013)

Motivation influencing the academic success of students

Motivation is another vital element that plays a huge role in the academic success of students. It involves both internal and external factors that enhance students' desire to focus on their studies and make the effort to achieve their targets. Busse and Walter (2013) argue that students with optimum motivation have an edge, since they employ responsive strategies such as target setting, self-monitoring, and intrinsic interest. In addition, motivational belief is vital to students' academic success, as it helps determine the extent to which students will put in effort to achieve a target. This element also shapes students' attitudes toward the learning process. Recent evidence reflects that motivation helps determine whether students will be successful in the future. Furthermore, motivation helps to direct and inspire students' abilities so that they become able to absorb knowledge that will be helpful for future use. From a cognitive psychological viewpoint, researchers point out that motivation refers to the process whereby goal-directed activity is instigated and maintained. Various kinds of motivation affect the academic performance of students: intrinsic motivation, extrinsic motivation, mastery goals, and performance goals.

Intrinsic and extrinsic motivation; mastery goals and performance goals

According to Busse and Walter (2013), intrinsic motivation yields internal satisfaction and individual growth. This kind of motivation arises without the expectation of rewards and provides an experience that is achieved as an outcome in itself. Some researchers have found that intrinsic motivation helps students achieve greater academic success.
Extrinsic motivation relates to students who are motivated by external stimuli to obtain a reward or avoid a negative consequence. Niehaus, Rudasill, and Adelson (2012) found that students who are extrinsically motivated are less satisfied with results than those who are motivated intrinsically. Extrinsic motivation also has an adverse impact on individuals' ability to affiliate with various kinds of people and relate socially to others. Mastery goals help students develop new knowledge and skills. A few experimental studies have found that students with mastery goals are more willing to take on challenging tasks and put great effort into the learning process. By contrast, performance goals concern students' focus on attaining a better result or avoiding failure. According to Niehaus, Rudasill, and Adelson (2012), performance goals correlate less with deep learning and strategy use.

Engagement influencing the academic success of students

Student engagement is another multidimensional concept, referring to the condition in which the individual is motivated to develop meaning from their experience and is willing to make the effort to attain a target. A high level of student engagement also involves a combination of effort and motivation in the learning environment. Christenson, Reschly, and Wylie (2012) note that student engagement indicates the time students spend on their educational activities to contribute to desired results, as well as the quality of their efforts. Student engagement leads to better-quality learning outcomes. Students' active engagement enhances school awareness while motivating students to put forth effort and energy in learning. Guthrie, Wigfield, and Klauda (2012) note that engagement also permits students to invest in future learning processes.
Some evidence supports the view that student engagement leads to positive results and fosters resilience within students. Engagement also mediates the impact of socioeconomic status, ethnicity, and other determinants that might affect students' academic success. Moreover, students are apt to underperform when individual priorities and expectations do not coincide with their engagement level (Saeed & Zyngier, 2012). There are three main aspects of student engagement: cognitive, emotional, and behavioral.

Cognitive, emotional, and behavioral engagement

Behavioral engagement is defined as the effort students make to use study skills, including environment management and metacognitive strategies. It is a vital predictor of academic success. Emotional engagement, also termed affective engagement, is defined as the ability of students to maintain awareness of their own academic abilities (Christenson, Reschly & Wylie, 2012). Cognitive engagement requires substantial effort channeled toward setting targets and investing in learning. It usually relates to individuals' desire to commit to, and succeed in, personal obligations.

Figure 2: Engagement influencing academic success. Source: (Guthrie, Wigfield & Klauda, 2012)

From the above discussion, it can be concluded that all of the factors mentioned, self-efficacy, motivation, and engagement, positively affect students' academic performance. They also help students attain effective learning and success in academics. This study has pointed out that high levels of self-efficacy, engagement, and motivation help students obtain good-quality learning results. Recent evidence also shows that students nowadays adopt these skills to improve their academic performance.

References
Busse, V., & Walter, C. (2013).
Foreign Language Learning Motivation in Higher Education: A Longitudinal Study of Motivational Changes and Their Causes. Modern Language Journal, 97(2), 435–456. doi:10.1111/j.1540-4781.2013.12004.x
Christenson, S. L., Reschly, A. L., & Wylie, C. (Eds.). (2012). Handbook of research on student engagement. Springer Science & Business Media.
Guthrie, J. T., Wigfield, A., & Klauda, S. L. (2012). Adolescents' engagement in academic literacy. Sharjah, UAE: Bentham Science Publishers.
Niehaus, K., Rudasill, K. M., & Adelson, J. L. (2012). Self-efficacy, intrinsic motivation, and academic outcomes among Latino middle school students participating in an after-school program. Hispanic Journal of Behavioral Sciences, 34(1), 118-136.
Saeed, S., & Zyngier, D. (2012). How motivation influences student engagement: A qualitative case study. Journal of Education and Learning, 1(2), 252-267.
Shkullaku, R. (2013). The relationship between self-efficacy and academic performance in the context of gender among Albanian students. European Academic Research, 1(4), 467-478.
Yusuf, M. (2011). The impact of self-efficacy, achievement motivation, and self-regulated learning strategies on students' academic achievement. Procedia-Social and Behavioral Sciences, 15, 2623-2626.
Walter Krivitsky in 1939

- Born: June 28, 1899
- Died: February 10, 1941 (aged 41)
- Cause of death: bullet to the temple
- Nationality: Austrian (first), French (last)
- Occupation: spy, espionage, intelligence
- Spouse: Antonina ("Tonya Krivitsky," "Tonia Krivitsky," "Antonina Thomas")
- Other names: Samuel Ginsburg, Samuel Ginzberg, Shmelka Ginsberg

Walter Germanovich Krivitsky (Ва́льтер Ге́рманович Криви́цкий; June 28, 1899 – February 10, 1941) was a Soviet intelligence officer who revealed plans for the signing of the Molotov–Ribbentrop Pact before he defected, weeks before the outbreak of World War II.

Walter Krivitsky was born Samuel Ginsberg on June 28, 1899, to Jewish parents in Podwołoczyska, Galicia, Austria-Hungary (now Pidvolochysk, Ukraine). He adopted the name "Krivitsky," based on the Slavic root for "crooked, twisted," as a revolutionary nom de guerre when he entered the Cheka, the Bolshevik intelligence service, around 1917. Krivitsky operated as an illegal resident spy, under false names and papers, in Germany, Poland, Austria, Italy, and Hungary, rising to the rank of control officer. He is credited with stealing plans for submarines and planes, intercepting Nazi–Japanese correspondence, and recruiting many agents, including Magda Lupescu ("Madame Lepescu") and Noel Field.

In May 1937, Krivitsky was sent to The Hague, Netherlands, to serve as the rezident (regional control officer), operating under the cover of an antiquarian. It appears that he coordinated intelligence operations throughout Western Europe. At the time, the General Staff of the Red Army was undergoing the Great Purge in Moscow, which Krivitsky and his close friend Ignace Reiss, both abroad, found deeply disturbing. Reiss wanted to defect, but Krivitsky repeatedly held back. Finally, Reiss defected, as he announced in a defiant letter to Moscow. His assassination in Switzerland in September 1937 prompted Krivitsky to defect the following month.
In Paris, Krivitsky began to write articles and made contact with Lev Sedov, Trotsky's son, and the Trotskyists. There he also met the undercover Soviet spy Mark Zborowski, known as "Etienne," whom Sedov had sent to protect him. Sedov died mysteriously in February 1938, but Krivitsky eluded attempts to kill or kidnap him in France, including by fleeing to Hyères. As a result of Krivitsky's debriefing, the British were able to arrest John King, a cypher clerk in the Foreign Office. He also gave vague descriptions of two other Soviet spies, Donald Maclean and John Cairncross, but without enough detail to enable their arrest. Soviet intelligence operations in the United Kingdom were thrown into disarray for a time.

In Stalin's Secret Service

With the help of journalist Isaac Don Levine and literary agent Paul Wohl, Krivitsky produced an inside account of Stalin's underhanded methods. It appeared in book form as In Stalin's Secret Service (UK title: I Was Stalin's Agent), published on November 15, 1939, after first appearing in sensational serial form in April 1939 in the top magazine of the time, the Saturday Evening Post. (The title had appeared as a phrase in an article written by Reiss's wife on the first anniversary of her husband's assassination: "Reiss... had been in Stalin's secret service for many years and knew what fate to expect.") The book received a tepid review from the very influential New York Times. Violently attacked by the American left, Krivitsky was vindicated when the German–Soviet Nonaggression Pact, which he had predicted, was signed in August 1939. Caught between dedication to socialist ideals and disgust at Stalin's methods, Krivitsky believed that it was his duty to inform. That decision caused him much mental anguish, as he impressed on the American defector Whittaker Chambers: "In our time, informing is a duty" (recounted by Chambers in his autobiography, Witness).
Krivitsky testified before the Dies Committee (later to become the House Un-American Activities Committee) in October 1939, and sailed as "Walter Thomas" to London in January 1940 to be debriefed by Jane Archer of British Military Intelligence, MI5. In doing so, he revealed much about Soviet espionage. It is a matter of controversy whether he gave MI5 clues to the identity of Soviet agents Donald Maclean and Kim Philby. It is certain, however, that the People's Commissariat for Internal Affairs, Narodnyy Komissariat Vnutrennikh Del, abbreviated NKVD, learned of his testimony and initiated operations to silence him. Krivitsky soon returned to North America, landing in Canada. Always in trouble with the US Immigration and Naturalization Service, Krivitsky was not able to return there until November 1940. Krivitsky retained Louis Waldman to represent him on legal matters. (Waldman was a long-time friend of Isaac Don Levine.) Meanwhile, the assassination of Trotsky in Mexico on August 21, 1940, convinced him that he was now at the top of the NKVD hit list. His last two months in New York were filled with plans to settle in Virginia and to write but also with doubts and dread. On February 10, 1941, at 9:30 a.m., he was found dead in the Bellevue Hotel (now Kimpton George Hotel) in Washington, DC, by a chambermaid, with three suicide notes by the bed. His body was lying in a pool of blood, caused by a single bullet wound to the right temple from a .38-caliber revolver found grasped in Krivitsky's right hand. A report dated June 10, 1941, indicates he had been dead for approximately 6 hours. According to many sources, (including Krivitsky himself) he was murdered by Soviet intelligence, but the official investigation, unaware of the NKVD manhunt, concluded that Krivitsky committed suicide. 
Two people with close ties to Krivitsky later recounted opposite interpretations of his death: - Suicide: Reiss' wife wrote: In the United States, he had to make a new start in life, without knowing the country or the language. He did find friends, good friends, but among them he realized how frightfully alone he was... He lived in relative security and even affluence from the sale of his articles. His family was safe and well cared for, he had friends, it seemed he could start a new life. But something else had happened. For the first time he had the leisure to see himself in his new situation. He had broken with his old life and had not built a new one. He went to a hotel in Washington, wrote a letter to his wife and one to his friends, and put a bullet through his head... To those who knew his handwriting, his style, his expressions, there could be no doubt that he had written them. - Assassination: Chambers recounted in his memoirs: One night one of my close friends burst into my office at Time. He was holding a yellow tear-off that had just come over the teletype. "They have murdered the General," he said. "Krivitsky has been killed." Krivitsky's body had been found in a room in a small Washington hotel a few blocks from the Capitol. He had a room permanently reserved at a large downtown hotel where he had always stayed when he was in Washington. He had never stayed at the small hotel before. Why had he gone there? He had been shot through the head and there was evidence that he had shot himself. At whose command? He had left a letter in which he gave his wife and children the unlikely advice that the Soviet Government and people were their best friends. Previously, he had warned them that, if he were found dead, never under any circumstances to believe that he had committed suicide. Who had forced my friend to write the letter? I remembered the saying: "Any fool can commit a murder, but it takes an artist to commit a good natural death"... 
Krivitsky also told me something else that night. A few days before, he had taken off the revolver that he usually carried and placed it in a bureau drawer. His seven-year-old son watched him. "Why do you put away the revolver?" he asked. "In America," said Krivitsky, "nobody carries a revolver." "Papa," said the child, "carry the revolver."

Speculation persists into the 21st century. For example, in 2017, Anthony Percy's book Misdefending the Realm (Buckingham: University of Buckingham Press, 2017) argued that Krivitsky was the UK's most important source on Soviet plans, that MI5 failed to act on the intelligence he supplied, and that he was assassinated by Soviet intelligence after Guy Burgess informed his Soviet superiors about him. The assassination, Percy argues, removed the threat of exposure of the Cambridge Five and other moles.

At the first news of his death, Whittaker Chambers found Krivitsky's wife, Antonina ("Tonia" according to Kern, "Tonya" according to Chambers), and son Alek in New York City. He brought them by train to Florida, where they stayed with Chambers's family, which had already fled to New Smyrna. Both families hid there several months, fearing further Soviet reprisals. The families then returned to Chambers's farm in Westminster, Maryland. Within a short time, however, Tonia and Alek returned to New York. His wife and son both lived in poverty for the rest of their lives. Alek died of a brain tumor in his early 30s, after he had served in the US Navy and studied at Columbia University. Tonia, who legally changed her surname to "Thomas," continued to live and work in New York City until she retired to Ossining, where she died in a nursing home in 1996, at 94.

- In Stalin's Secret Service (1939) (second edition 1939, 1979, 1985, 2000)
- Rusia en España (Spanish, 1939)
- MI5 Debriefing & Other Documents on Soviet Intelligence (2004)
- Kern, Gary (2004). A Death in Washington: Walter G. Krivitsky and the Stalin Terror.
Enigma Books. pp. early life 3–12, Paul Wohl 20–23, 172–175, 314–317, 420–424, 448–454, especially 245–246, family's fate and money 400–401. ISBN 978-1-929631-25-4.
- Krivitsky, Walter G. (1939). In Stalin's Secret Service: An Exposé of Russia's Secret Policies by the Former Chief of the Soviet Intelligence in Western Europe. New York: Harper Brothers. pp. 290, 294. LCCN 40027004. ISBN 0890935491 (1985).
- Lownie, Andrew (4 October 2016). Stalin's Englishman: Guy Burgess, the Cold War, and the Cambridge Spy Ring. St. Martin's Press.
- Lewis, Flora (13 February 1966). "Who Killed Krivitsky?". Washington Post and Times-Herald.
- "Book Notes". New York Times. 4 November 1939. p. 13.
- "Books Published Today". New York Times. 15 November 1939. p. 21.
- Reiss, Elsa (September 1938). "Ignace Reiss: In Memoriam". New International. pp. 276–278. Retrieved August 30, 2010.
- Martin, David (March 9, 2008). The New York Times and Joseph Stalin.
- Chambers, Whittaker (1952). Witness. Random House. pp. 27, 36, 47, 59, 317–318, 381, 402, 436fn, 457, 459–463, informing 463, murder 207, 337, 485–486, fate of family 486–487. ISBN 0-89526-571-0.
- "The George Hotel". Kimpton Hotels & Restaurants. 2012. Retrieved 26 October 2012.
- "Venona: Soviet Espionage and the American Response 1939–1957" (footnote 18: Charles Runyon [Department of State], Memorandum for the File, "Walter Krivitsky," 10 June 1947). Central Intelligence Agency. 1996.
- Hyde, Earl M., Jr. (July 2003). "Still Perplexed About Krivistky". International Journal of Intelligence and Counterintelligence, 16(3): 431, 438. ISSN 1521-0561. Retrieved September 11, 2010.
- Large, David Clay (1991). Between Two Fires: Europe's Path in the 1930s. New York: W.W. Norton & Co. ISBN 0-393-30757-3, ISBN 978-0-393-30757-3. p.
308: Just prior to his death, Krivitsky confided to his friend Sidney Hook and others, "if I am ever found apparently a suicide, you will know the NKVD caught up with me." - Secret murders ordered from the Kremlin (Russian), Interview with Nikita Petrov, historian and vice-president of Memorial Society, at Echo Moskvy, - Knight, Amy W. (2006). How the Cold War Began: The Igor Gouzenko Affair and the Hunt for Soviet Spies. Carroll & Graf. pp. 304, n. 6. ISBN 0-7867-1816-1. - "Files on Walter G. Krivitsky". Federal Bureau of Investigation. - Poretsky, Elisabeth K. (1969). Our Own People: A Memoir of "Ignace Reiss" and His Friends. London: Oxford University Press. pp. 269–270. LCCN 70449412. - "Misdefending the Realm". University of Buckingham Press. 2017. Retrieved 27 October 2017. - Robinson, Martin; Harvison, Anthony (26 October 2017). "Winston Churchill 'was powerless to protect Britain from a protracted Cold War because his government was riddled with Soviet spies' claims new book". Carroll & Graf. Retrieved 27 October 2017. - Krivitsky, Walter G. (1939). In Stalin's Secret Service: An Exposé of Russia's Secret Policies by the Former Chief of the Soviet Intelligence in Western Europe. New York: Harper Brothers. LCCN 40027004. - Krivitsky, Walter G. (1939). In Stalin's Secret Service: An Exposé of Russia's Secret Policies by the Former Chief of the Soviet Intelligence in Western Europe. New York: Harper Brothers. LCCN 49034777. - "In Stalin's Secret Service". Library of Congress. 1979. p. 273. Retrieved 23 February 2014. - "In Stalin's Secret Service". Library of Congress. 1985. p. 273. Retrieved 23 February 2014. - In Stalin's Secret Service. Library of Congress. 2000. pp. 306. ISBN 1929631030. Retrieved 23 February 2014. - "Agent de Staline". Library of Congress. Retrieved 23 February 2014. - "Byłem agentem Stalina". Library of Congress. Retrieved 23 February 2014. - "Я был агентом Сталина. Записки советского разведчика". Терра-Terra. - "Rusia en España". 
Buenos Aires: Library of Congress. c. 1939. p. 31. Retrieved 23 February 2014. - Krivitsky, Walter G. (2004). Kern, Gary (ed.). MI5 Debriefing & Other Documents on Soviet Intelligence. Xenos Books. ISBN 1-879378-50-7. - Chambers, Whittaker (1952). Witness. Random House. ISBN 0-89526-571-0. - Hyde, Jr., Earl M. (July 2003). "Still Perplexed About Krivistky". International Journal of Intelligence and Counterintelligence. New York: International Journal of Intelligence and Counterintelligence (Volume 16, Issue 3). 16 (3): 428–441. doi:10.1080/713830442. ISSN 1521-0561. - Kern, Gary (2004). A Death in Washington: Walter G. Krivitsky and the Stalin Terror. Enigma Books. ISBN 978-1-929631-25-4. - Poretsky, Elisabeth K. (1969). Our Own People: A Memoir of "Ignace Reiss" and His Friends. London: Oxford University Press. LCCN 70449412. - "Files on Walter G. Krivitsky". Federal Bureau of Investigation (FBI). n.d. Retrieved September 11, 2010.
“[A] bill of rights is what the people are entitled to against every government on earth, general or particular, and what no just government should refuse.” — Thomas Jefferson, December 20, 1787

If you were to choose the single most important act in history and then choose the single most important person behind that act, who would you choose? Buddha, for teaching the Middle Way that ends suffering and leads to enlightenment? Jesus, for resurrecting and saving mankind from its sins? Mohammad, for insisting on the greatness and priority of God? Edison, for bringing light to human society? Gandhi, for his commitment to non-violent social revolution?

While these achievements and individuals continue to stand as inspirational examples that awaken and uplift humanity, there remains a contribution that transcends and includes all of these. Thomas Jefferson insisted that guarantees of individual liberties be made social law. The rights to free speech, press, assembly, and religion, as well as the rights of the accused, mean that humanity, and in particular the government, is required to behave in a humane, or mutually respectful, way. A strong case can be made that this act has done more to curtail the abuses of government and religion, while supporting the power of individuals to grow and live in freedom, than any other action in the history of civilization. The advocacy of a bill of rights transcends the interests of any particular nation and accomplishes transformations in social justice that religion has failed to achieve despite centuries of the opportunities that power and control provide. Jefferson's insistence that human rights be guaranteed by federal law continues to reverberate throughout the world, changing it for the better, as more governments make similar statements the law of the land. However, many of these principles are yet to be acknowledged, absorbed, and widely supported. They remain under serious attack in many places, including in the United States.
While having such laws by no means guarantees their enforcement (China has a bill of rights), it does publicly announce to the world the standards by which a country wants to be judged in the way it treats its citizens. It sets the bar higher for both citizens and government.

Why is Jefferson's contribution so monumental? He knew that, by its nature, power constantly appropriates more power to itself. He saw a bill of rights as a way to concretely limit the power of central government, which inherently has more power than individual citizens. His goal was to create governmental structures that restrain politicians from diminishing the freedoms of the individual while prohibiting citizens from denying liberties to other members of society. The establishment of such rights has made slavery illegal and unacceptable, brought voting rights to a growing number of women all over the world, and reduced discrimination by one faith against followers of others.

The concept of basic human rights requiring governmental protection does not have only social, cultural, and political ramifications. This principle also has extraordinarily powerful implications for our personal lives and how we govern ourselves. Imagine that who you are is something like an iceberg. Your waking sense of who you are, called your waking identity, is the one-eighth or ten percent of the iceberg of yourself that is "above water" and that you are aware of. It controls your body, interprets sense information, makes decisions, thinks, feels, and acts. Your waking identity thinks it runs your life. It assumes that the ninety percent of the iceberg that exists "below the surface" of your awareness exists to support it and help accomplish waking goals. Your waking identity does not grant rights to the other aspects of self that make up seven-eighths of the iceberg; it has probably never even considered the idea of doing so.
Consequently, the great majority of who you are is disenfranchised, essentially existing as slaves, without any rights, totally dependent on the emotional whims, beliefs, and developmental level of the small minority of self-aspects that make up the ten percent of who you think you are right now. As the years pass, your waking sense of who you are accrues more and more power to itself. It develops strategies to repress, ignore, deny, and generally discount the rights of this great internal majority. Cut off from its emerging potentials and the constituency that it is supposed to represent, your waking identity starves itself of the resources that sustain it. Over time it withers, gets sick, and dies. The ten percent of your iceberg that is out of the water “melts.” An analogy would be to the political, financial, and corporate one percent which, through the constant accrual of privilege to itself, starves the ninety-nine percent of the population, whose consumption maintains the obscene and unjust wealth and privilege of that one percent. In time the rich, powerful, and status-conscious one percent collapses from the sheer weight of its own greed, selfishness, ignorance, and stupidity. A Jeffersonian response to this predicament would be to first recognize that the “citizens” that we govern are mostly disenfranchised and are functionally our slaves. We do not listen to them, nor do we grant them the right of free speech. We are not comfortable with the idea that there may be parts of ourselves that believe differently than we do, which means that there is no internal freedom of religion. Nor are we interested in groups of inner aspects getting together and acting in ways that we do not control, as happens in diseases like cancer, depression, and anxiety disorders. Jefferson believed all people are entitled to a bill of rights to protect them against any and all governments on earth. He believed that no just government should refuse such rights to its citizens. 
This has, since Jefferson's time, been a key criterion by which the justice of any government is evaluated. If it is the case that all governments, both general and particular, should guarantee such rights, doesn't this mean that you, as the government ruling over your life, should extend the same to your own internal constituency? What would it look like if you were to do so for and to yourself, in particular, and if mankind were to do so, in general?

To do so, there must first be some means of guaranteeing basic freedom of speech to your internal constituency. You must provide its members with a way to be heard. In political terms, the ability to have your voice heard involves not only freedom of speech but the ability to petition the government regarding grievances, or injustice. On a personal level, what would such injustices consist of, and what would a "petition" be? Integral Deep Listening listens to internal "petitions" when it treats dreams, nightmares, and life issues as wake-up calls generated by your larger internal and external identity. You learn to interview members of your internal constituency, called "emerging potentials," that show up in your dreams and as personifications of the stressful feelings of your life issues. In this way, freedom of speech is supplied to a sample of your internal constituency. It is similar to taking ice cores of the seven-eighths of the iceberg that is below the water line: you don't need to sample the whole iceberg all the time to learn a lot about its condition.

Internal "petitioning" of your waking identity is already happening all the time in your life. When you eat something that disagrees with you, your body petitions its "government" by complaining with a stomach ache or worse.
When you mentally and emotionally attempt to “digest” something that is not healthy for you, like a horror movie, accident, or traumatic life event, your internal constituency petitions its “government” in the form of nightmares, bad dreams, troubled sleep, and increased waking anxiety. Such “petitions” are generally not accepted by the government, your waking sense of who you are, the ten percent of your iceberg self that is above water. The petitions of your internal constituency are ignored or put on a docket of complaints and then buried amid the pressure of ongoing waking priorities, similar to how the petitions of American Indians against the government were ignored, repressed, and denied for over a century. Addictions create other grievances, for which there are initially petitions that take different forms, depending on the imbalance. For smoking, the initial petitions take the form of coughing and throat discomfort. For drinking it might be a headache or throwing up. For eating too many sweets it might be a sense of agitation. However, there is soon established a strong internal constituency that benefits from, supports, and fights to defend the imbalance. Governmentally, this is similar to arms merchants, the NRA, the AIPAC (the American-Israeli Public Affairs Committee, which lobbies Congress for Israeli interests), drug manufacturers, bank, corporate, financial sectors, unions, churches, and anyone else that benefits from having their interests served at the expense of the majority. Similar internal interest groups form as addictions take hold. They then fight hard to maintain their power, presenting the addictive behavior that sustains them as normal, acceptable, necessary and even beneficial. Familial and cultural life scripts are powerful, universal examples of internal addictive cabals, grown so massive that they control the halls of your internal congress. 
Immersed in these cultural and social dreams, internal cabals and vested interests completely drown out your inner compass. The water of the iceberg itself has no constituency; only the demands of the impurities in the water are heard.

Your sense that your waking identity is in control and "running the show" is largely a myth. It has no control over endocrine, genetic, and autonomic nervous system functions, nor should it, because it lacks the intelligence to do so. Can you imagine what would happen if you consciously tried to digest your lunch? Do you know where, how, and when to release, and when to hold back, all those peptides and enzymes, as your body does as a routine matter? At night your waking identity surrenders all control in deep sleep and generally experiences limited control and awareness in dreams. A little self-observation discloses an ongoing, rich interior dream life that you neither control nor understand. Putting deep sleep and dreaming together, a full third of your life is out of your control most of the time. As for the other two-thirds, you most likely did not control who you were born to, the cultural and social scripting that you received, or the conceptual environment that you grew up in, all of which largely determine who you think you are today. What you think you control is largely a perceptual delusion that you maintain to help you feel secure. While your waking sense of who you are creates many fictions to support this delusion, aging, social events, and external physical occurrences are largely out of your control. You are largely unable to control what other people think about you or whether there are hurricanes, fires, robbers, or accidents in your life. If you think you know who you are and that you are in charge of your life, you are more likely to fall into the uncomfortable role of victim in the Drama Triangle.
Coming to grips with how little control you actually possess does not mean that you have to feel out of control, victimized, and vulnerable. This realization is a first step in gaining genuine control, paradoxically by sharing it with the other ninety percent of who you are. How can you ensure the rights of the ninety percent of the iceberg that lies below your level of waking awareness? In Integral Deep Listening this is accomplished in several ways. When you interview a dream aspect or the personification of a life issue, you are giving freedom of speech, assembly, and press to aspects of yourself. When you practice becoming this or that emerging potential during your waking life you are at that moment including the will of the governed in your attitudes and decisions. When you attempt to put into practice a recommendation by an interviewed dream character or personification of a burning life issue you are listening to a petition from your constituency and making it a law of the land. Instead of focusing on what is wrong with politicians and the power struggles of your world you are shifting your focus to what you can do to correct and maximize the power distribution between your waking identity and your internal or intrasocial community. This empowers you by integrating more parts of you into your waking sense of who you are. At the same time it reduces internal status discrimination, analogous to class distinctions based on money, wealth, family, and fame. You are doing a truly amazing, revolutionary thing: sharing your power to rule and control your life with your internal constituency. Jefferson believed that the way to limit governmental power while curbing the excesses of citizens lies in giving to others those rights that one wants for oneself. What would it mean to apply this principle in your own interior life? What if you were to treat your dream monsters, trees, cars, toothbrushes, and trash cans the way that you want to be treated? 
Years of work with Integral Deep Listening demonstrate that you will discover sources of support and direction within that will amaze you. You will access "virtuous" aspects of yourself that score higher than you do in core qualities of enlightenment: confidence, generosity, wisdom, acceptance, inner peace, and witnessing. These qualities will become the glue that holds together the entirety of the iceberg of your identity, just as Jefferson imagined virtue as the force that holds together a republic. This virtue will replace your normal sense of unknowing separateness from the life, concerns, and rights of your citizen constituency. The artificial sense of status that your waking identity maintains in its ignorance of who it really is will give way to a perspectival egalitarianism, in which different and differing rights and points of view are encouraged. This will shift your style of self-governance from waking autocracy or dictatorship toward a more democratic rule, in a way similar to how Jefferson envisioned virtue replacing the patronage, dependency, and coercion that held together the monarchies of Europe.

Jefferson saw government as a conflict between independence and dependency. Protecting and encouraging the expression of the rights of your intrasocial constituency allows your personal political development to move toward a state Jefferson did not envision: interdependency. Just as the water in the ten and ninety percent parts of the iceberg is the same, listening to and integrating the recommendations of your interviewed emerging potentials will allow growth into a self-sense that expands to include the whole of who you are. Although icebergs contain fresh water while the ocean that surrounds them is salty, it's all water. Jefferson believed that only those with property could escape the dependency that limits one's ability to participate in representative government, and so he disenfranchised women and slaves.
Time has enormously broadened Jefferson’s understanding of rights and who is capable of exercising them in a responsible way. Similarly, IDL demonstrates that no aspect of who you are, known or unknown, is dependent in a way that disallows its full participation in your body politic. Consequently, IDL extends the rights your waking identity enjoys to all aspects of yourself and to all of your emerging potentials. Why is this so important? Those aspects of yourself that you do not include in your personal political process, that is, in your decision-making processes, form revolutionary cabals. These rise up as addictions or guerrilla movements to overthrow your waking government, either temporarily, as in drunkenness, fits of rage, or criminality, or permanently, as in psychosis, accidents, and death. The solution is diplomacy. Enter into dialogue with your rogue cabals. Enter into dialogue with your emerging potentials before they form powerful internal alliances based on common grievances. Many humans dream of the day when Jeffersonian liberties are extended to all people, everywhere. That would indeed be a great day, and many people spend their lives working to make that dream come true. But how many of those brave and virtuous souls extend those same rights to their own internal constituencies? How many understand, much less practice, intrasocial democracy? How many know what dream politics is, much less practice it in their own lives? For a moment let us imagine that this idea were to become wide-spread. What would that mean for the world? First of all, the fight for social justice in the world would be balanced with serious, daily efforts to implement intrasocial justice on a personal level. The result would be more healthy, integrated individuals with more energy, creativity, and resources to fight for social justice. Secondly, familial culture everywhere would be transformed. 
Instead of parents demanding obedience out of fear that their children will not learn the skills necessary to find work, achieve the financial success necessary to have a family, and attain personal happiness, they would trust that as they teach their children to find and follow their own inner compass, their children’s own emerging potentials will teach them how best to deal with these social and cultural challenges. Third, because people will no longer experience themselves in conflict with their emotions or their bodies, they will not perceive their relationships with power as overwhelmingly conflictual. They will not put themselves at odds with the parts of themselves that political power structures, such as government, elected representatives, the media, or the wealthy may represent. They will avoid getting into political Drama Triangles. The result is that they will be playing the game of life at a level that transcends and includes fear-based drama. This means that the common tools that power uses to control the population, fear and greed, will lose their effectiveness. Such an internal shift in individuals, groups, and society is not the only way to accomplish social change, nor is it always the best way. However, if you change yourself you maximize your own chances for happiness while maximizing your ability to transform society by the power of the example of what you are doing and who you are becoming. Could this work? Is it really possible to transform servitude to power by freeing your own enslaved voices? To conduct this experiment and decide for yourself you need not believe anyone. You need not trust anything exterior to yourself. Simply learn the IDL interviewing process, implement those recommendations that grow out of the interview that make sense to you, and see what happens. Make up your own mind. If you believe in human rights, prove you are a human by extending those rights you desire for yourself to your internal constituencies.
Learn and practice dream yoga. Create the government you yearn for within your own heart and mind. Become a Thomas Jefferson to your own disenfranchised masses. How do you do so? Why not begin by getting to know the Thomas Jefferson within yourself? By accessing, listening to, and applying the recommendations of this powerful and important emerging potential within yourself you can make changes that matter, both for yourself and for the world. To view an example of such an interview, see Interviewing Thomas Jefferson.
When visible and infrared waves penetrate human skin, they are absorbed and scattered through the skin layers. The wavelength and the properties of each skin layer determine the penetration depth of these waves. By generating absorption and scattering properties as a function of wavelength for each layer of the skin, you can model these properties to determine the penetration depth of various wavelengths into skin. Then, knowing the penetration depth, you can choose the optimal wavelength for specific biosensor applications. To optimize optical biosensors, you need to understand the behavior of light as it impinges upon and travels through the skin. Having this knowledge gives you the tools to accurately simulate penetration depth as a function of wavelength. In this article, we’ll look at the absorption and reduced scattering coefficients of human skin layers as a function of wavelength. You can then use these coefficients to simulate penetration depth as a function of wavelength and, ultimately, choose the optimal light source wavelength for a given biosensor application. Optical properties of skin layers Human skin has three main layers, starting from the surface: the blood-free epidermis layer (100 μm thick), the vascularized dermis layer (about 1 mm to 3 mm thick), and subcutaneous adipose tissue (from 1 mm to 6 mm thick, depending on the part of the body). The optical properties of these layers are typically characterized by three factors: the absorption coefficient (μa), the scattering coefficient (μs), and the anisotropy factor (g). The absorption coefficient characterizes the average number of absorption events per unit path length of photons travelling in the tissue. Blood, hemoglobin, β-carotene, and bilirubin are the main absorbers in the visible spectral range. In the IR spectral range, the absorption of water defines the absorption properties of skin dermis.
The scattering coefficient characterizes the average number of scattering events per unit path length of photons travelling in the tissue. Finally, the anisotropy factor g represents the average cosine of the scattering angles. Let’s next consider the biological characteristics of each skin layer and how they affect the propagation and absorption of light. A closer look at skin structure The epidermis, the first and outermost section of human skin, can be subdivided into two sublayers: non-living and living epidermis. Non-living epidermis, or stratum corneum (10 μm to 20 μm thick), is composed mainly of dead cells, which are highly keratinized with high lipid and protein content, and has relatively low water content1. In this layer, light absorption is low and relatively uniform in the visible region. The living epidermis (100 μm thick) propagates and absorbs light. A natural chromophore, melanin2, determines the absorption properties. Melanin comes in two forms: a red/yellow pheomelanin and a brown/black eumelanin which is associated with skin pigmentation. The amount of melanosomes available per unit volume dictates the melanin absorption level. The volume fraction of the epidermis occupied by melanosomes generally varies from 1 percent (lightly pigmented specimens) to 40 percent (darkly pigmented specimens). The scattering properties of melanin particles depend upon particle size and may be predicted by the Mie theory. The dermis is a 0.6-mm- to 3-mm-thick structure made up of dense, irregular connective tissue containing nerves and blood vessels. Based on the size of the blood vessels3, the dermis can be divided into two layers. Smaller vessels are closer to the skin surface in the papillary dermis. Larger blood vessels are in the deeper reticular dermis. Absorption in the dermis is defined by the absorption of hemoglobin, water, and lipids. Since oxyhemoglobin and deoxyhemoglobin have different absorption profiles, the oxygen saturation must be known. 
For an adult, the arterial oxygen saturation is generally higher than 95 percent4. Typical venous oxygen saturation is 60 percent to 70 percent5. The tissue in the dermal layers is rather fibrous, a characteristic that defines the scattering properties of this layer. Light can scatter on interlaced collagen fibrils and bundles as well as single collagen fibrils. The average scattering properties of the skin are dominated by dermal scattering because of the relative thickness of this dermal layer. The subcutaneous adipose tissue is formed by a collection of fat cells containing stored fat (lipids). Its thickness varies considerably throughout the body: it doesn’t exist in the eyelids, but in the abdomen, it can be up to 6 cm thick. Absorption of hemoglobin, lipids, and water defines the absorption of human adipose tissue. Spherical droplets of lipids, which are uniformly distributed within the fat cells, are the main scatterers of adipose tissue. The diameters of the adipocytes are in the range 15 μm to 250 μm6 and their mean diameter ranges from 50 μm to 120 μm7. Blood capillaries, nerves, and reticular fibrils connecting each cell occupy the spaces between the cells, providing metabolic activity to the fat tissue. See Figure 1 for a planar five-layer optical model of human skin based on the stratified skin layers we’ve discussed. The model includes the stratum corneum, the living epidermis, the two layers of dermis (papillary and reticular), and the subcutaneous adipose tissue layer. Table 1 presents the thickness of the layers as well as typical ranges of blood, water, lipid, and melanin contents; refractive indices of the layers; and mean vessel diameters. [Figure 1 | The five-layer optical model of the skin (not to scale).] [Table 1 | The parameters of skin layers used in the simulation.]
Absorption coefficients of each skin layer In the visible and NIR spectral ranges, the absorption coefficient of each layer includes contributions from eumelanin, pheomelanin, oxyhemoglobin, deoxyhemoglobin, bilirubin, β-carotene, lipids, and water. The spectral extinction coefficients for these pigments, denoted ∈eu (λ), ∈ph (λ), ∈ohb (λ), ∈dhb (λ), ∈bil (λ), and ∈β (λ), respectively, are given by the curves shown in Figure 2. The total absorption coefficient for the kth layer is given by: μak (λ) = (ak,eu (λ) + ak,ph (λ)) 𝜗k,mel + (ak,ohb (λ) + ak,dhb (λ) + ak,bil (λ)) 𝜗k,blood + ak,water (λ) 𝜗k,water + ak,lip (λ) 𝜗k,lip + (abase (λ) + ak,β (λ)) (1 − 𝜗k,mel − 𝜗k,blood − 𝜗k,water − 𝜗k,lip) where k = 1,…,5 is the layer number; 𝜗k,mel, 𝜗k,blood, 𝜗k,water, and 𝜗k,lip are the volume fractions of melanin, blood, water, and lipids in the kth layer; and ak,eu (λ), ak,ph (λ), ak,ohb (λ), ak,dhb (λ), ak,bil (λ), ak,water (λ), ak,lip (λ), and ak,β (λ) are the absorption coefficients of eumelanin, pheomelanin, oxyhemoglobin, deoxyhemoglobin, bilirubin, water, lipids, and β-carotene, respectively. abase (λ) is the wavelength-dependent background tissue absorption coefficient, given by abase (λ) = 7.84e8 × λ^(−3.255) cm−1. [Figure 2 | Spectral extinction coefficient curves for the natural pigments present in skin tissues.] The eumelanin and pheomelanin absorption coefficients are given by: ak,eu (λ) = ∈eu (λ) ck,eu and ak,ph (λ) = ∈ph (λ) ck,ph, where ck,eu is the eumelanin concentration (g/L) in the kth layer and ck,ph is the pheomelanin concentration (g/L) in the kth layer. The oxyhemoglobin and deoxyhemoglobin absorption coefficients are given by: ak,ohb (λ) = (∈ohb (λ) ∕ 66500) ck,hb γ and ak,dhb (λ) = (∈dhb (λ) ∕ 66500) ck,hb (1 − γ), where 66500 is the molecular weight of hemoglobin (g/mol), ck,hb is the hemoglobin concentration of the blood (g/L) in the kth layer, and γ is the ratio of oxyhemoglobin to the total hemoglobin concentration.
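As a sanity check on the formula above, the layer absorption model translates directly into a few lines of Python. The function below is a minimal sketch of the μak equation, term by term; any chromophore absorption values and volume fractions passed in are illustrative placeholders, not measured data from Table 1.

```python
# Sketch of the per-layer total absorption coefficient mu_a,k (1/cm).
# All inputs are placeholders for illustration, not measured skin data.

def layer_absorption(lam_nm, a_eu, a_ph, a_ohb, a_dhb, a_bil,
                     a_water, a_lip, a_beta,
                     f_mel, f_blood, f_water, f_lip):
    """Total absorption coefficient (1/cm) of the k-th skin layer."""
    # Background tissue absorption a_base(lambda) = 7.84e8 * lambda^(-3.255)
    a_base = 7.84e8 * lam_nm ** -3.255
    return ((a_eu + a_ph) * f_mel
            + (a_ohb + a_dhb + a_bil) * f_blood
            + a_water * f_water
            + a_lip * f_lip
            + (a_base + a_beta) * (1.0 - f_mel - f_blood - f_water - f_lip))
```

With all chromophore terms and volume fractions set to zero, the function reduces to the background term abase (λ) alone, which is a quick way to verify the implementation.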
The absorption coefficient of bilirubin is given by: ak,bil (λ) = (∈bil (λ) ∕ 585) ck,bil, where 585 is the molecular weight of bilirubin (g/mol) and ck,bil is the bilirubin concentration (g/L) in the kth layer. The β-carotene absorption coefficient is given by: ak,β (λ) = (∈β (λ) ∕ 537) ck,β, where 537 is the molecular weight of β-carotene (g/mol) and ck,β is the β-carotene concentration (g/L) in the kth layer. The absorption coefficient of water is given by: ak,water (λ) = ∈water (λ) ck,water, where ck,water is the water concentration (g/L) in the kth layer. The lipid absorption coefficient is given by: ak,lip (λ) = ∈lip (λ) ck,lip, where ck,lip is the lipid concentration (g/L) in the kth layer. The total scattering coefficient for the kth layer can be defined as: μsk (λ) = 𝜗k,blood Ck μsblood (λ) + (1 − 𝜗k,blood) μsTk (λ) where μsblood (λ) is the scattering coefficient of blood as a function of wavelength, μsTk (λ) is the total scattering coefficient of the bloodless tissue layer, and Ck is a correction factor defined by the mean vessel diameter. The following relation can be used for Ck8: Ck = 1/(1 + a (0.5 μsblood dk,vessels)^b) where dk,vessels is the blood vessel diameter (cm) in the kth layer. In the case of collimated illumination of the vessels, the coefficients have the values a = 1.007 and b = 1.228; in the case of diffuse illumination, a = 1.482 and b = 1.151. The total scattering coefficient of the bloodless tissue is given by9: μsTk (λ) = μs0k (577 nm ∕ λ) where μs0k are the scattering coefficients at the reference wavelength 577 nm listed in Table 1. Note: μsTk falls monotonically as the wavelength increases.
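The vessel correction factor and the total scattering coefficient can be sketched the same way. The functions below follow the Ck and μsk relations above, defaulting to the collimated-illumination coefficients a = 1.007 and b = 1.228; the input values in any call are illustrative, not taken from the article’s tables.

```python
# Sketch of the vessel-packing correction factor C_k and the total
# scattering coefficient mu_s,k (1/cm). Inputs are illustrative only.

def vessel_correction(mu_s_blood, d_vessel_cm, collimated=True):
    """Correction factor Ck for vessels of diameter d_vessel_cm (cm)."""
    a, b = (1.007, 1.228) if collimated else (1.482, 1.151)
    return 1.0 / (1.0 + a * (0.5 * mu_s_blood * d_vessel_cm) ** b)

def layer_scattering(lam_nm, f_blood, mu_s_blood, d_vessel_cm, mu_s0_577):
    """Total scattering coefficient (1/cm) of a layer at wavelength lam_nm."""
    ck = vessel_correction(mu_s_blood, d_vessel_cm)
    mu_sT = mu_s0_577 * (577.0 / lam_nm)   # bloodless-tissue scattering
    return f_blood * ck * mu_s_blood + (1.0 - f_blood) * mu_sT
```

Note that Ck approaches 1 as the vessel diameter goes to zero, and with zero blood fraction the layer scattering reduces to the bloodless-tissue term, matching the relations in the text.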
The expression for the anisotropy of scattering may be constructed to include the contribution from blood9: gk (λ) = (𝜗k,blood Ck μsblood (λ) gblood + (1 − 𝜗k,blood) μsTk (λ) gT (λ)) ∕ μsk (λ) where gT (λ) is the anisotropy factor of the bloodless tissue, given by gT (λ) = 0.7645 + 0.2355 [1 − exp (−(λ − 500 nm) ∕ 729.1 nm)]. Finally, the reduced scattering coefficient is defined as μs'k (λ) = μsk (λ)(1 − gk (λ)). Applying computer simulations to determine penetration depth Zemax OpticStudio software was used to determine penetration depth as a function of wavelength. The software uses a Monte Carlo (MC) method to trace optical rays propagating in complex inhomogeneous, randomly scattering, and absorbing media. To perform basic MC modeling of an individual photon packet’s trajectory, we can apply the following sequence of elementary simulations: photon path length generation, scattering and absorption events, and reflection and/or refraction on the medium boundaries. Scattering events can be characterized by the Henyey-Greenstein phase function fHG (θ), which describes the new photon packet’s direction after scattering: fHG (θ) = (1 ∕ 4π) (1 − g^2) ∕ (1 + g^2 − 2g cosθ)^(3/2) where θ is the polar scattering angle. The distribution over the azimuthal scattering angle was assumed to be uniform. The specular reflection from the air-tissue surface is also considered in the simulations. Using this MC methodology requires the values of absorption and scattering coefficients and the anisotropy factor of each skin layer, its thickness, and its refractive index. You will also need the mean path, defined as the inverse of the scattering coefficient. Using the optical properties we’ve discussed with the Henyey-Greenstein scattering phase function and Zemax optical software, we can simulate any biosensor configuration and determine the maximum penetration depth as a function of wavelength.
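The two elementary Monte Carlo draws described above — the free path length and the Henyey-Greenstein scattering angle — can be sketched in Python. The cos θ draw uses the standard inverse-CDF sampling formula for the HG phase function; this is a minimal illustration of the method, not the commercial ray-tracer’s implementation.

```python
# Sketch of the two elementary Monte Carlo draws: free path length for
# total attenuation mu_t, and cos(theta) from the Henyey-Greenstein
# phase function via inverse-CDF sampling.
import math
import random

def sample_path_length(mu_t):
    """Free path (cm) between interaction events, mu_t = mu_a + mu_s (1/cm)."""
    u = random.random() or 1e-12   # guard against log(0)
    return -math.log(u) / mu_t

def sample_hg_cos_theta(g):
    """Draw cos(theta) from the Henyey-Greenstein phase function."""
    u = random.random()
    if abs(g) < 1e-6:              # isotropic limit
        return 2.0 * u - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - frac * frac) / (2.0 * g)
```

Because the mean of cos θ under the HG distribution equals g, averaging many samples should recover the anisotropy factor, which makes the sampler easy to verify.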
As a use case, consider the following typical LED-photodiode (PD) biosensor configuration (Table 2 and Figure 3) and skin properties shown in Table 3. We performed a simulation to determine the maximum penetration depth as a function of wavelength. [Table 2 | Biosensor configuration used in simulation.] [Figure 3 | Dimensions of biosensor configuration used in simulation.] [Table 3 | Skin properties used in simulation.] The absorption coefficients of the skin layers were calculated based on the presented optical model, as shown in Figure 4. [Figure 4 | Absorption spectra of different skin layers calculated based on the presented optical model.] The scattering coefficient, anisotropy factor, and mean path of the skin layers have been calculated using the presented model, with the result presented in Figures 5-7. [Figure 5 | Scattering coefficient of different skin layers calculated using the presented optical model.] [Figure 6 | Anisotropy factor of different skin layers calculated according to the presented optical model.] [Figure 7 | Scattering mean path of different skin layers calculated using the presented optical model.] To determine the performance of a biosensor, it’s essential to consider the penetration depth of light into a biological tissue. Using the absorption and reduced scattering coefficient values presented earlier in this article, we simulated optical penetration depth, presenting the results in Figure 8. [Figure 8 | Simulated maximum penetration depth for the situation shown in Figure 3 and Table 3.] In this article, we have modeled human skin tissue according to a five-layer structure, with each layer representing its corresponding anatomical layer. To simulate light tissue interaction, we modeled the biological characteristics of each layer with three wavelength-dependent numbers, absorption coefficient, scattering coefficient, and anisotropy factor. 
We used commercial ray-tracing software to calculate the penetration depth of light into skin tissue in order to simulate the performance of optical biosensor architectures. - K. S. Stenn, "The skin," Cell and Tissue Biology, ed. by L. Weiss, Baltimore: Urban & Schwarzenberg, 541-572 (1988). - M. R. Chedekel, "Photophysics and Photochemistry of Melanin," in: L. Zeise, M. R. Chedekel and T. B. Fitzpatrick, Eds., Melanin: Its Role in Human Photoprotection, Valdenmar, Overland Park, pp. 11-22 (1994). - T. J. Ryan, "Cutaneous Circulation," in Physiology, Biochemistry, and Molecular Biology of the Skin, 1, ed. by L. A. Goldsmith, Oxford University Press, Oxford (1991). - A. Zourabian, A. Siegel, B. Chance, N. Ramanujan, M. Rode and D. A. Boas, "Trans-abdominal monitoring of fetal arterial blood oxygenation using pulse oximetry," J. Biomed. Opt. 5 391–405 (2000). - T. Hamaoka et al., "Quantification of ischemic muscle deoxygenation by near infrared time-resolved spectroscopy," J. Biomed. Opt. 5 102–5 (2000). - M. I. Gurr, R. T. Jung, M. P. Robinson and W. P. T. James, "Adipose tissue cellularity in man: the relationship between fat cell size and number, the mass and distribution of body fat and the history of weight gain and loss," Int. J. Obesity 6 419–36 (1982). - Y. Taton, "Obesity, Pathophysiology, Diagnostics, Therapy," Warsaw: Medical Press (1981). - W. Verkruysse, G. W. Lucassen, J. F. de Boer, D. J. Smithies, J. S. Nelson, M. J. C. van Gemert, "Modelling light distributions of homogeneous versus discrete absorbers in light irradiated turbid media," Phys. Med. Biol. 42, 51-65 (1997). - G. Altshuler, M. Smirnov, I. Yaroslavsky, "Lattice of optical islets: a novel treatment modality in photomedicine," Journal of Physics D: Applied Physics 38, 2732-2747 (2005).
By now even the most amateur Web user has noticed that some websites start with the traditional http://, while others begin with the slightly longer https://. But far fewer realize the significance of that extra letter. Do you know what it means? URLs that begin with “https” are encrypted to help prevent hackers from intercepting your data. If you haven’t figured it out yet, the extra “s” stands for security. More specifically, https stands for Hypertext Transfer Protocol Secure, and the sites are encrypted using SSL, or Secure Sockets Layer. As more users began to engage in eCommerce and online banking, the demand for data integrity, security and confidentiality became paramount. Brands found it necessary to convert to the more secure protocol to compete for increasingly conscientious consumers. Because SSL promises a more secure Web, Google and other industry leaders began to encourage sites to migrate. The presence of SSL certificates became a ranking signal more than three years ago. But with this month’s release of Chrome v62, Google will begin marking non-HTTPS pages that include text input fields – such as search bars or contact forms – with the label “NOT SECURE” in the address bar. Ouch! Bad news for bloggers who don’t engage in ecommerce but offer even a simple comment section or email sign-up form. Similar labels will be applied to sites that have outdated certificates. What are SSL Certificates? According to SSL.com, Secure Sockets Layer “is the standard security technology for establishing an encrypted link between a web server and a browser.” This important link ensures that any data passing between a web server and internet browsers remains securely private, inaccessible to hackers. Millions of websites now use SSL certificates to protect the integrity of customer data and online transactions. How does it work?
When a user’s Internet browser – such as Chrome, Firefox or Safari – connects to a website’s server, the SSL certificate binds the two together with a connection so secure it cannot be seen by anyone but the user entering data and the website receiving it. So that hacker who tries to place a listening program on the server? He or she captures nothing but scrambled data. SSL certificates also tell users details of a website’s authenticity. When a visitor clicks their browser’s padlock symbol or trust mark while visiting a page, they can read details regarding the identity of the person, business or organization that owns the website. History of Cryptology and SSL People have been encrypting messages since ancient times, when leaders like Julius Caesar used a cipher to rearrange the order of letters in messages sent to his generals. Since each letter in the message was replaced by another a fixed number of positions away in the alphabet, the recipients could decipher the message, but eavesdroppers and hostile interceptors would have no idea what they were looking at. Of course, modern cryptology is a bit more sophisticated. Ciphers used by computers can operate on large binary sequences, and programs can instantly analyze encryption. In the mid-1970s IBM designed an algorithm that became the federal Data Encryption Standard, and other early data scientists published key algorithms that supported more advanced cryptology. SSL was originally developed by then-browser giant Netscape in 1994 amid growing concern over cybersecurity. But because of some serious security flaws in version 1.0, the protocol was not publicly released until version 2.0 launched in 1995. It wasn’t until version 3.0 came along in 1996 that Netscape found the right formula, and later versions have been based on this third draft. Technically, Secure Sockets Layer was replaced by Transport Layer Security protocols as early as 1999, but both are still generally referred to as SSL.
The differences between the two were slight, but the privacy effects were significant and have only increased with each subsequent version. In fact, numerous updates have occurred over the years, both as weaknesses were recognized and as hackers found more sophisticated ways to crack the code. Why the confusing name change? Remember that intense war between Netscape and its Navigator browser and Microsoft’s Internet Explorer? The one Microsoft ultimately and dramatically won? If you’re too young to recall, there was actually a time when Internet Explorer was crowned king. At about the same time Netscape was working on version 3.0 of its SSL, Microsoft revised the flawed second version with its own protocol, one it named PCT. The budding internet community didn’t want a repeat of VHS vs. Betamax, with two competing and incompatible protocols between which users must choose, so a deal was negotiated – one in which the competing tech companies would both support an open and standard protocol. Microsoft, however, insisted on a new name, and TLS was born. Apparently, the joke was on Bill Gates, though, since the SSL label has stuck to this day. Google’s changes to Chrome are really placing the heat on any blogger who hasn’t already migrated to https, but there’s plenty of additional value in SSL protocols. Obviously, if ecommerce occurs, https:// URLs make for confident customers. And confident customers equal increased sales. Sites with a valid SSL certificate are immediately considered more trustworthy, credible and legitimate. SSL not only protects the privacy of visitors’ information, but it helps ensure data integrity for a site owner. With a valid SSL certificate, bloggers can be assured that data input onto their site hasn’t been modified or corrupted during transfer.
And with the rampant problem of phishing websites impersonating legitimate pages to steal visitors’ information, secured sites provide evidence to users that they are, in fact, in the right place. SSL and SEO For the past few years, SSL has also affected the ever-important search ranking. Google factors security status into its infamous ranking algorithm, a development welcomed by bloggers who have already migrated. Secure websites are now placed higher in search results than those without SSL certificates, all other factors being equal. In 2015 Google announced it would favor URLs beginning in https over http, under the following conditions: Understanding these conditions is important for bloggers hoping to capitalize on their TLS certificates. If a website is migrated to https, but the page includes links to other sites that do not have a valid certificate, Google will not rank the page as secure. Likewise, sites will not receive the extra ranking if they include images, videos or other graphics tied to URLs that have not migrated to TLS. Ahrefs tested the algorithmic update in early 2016. Blogger Christoph Engelhardt analyzed the top 10,000 domains to examine how much an https URL boosted their SERP rankings. While he found that qualifying websites indeed benefited in their rankings, his research determined only 10 percent of websites actually featured a “flawless” https setup that meets all of Google’s qualifications for preference. And 60 percent of websites at that time still had not migrated to https whatsoever. Later in 2016, Backlinko’s Brian Dean analyzed 1 million Google search results and found that a site’s overall link authority, based on meeting all of Google’s qualifications, strongly correlated with higher rankings. But that wasn’t all. Dean’s determinations also included: Ready to migrate your blog to https?
You first need to set up an SSL certificate for your website’s domain, then install it on the server and update all permalinks to an https URL. But before you can do any of that, you must decide what type of SSL certificate is most appropriate for your needs. It’s definitely not a one-size-fits-all scenario. Types of SSL Certificate Types of SSL certificates can be classified by their validation level and the number of secured domains that they cover. While some bloggers only need to migrate a single landing page to https, most website owners have at least a couple of landing pages and subdomains, not to mention a separate URL for each of their blog entries. SSL certificates also vary by their validity periods. While most standard certificates are valid for one to two years before they must be renewed, longer-term advanced certificates are also available. 1. Domain Validation Certificate The least expensive of paid SSL certificates, domain validation is just as its name implies. A website owner must validate ownership of the domain using email or by adding a DNS record. It can be obtained in just a few minutes, and it’s ideal for those who aren’t supporting a larger organization and don’t need additional security. 2. Organization Validation Certificate The minimum required certification for e-commerce portals, this SSL certificate validates domain ownership and usually takes 2-3 days to activate. Because the validation is completed by the certificate authority, it’s more secure than a DV certificate. 3. Extended Validation Certificate Highly recommended for websites where transactions are performed, the certificate requires a strict authorization process that takes 7-10 days to complete. The certificate displays organizational information and offers a green HTTPS address bar that instills greater consumer confidence. Thus, EV certificates are most popular among banking, finance and e-commerce sites. 4. Single-Domain Certificate This certificate can only secure a single subdomain.
Therefore, with this SSL certificate, the hypothetical URL example.domain.com can be secured, but not the coinciding example2.domain.com. Likewise, the main domain of domain.com would not be secured, either. 5. Wildcard Certificate This SSL certificate type secures unlimited subdomains for a single domain. Therefore, not only can domain.com be protected, but also example.domain.com, example2.domain.com, example3.domain.com and so on. Additional divisions beyond the subdomain, however, are not included in the certificate’s protection. So, for example, test.example.domain.com would not be covered under a wildcard certificate. 6. Multi-Domain Certificate The all-encompassing SSL certificate will secure all variations on a domain and its subdomains. It is highly recommended for site owners who want to secure multiple domains and subdomains. 7. Unified Communications Certificate The UCC certificate can be thought of as the group-discount SSL certificate. It allows a customer to protect as many as 100 domains using the same certificate. These are specifically designed to secure Microsoft Exchange and Office communications environments. Registering a New SSL Certificate Once you’ve determined the type of SSL certificate you need to migrate your blog to https, it’s time to purchase and activate the certificate. Some hosting companies such as HubSpot and WordPress offer their own migration programs, but a host of certificate authorities issue SSL protection, including SSLs.com, Media Temple, Namecheap, GoDaddy and Comodo. Temok offers a variety of SSL certificates from many of the top issuers at as much as 70 percent less than vendor pricing. If you’re on a budget but also tech-savvy, you can acquire a free SSL certificate with Let’s Encrypt. Once you obtain your SSL certificate, it’s time to install it on the server. The exact process of doing so will vary depending on your hosting environment and server setup. Check your host for details.
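With the certificate installed, the next task is moving every internal reference over to https. As a minimal, hypothetical sketch of that search-and-replace step, the snippet below upgrades http:// links only for hosts known to have a valid certificate; the domain names and the MIGRATED set are invented for illustration, not drawn from any real site.

```python
# Sketch: rewrite http:// references to https:// for hosts known to
# have valid certificates. Domain names here are hypothetical examples.
import re

MIGRATED = {"example.com", "cdn.example.com"}

def upgrade_links(html: str) -> str:
    """Rewrite http:// URLs to https:// for migrated hosts only."""
    def repl(match):
        host = match.group(1)
        # Leave links to non-migrated hosts untouched to avoid
        # pointing at servers without a valid certificate.
        return ("https://" + host) if host in MIGRATED else match.group(0)
    return re.sub(r"http://([A-Za-z0-9.-]+)", repl, html)
```

For example, `upgrade_links('<img src="http://cdn.example.com/a.png">')` rewrites the image URL to https, while a link to a host outside the migrated set passes through unchanged. A real migration would also handle protocol-relative URLs and database content, as described next.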
After the SSL certificate has been installed, it’s time to update all content references. Remember, for a Google SERP rating, all URLs on a page must adhere to TLS protocol. The easiest way to update internal links and redirects is by employing search-and-replace in the database and HTML code. Ensure that all URLs for images, scripts and other content are also updated. After you’ve updated links, templates, images, tags and plugins, you’ll want to crawl the site and catch any URLs and tags that you might have missed. Searchengineland offers a detailed how-to guide that lists every possible script that might need updating. Fortunately, Google has also updated its Webmaster Tools to better accommodate https sites and their analytics. Be sure you track any SSL migrations within Google Tools and through appropriate analytics software. Renew an Existing Certificate SSL renewals don’t happen automatically. If you’ve received notice that your existing certification will soon expire, it’s a good idea to renew it ahead of time. Otherwise, you might have to repurchase it as a new certificate. The exact process will vary depending on the certificate authority. Namecheap, for example, offers these steps to renew an SSL certificate. It’s no longer any secret that encrypting website users’ information is paramount to success. And Google Chrome now also visually penalizes websites that have not migrated to https://. Keep the following tips in mind when activating an SSL certificate for your blog: Are you ready to migrate your blog to https? We hope this guide has helped!
Perfect for ages 7-10. In Charles River Editors’ History for Kids series, your children can learn about history’s most important people and events in an easy, entertaining, and educational way. The concise but comprehensive book will keep your kid’s attention all the way to the end. The most famous debates in American history were held over 150 years ago, and today they are remembered and celebrated mostly because they included future president Abraham Lincoln, one of the nation’s most revered men. But in the fall of 1858, Lincoln was just a one-term congressman who had to all but beg his US Senate opponent to debate him. That’s because his opponent, incumbent US senator Stephen Douglas, was one of the most famous national politicians of the era. Though Douglas is remembered today almost entirely for his association to Lincoln, in 1858 he was “The Little Giant” of American politics and a leader of the Democratic Party. In particular, it was Douglas who had championed the idea of “popular sovereignty”, advocating that the settlers of federal territory should vote on whether their state would become a free state or a slave state. When Congress created the territories of Kansas and Nebraska in 1854, it followed this model, and Douglas believed it was a moderate position that would hold the Union together. But many in the North considered popular sovereignty a deliberate attempt to circumvent the Missouri Compromise, which was supposed to have banned slavery in any state above the parallel 36°30′ north. As a result, the Lincoln-Douglas debates would be almost entirely about issues pertaining to slavery. Douglas would go on to win reelection in 1858, but Lincoln would win the war, literally and figuratively. In the presidential election of 1860, Lincoln would win the Republican nomination and the presidency. 1. Language: English. Narrator: Tracey Norman. Audio sample: http://samples.audible.de/bk/acx0/108628/bk_acx0_108628_sample.mp3. Digital audiobook in aax.
The Talbots are evacuating their home amidst a zombie apocalypse. Mankind is on the edge of extinction as a new dominant, mindless opponent scours the landscape in search of food, which just so happens to be noninfected humans. This book follows the journey of Michael Talbot; his wife Tracy; and their three kids, Nicole, Justin, and Travis. Accompanying them are Brendon, Nicole's fiancé and former Wal-Mart door greeter, and Tommy, who may be more than he seems. Together they struggle against a ruthless, relentless enemy that has singled them out above all others. The Talbots have escaped Little Turtle, but to what end? For, on the run, they find themselves encountering a far vaster evil than the one that has already beset them. As they travel across the war-torn countryside they soon learn that there are more than just zombies to be fearful of: With law and order a long-distant memory, some humans have decided to take any and all matters into their own hands. Can the Talbots come through unscathed, or will they suffer the fate of so many countless millions before them? It's not just brains versus brain-eaters anymore. And the stakes may be higher than merely life and death, with eternal souls on the line. 1. Language: English. Narrator: Sean Runnette. Audio sample: http://samples.audible.de/bk/tant/002450/bk_tant_002450_sample.mp3. Digital audiobook in aax. In Charles River Editors' History for Kids series, your children can learn about history's most important people and events in an easy, entertaining, and educational way. The concise but comprehensive book will keep your kid's attention all the way to the end. The American Revolution had no shortage of compelling characters with seemingly larger than life traits, including men like the multi-talented Benjamin Franklin, the wise Thomas Jefferson, the mercurial John Adams, and the stoic George Washington. 
But no Revolutionary leader has been as controversial as Samuel Adams, who has been widely portrayed over the last two centuries as America's most radical and fiery colonist. Among his contemporaries, Adams was viewed as one of the most influential colonial leaders, a man Thomas Jefferson himself labeled "truly the Man of the Revolution" and the one who the Boston Gazette eulogized as the "Father of the American Revolution." Adams was an outspoken opponent of British taxes in the 1760s, one of Boston's hardest working writers and orators, a leader of the Boston Caucus, active in the Sons of Liberty, and a political leader who organized large gatherings in settings like Faneuil Hall and the Old South Meeting House. When cousin John Adams was an Ambassador to France during the Revolution, he had to explain that he was not the "famous" Adams. 1. Language: English. Narrator: Tracey Norman. Audio sample: http://samples.audible.de/bk/acx0/096577/bk_acx0_096577_sample.mp3. Digital audiobook in aax. Eighty-two percent of parents believe myths about homeschooling. Keep reading to make sure you’re not one of them! Kent Larson understands the myths about homeschooling better than most because he believed them all. 12 Homeschool Myths Debunked presents his journey from opponent to advocate using compelling, fact-based answers to the most common homeschooling objections. 12 Homeschool Myths Debunked tackles the myths head-on including those relating to socialization, academic performance, and college admissions. Did you know one of the most extensive studies ever conducted of homeschooler standardized test scores found them to be 37% higher than those of average public school students? If you’re surprised by this result, you’ll want to hear our study details, followed by SAT results which are even more shocking! Does the topic of homeschooling trigger arguments with your spouse? Do you doubt your ability to manage it all? 
Do your friends or family members think homeschooling is a dangerous idea? 12 Homeschool Myths Debunked delivers compelling answers to the toughest questions. In this book, you will discover:
- The one myth that must be debunked, because it keeps people believing all the others
- Why homeschoolers are getting into many of the best colleges
- Reasons homeschoolers don’t become isolated
- How to keep unfounded biases, assumptions, and stereotypes from harming your kids
- Which schooling choice produces the best socialization
- Why people wrongly assume they don't have the patience or aptitude to homeschool
- The real reason some kids are socially inept, goofy, awkward, and nerdy
- Stacks of empirical evidence and relatable first-hand experiences
- Why men are resistant to homeschooling and how to change their mindset
Does homeschooling seem overwhelming? Initially, I assumed homeschooling would require a burdensome amount 1. Language: English. Narrator: Kent Larson. Audio sample: http://samples.audible.de/bk/acx0/129715/bk_acx0_129715_sample.mp3. Digital audiobook in aax. Micky Ward is THE FIGHTER that inspired the major motion picture. Welcome to Lowell, where anything can happen. Rocky Marciano fought at the Auditorium in 1947. Mike Tyson fought there in his Golden Gloves days. Sugar Ray Leonard won there, as did Marvin Hagler. Each of them prepared for his battle downstairs in the boiler room, just like thousands of other kids. “Irish” Micky Ward grew up in the 1970s and ’80s as a tough kid from Lowell, Massachusetts - a town where boxers were once bred as a means of survival. A hard worker who overcame bad luck, bad management, and chronic pain in his hands, he avoided the pitfall of poverty and dead-end work that plagued Lowell to become a Golden Gloves junior welterweight. Ward participated in street fights from an early age and was forever known by his opponents and spectators as the underdog. 
But with his incredible ability to suddenly drop an opponent late in a fight with his trademark left hook, he kept proving everyone wrong. After fifteen years of boxing, a string of defeats, and three years of retirement, Micky battled Arturo Gatti in 2002 in the battle that was later named “Fight of the Year” by Ring magazine and dubbed “Fight of the Century” by boxing writers and fans across the country. Ten rounds of brutal action ended with Micky winning by decision, and reviving enthusiasm for a sport that had been weighted down by years of showboating and corruption. ESPN and Boston television reporter Bob Halloran recounts Micky’s rise to hero status, his rivalry with his imprisoned brother, and the negotiations, betrayals, and drugs that ultimately shaped a wild youth into a nationally respected boxer. BOB HALLORAN is the weekend news and sports anchor at WCVB-TV in Boston. He is also a former ESPN anchor and columnist for ESPN.com. He has worked as a news and sports anchor in New England for over twenty years, and he writes a weekly column for Boston’s Metro newspaper. 1. Language: English. Narrator: Bronson Pinchot. Audio sample: http://samples.audible.de/bk/blak/004016/bk_blak_004016_sample.mp3. Digital audiobook in aax. Today is the day! Nick wakes up for his very first tennis tournament. It's important that he remembers all his supplies, like his tennis shoes, racket, and balls. He also has to have a great breakfast because tennis takes a lot of energy. Then, he sets off with his parents to New York City and the home of the famous US Open. Nick is nervous at first. Everything seems so big and bright, especially the court. He meets his opponent and is relieved to see that he is nervous too. The match begins, filled with exciting serves, returns, and scores. Through it all, Nick tries to have fun and not lose himself in the crazy competition, but he sure does want to win that trophy! 
My First Tennis Tournament intends to teach young readers the rules of tennis while also showing how much fun this amazing sport can be. There's even a glossary of important tennis terms in the back. So will Nick win his first tournament, or will he lose with grace? Either way, it's important for kids to learn good sportsmanship and respect for the competition. “TIGER QUEEN is a gorgeous, lush YA fiction…Highly recommend for anyone looking for a beautifully crafted stand-alone book.” (YA and Kids Book Central) Two doors. Two choices. Life or death. Kateri, an arrogant warrior princess, has to fight in the arena against her suitors to win her right to rule, and she is desperate to prove to her father that she is strong enough to take over his throne and rule the kingdom. But when she finds out who her final opponent is, she knows she cannot win. Kateri flees to the desert to train under the enemy she hates the most and the only one who might be able to give her a shot at winning. But what Kateri discovers in the desert twists her world, and her heart, upside down. There in the sand, away from the comforts of the palace, Kateri’s perception of her father is challenged and she discovers the truth about his treatment of her people. When she returns to the kingdom, the fate of the one she loves lies behind two doors in the arena: one door leads to happiness, and the other door releases the tiger. Secrets, suitors, thieves, and a fierce princess await readers in this YA fantasy re-telling. Tiger Queen: Is a fantasy re-telling of Frank Stockton’s famous short story, “The Lady, or the Tiger?” Features a slow-burn romance wrapped in fast-paced adventure Is set in a fantastical world wrought by fascism, classism, and climate crisis Reggie Miller on the New York Knicks: I'm telling you right now, I hate the Knicks. Absolutely hate those kids....Face it: The Knicks are dirty players. Let me take that back. 
They're not dirty players, but when things aren't going New York's way, they're going to do whatever it takes to win. And if that means hurting someone, then they'll do it. I'm not going to say that's dirty, but sometimes they take it to the extreme. On the mental side of the game: Everybody in the NBA knows how to play basketball or else they wouldn't be there. But what separates the good players from the great players is their mental capacity, not only to overcome their opponent, but to get through the tough spots...I always feel mentally stronger than any opponent I step on the same floor with. He might have more talent than I do, but I don't think anybody is mentally stronger than me. I'll match wills with anybody. On determination: On Cheryl Miller: 'Cheryl, I got 39.' 'Reggie, that's great.' 'Yeah, so how'd you do?' 'Uh, I got 105.' Thing was, Cheryl didn't say it to be mean. But, damn, 105 points in one game? But I got my revenge a few years later... We got out to the court and she said, 'Your ball.' I told her she could have it first. So she kind of crouched down, made her usual strong first move, got right past me and put up the shot. Cheryl paused for a moment and then said, in a real serious tone, 'We're going to play Horse.' This book contains specific, practical, and proven psychological techniques that you can use to know a person's thoughts and feelings at any time--often within minutes. Because the techniques can be applied instantly to any person in just about any situation, Dr. Lieberman has demonstrated their ease and accuracy on hundreds of television and radio programs. In a special report for FOX News, host Jeff Rosin declared, 'It's simply amazing! I was with him and he was never wrong . . . not even once. I even learned how to do it and that's saying something.' In fact, Dr. Lieberman has gone 'head-to-head' on live television with skilled polygraph examiners and scored just as well--every time. 
You Can Read Anyone shows step-by-step exactly how to tell what someone is thinking and feeling in real-life situations. And when the stakes are high--negotiations, interrogations, questions of abuse, theft, or fraud--knowing who is out for you, and who is out to get you (or a loved one), can save you time, money, energy, and heartache. The New York Times put it best. In a feature article they simply said, 'Don't lie to David Lieberman'. And now you too can learn the most important psychological tools governing human behavior and do more than just put the odds in your favor. Set up the game so that you can't lose. A peek at what you'll learn: THE ULTIMATE BLUFF BUSTER: How would you like to know if the guy sitting across the poker table from you really has a full house or just a pair of deuces? Or if your top executive is serious about quitting if he doesn't get a raise? Find out if your opponent is feeling good about his chances or just putting up a good front: the dead giveaway that a poker player is bluffing, and the sure-fire sign of a good hand; even pros give themselves away. IS THIS PERSON HIDING ANYTHING? Don't get the wool pulled over your eyes! The next time you have a 'sneaking' suspicion that someone may be 'up' to something, casually find out if anyone--kids, coworker, spouse, or friend--is keeping something from you. IS HE INTERESTED OR ARE YOU WASTING YOUR TIME? If you want to find out if your date likes you or not; if your co-worker is really interested in helping you with your project; or if your prospect is interested in your product, learn how to know, every time. WHOSE SIDE IS SHE REALLY ON? Is she out for you, or out to get you? If you think that someone may be sabotaging your efforts when she appears to be cooperating, find out whose side anyone is on, and fast. EMOTIONAL PROFILE: Learn the signs of emotional instability and potential for violence. 
From a blind date to the baby-sitter to a coworker, know what to look for, and what questions to ask, in order to protect yourself and your loved ones.
<urn:uuid:ad678204-78e7-42f9-83d1-78d93d256e6f>
CC-MAIN-2020-16
https://www.opponent.de/browse?q=Kids
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370521574.59/warc/CC-MAIN-20200404073139-20200404103139-00074.warc.gz
en
0.963376
3,546
2.890625
3
Beginning with the 17th century, Judaism, perhaps following the trend that had been shaking up Christianity, went through a theological revolution. The historical reasons for this revolution are debatable and beyond the scope of this project, but the results are not. Prior to this time, it is almost impossible to find serious Jewish scholarship that does not follow some path of traditional Judaism. It might be mystical, it might be philosophical, it might be Talmudic, it may even be straight out of the Bible, but it will almost invariably have some basis in what can be called traditional Judaism, or in more recent years, Orthodox. Though non-Orthodox Jews hate to acknowledge it, prior to about 1600, for all practical purposes Orthodox was the only game in town. This does not mean that everybody was observant. We have no records to show what percentage of Jews was keeping Halacha or studying Torah on a regular basis. For all we know, half the Jews in any given region were not the least bit observant. But, if that was the case, the rabbis, who have left the bulk of the written records, don’t make much mention of it. There were plenty of apostates, to both Christianity and Islam, some forced and others of their own volition. But as far as we know, only one Jew ever went down the lonely path of rejecting religion altogether and abandoning the belief in God. That exception was the celebrated case of the 1st and 2nd century rabbi, Elisha ben Abuya, a heretic with somewhat uncertain leanings. It is probably safe to say that a) he abandoned his belief in the Jewish God, and b) his case was extremely unusual. This all changed in the 17th century. For what may have been the first time, Jews were venturing down that forbidden road and enshrining their beliefs in a systematic form. The first bold steps involved abandoning belief in the personal version of God found in the Bible. 
A personal God has two primary characteristics: He has a personality (usually depicted as male), and He plays a direct interventionist part in the world, the Jewish nation, and the lives of individuals. The obvious direction to look as an alternative was pantheism. Pantheism, which for many Jews has a heretical ring to it, is a surprisingly tempting alternative to traditional Judaism. What is pantheism? In a nutshell, it equates God with everything that exists. Nature is God. God is nature. The universe and God are one and the same thing. Pantheism needn’t have a religious component, but neither is it necessarily antithetical to religion. What is so tempting about pantheism? For one thing, it eliminates, in one fell swoop, the question of how a benevolent God could let so much evil happen in the world. The answer, the pantheists say, is that God doesn’t let anything happen in the world. God is the world, the world is God, and **** happens. If God is the universe, nature, everything, then what room is there for the Bible, for miracles, for the Chosen People, for messianic prophecies, for God resting on the Sabbath, or a zillion other religious essentials? The pantheists have a simple answer to all this – there is no room for it. They don’t buy into any of that. If this is the case then obviously, they are not traditional Jews. So what are they doing in this survey of Jewish ideas about the meaning of life and the purpose of creation? First and foremost, it is because pantheism was an influential and widespread theology that attracted a good deal of Jews over the last few centuries. Second, it was the bastard stepchild of the unlikely union of two of the great medieval Jewish systems of thought – philosophy and mysticism. How is this so? Philosophy was the forerunner of the scientific outlook. That outlook needed a transition phase before adopting atheism as its calling card. Pantheism was the perfect transition phase. 
There were stages that this transition went through – pantheism, deism, agnosticism, atheism – but the bottom line is a general trend of belief in an increasingly remote God until that God eventually vanished. Pantheism was an attempt to bring that remote God into the world in a manner that did not conflict with rational thought. Mysticism, on the other hand, had the goal of making God as present as possible. Immanent is the word mystical theologians like to use for this nearness. Pantheism is the logical conclusion of mysticism. It says that God is near because God is simply nature and the natural world. What could be more mystical than the awareness that a tree is not just a tree but God in disguise? All those angels that drove the forces of nature to do whatever they did were really stages in the evolution of belief in God. Instead of angels, why not just call the whole thing God? If this is so grounded in Jewish belief then why is it borderline heresy, or even flat out heresy? It is because pantheism lacks the crucial element of a personal God. God is as impersonal as ‘It’ could possibly be, devoid of all anthropomorphic elements. God is just the unemotional forces of nature. There is no reason to worship such a God, since God really does nothing for us and does not sense our devotion. It is easy to see how such a belief ultimately evolved into atheism. What is not so obvious is the direct path from philosophy and mysticism into the transitional stage of pantheism. The first Jewish scholar of note to venture into these unholy waters was a Dutch descendant of conversos named Baruch Spinoza. He became one of the most influential philosophers of the modern era and one of the most infamous Jews. Among other things, his pantheistic and anti-Orthodox beliefs earned him the dubious reward of excommunication by the Sefaradic community of Amsterdam. 
This doesn’t seem to have had much direct effect on his personality or on his beliefs, other than perhaps furthering his determination to go against tradition. In spite of his excommunication, or more likely because of it, Spinoza has become one of the best known Jews of modern history. Following Spinoza, Jewish pantheism leads directly into the world of science. The progression from the philosophical world of Spinoza to the theoretical and experimental world of science was not rapid. Scientific development during the 17th, 18th and even 19th centuries was frustratingly slow compared to the 20th century. Within Jewish circles it was even slower. There are almost no well known Jewish scientists whose major work was done before the late 19th century. By around the year 1900 a virtual torrent of Jewish names began to take a major role in the scientific world. Heading this trend, of course, are the two towering figures of Sigmund Freud and Albert Einstein. Freud will be a primary player in our section on atheists, so we’ll leave him to later. Einstein represents the scientific conclusion to Spinoza’s pantheism. Einstein was a scientist first, a thinker second, and a Jew third. No matter how many anecdotes one hears or reads about his attachment to Judaism, at the end of the day it was pretty minimal. As far as religion is concerned it was not minimal, but negative. But this does not mean that he didn’t have a spiritual side. Nor does it mean that he didn’t believe in some concept of a higher being. On the contrary, there are a considerable number of quotes that put that entire question to rest. He wasn’t the least bit religious, at least in a formal ‘organized religion’ manner. But he was a theist – a believer in some version of God. He was a pantheist, whose God was embodied in the equations that revealed the hidden unity underlying all things. The transitional position of pantheism was doomed from the start, as almost all transitional positions are. 
It was only a matter of time before the pantheists themselves would go the route of the atheists. If God was no longer a personal Redeemer, and was nothing more than a creator who acted once and then let things run on their own accord by becoming submerged in them, then it was only a matter of time before God was dropped altogether. If God is nature, then why not just call it nature and leave out the God part? This inevitable step led most Jewish (and non-Jewish) scientists to abandon any semblance of belief in God. It led most Jewish writers, artists, politicians, even many rabbis, down this same path. Though Jews were at the forefront of many fields in the 20th century, including science, the arts, social activism, environmentalism, business, and technology, only rarely did God play any role in their activities, and even more rarely were they religious. What could keep a pantheist holding on to a losing proposition such as belief in some version of God? There is one answer to this question. Those who held on did so because they sensed that letting go was admitting that there is no ultimate purpose to life. It was this final step that they could not take. Meaning in life was what kept them from abandoning ship and joining the atheists. This anchor was epitomized in the work of a psychologist who may not have agreed to being cast in with the pantheists. His name was Viktor Frankl, and he is known for his groundbreaking study on the psychological effects of the drive for meaning, which he called logotherapy. His best-known book, called “Man’s Search for Meaning”, recalls his experiences in concentration camps and the powerful force exerted on the will by seeking meaning amidst the horror. Frankl, though not religious, could not agree with his predecessor Sigmund Freud that we are nothing but glorified monkeys with human egos and ids. Ultimately, his God was lodged deep in the unconscious mind, but exists nevertheless in some sense. 
Technically, this is not pantheism. God is the source of meaning, an idea that perhaps overlaps with pantheism. This route is not a religious path, at least not in the conventional sense. But it can be intensely spiritual, possibly even more so than formal religion with its tendency to rote practice. It was, and still is, an alternative to the various religious avenues that sprang up as Judaism encountered the modern world of the last few centuries. It offers meaning and purpose to those who seek it, while leaving those seekers unencumbered by the bonds of tradition. Does it actually work? Does it really provide meaningful answers to the ultimate questions? Or is it a half-way point for those who cannot believe in the old but are unable to jettison it entirely? If God is everywhere then is God really nowhere? There may be no solid answers to these questions, but the questions themselves are intensely meaningful.
<urn:uuid:b24f8251-24e5-4b07-9095-ffac23f5a821>
CC-MAIN-2020-16
http://fourquestionsofjudaism.com/5760744339537920
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371896913.98/warc/CC-MAIN-20200410110538-20200410141038-00314.warc.gz
en
0.977884
2,385
3.015625
3
IMPLICATION OF CLIMATE CHANGE: One of the implications of reducing carbon dioxide emissions to the atmosphere is that much of the transportation energy and comfort heat presently provided by combustion of fossil fuels must instead be provided via electricity. It is reasonably anticipated that in Ontario the required amount of delivered electrical energy per capita per annum will increase at least fivefold. Delivering this energy through the electricity transmission/distribution system will require both increasing the transmission/distribution system size and increasing the effectiveness of transmission/distribution system utilization. The electricity rate payers of Ontario are facing a potentially enormous increase in both generation and transmission/distribution costs. CAPACITY FACTOR DEFINITION: Generator capacity factor CF is defined by: CF = (average output power) / (maximum output power) Capacity factor is calculated monthly to minimize the financial impact of occasional random equipment shutdowns that affect the power transferred from the generator to the grid. BENEFITS OF A HIGH GENERATION CAPACITY FACTOR: Transmission/distribution costs are mitigated by increasing the generation capacity factor. A high capacity factor generator is more reliable and uses transmission much more efficiently than a low capacity factor generator. Hence the value per kWh of electricity from a high capacity factor generator is much higher than the value per kWh of electricity from a low capacity factor generator, both because the ability to supply power-on-demand is higher and because the transmission cost per kWh-km is lower. The function of compensating generators based on capacity factor is to financially reward parties that maintain high generator capacity factors. 
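The monthly calculation just defined can be sketched in a few lines of code. This is an illustrative example only; the `capacity_factor` helper, the hourly readings, and the 100 MW rating are hypothetical and not from the article.

```python
# Illustrative sketch (names and data hypothetical): compute a billing-period
# capacity factor from hourly power readings, using the definition
# CF = (average output power) / (maximum output power),
# with the generator's nameplate rating as the maximum output power.

def capacity_factor(hourly_output_mw, rated_mw):
    """Capacity factor for one billing period, given hourly outputs in MW."""
    if rated_mw <= 0 or not hourly_output_mw:
        raise ValueError("need a positive rating and at least one reading")
    average_mw = sum(hourly_output_mw) / len(hourly_output_mw)
    return average_mw / rated_mw

# A 100 MW unit averaging 30 MW over the period has CF = 0.3.
print(capacity_factor([20, 40, 30, 30], 100))  # 0.3
```

Computing CF per month, rather than per year, keeps a single random outage from dominating the result, as the article notes.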
Capacity factors operate by causing the average generator revenue per kWh from net generated energy to increase as the generator capacity factor increases. Incorporation of capacity factor into generation compensation rates encourages wind generators to build energy storage behind their meters to reduce variations in the rate of power transfer from the generator to the grid. INCENTING GENERATOR BEHIND-THE-METER ENERGY STORAGE: Capacity factor is used to increase the average per kWh compensation rate for high capacity factor generators as compared to low capacity factor generators. The purpose of capacity factor based generator compensation is to cause an increase in the generator's capacity factor by financially enabling behind-the-meter energy storage. The capacity factor incentive should encourage distributed generators to level their outputs. Capacity factor measurements can be used to reward efficient grid utilization by generators. The capacity factor increases generator compensation if the pattern of electricity generation improves the efficiency of use of the transmission/distribution system. Capacity factor based generator compensation is applicable to generators that are not dispatched by the Independent Electricity System Operator (IESO). The present renewable generator compensation rate structure does not convey the appropriate signal as to the equipment and operational changes that generators should adopt to reduce both their own costs and overall electricity system costs. The message that should be communicated via the generation compensation rate structure is that generators not subject to IESO dispatch should operate at high capacity factors. Usually behind-the-meter energy storage is required to increase wind and solar generator capacity factor. GENERATOR CAPACITY FACTOR: The IESO purchases generation capacity. When the IESO requests that a generator run, in theory the generator should run at its rated capacity. 
However, due to energy supply and maintenance issues, when commanded to run a generator will in general produce only a fraction of its rated output capacity. The fraction of the generator's peak rated output that is available in a billing period to immediately meet requests for power-on-demand is the generator capacity factor for that billing period, CF. If there is a large fleet of statistically independent generators then the capacity factor for the fleet is given by: CF = (average fleet power output) / (peak fleet rated power output) In this case, due to statistical independence, if one generator is not performing there is a high probability that the other generators are performing, so that available power on demand remains high at all times. However, for real renewable generators there is little statistical independence. For example, none of the solar panels produce power at night. When wind is low in one part of the province it is frequently low in other parts of the province. In the spring there is lots of run-of-river generation whereas in the fall there is little run-of-river generation. The problem with a fleet of statistically dependent renewable generation is that the minimum fleet output is much less than the average power output. Hence for renewable generation the CF in a particular billing period is given by: CF = (minimum fleet output power) / (maximum fleet output power). For renewable energy the difference between (average fleet output power) and (minimum fleet output power) is power that is only saleable via an Interruptible Electricity Service (IES). The market value of IES energy is typically only a small fraction of the market value of Firm Electricity Service (FES) supplied energy. Today most dispatched generators are primarily compensated for capacity instead of for energy. The payments that dispatched generators receive net of fuel costs are nearly constant and are nearly independent of the amount of electricity actually generated. 
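The two fleet formulas above (average over peak for statistically independent units, minimum over maximum for a correlated renewable fleet) can be contrasted in a short sketch. The function names, the hourly fleet totals, and the 250 MW fleet rating are hypothetical, invented for illustration.

```python
# Sketch of the two fleet capacity-factor formulas described in the text.
# All numbers are hypothetical.

def fleet_cf_independent(fleet_output_mw, fleet_rated_mw):
    # Statistically independent units: firm capacity tracks the average output.
    return (sum(fleet_output_mw) / len(fleet_output_mw)) / fleet_rated_mw

def fleet_cf_correlated(fleet_output_mw):
    # Correlated renewables (e.g. wind): only the minimum fleet output is firm.
    return min(fleet_output_mw) / max(fleet_output_mw)

wind_fleet_mw = [10, 80, 200, 40]  # hypothetical hourly totals for a wind fleet
print(fleet_cf_independent(wind_fleet_mw, 250))  # 0.33
print(fleet_cf_correlated(wind_fleet_mw))        # 0.05
```

The gap between the two numbers represents energy that, per the article, is saleable only as low-value interruptible (IES) supply rather than firm (FES) supply.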
For renewable generation the amount of electricity actually generated depends on the available IES load. The free market value of IES electricity is much less than the present wind generator compensation rate. RENEWABLE GENERATION COMPENSATION ISSUES: The first reality is that the outputs of renewable generators that are geographically close to each other and hence share the same transmission/distribution line are highly correlated. There is no statistical independence. Hence their transmission/distribution usage is proportional to the sum of the generator peak plate ratings, not the sum of the generator average outputs. This issue alone causes wind generation to use about three times as much transmission/distribution capacity per kWh per km as does a nuclear generator. The second reality is that even renewable generators that are geographically far apart in Ontario are not statistically independent. On average total wind generation in the summer is only half of total wind generation in the winter. Similarly run-of-river generation is consistently much greater in the spring than in the fall. Hence renewable generators require balancing generation the cost of which is not reflected in the existing rate model except through the global adjustment. The third reality is that renewable generation is generally located where renewable energy is readily available, which is usually geographically remote from major urban load centers. In Ontario the average transmission distance for a wind generated kWh is about four times the average transmission distance for a nuclear generated kWh. The combination of these factors causes the cost per kWh for transmitting wind energy to be about 12 times the cost per kWh of transmitting nuclear energy. The lack of energy storage causes generation constraint at off-peak times which has the effect of approximately doubling the cost of wind energy generation. 
Due to the combination of these generation and transmission cost multipliers wind energy is almost always sold to load consumers at a price far below its combined cost of generation, balancing and transmission. It is crucial that Ontario adopt a new generation compensation rate which has the effect of confining development of renewable generation to circumstances that make economic sense for electricity rate payers. For example, total wind generation connected to a distribution system should not exceed the load on that distribution system, so that the distribution connected wind generation does not impact transmission. Similarly direct connection of wind generation to transmission should only be permitted in circumstances where there is a comparable nearby dispatchable load, so that the cost impact of the wind generation on transmission can be minimized. At present the IESO attempts to address some of these issues through a very complex set of rules and regulations that are expensive to administer and are difficult to enforce. Some generators game the system. It would be much more efficient to use a generator compensation rate which causes generators to make choices that lead to economies for all rate payers. Such a compensation rate would value non-fossil generation based on its value to end users. The amount paid to a generator per kWh would diminish as that generator's capacity factor decreases. This generator compensation rate would only give renewable generation preference over nuclear generation in circumstances where the choice of renewable generation leads to a net cost saving for electricity rate payers. Reliable nuclear electricity generation is essential for meeting the grid load at times when the wind does not blow and the sun does not shine. There is no merit in wind generation or solar generation that makes essential nuclear generation less economic. 
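One way to make compensation "diminish as that generator's capacity factor decreases" is to scale a base energy rate by the billing-period CF. The linear weighting and the base rate below are illustrative assumptions only, not the author's actual proposed formula:

```python
# Hypothetical capacity-factor-weighted compensation (illustrative only).
# base_rate_per_kwh and the linear CF weighting are assumptions.

def compensation(energy_kwh, capacity_factor, base_rate_per_kwh=0.10):
    """Pay less per kWh as the billing-period capacity factor falls."""
    if not 0.0 <= capacity_factor <= 1.0:
        raise ValueError("capacity factor must lie in [0, 1]")
    return energy_kwh * base_rate_per_kwh * capacity_factor

# Same energy delivered, different capacity factors:
print(round(compensation(1000.0, 0.90), 2))  # high-CF unit: 90.0
print(round(compensation(1000.0, 0.25), 2))  # low-CF unit:  25.0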
One of the issues with generator metering is that many generators have significant parasitic loads that continue even when the generator is not producing electricity. The net power output of a generator is given by:

[(Erb - Eib) - (Era - Eia)] / (Tb - Ta)

where:
Tb = value of T at time b; Ta = value of T at time a; Tb > Ta;
Erb = cumulative energy that has flowed from the generator to the grid at time T = Tb;
Era = cumulative energy that has flowed from the generator to the grid at time T = Ta;
Eib = cumulative energy that has flowed from the grid to the generator at time T = Tb;
Eia = cumulative energy that has flowed from the grid to the generator at time T = Ta.

When the generator is not producing net power: (Eib - Eia) > (Erb - Era)
When the generator is producing net power: (Erb - Era) > (Eib - Eia)

There is a major problem with the electricity rate structure in Ontario because at present operating generators are not charged for transmission/distribution, and the generator compensation rate does not reflect generator capacity factor or generator power factor. Artificially removing generator obligations to meet transmission/distribution costs has caused price distortions throughout the electricity system. It is essential that generators pay their share of transmission/distribution costs, so that each generator becomes responsible for its capacity factor and power factor. When Ontario Hydro was the dominant generator, Ontario Hydro looked after generator power factor issues because it also had responsibility for transmission/distribution costs. However, now that Hydro One, which is responsible for transmission, is separate from Ontario Power Generation and from numerous other small and large independent generators, many of which are not under dispatch control, it is essential that every generator take financial responsibility for maximizing its power factor in order to limit total system-wide transmission/distribution costs.
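The net-power formula above translates directly into code. This is a sketch; the parameter names simply mirror the symbols defined in the text, and the sample readings are invented for illustration:

```python
# Net power from directional cumulative meter readings:
# net power = [(Erb - Eib) - (Era - Eia)] / (Tb - Ta)

def net_power_kw(erb, era, eib, eia, tb_h, ta_h):
    """Average net power (kW) over the interval [Ta, Tb].

    erb, era: cumulative kWh delivered to the grid at times Tb, Ta.
    eib, eia: cumulative kWh drawn from the grid at times Tb, Ta.
    tb_h, ta_h: times in hours, with tb_h > ta_h.
    """
    if tb_h <= ta_h:
        raise ValueError("Tb must be later than Ta")
    return ((erb - eib) - (era - eia)) / (tb_h - ta_h)

# Generating interval: 120 kWh exported, 5 kWh imported, over 2 h.
print(net_power_kw(erb=1120.0, era=1000.0, eib=55.0, eia=50.0, tb_h=2.0, ta_h=0.0))   # 57.5

# Idle interval with a parasitic load: imports exceed exports.
print(net_power_kw(erb=1000.0, era=1000.0, eib=60.0, eia=50.0, tb_h=2.0, ta_h=0.0))  # -5.0
```

A negative result corresponds to the "not producing net power" case, where (Eib - Eia) > (Erb - Era).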
Thus, the generator compensation rate must be capacity factor and power factor dependent. In order for generator compensation to be properly capacity factor and power factor dependent, the transmission and distribution rates must be the same for generators as for loads, so that generators pay their share of transmission/distribution costs. At present generation and transmission/distribution are performed by independent entities. Developers of new generation have no effective means of obtaining the transmission that they need, when and where they need it, because they lack the cash flow with which to influence transmission/distribution planning and construction decisions. This problem has led to serious delays in electricity system expansion. There has been no attempt to address this problem under the Green Energy Act.

DISTRIBUTED GENERATION METERING:

A problem that is particularly serious in many distributed generation systems is parasitic losses. Many distributed generation systems involve devices such as pumps, fans, transformers, etc. that cause continuous parasitic energy losses. A distributed generation system operating at 100% of its rated output capacity may be 90% efficient at converting shaft mechanical energy into electrical energy. However, at 33% of its rated output capacity, with the same parasitic losses, the same system is only 70% efficient. If the generator runs only 50% of the time at 33% of rated capacity, but the parasitic losses continue 100% of the time, the system efficiency falls still further. Under some electricity rate structures the value per kWh of received energy is about twice the value per kWh of transmitted energy. Hence a distributed generator operating at a low capacity factor can actually cause negative electricity cost savings. In these circumstances accurate directional electricity metering is of paramount importance.
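As a rough check on the parasitic-loss arithmetic, here is a minimal model (not from the source) under the assumption that conversion is otherwise near-ideal and the parasitic load is a constant 10% of rated output, values chosen to reproduce the 90% and 70% figures in the text:

```python
# Illustrative parasitic-loss model. The 10% constant drain and the
# near-ideal conversion are assumptions fitted to the text's figures.

PARASITIC_FRACTION = 0.10  # constant drain, as a fraction of rated output

def efficiency(load_fraction, duty_cycle=1.0):
    """Average electrical efficiency with an always-on parasitic load.

    load_fraction: output level while running, as a fraction of rating.
    duty_cycle: fraction of the time the generator actually runs.
    """
    avg_input = duty_cycle * load_fraction       # average shaft energy in
    avg_output = avg_input - PARASITIC_FRACTION  # parasitic runs 100% of the time
    return avg_output / avg_input

print(round(efficiency(1.0), 2))         # 0.9  at full rated output
print(round(efficiency(1 / 3), 2))       # 0.7  at one-third output
print(round(efficiency(1 / 3, 0.5), 2))  # far lower with a 50% duty cycle
```

The third case shows how a low capacity factor, combined with an always-on parasitic load, collapses the average efficiency.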
FEATURES OF CAPACITY FACTOR WEIGHTED ELECTRICITY RATES IN COMBINATION WITH DIRECTIONAL kWh METERING:

1. Capacity Factor weighted electricity rates can be applied to non-dispatched generators, non-dispatched loads and distribution connections of all sizes for fair allocation of generation and transmission/distribution costs. The required input data is obtained from direction-sensitive interval kWh meters.
2. Capacity Factor weighted generator compensation allocates more revenue per kWh to high capacity factor generators than to low capacity factor generators.
3. The use of Capacity Factor weighted generation compensation rates allows simple meter reading and account administration. Generator bills can easily be settled to the nearest metering interval.
4. Capacity Factor weighted generation compensation rates mitigate the cost effect of brief generation peaks and valleys but capture the value of prolonged generation peaks and valleys.
5. Use of Capacity Factor weighted generator compensation rates would have the overall effect of encouraging more energy storage and load management.
6. Capacity Factor weighted generator compensation in combination with data from directional kWh meters should encourage high power factor and low harmonic content.
7. Capacity Factor weighted electricity rates encourage high generator capacity factor.

GENERAL BENEFITS OF CAPACITY FACTOR WEIGHTED GENERATOR COMPENSATION IN COMBINATION WITH DIRECTIONAL kWh METERING:

1. Capacity Factor based generator compensation is fair. All non-dispatched non-fossil generators are subject to the same compensation formula.
2. Use of Capacity Factor weighted generator compensation would encourage wind generators to build energy storage at or near the generator site to reduce variations in net power output.
3. Capacity Factor weighted generator compensation is fair to behind-the-meter energy storage because it mitigates the cost effect of short equipment shutdowns for maintenance or repair.
4. Capacity Factor weighted generator compensation is applicable to non-dispatched generators of all sizes.
5. If a generator presents a constant resistive impedance to the grid, then the calculated daily net energy supplied is the same as the energy in kWh sensed by a kWh meter.
6. If a generator presents a reactive impedance or harmonic distortion to the grid, then via power factor measurement that generator is allocated less compensation.
7. If a generator presents a low capacity factor to the grid, that generator will receive less compensation per kWh supplied than a generator that presents a high capacity factor to the grid.
8. The Capacity Factor weighted electricity charges are calculated from directional interval kWh values from interval meter data.
9. Directional kWh meters are able to respond to voltage and current harmonics up to at least the 30th harmonic of the power line frequency. Generally power transformers effectively absorb and filter out higher-frequency harmonics.
10. The use of Capacity Factor weighted generator compensation in combination with interval kWh metering should encourage installation of behind-the-meter energy storage to minimize swings in the power transfer rate to and from the grid.
11. The use of Capacity Factor weighted generator compensation in combination with directional interval kWh metering allows transmission/distribution entities to fairly recover their costs.
12. The use of Capacity Factor weighted generator compensation in combination with interval kWh metering strongly encourages proper use of energy storage while eliminating power instability problems that are caused by Time-Of-Use metering.
13. A further benefit of Capacity Factor weighted generator compensation is that the metering system is tolerant of loss of time synchronization between the meters and the central computer system.
Hence the data traffic can be vastly reduced, which lowers the metering system operating cost and makes the metering system extremely resistant to computer hacking. This web page last updated November 26, 2016.
Baking successfully with whole-grain flours requires putting them at the center of each recipe, rather than thinking of them as add-ons, and Tabitha Alterman shows you how to do just that in Whole Grain Baking Made Easy (Voyageur Press, 2014). From mainstays, such as wheat and rye, to less-common choices, such as amaranth and teff, learn how to craft more than 50 mouthwatering recipes by following the simple instructions and beautiful full-color photography throughout this guide. The following excerpt is from chapter 2, “Grain Mill Buyer’s Guide.” You can purchase this book from the MOTHER EARTH NEWS store: Whole Grain Baking Made Easy. Why Make Homemade Flour? Here are a few of the great reasons to try your hand at homemade flour: Flavor. Freshness, which can be equated with both flavor and nutrition, is the No. 1 reason to mill flour. The moment after grains become flour is the moment of the flour’s maximum potential flavor, after which oxygen goes to work scavenging flavor molecules and degrading fatty acids. Some types of fresh flour, including buckwheat, corn, oats, and rye, are even more susceptible than wheat to fast degradation. This is no different than what happens to coffee beans once they’re ground. Many coffee aficionados wouldn’t think of brewing coffee with beans ground a week or more ago. Variety. With more than thirty thousand varieties of wheat in existence, you’d think options for nutritious flours would be numerous. Sadly, this is not the case. Research conducted by Dr. Donald R. Davis, a former nutrition scientist at the University of Texas, demonstrates how wheat has declined nutritionally over the last 50 years as farms have become more industrial. “Beginning about 1960,” Davis told me, “modern production methods have gradually increased wheat yields by about threefold. Unfortunately, this famous Green Revolution is accompanied by an almost unknown side effect of decreasing mineral concentrations in wheat. 
Dilution effects in the range of 20 percent to 50 percent have been documented in modern wheats for magnesium, zinc, copper, iron, selenium, phosphorus, and sulfur, and they probably apply to other minerals as well.” In addition, some of today’s varieties have only half as much protein, and there is evidence that old wheat varieties often have substantially higher amounts of valuable phytochemicals. A few intrepid artisan companies are bucking the trend. Wheat Montana Farms, for example, is one of the few companies where you can buy wheat directly from the farmers who grow it. Their two special varieties, Prairie Gold and Bronze Chief, were selected for superior protein content. At Bluebird Grain Farms in Washington state, the nutritious heirloom wheats einkorn and emmer are grown sustainably and are milled to order or sold whole. Similarly, Bob’s Red Mill, King Arthur Flour, Pleasant Hill Grain, and Urban Homemaker, among others, sell high-quality, whole, unmilled grains for home grinding. If you have a great bakery near you, find out where they buy their flour—many small mills will accommodate special orders. If you notice that a farmers market stand offers fresh flour, ask if you can buy some grains to grind at home. If you want to bake with a variety of grains beyond wheat, sometimes the easiest way to get these flours is to grind your own. Buckwheat tastes nothing like wheat. This is an advantage, not a disadvantage. I use buckwheat when I want its earthy flavor. I use fresh cornmeal when I want sweet corn flavor. I use oat flour when I want tenderness, and flour made from toasted quinoa when I want extra nuttiness. Variety is the key. Control. Not only are many of today’s flours likely inferior to their predecessors, but they can also be inconsistent from one brand to another. For most bread making, high-protein hard wheat is ideal. Lower-protein soft wheat flours are better for pastries. 
Maybe you like the taste of white wheat better than red, or perhaps whatever you’re baking could use a little extra sweetness from sorghum or a bold accent from teff. By milling your own flour, you have control over all this. You can custom-blend exactly the mix you need, without buying several different bags of flour that you’ll then have to find room for in your freezer. Home milling affords control over texture, too. With a good grain mill you can turn any grain into a fine, medium, or coarse flour to suit your needs. Cost. Whole grains are less expensive than the flours which are made from them. Depending on the price you pay for unmilled grains, you can easily make homemade loaves of bread for less than a buck. However, don’t expect to be able to offset the purchase of a mill with grocery savings unless you plan to replace a great deal of store-bought goods with homemade versions. Convenience. When did people decide shelf life was the prime virtue? I don’t choose ripe tomatoes or fresh fish based on the fact that these items will last forever in my kitchen. Yet we’ve been trained to think flour should last forever, when it really shouldn’t. Unmilled grains, on the other hand, can easily last 20 or 30 years, or possibly forever. Meanwhile, they won’t take up prime real estate in your fridge or freezer. Choosing a Grain Mill There are a few different machines that can make whole-grain flour. Which one you need depends on how often you’ll use it, how easy you want it to be, whether or not you need the machine to perform other tasks, and how much money you’re comfortable spending. If you’re serious about putting the best food on your table, any of these is a smart investment. Some of these well-made machines may even stick around for your lifetime plus perhaps your children’s. Multipurpose Small Appliances The following appliances serve double- or triple-duty at least. 
These aren’t the ideal grain grinders if you’ll be making flour or cornmeal a heckuvalot, but they offer a nice compromise if it’s something you’ll do occasionally. A coffee grinder is good enough to make flour from some items, such as soft grains, seeds, and flakes. Sift anything ground in a coffee grinder through a fine sieve to remove chunky pieces. If you can grind it in a coffee grinder, you can grind it in a food processor. The blade technology is similar, but the capacity is larger. I’ve had great success using the KitchenAid 13-cup model. Food processors can be used for an amazing array of other tasks too. BlendTec and Vitamix both make powerful blenders that grind an impressive variety of items, even hard grains. Do not assume that another blender can handle this task, unless it has been explicitly rated to do so. A good blender can do many of the same things a food processor does. If you’ll use one a lot—for example, to make smoothies, soups, sauces, nut butters, and flour—you won’t mind coughing up the $400 or so. The BlendTec machine can grind nearly anything. The Vitamix can too, but it comes with separate pitchers for wet and dry ingredients, making it pricier. KitchenAid makes a grain-grinding attachment to fit their stand mixers. These are good for small batches, but be sure to give the motor time to cool between batches to prevent overheating flour. Stand mixers range in price from $350 to $650, and the grain-grinding attachment is $150, so this is no small investment. Like a food processor, however, a stand mixer has a number of useful applications, and they can last a lifetime. My mother-in-law has had her hard-working KitchenAid since the 1960s. Dedicated Grain Mills There are many types of grain mills on the market, ranging in price from $70 to more than a grand. Google “grain mills” or search for them on Amazon to begin comparing models. 
Some grain mills are hand-operated, but don’t think about getting one unless you seriously believe you will use it. It’s possible to enjoy the manual labor, but if you know you’re not that kind of person, it’ll be a waste of money. Grain mills are also classified based on how they crush grain: burr or impact plates. In a burr mill, grains are crushed between two plates into various degrees of coarseness. If you’ve heard of stone-ground flour or cornmeal, it was produced in a burr mill in which the plates were made of real stone. Most burr mills today have composite or metal plates. Real stone mills are prohibitively expensive, plus they require more maintenance over time. They also sometimes have trouble with especially hard items like dry beans or popcorn. Most composite stones can handle these materials. Some mills with metal plates can handle even oily nuts for making nut butter. Burr mills grind slightly more slowly than impact mills, usually just enough to prevent an undesirable amount of heat from ruining the nutrients and gluten in your flour. Durable, well-made, electric burr mills include Family Grain Mill ($280), KoMo/Wolfgang ($440 to $600), and Golden Grain Grinder ($600). High-quality hand-crank burr mills include Victoria (formerly called Corona, $70); Back to Basics ($80); Family Grain Mill ($150); Schnitzer Country Mill ($350); Country Living Grain Mill ($430); GrainMaker Grain Mill No. 99 ($675) and No. 116 ($1,200); and the wildly popular and well-made Diamant, which has been rated by Lehmans.com as the finest grain mill available today. Many of these are convertible to electric power with separate attachments (not included in these prices), and also offer flywheel attachments to make manual grinding easier. In an impact mill, two interlocking cylinders spin within one another while grains pass through. These don’t always make the finest flour. On the other hand, they are inexpensive compared to burr mills. 
The most popular electric impact mills include K-Tec Kitchen Mill ($180), GrainMaster Wonder Mill ($270), and Nutrimill ($290). Technique: How to Make Homemade Flour Before milling any grain, make sure it’s dry and mold free. Pick out any rocks and pieces of chaff. If using a coffee grinder or food processor, grind small batches. Let the machine cool between batches. Sift flour through a fine sieve to remove any chunks that made it through largely unscathed. If a good deal of the resulting flour is coarse, sift the finer flour out and return the chunky portion to the machine to grind again. If using a grain blender or stand mixer with grain mill attachment, follow your machine’s instructions to select the appropriate settings. If using a dedicated grain mill, select coarseness and pour grain into the hopper while the machine is running. Or, with a manually operated machine, select coarseness, add grain, and start cranking. Let your mill cool down between batches to prevent overheating flour. Keep your mill free of flour buildup by following its instructions for cleaning. With my KoMo mill, all I do is occasionally dust it with a stiff, little brush. Never grind items that your mill is not meant for. Some mills cannot grind oily items, such as corn and soybeans. After grinding oily items in mills that have been made to handle it, it’s a good idea to pass a handful of wheat berries through afterward to pick up and remove residual oil. More from Whole Grain Baking Made Easy: Reprinted with permission from Whole Grain Baking Made Easy: Craft Delicious, Healthful Breads, Pastries, Desserts, and More by Tabitha Alterman and published by Voyageur Press, 2014. You can buy this book from the MOTHER EARTH NEWS store: Whole Grain Baking Made Easy.
Short summary - The Thirty-First of June, by John Boynton Priestley. The action alternates between the fairy-tale kingdom of Perador in the 12th century and present-day London, on the lunar day of June 31.

Kingdom of Perador

In the tiny kingdom of Perador, one of King Arthur's vassal possessions, King Meliot rules. Life goes on peacefully and quietly. The king's only daughter and heiress, Princess Melicent, and her two maids of honor, Ninet and Alison, spend whole days embroidering or in the company of the court musician Lamison. The girls are tired of their monotonous life and long for entertainment. The wizard Malgrim has lent Melicent a magic mirror that shows whoever is thinking of its owner. In it the princess saw an unusual man named Sam. The mysterious stranger intrigued her, and all her thoughts are now of him. Since Sam is not in Perador, the princess sends the court dwarf Grumet to Malgrim so that the wizard will help her meet Sam. Hearing of this, the king declares his daughter ill, Malgrim a charlatan, and the dwarf a drunkard. He summons the court physician Jarvey, tells his daughter to take the medicines the doctor prescribes and to embroider with her maids of honor, and himself departs for a conference at Camelot, King Arthur's residence.

London

At an advertising agency, the staff are working on a sketch for a stocking advertisement. The artist Sam Penton has painted a portrait of Princess Melicent of Perador for the campaign. While working, Sam became so captivated by the princess that she has become the girl of his dreams. Sam believes a dwarf is watching him; his colleagues are surprised, since none of them has seen any dwarf. Suddenly Sam announces that today is June 31st. His friends call in a doctor, who looks just like the physician from Perador. The doctor says Sam is hallucinating and prescribes him a medicine. Meanwhile, before everyone's eyes, the dwarf steals the painting and disappears.
Kingdom of Perador

Princess Melicent pours out her soul to Ninet. The mysterious stranger Sam has sunk deep into her heart, but there is still no news from the dwarf. Malgrim offers the princess an exchange: the wizard Merlin once gave her father a golden brooch, and if Melicent gives it to Malgrim, he will help her reach Sam. While Melicent considers what to do, Malgrim's uncle, a little old man named Marlagram, appears. He promises the princess his help and challenges his nephew to a contest. Malgrim conspires with the envious Ninet to stop Sam and Melicent from meeting.

London

Sam goes to relax at the Black Horse bar. There he meets skipper Plunket. The dwarf also comes to the bar, visible to no one but Sam. Suddenly an imposing man appears, introducing himself to Sam as the illusionist Malgrim. Malgrim agrees with the artist that today is indeed June 31, and that he too can see the dwarf. He insists that Princess Melicent is a real person, but that she lives in the 12th century in the kingdom of Perador, and he offers to help Sam meet her. Taking along the drunken Plunket, who is sure that Perador is a bar, the men leave through the wall. The chief of the advertising agency, Dimmock, is at a loss: the sketch is due to the customer, but the painting has disappeared and so has the artist. The office is being renovated, so there is a terrible noise of drills, and then a big rat runs out of the closet. Next a very beautiful girl in a medieval costume appears in the office. She claims that she is Princess Melicent, in love with Sam, and that the rat is the wizard Malgrim. Dimmock decides that Melicent is the artist's latest model, infatuated with him, and sends her to a film screening. On top of all these troubles, something squeaks in the closet. In his anger the chief looks inside and disappears. The secretary Peggy, frightened for her boss, disappears after him, and the visiting employee Anne disappears as well. Called earlier by Peggy, Dr.
Jarvis opens the closet and sees only shelves full of books.

Kingdom of Perador

Once in Perador, Sam has lost Plunket and Malgrim, but meets Ninet. The scheming maid of honor feeds the hungry guest, trying to convince him that Princess Melicent is stupid and naive, and that she herself would be a far better match for him. The king, returning to the castle and hearing from Malgrim that Sam is the knight the princess dreams of, sends the unknown visitor, who carries no credentials, to the dungeon. Ninet and Malgrim triumph: Sam and Melicent are separated forever!

London

On television, preparations are under way for the next program, "Live Discussion." The host expected a "village type," but Marlagram and Melicent appear instead. The guests must answer the host's questions. Marlagram chatters away, but Melicent, understanding nothing, worries only about Sam. Upon learning from Marlagram that her beloved is in prison, the princess walks off the show, despite the trouble this will cause the advertising company. On Marlagram's advice, Melicent heads to the Black Horse bar.

Kingdom of Perador

In the cold, damp dungeon, Plunket visits Sam, having disguised himself as the captain of the guards to help out his friend. Plunket has met the advertising chief Dimmock and hidden him. After the skipper leaves, Marlagram enters the dungeon. He promises to help Sam meet his beloved: Marlagram had left on business, and during his absence his nephew played a cruel trick. Marlagram will pick up Melicent from the Black Horse and bring her to Sam.

London

Malgrim appears in the Black Horse bar, where he meets his uncle. Melicent arrives, accompanied by an advertising agency employee, Philip. Marlagram pours Philip an exotic drink, dragon's blood, and while Philip tastes the unusual liquid, Marlagram vanishes with the princess.

Kingdom of Perador

King Meliot receives Plunket and Dimmock. Plunket invites the king to develop tourism in Perador.
But the king, baffled by the modern tax system, is furious and orders the guards to seize the unlucky entrepreneurs. At this moment Ninet and Malgrim enter the room. Malgrim takes the guests away with him, and the scheming Ninet, left alone with the king, tries to win him over. Her coaxing is interrupted by the entrance of Marlagram and Melicent. The king announces to his daughter that tomorrow there will be a knightly tournament and that the winner will become her husband. Hearing this sentence, the princess promises Marlagram Merlin's brooch if he helps her marry Sam. Marlagram announces that two dangers are approaching the kingdom: the unknown Red Knight and the all-devouring Dragon. Whoever defeats the Red Knight in the tournament and then slays the Dragon shall marry Princess Melicent. Since thunder strikes at that very moment, the frightened king agrees. At night the guards put Sam in shackles. After they leave, Marlagram enters the prison, brings Sam a delicious dinner, and brings Melicent to him. They tell Sam that at the tournament, which begins at six in the morning, he must defeat the Red Knight and the Dragon; only then can he marry Melicent. Sam is terrified: he has never fought anyone, let alone at such an early hour! Sam prepares for the duel in a tent. Ninet and Malgrim replace the outfit Melicent prepared for him with old rags, and they send the barmaid from the Black Horse to Sam with a "tournament beer" that will leave him weak and dizzy. The duel begins. The Red Knight presses Sam hard. Back in the tent, the Red Knight asks for the tournament beer. Left alone with Sam, the knight takes off his armor, helmet and all. To Sam's surprise, Plunket is under the helmet. The skipper explains to the artist that Malgrim turned him into the Red Knight. In the tent, away from the spectators, Sam and Plunket act out a battle in which Sam wins.
He carries the loser's helmeted "head" out of the tent and shows it to the audience. King Meliot knights the artist. Now the Dragon must be defeated. The king offers Sam a choice: he may leave, since he is already free, or fight the Dragon, in which case victory will win him Melicent's hand. Sam chooses the duel. The princess brings her beloved an old book containing everything known about dragons. Plunket keeps watch over the Dragon sleeping in a clearing. Anne and Peggy emerge from the forest: the secretary is looking for her boss, while Anne takes in the local sights. Sam, arriving with the book, studies the sleeping Dragon so as to fight it according to all the rules of dragonography. But the Dragon turns out to be none other than his chief Dimmock, whom Malgrim has transformed. Dimmock asks Sam to find one of the wizards to break the spell, promising in gratitude to bring his rescuer onto the company's board. To create the appearance of a fight, the Dimmock-Dragon swallows half of Sam's sword. Sam and Plunket, who has taken a liking to Anne, go off in search of the wizards, while Dimmock, still in Dragon form, dictates a letter to his secretary Peggy. Princess Melicent has disappeared. The alarmed king rouses all the servants. Sam, noticing that Ninet and Malgrim are missing from among the courtiers, suspects they are behind the disappearance of his beloved. Marlagram advises going to the dungeon. In the dungeon Sam falls, and on waking he finds himself back in his native London, at an exhibition. Searching the crowd for Melicent, Sam sees a stage on which prize winners are being honored. The master of ceremonies is Malgrim himself, with Ninet assisting. Melicent is announced the winner of the confectioners' contest. Asked what she wishes, the princess replies that she wants to see Sam. Hearing this, Sam rushes toward Melicent, but a frozen-fish merchant blocks his way. A fight breaks out, and a blow to the head knocks Sam unconscious.
The artist comes to in a police station. The police inspector releases the detainees; shaking hands, they go their separate ways. In a foul mood, Sam wanders through the rainy city, still in his Perador costume. Suddenly a taxi stops beside him. The driver turns out to be Marlagram, and Melicent is sitting in the car.

Kingdom of Perador

At breakfast, Marlagram and Malgrim sum up the results of their competition. Their conversation is interrupted by the arrival of Sam, Plunket, Melicent and Dimmock. Sam and Melicent plan to marry, and Sam is to ask the king for the princess's hand. Dimmock and the wizards agree to set up a new company to run tourism between England and Perador. To celebrate the wedding of Sam and Melicent, the wizards have to work hard: a wedding table joins Dimmock's office to King Meliot's banquet hall. In the chief's office the renovation continues, and the noise of the drills drowns out the welcoming speech, so the wizard Marlagram invites the guests to Perador to take a rest from civilization. The newlyweds decide that they will live in both worlds, and they thank the guests to the accompaniment of the lute of the court musician Lamison.
Developers might not want to read all the background on Unicode included in this earlier blog entry. Here is a quick distillation of how Unicode and the UTF encodings are relevant to a Hadoop user—just the facts and the warnings.

Key points about Unicode v UTFs:

- Unicode proper is an abstraction.
  - It maps each character in every language and symbol-set to a unique integer "code point."
  - The code points are the same everywhere, and for all time.
- To store or use Unicode, the characters must be encoded in some concrete format. Standard encodings include:
  - UTF-8 (variable length, can represent any code point)
  - UTF-16 (variable length, can represent any code point)
  - UTF-32 (fixed length, can represent any code point)
  - UTF-16LE (same as UTF-16, but specifically little-endian)
  - UTF-16BE (same as UTF-16, but specifically big-endian)
  - UTF-32LE (same as UTF-32, but specifically little-endian)
  - UTF-32BE (same as UTF-32, but specifically big-endian)
  - UCS-2 (obsolete, fixed length, for pre-1996 16-bit Unicode)
- Both Unicode itself and the UTFs are referred to as "encodings," but when programmers say "encoding" they usually mean the UTF.

If you've forgotten what endian-ness is, look here under "The Endian Problem."

90% of misunderstandings about Unicode trace back to one of these:

- UTF-8 is not an 8-bit encoding—it can encode all of 21-bit Unicode.
- UTF-16 is not a 16-bit encoding—it can encode all of 21-bit Unicode.
- Unicode itself is not limited to 16 bits.
  - In Granddad's day, Unicode was 16 bits and could represent only about 65K distinct characters.
  - It was changed to 21 bits in 1996 and now can handle up to 1,112,064 distinct characters.
- The numbers 8 and 16 in UTF-8 and UTF-16:
  - Do not refer to the number of bits in the code points that the encoding can express.
  - Do refer to the number of bits/bytes that are logically processed together:
    - UTF-8 takes bytes one at a time.
    - UTF-16 takes bytes two at a time.
- It would have made more sense to call them UTF-1 and UTF-2, but when UTF-16 was named, the name UTF-2 was already taken.

The key points to remember about encodings are:

- The most widely used Unicode encoding today, by far, is UTF-8, but UTF-16 is not dead yet.
- Sometimes you are forced by circumstance to ingest UTF-16, but the only reason to write any format other than UTF-8 is to accommodate legacy processing.
- Occasionally, other formats, e.g., UTF-32, are used for special purposes internally to some program. If you need to know about this, then you are beyond needing this primer.
- UTF-8 and UTF-16 are both "variable length" encodings, i.e., not every character is expressed with the same number of bytes.
- ASCII by definition is 7-bit.
  - Range is 0 through 7F hex, which is 128 distinct values.
  - If the high-order bit is set, it's not ASCII.
- UTF-16 and UTF-8 represent the ASCII characters with the same numeric values used in ASCII, but they encode them differently.
- UTF-16 always uses either two bytes or four bytes.
  - ASCII characters will have an extra all-zero byte in addition to the byte with the ASCII value.
  - Whether the all-zero byte is leading or trailing depends on the endian-ness of the representation.
  - The BMP characters all fit into 16 bits.
  - The need for code points outside the BMP, i.e., beyond the low-order 16 bits, is fairly unusual unless you're using Chinese, and usually not even then.
- UTF-8 uses:
  - 1 byte for code points that fit in 7 bits (the ASCII characters, i.e., the Latin alphabet)
  - 2 bytes for all code points that require from 8 to 11 bits
  - 3 bytes for all code points that require from 12 to 16 bits
  - 4 bytes for all code points that require from 17 to 21 bits
  - Note that this implies that much of the BMP requires three bytes, not two.
- ASCII (plain text) is UTF-8.
  - A file of pure ASCII is a valid UTF-8 file.
  - The reverse is not necessarily true.
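The byte-length rules above are easy to check from Java, since `String.getBytes` gives you the encoded form directly. A minimal sketch (the sample characters are illustrative choices, not from the original notes):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Lengths {
    // Number of bytes a string occupies when encoded as UTF-8.
    static int utf8Len(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        System.out.println(utf8Len("A"));            // U+0041, fits in 7 bits -> 1 byte
        System.out.println(utf8Len("é"));            // U+00E9, needs 8 bits   -> 2 bytes
        System.out.println(utf8Len("中"));           // U+4E2D, needs 15 bits  -> 3 bytes (BMP, but not 2!)
        System.out.println(utf8Len("\uD83D\uDE00")); // U+1F600, needs 17 bits -> 4 bytes (outside the BMP)
    }
}
```

Note how the BMP character 中 takes three bytes in UTF-8, exactly as the rules above predict.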
- Any file containing a byte with the high bit set is not ASCII.
- UTF-16, because it deals with bytes two at a time, is actually two physical encodings—little-endian and big-endian.
  - For UTF-16, the optional BOM character can be, but need not be, used as the first character in a file to distinguish little-endian and big-endian encodings.
  - The BOM is guaranteed never to be valid for anything else.
  - If the first character of a UTF-16 file is read as U+FEFF, the file and the program reading the file are in agreement.
  - If the first character of a UTF-16 file is read as U+FFFE, then the program must reverse the endian-ness.
  - This doesn't actually tell a program which encoding it is using, only that the file's encoding is either the same as or the opposite of the one the program is using.

Advantages of UTF-8

- ASCII is the most common format for data, and ASCII is UTF-8.
- For ASCII, UTF-8 takes only half as much space as UTF-16.
- No conversion is needed for ASCII.
- If you jump to a random point in a UTF-8 file, you can synchronize to the next complete character in at most three bytes—one byte if it's ASCII.
- One disadvantage of UTF-8 is that it takes about 50% more space than UTF-16 when encoding East Asian and South Asian languages (3 bytes v. 2 bytes).
- UTF-8 is not subject to endian problems, while all multi-byte encodings, including UTF-16, are.

Java and Unicode

Java (unlike C and C++) was originally designed (before 1995) to use 16-bit Unicode, and later moved to 21-bit Unicode when the standard changed. The encoding used internally is UTF-16, but the Java specification requires it to handle a variety of encodings. Two critical points:

- Unlike C/C++, Java defines Strings in terms of characters, not bytes. This blog on Java and Unicode details it pretty well.
- Java is not limited to 16-bit code points.
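The character-versus-code-unit distinction can be seen directly in Java: `String.length()` counts 16-bit UTF-16 code units, so a code point outside the BMP (stored as a surrogate pair) counts as two, while `codePointCount` counts actual characters. A short sketch (the emoji is an illustrative choice):

```java
public class CodePointsVsChars {
    public static void main(String[] args) {
        String bmp  = "中";            // U+4E2D, inside the BMP: one char
        String supp = "\uD83D\uDE00"; // U+1F600, outside the BMP: a surrogate pair

        System.out.println(bmp.length());   // 1 UTF-16 code unit
        System.out.println(supp.length());  // 2 UTF-16 code units...
        System.out.println(supp.codePointCount(0, supp.length())); // ...but only 1 character
        System.out.printf("U+%X%n", supp.codePointAt(0));          // U+1F600
    }
}
```

This is why code that indexes Strings by `char` can silently split a supplementary character in half; iterate by code point when the data may contain non-BMP text.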
Hadoop and UTF Formats

In theory, Hadoop and Hive should work with either UTF-16 or UTF-8, but there are a couple of known Hive bugs that limit the use of UTF-16 to the characters in the BMP, and may cause problems even then. See this Apache bug report for details. Even if Hadoop did work correctly with UTF-16, there would still be significant drawbacks:

- UTF-16 doubles the space required for Latin-alphabet text (English and European languages) in an environment that already triples storage size.
- Applications running over MapReduce and Tez (e.g., Hive) usually do a lot of sorting (in the shuffle-sort), and lexical sorts of UTF-8 are significantly more efficient than UTF-16 sorts. The reasons are beyond the scope of these notes, but see these notes for more details.
- BOM markers are lost when files are split. See this page for details.
- ORC requires UTF-8. If your project uses tabular data, you should almost always be using ORC.

The fact that Hadoop does not work well with UTF-16 is less of a problem than you'd think, for two reasons:

- The majority of data ingested by Hadoop is ASCII, and ASCII is automatically UTF-8.
- Most data that is not specifically ASCII is UTF-8, because UTF-8 dominates the Web.

What to do if you are stuck with UTF-16 data?

- Don't monkey around with trying to get UTF-16 to work in Hadoop—convert it directly to UTF-8, or specifically to ASCII if none of the code points are greater than 127.
- If it's a reasonable amount of data, e.g., periodic ingestion of a few gigs, you may be able to do it on the way in, e.g., with bash, or as part of the Oozie process.
  - The Linux iconv utility can be invoked from within a bash script.
  - iconv has been known to fail for very large files (15GB), but these can be chopped into smaller pieces with the Linux split utility.
- You can do larger amounts of data with a simple MapReduce job.
  - Conversion is straightforward in Java.
  - MR is fast for this because it's map-side only.
  - You can find a clue to the Java code here.
- ICU provides Java libraries for doing conversions and many other operations on Unicode: http://userguide.icu-project.org/conversion/converters

If you want a little more depth on Unicode, endian-ness, representations, etc., be sure to check out Not Even Hadoop: All about Unicode.
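For the Java route, the core of the conversion really is just a few lines and needs no extra libraries. A hedged sketch (class and method names are illustrative; Java's plain "UTF-16" charset consumes a leading BOM if present and assumes big-endian otherwise — use UTF_16LE/UTF_16BE if you know the endian-ness and there is no BOM):

```java
import java.nio.charset.StandardCharsets;

public class Utf16ToUtf8 {
    // Decode UTF-16 bytes and re-encode them as UTF-8.
    static byte[] convert(byte[] utf16) {
        return new String(utf16, StandardCharsets.UTF_16)
                .getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] utf16 = "hello".getBytes(StandardCharsets.UTF_16); // BOM + big-endian
        byte[] utf8  = convert(utf16);
        System.out.println(new String(utf8, StandardCharsets.UTF_8)); // hello
        System.out.println(utf16.length + " bytes -> " + utf8.length + " bytes"); // 12 -> 5
    }
}
```

In a MapReduce job this would go in the mapper; since there is no reduce step, the job is map-side only, which is why it scales so well.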
The lab scenarios in this course are selected to support and demonstrate the structure of various application scenarios. They are intended to focus on the principles and coding components/structures that are used to establish an HTML5 software application. This course uses Visual Studio 2012, running on Windows 8.

After completing this course, students will be able to:

- Explain how to use Visual Studio 2012 to create and run a Web application.
- Describe the new features of HTML5, and create and style HTML5 pages.
- Send and receive data to and from a remote data source by using XMLHTTPRequest objects and jQuery AJAX operations.
- Style HTML5 pages by using CSS3.
- Use common HTML5 APIs in interactive Web applications.
- Create Web applications that support offline operations.
- Create HTML5 Web pages that can adapt to different devices and form factors.
- Add advanced graphics to an HTML5 page by using Canvas elements and by using Scalable Vector Graphics.
- Enhance the user experience by adding animations to an HTML5 page.
- Use Web Sockets to send and receive data between a Web application and a server.
- Improve the responsiveness of a Web application that performs long-running operations by using Web Worker processes.

This course is intended for students who have the following experience:

- 1 month of experience creating Windows client applications
- 1 month of experience using Visual Studio 2010 or 2012

This course is not intended for developers with three or more months of HTML5 coding experience. Students choosing to attend this course without a developer background should pay special attention to the training prerequisites. Developers who have more than 5 years of programming experience may find that portions of this training are fundamental in nature when presenting the syntax associated with certain programming tasks. Before attending this course, students must have at least three months of professional development experience.
In addition to their professional experience, students who attend this training should have a combination of practical and conceptual knowledge related to HTML5 programming. This includes the following prerequisites:

- Understand the basic HTML document structure:
  - How to use HTML tags to display text content.
  - How to use HTML tags to display graphics.
  - How to use HTML APIs.
- Understand how to style common HTML elements using CSS, including:
  - How to separate presentation from content.
  - How to manage content flow.
  - How to control the position of individual elements.
  - How to implement basic CSS styling.
- How to create and use variables.
- How to use:
  - arithmetic operators to perform arithmetic calculations involving one or more variables
  - relational operators to test the relationship between two variables or expressions
  - logical operators to combine expressions that contain relational operators
- How to control the program flow by using if … else statements.
- How to implement iterations by using loops.
- How to write simple functions.

Module 1: Overview of HTML and CSS

This module provides an overview of HTML and CSS, and describes how to use Visual Studio 2012 to build a Web application.

- Overview of HTML
- Overview of CSS
- Creating a Web Application by Using Visual Studio 2012

Lab : Exploring the Contoso Conference Application

- Walkthrough of the Contoso Conference Application
- Examining and Modifying the Contoso Conference Application

- Describe basic HTML elements and attributes.
- Explain the structure of CSS.
- Describe the tools available in Visual Studio 2012 for building Web applications.

Module 2: Creating and Styling HTML5 Pages

This module describes the new features of HTML5, and explains how to create and style HTML5 pages.

- Creating an HTML5 Page
- Styling an HTML5 Page

Lab : Creating and Styling HTML5 Pages

- Creating HTML5 Pages
- Styling HTML5 Pages

- Create static pages using the new features available in HTML5.
- Use CSS3 to apply basic styling to the elements in an HTML5 page.

- Introduction to jQuery
- Displaying Data Programmatically
- Handling Events

Module 4: Creating Forms to Collect Data and Validate User Input

- Overview of Forms and Input Types
- Validating User Input by Using HTML5 Attributes

Lab : Creating a Form and Validating User Input

- Creating a Form and Validating User Input by Using HTML5 Attributes

- Create forms that use the new HTML5 input types.
- Validate user input and provide feedback by using the new HTML5 attributes.

Module 5: Communicating with a Remote Data Source

This module describes how to send and receive data to and from a remote data source by using an XMLHTTPRequest object and by performing jQuery AJAX operations.

- Sending and Receiving Data by Using XMLHTTPRequest
- Sending and Receiving Data by Using jQuery AJAX operations

Lab : Communicating with a Remote Data Source

- Retrieving Data
- Serializing and Transmitting Data
- Refactoring the Code by Using the jQuery ajax method

- Serialize, deserialize, send, and receive data by using XMLHTTPRequest objects.
- Simplify code that serializes, deserializes, sends, and receives data by using the jQuery ajax method.

Module 6: Styling HTML5 by Using CSS3

This module describes how to style HTML5 pages and elements by using the new features available in CSS3.

- Styling Text
- Styling Block Elements
- CSS3 Selectors
- Enhancing Graphical Effects by Using CSS3

Lab : Styling Text and Block Elements using CSS3

- Styling the Navigation Bar
- Styling the Page Header
- Styling the About Page

- Style text elements on an HTML5 page by using CSS3.
- Apply styling to block elements by using CSS3.
- Use CSS3 selectors to specify the elements to be styled in a Web application.
- Implement graphical effects and transformations by using the new CSS3 properties.
- Creating Custom Objects
- Extending Objects

Lab : Refining Code for Maintainability and Extensibility

- Inheriting From Objects
- Refactoring Code to Use Objects

- Describe how to extend custom and native objects to add functionality.

Module 8: Creating Interactive Pages using HTML5 APIs

This module describes how to use some common HTML5 APIs to add interactive features to a Web application. This module also explains how to debug and profile a Web application.

- Interacting with Files
- Incorporating Multimedia
- Reacting to Browser Location and Context
- Debugging and Profiling a Web Application

Lab : Creating Interactive Pages by Using HTML5 APIs

- Incorporating Video
- Incorporating Images
- Using the Geolocation API

- Use the Drag and Drop, and the File APIs to interact with files in a Web application.
- Incorporate audio and video into a Web application.
- Detect the location of the user running a Web application by using the Geolocation API.
- Explain how to debug and profile a Web application by using the Web Timing API and the Internet Explorer Developer Tools.

Module 9: Adding Offline Support to Web Applications

This module describes how to add offline support to a Web application, to enable the application to continue functioning in a user's browser even if the browser is disconnected from the network.

- Reading and Writing Data Locally
- Adding Offline Support by Using the Application Cache

Lab : Adding Offline Support to a Web Application

- Implementing the Application Cache
- Implementing Local Storage

- Save and retrieve data locally on the user's computer by using the Local Storage API.
- Provide offline support for a Web application by using the Application Cache API.

Module 10: Implementing an Adaptive User Interface

This module describes how to create HTML5 pages that can dynamically detect and adapt to different devices and form factors.
- Supporting Multiple Form Factors
- Creating an Adaptive User Interface

Lab : Implementing an Adaptive User Interface

- Creating a Print-Friendly Stylesheet
- Adapting Page Layout To Fit a Different Form Factor

- Describe the need to detect device capabilities and react to different form factors in a Web application.
- Create a Web page that can dynamically adapt its layout to match different form factors.

Module 11: Creating Advanced Graphics

This module describes how to create advanced graphics for an HTML5 Web application by using a Canvas element, and by using Scalable Vector Graphics.

- Creating Interactive Graphics by Using Scalable Vector Graphics
- Programmatically Drawing Graphics by Using a Canvas

Lab : Creating Advanced Graphics

- Creating an Interactive Venue Map by Using Scalable Vector Graphics
- Creating a Speaker Badge by Using a Canvas Element

- Use Scalable Vector Graphics to add interactive graphics to an application.

Module 12: Animating the User Interface

This module describes how to enhance the user experience in an HTML5 Web application by adding animations.

- Applying CSS Transitions
- Transforming Elements
- Applying CSS Key-frame Animations

Lab : Animating User Interface Elements

- Applying Transitions to User Interface Elements
- Applying Key-Frame Animations

- Describe the different types of 2D and 3D transitions available with CSS3.

Module 13: Implementing Real-Time Communications by Using Web Sockets

This module explains how to use Web Sockets to transmit and receive data between an HTML5 Web application and a server.

- Introduction to Web Sockets
- Sending and Receiving Data by Using Web Sockets

Lab : Implementing Real-Time Communications by Using Web Sockets

- Receiving Data from a Web Socket
- Sending Data to a Web Socket
- Sending Multiple Types of Messages To or From a Web Socket

- Explain how Web Sockets work and describe how to send and receive data through a Web Socket.
Module 14: Creating a Web Worker Process

This module describes how to use Web Worker Processes to perform long-running operations asynchronously and improve the responsiveness of an HTML5 Web application.

- Introduction to Web Workers
- Performing Asynchronous Processing by Using a Web Worker

Lab : Creating a Web Worker Process

- Improving Responsiveness by Using a Web Worker

- Describe the purpose of a Web Worker process, and how it can be used to perform asynchronous processing as well as provide isolation for sensitive operations.
January 21, 2018

When Jesus was transfigured his face shone like the sun and his clothes became white as light. This was un-reflected glory shining from Jesus, unlike the reflected glory that shone from Moses. The source of the glory is Jesus himself. Jesus is God. Now, it is important to note that in Jesus' transfiguration he did not change into something else. Jesus' outward appearance changed, but he was still the same Jesus, who walked up the mountain with Peter, James, and John. The glory that shined forth on the holy mountain was present even before the transfiguration, yet it was hidden. Jesus is both God and man. Yet, Jesus humbled himself so that his divine glory did not shine forth. Yet, Jesus remained God even then. The baby in a manger was God. The emaciated man tempted by Satan in the wilderness was God. Even as Jesus was scourged and nailed to a tree and finally laid lifeless in a tomb, Jesus remained God. This is very important, because when Jesus died on the cross for our sins, it was not just a righteous man who died. Everything Jesus does he does both as God and man. God bore the punishment for the sins of his people. This means that the price Jesus paid was greater than the debt incurred by the sins of the whole world. So, while we see a hideously bruised man dying on a tree, God is glorified by redeeming the world from sin. Likewise, Jesus never ceases to be a man. As he was transfigured before his disciples displaying his divine glory, he remained 100% human. And when he rose from the dead never to die again, Jesus rose as a human being. And when he ascended to the right hand of the Father, filling all things with all powers and authorities placed under his feet, Jesus did this as a human being. You cannot separate Christ's divine and human nature. This means that our flesh and blood now reign in heaven forever. Jesus' transfiguration foreshadows his resurrection. This is something Jesus' disciples were struggling with, in particular Peter.
Six days before Jesus climbed the Mount of Transfiguration he told his disciples how he must be betrayed and mistreated, suffer and die, and on the third day rise from the dead. Peter rebuked Jesus saying, "Far be it from you, Lord! This shall never happen to you." To which our Lord responded, "Get behind me, Satan! You are a hindrance to me. For you are not setting your mind on the things of God, but on the things of man." Peter could not understand that it was necessary for Jesus to suffer and die. And because such a thought distressed him so, he couldn't even think of the resurrection of the dead. So, here on the mountain Jesus shows Peter his glory, which will be revealed at his resurrection. Jesus shows the proof that death cannot defeat him and that his resurrection is unavoidable. And if Jesus, who shares in our human nature rises from the dead, that means that we too will rise from the dead. Jesus is the first fruit, and we will follow from our graves. And this is testified further by the witness of Moses and Elijah. These two men are living. All who are joined to Christ will live forever. They shall not die. Peter is stupefied by this marvelous sight. But he feels the need to say something. So, he proposes to build three tents, one for Moses, one for Elijah, and one for Jesus. Yet, God the Father interrupts Peter by declaring Jesus to be his beloved Son. And God gives a simple command. "Listen to him." Peter wanted to harness the glory of Jesus. But he didn't know what he was talking about. He needed to listen to Jesus. Jesus told him not to be afraid. And then Christ tells them to tell no one about his transfiguration until he has risen from the dead. Jesus was transfigured before his disciples in order to show them that he truly is God the Son and to assure them that he does have the power to conquer death. But Jesus must still die. Peter wanted to grab hold of this glory without Jesus' death on the cross.
Jesus is telling Peter, "You cannot have my glory unless I die." Peter displays a problem that is rampant in our generation. People want glory. But they don't want the cross. They want the glory of Christ. But they don't want his crucifixion. It was not only in the first century that people despised a suffering Christ. People despise the suffering of Christ now. They want a winner, not a loser. Even more, people don't want to suffer themselves. Jesus died on the cross for all our sins. This means that we must repent of our sins and trust in Christ for our forgiveness. But this involves humility. Humility can be a tough cross to bear, even if God promises glory in return. Jesus says, "If anyone would come after me, let him deny himself and take up his cross and follow me." (Matthew 16:24) This means that your glory will be delayed. It means that you must grasp the glory of God through faith, while you do not experience it now. But Peter wanted it now. He wanted to keep Jesus in all his glory, and Moses and Elijah too. But Jesus makes it abundantly clear, there is no glory without the cross. Yet, even today, people still strive to obtain glory here and now. Yet, they don't do this through faith in Jesus. Faith in Jesus doesn't give you glory now. It gives you the promise of eternal glory in the future. The way people try to obtain glory now is through the law, that is, they try to obtain glory through their own works. This makes sense. The Gospel of Jesus' death and resurrection and the free forgiveness of sins that flow from it does not have to do with your works, but with God's work. Through the Gospel, God says to you in regard to glory, "My grace is sufficient for you." The Law, on the other hand, has to do with your works, what you do. This gives people the idea of control. If I show myself to be a good person, be generous and kind, hard-working and virtuous, then I can gain glory here on earth. 
And many will invent a Jesus, who fits this model: a Jesus who preaches prosperity now! And this glory seeking might seem to work, temporarily anyway. And there's a good chance that you'll gain the admiration and praise of many people and be considered a good and successful person. Yet, this earth-won glory can only be temporary. The Law does not demand the approval of human beings. It demands the approval of God. This means that you must fulfill the law in all its parts without fail. Before God, the Law accuses you of sin and condemns you to death and hell. So, the Law which promised glory and which seemed to give it in this life proves to bring shame and death. If you are going to obtain glory through the Law, you have to go all the way. You must completely submit to the Law. And in so doing, you will find a cruel master, as St. Paul writes in Romans chapter 3, "Now we know that whatever the law says it speaks to those who are under the law, so that every mouth may be stopped, and the whole world may be held accountable to God. For by works of the law no human being will be justified in his sight, since through the law comes knowledge of sin." (vss. 19-20) But there is a Savior for those condemned by the Law. St. Paul continues, "But now the righteousness of God has been manifested apart from the law, although the Law and the Prophets bear witness to it- the righteousness of God through faith in Jesus Christ for all who believe." (Romans 3:21-22) This is of immense comfort to us sinners. When we try to achieve glory through works of the law, the Law shines brighter and brighter, exposing our failings and how far we are from God's righteousness. We are forced to shrink from this glory, just as the Israelites hid from the radiance of Moses' face. Yet, now God's righteousness is given to us through faith in Jesus Christ apart from the Law. The glory of the Law is a glory that condemns sinners to hell.
The glory of the Gospel, however, is a glory we do not need to shy away from. The glory of the Gospel is the righteousness of God given to sinners as a free gift. The glory of the Law condemns those, who lack glory. The glory of the Gospel causes those without glory to be glorious, as 2 Corinthians 3 states, "And we all, with unveiled face, beholding the glory of the Lord, are being transformed into the same image from one degree of glory to another." (vs. 18) And so the glory that brings salvation to us far exceeds the glory that brings condemnation. This glory of the Gospel that gives us salvation can only be received through faith. This is why Jesus would not let Peter build three tents for him, Moses, and Elijah. This is why Moses and Elijah stood with Jesus. This is why God the Father told Peter, James, and John to listen to Jesus. And this is why after the vision they saw Jesus only. Faith comes from listening to Jesus' word. If you want Jesus' glory, you need to listen to his words. Moses and Elijah represent the Scriptures of the Old Testament. Jesus says in Luke 24, "These are my words that I spoke to you while I was still with you, that everything written about me in the Law of Moses and the Prophets and the Psalms must be fulfilled. … Thus it is written, that the Christ should suffer and on the third day rise from the dead, and that repentance and forgiveness of sins should be proclaimed in his name to all nations." (vss. 44, 46-47) Moses and Elijah stood as witnesses that Jesus was the Christ foretold in Scripture. And Christ stood as a witness that the Scriptures are true. If you want Jesus' glory, you need to listen to Scripture. Scripture is Jesus' Word. The Word of God is a rather despised thing. People generally don't want to listen to it, or read it, or learn it. It doesn't seem glorious. The message of Jesus' death on the cross seems the opposite of glorious. And the call to repentance is very unappetizing. 
Perhaps if God would speak through a bright cloud, more people would come to hear. Perhaps if the preacher's face shone like the sun and he brought people from centuries past, then more people would come to church. But Christ has chosen to hide his glory in his Word, spoken by ordinary men. He hides his glory under ordinary means here on earth, so that we might receive God's glory through faith. We've already learned that just because God's glory is hidden, doesn't mean that it is not there. Christ's glory can only be received through faith in Christ's cross, where he bore everything that would cause us to shrink from the glory of the Law. It is through faith in Christ's death and resurrection that we gain the hope of the future glory to be revealed to us, a glory from which we will not shrink back, but rather into its image we will be changed. Amen.
Biographical Database of NAWSA Suffragists, 1890-1920 Biography of Emma F. Angell Drake, 1849-1934 By Chadwick Pearsall Graduate student, Idaho State University To label Emma F. Angell Drake as simply an Idaho women's suffragist, a medical doctor, a temperance advocate, an author, a minister's wife, or an Idaho state legislator, would be to sell her short. The truth is that she was all of those things and more. For a time she lived in Idaho, but she also lived in New York, Kansas, Michigan, Massachusetts, Colorado, Wisconsin, California, and Oregon. In order to grasp who Emma F. Angell Drake was we have to track her work and advocacy across the span of her life. I argue that she was not primarily a woman suffragist, though it is undeniable that she was involved in the women's suffrage movement. Instead, at her core Emma F. Angell Drake was an avid temperance advocate whose medical training and religious beliefs influenced her life's work. She was born Emma Frances Angell on September 15, 1849 in Angellville, New York. Her parents were Silas T. Angell and Deborah Angell. Little is known about her childhood, but by the time Emma was thirteen she was living in Lamont, Michigan with her family. By the mid-1860s she had become a primary school teacher in Lamont, and later in Robinson, Michigan. In the fall of 1870 she enrolled at Olivet College, a Congregational school near Lansing. After graduating in 1874, she returned to Robinson to continue teaching, until educational pursuits took her away to medical school at Boston University in 1878. During her time in Boston Emma was exposed to the American Woman Suffrage Association (AWSA) and the Massachusetts Woman's Christian Temperance Union (WCTU); she even became a member of the WCTU while in Boston. After graduating with a medical degree in 1882 she became the principal and physician at Northfield Seminary, a female seminary founded by Dwight L. Moody in Northfield, Massachusetts. 
It does not appear that the seminary was a good fit for her, as her tenure there lasted only one year. On July 3, 1883 Emma married Rev. Ellis Drake, whom she likely knew from when she was in medical school and he was pastoring in Boston. After marrying, Emma disappeared from social activism for some years, which may have been due to the demands of her new role as a minister's wife and her duties to their church. What is known is that Emma and Ellis had three children during this period: Ruth (1884), Philip (1886), and Paul (1891). By the time Paul was born the family had moved to Kansas, which was a hotbed of advocacy for women's suffrage and prohibition. The Drakes continued their western migration by moving to Denver in 1896. In Denver Emma accepted a position as professor of obstetrics at the Denver Homeopathic College and Teaching Hospital. Around this same time Ellis became sick with an unspecified degenerative illness and son Philip died suddenly of appendicitis. Ellis's sickness would eventually result in husband and wife switching roles, with Ellis taking care of the home while Emma became more involved in public life. After being replaced at the Denver Homeopathic College, as the school moved to an all-male faculty, Emma began to try her hand at writing. Her career as an author and public figure took off when she won a national contest for her manuscript entitled What a Young Wife Ought to Know. She received a $1,000 prize and her book was published as part of Rev. Sylvanus Stall's "Self & Sex Series." By 1902 Emma had added two more books to the series with Maternity Without Suffering and What a Woman of Forty-Five Ought to Know. For a brief time Emma even owned and edited her own magazine, though she sold it in less than a year. While Emma was experiencing great professional success, her personal fortunes sank when Ellis died of pneumonia in 1906.
When her youngest son Paul went off to college, Emma stepped away from her medical practice and went to work for the WCTU. At their national convention in the spring of 1907, held in Denver, she was one of the featured speakers. By 1908 Emma was in Idaho speaking to the state chapter of the WCTU. In the 1914-1915 Woman's Who's Who of America Emma self-reports as being in New Plymouth, Idaho, but she did not permanently settle in Idaho until 1917, when she became the interim president of the Idaho WCTU. Later that fall she dropped the interim tag when she was officially elected president of the Idaho WCTU. By 1918 Emma had officially entered the realm of organized politics and ran for the office of State House Representative for Payette County. She won both the Republican primary (371 to 295 votes) and the general election (1,142 to 868 votes). Also elected that same year was Carrie Harper White, who would work with Emma on many of her legislative activities. They became the fifth and sixth women elected to the Idaho Legislature. When the Idaho legislature convened in January 1919, Emma's first act was to move for a vote on the Eighteenth Amendment (enacting Prohibition), which passed unanimously. The following month Emma and Carrie White attempted to pass a bill that would reduce the nine-hour work day for women to an eight-hour work day, but they were unsuccessful. Although they were the only female representatives in Idaho, Emma and White were not universally approved of by women's clubs. When a women's clubs-backed bill, which called for the appointment of a woman to the State Board of Education, came up for a vote, both Emma and White voted against it, because they favored equal rights as opposed to specially protected rights for women. While in office Emma pushed hard for public health reforms, which is not surprising considering her medical background.
One bill proposed further empowering the Idaho Board of Health, while another required licensing of maternity hospitals. Unfortunately for Emma, both bills failed. She was able to achieve a legislative victory when it came to a bill requiring physical exams and vaccinations for all school children, and providing public school nurses for all communities that wanted them. A later WCTU-backed bill that would have restricted the sale of alcohol-based patent medicines met the same fate as her earlier two health-related bills. Other notable legislative successes included a $51,500 provision for separate women's housing at the St. Anthony Industrial Training School and passage of the Child Welfare bill, which she secured after crossing party lines to filibuster the adjournment of the legislative session. After her session in the legislature Emma was back traveling on behalf of the WCTU, even going to London to attend a world WCTU meeting. Despite all her travels she was back in Boise for the special session which convened to vote on the Nineteenth Amendment on February 11, 1920. After Emma made a "strong and logical speech" to introduce the legislation, it passed both houses, with only six dissenting votes in the Senate. It is worth noting that women in Idaho had already possessed the right to vote since 1896. Emma's speech in support of the Nineteenth Amendment would be her last as a legislator. Following the special session she was back on the road for the WCTU, speaking in San Francisco on "The Menace of Alcohol by Prescription." Later that year she went on to chair the inaugural meeting of the Idaho branch of the League of Women Voters. After reaching her zenith Emma began to slowly fade from the public eye. By 1925 she had relinquished her title as President of the Idaho WCTU. From 1926 until her death she moved around between California, Oregon, and Wisconsin, each time living with friends or family.
She died in Inglewood, California on October 5, 1934, due to cancer that had spread to her stomach. So what are we to make of Emma F. Angell Drake? She was an extraordinary woman who poured herself into the causes that inspired her, chief among them being temperance. While she did live into her eighties, her list of accomplishments seems to have spanned multiple lifetimes. She was a teacher, a temperance advocate, a medical doctor, a women's suffragist, a mother, a minister's wife, an author, and one of the first female legislators in Idaho. Though she was involved in many areas, her lifelong commitment to temperance, specifically her fifty years of involvement with the WCTU, stands out above the rest.

Image taken from the inside cover of What a Young Wife Ought to Know (1901)

Sources:

- Binheim, Max, ed. Women of the West: A Series of Biographical Sketches of Living Eminent Women in the Eleven Western States of the United States of America, p. 121. Los Angeles: Publishers Press, 1928.
- Drake, Emma F. Angell. What a Young Wife Ought to Know. Philadelphia: Virginia Publishing Company, 1901.
- Harper, Ida Husted, ed. The History of Suffrage, p. 144. New York: J. J. Little and Ives Company, 1922.
- Leonard, John William, ed. Who's Who in America: A Biographical Dictionary of Notable Living Men and Women of the United States, 1906-1907. Chicago: A. N. Marquis & Company, 1906.
- Leonard, John William, ed. Woman's Who's Who of America: A Biographical Dictionary of Contemporary Women of the United States and Canada, 1914-1915. New York: The American Commonwealth Company, 1914.
- Miller, Beverly A. What's a Nice Lady Like You Doing in a Place Like This?: The Life and Times of Emma Angell Drake. M.A. thesis, Boise State University, 1998.
- Moulton, Charles Wells, ed. The Doctor's Who's Who. New York: The Saalfield Publishing Co., 1906.
I recently saw a tweet from the Twitter account of the New Zealand Parliament regarding the launch of an electronic petitions system. I'm not sure if the Australian House of Representatives social media people also read that tweet, but the next day I saw its account had sent a tweet reminding people that a new e-petition platform had been launched in September 2016, following the start of the 45th Parliament. The right to petition the legislature or the government is a feature of various democracies around the world, and the move to online platforms for receiving petitions is an example of the impact of technology on how parliaments engage with the public.

A little history…

Information about the history of the right to petition parliament can be found on the websites of both the Australian and New Zealand parliaments. The relevant chapter of Parliamentary Practice in New Zealand, available online, states:

The earliest legislative acts of the English Parliament were often transacted by the Commons petitioning the King that a certain amendment be made to the law, but petitions as a source of legislation soon disappeared from the picture, apart from the field of private legislation. In New Zealand, the only vestige of the petition's former role in legislating was formerly to be found in the field of private bills, which were initiated in the House by the presentation of a petition from the promoter of the bill. However, following changes to Standing Orders in 2011, a petition is no longer required to introduce a private bill. In 1993 Parliament passed legislation permitting the presentation of petitions seeking the holding of referendums. These statutory petitions are the subject of their own special rules. [Links added by author]

The vast majority of petitions addressed to the House relate to public policy issues and private grievances of various kinds.
From its first meeting in 1854, the House, continuing an ancient right exercised in England, has admitted petitions seeking redress for an almost unlimited range of real or supposed wrongs done to petitioners, advocating amendments to the law or changes in Government policy, or seeking public inquiries into unsatisfactory situations. By petitioning the House, the citizen can express his or her opinion on a subject of concern and address it in a public way to the country’s legislators. The act of petitioning may or may not have any practical consequences, but it ensures that the petitioner’s concerns are heard and given some consideration by those in authority. An Infosheet about petitions on the Australian website similarly states: In the United Kingdom the right of petitioning the Crown and Parliament for redress of grievances dates back to the reign of King Edward I in the 13th century. The origins of Parliament itself can be traced back to those meetings of the King’s Council which considered petitions. The terms ‘bill’ and ‘petition’ originally had the same meaning. Some of the earliest legislation was in fact no more than a petition which had been agreed to by the King. The present form of petitions developed in the late 17th century. The House of Commons passed the following resolutions in 1669: That it is an inherent right of every Commoner of England to prepare and present petitions to the House in case of grievance; and of the House of Commons to receive them. That it is the undoubted right and privilege of the House of Commons to adjudge and determine, touching the nature and matter of such Petitions, how far they are fit and unfit to be received. The effect of these resolutions was inherited by the Australian Parliament and the right of petitioning thus became the right of every Australian. 
In modern times the practice of petitioning Parliament does not have the same primary role as an initiator of legislation or other action by the Parliament as it did in early history. There are now other, and usually more effective, means of dealing with individual grievances, for example by direct representation by a Member of Parliament, by the Commonwealth Ombudsman or by bodies like the Administrative Appeals Tribunal. It is hoped that the current arrangements for responding to petitions highlight petitioning as an important means of community involvement in the work of the Parliament.

Among the most famous and influential petitions in the two countries are the women's suffrage petitions in New Zealand in the early 1890s, and petitions in the 1950s regarding a constitutional referendum related to indigenous people in Australia. Each house of the Australian Parliament actually has its own rules for petitions. The House of Representatives now specifically allows e-petitions under its standing orders, while the Senate's standing orders do not expressly address this. The Senate does, however, accept print-outs of petitions that have been posted online and signed electronically, provided they meet the other rules. There are also different processes for the presentation of petitions to the two chambers. The House of Representatives has a standing Petitions Committee. The role of this committee is "to receive and process petitions, and to inquire into and report to the House on any matter relating to petitions and the petitions system. The Committee does not make recommendations on, or implement, any actions requested in petitions. If a petition is deemed to have met relevant Standing Order requirements by the Committee, it will be presented to the House and referred to the relevant Government Minister for response" (since 2008, when new petitioning procedures were introduced, "almost all petitions presented have been referred to Ministers and received responses").
A petitioner's local member can also agree to present the petition to the House. For a petition to be presented in the Senate, it must be presented by a Senator. However, "[w]hile there is nothing in the rules of the Senate to compel a senator to present a petition, most senators take the view that they should seek to present any petition forwarded to them, even if the views represented in the petition do not reflect the views of the senator presenting it." Once presented, petitions are brought to the notice of the appropriate Senate committee, which may seek a reference from the Senate to consider the issues. Sometimes, Senators refer petitions for debate by the whole Senate. In New Zealand, a Member of Parliament (not necessarily a petitioner's local member) must agree to present a petition to the Parliament, and can only do so once it has closed for signatures. It is the role of the Office of the Clerk to check that petitions meet the requirements. Previously, up until 1985, a dedicated committee had reviewed petitions (in fact, there were two such committees before 1962), but now any petitions presented in the House are referred directly to the relevant subject matter select committee. Once the committee has discussed a petition, it reports back to the full Parliament. If the committee has any recommendations for the government, the government must respond to these within sixty working days. There are various other technical rules around presentation and response processes. Generally, however, in both Australia and New Zealand, petitions that comply with the format and content requirements will receive some form of consideration. It is worth noting that neither country requires a certain number of signatures to be obtained in order for a petition to be presented to the parliament. The New Zealand Parliament's guide for petitions states that "[y]ou do not have to collect signatures.
A petition with just your signature will go through the same process as one with many signatures." The Australian House of Representatives frequently asked questions on petitions states that "[t]he minimum number of signatures required is one (1). The person requesting the petition (the 'Principal Petitioner') is the first signature on each petition."

Standard requirements for petitions

In New Zealand, petitions of parliament must:

- be in English or Māori
- use respectful and moderate language
- ask the House of Representatives to take a defined action
- not contain irrelevant statements.

The guidance also states that "[p]etitioning Parliament should be your last course of action. If you have other legal options, like going to an Ombudsman or to court, then your petition will not be accepted."

Petitioners of the Australian House of Representatives are advised to make sure their petition:

- is about something that the House of Representatives is responsible for (the House cannot take any action on issues that are the responsibility of individuals, local councils, State or Territory governments or private companies)
- is addressed to the Speaker and the House, for example not the Prime Minister or an individual Minister
- is clear about what is being asked for
- does not promote illegal acts, and
- does not contain language that is offensive (another page also says that petitions must be in English and written using "moderate language").

In order for a petition to be accepted by the Senate:

- the petition must be addressed to the Senate
- it must contain a request for action by the Senate or the Parliament
- the text of the petition must be visible on every page
- only original documents will be accepted, no faxes or photocopies
- no letters, affidavits, or other documents can be attached.

Features of the e-petition sites

The Australian and New Zealand e-petition websites are structured a little differently.
The New Zealand site takes petitioners through a step-by-step process, requiring the completion of one separate step before moving on to the next. This includes a step for searching for existing petitions that are similar. A bar at the top of the form shows you what step you are at and what you have left to complete. The Australian site also provides an online form, with the first page requiring the reasons and request for action boxes to be filled out before moving to the next stage. The first page does not show what steps are remaining. Both sites also allow users to find and sign existing petitions online. The New Zealand site lists the petitions that are open for signature, along with the number of signatures that each has currently. A search box on the left side of the page also allows filtering by date. Once you click on a petition you can read the details, see when it is closing for signature, and click a button to sign the petition (there is also an option to view the details of all the petitions on one page). This takes you to a form where you provide your name and email address and confirm that you have read the privacy disclaimer before clicking the “Sign Petition” box. There is also a “share” toolbar at the bottom of the page for each petition so that people can spread the word on social media or by email. Similarly, on the Australian site you can view all of the petitions currently open for signature, their closing date, and the current number of signatures. You can also just view petitions that are “recent” or “popular,” and there is a search box as well. Clicking the “sign” button beside a petition takes you to a form that requires your name and email address, and to check a box stating that you agree to abide by the terms and conditions for ePetitions (and also declare that you are not a member of parliament). Each page on the site also has a share toolbar. 
I should also note that, apart from the federal parliament, some Australian state parliaments also allow e-petitions, specifically Queensland and Tasmania, although they seem to require a request form to be filled out and printed, rather than petitions being submitted directly online. Once published online, people can join or sign petitions using the websites.

E-petition platforms in other countries

In researching this post, I learned that the United Kingdom House of Commons launched a new e-petitions site in July 2015, enabling online petitions of both the Parliament and the UK government. E-petitions were actually previously authorized by 10 Downing Street, which first launched its platform in 2006, with petitions directed to government departments. The UK government later launched a new site in 2011. Under the old approach, there was only a prospect of response if a petition obtained more than 100,000 signatures. Now, if a petition gets 10,000 signatures the government will respond, and if it gets 100,000 the petition will be considered for debate in Parliament. Earlier adopters of e-petitions were the Scottish Parliament (active since 2000) and the Welsh National Assembly. The German Bundestag has had an e-petition portal since 2005, and I also located sites in Ireland and Luxembourg. The European Parliament has such a portal as well. A comparative study of the right to petition in European countries, which includes a discussion of e-petitions starting at page 25, was published by the European Parliament in 2015. The library of the Parliament of Victoria, in Australia, also published a research paper that examines different systems in 2016. Outside of Europe, the Canadian House of Commons system for receiving petitions online went live in December 2015. The guidance for e-petitions shows that people need to create an account in order to submit a petition.
They also need a member to sponsor the petition and must obtain at least 500 signatures in 120 days in order for the petition to be presented in the House. Once that occurs, the government has 45 days to respond. In 2016, an Australian blogger wrote an interesting comparison of the features of the new Australian, Canadian, and UK sites, as well as the “We the People” system on the White House website in the U.S. One observation, for example, was that with regard to the Australian system, “[t]he entire process felt very cold and impersonal, unlike the UK and US experiences – which were warm and inviting.” Overall, he was critical of the Australian approach and thought the site could be designed better with the user in mind. The Petitions Committee of the Australian House of Representatives is actually currently examining the e-petitioning system, including the extent to which it has “met the expectations of Parliamentarians and members of the public” and possible future enhancements. Do you know of any other parliaments (or governments) with e-petition portals?
Following the publication of the new OfSTED Education Inspection Framework 2019, education and training providers will be aware of the new Quality of Education judgement. This judgement has been introduced to focus on the curriculum, which sets out what apprentices need to know and be able to do. OfSTED's definition of a curriculum is:

"The curriculum is a framework for setting out the aims of a programme of education, including the knowledge and skills to be gained at each stage (intent); for translating that framework over time into a structure and narrative, within an institutional context (implementation); and for evaluating what knowledge and skills learners have gained against expectations (impact/achievement)."

The curriculum is therefore made up of 3 distinct parts: intent, implementation and impact.

Education and training providers need to be clear that curriculum intent is not a list of your curriculum aims published in a document or on a website. Curriculum intent is:

"a framework for setting out the aims of a programme of education, including the knowledge and understanding gained at each stage"

A framework of aims is very different from a bullet point list of aims. When considering a curriculum intent framework, education and training providers need to ensure the following:

| | Can be demonstrated |
| --- | --- |
| Your apprenticeship curriculum sets out how the knowledge, skills and behaviours needed to take learners to the next stage of education, training or employment will be developed. | Yes / No |
| You can demonstrate, clearly, what learners need to be able to know and do at the end of their learning or training programme. | Yes / No |
| You have planned and sequenced the curriculum so that learners can build on previous teaching and learning and develop the new knowledge and skills they need. | Yes / No |
| Your curriculum offers learners the knowledge and skills that reflect the needs of the local and regional context. | Yes / No |
| Your curriculum intent takes into account the needs of learners, employers, and the local, regional and national economy, as necessary. | Yes / No |
| Your curriculum ensures that all learners benefit from high academic, technical and vocational ambitions. | Yes / No |
| Your curriculum is ambitious for disadvantaged learners or those with SEND, including those who have high needs, and should meet those needs. | Yes / No |

"the translation of that framework over time into a structure and narrative, within an institutional context"

The next stage in the Quality of Education judgement is to consider how education and training providers demonstrate their implementation. Like curriculum intent, this cannot be covered by a statement in a document or posted to a website. Inspectors are going to use various methods to judge curriculum implementation, and a statement is not going to be sufficient. Inspectors will want to see:

- The curriculum that learners follow
- Intended end points towards which those learners are working
- How well learners are progressing through the curriculum
- Reviews of curriculum plans or other long-term planning
- Observations of classes, workshops and other activities
- Learner work
- Views of learners
- How staff record, upload and review data
- Content and pedagogical content knowledge

When considering curriculum implementation, education and training providers need to ensure the following:

| | Can be demonstrated |
| --- | --- |
| Your staff have expert knowledge of the subjects that they teach. If they do not, they are supported to address gaps so that learners are not disadvantaged by ineffective teaching. | Yes / No |
| Your staff enable learners to understand key concepts, presenting information clearly and promoting discussion. | Yes / No |
| Your staff check learners' understanding effectively and identify and correct misunderstandings. | Yes / No |
| Your staff ensure that learners embed key concepts in their long-term memory and apply them fluently and consistently. | Yes / No |
| Your staff have designed, and they deliver, the subject curriculum in a way that allows learners to transfer key knowledge to long-term memory. | Yes / No |
| The curriculum is sequenced so that new knowledge and skills build on what learners know and can do, and learners can work towards defined endpoints. | Yes / No |
| Your staff use assessment to check learners' understanding in order to inform teaching. | Yes / No |
| Your staff use assessment to help learners to embed and use knowledge fluently, to develop their understanding, and to gain, extend and improve their skills, and not simply memorise disconnected facts. | Yes / No |

"the evaluation of what knowledge and skills learners have gained against expectations"

Finally, education and training providers will then need to address curriculum impact, and be clear that the whole purpose of the Quality of Education judgement is for inspectors to focus more on the curriculum and less on the generation, analysis and interpretation of performance data. Inspectors will be interested in the conclusions drawn and actions taken from any internal assessment information, but they will not examine or verify that information first hand. To make their judgement, inspectors will look at:

- Learner attainment
- Evidence of learner progress
- Destination data
- Conversations about what learners have remembered about the knowledge and skills they have acquired and how their learning enables them to connect ideas
When considering curriculum impact, education and training providers need to ensure the following:

| | Can be demonstrated |
| --- | --- |
| Your staff have developed a well-constructed, well-taught curriculum that leads to good results, and these results reflect what learners have learned. | Yes / No |
| Your staff ensure that disadvantaged learners and learners with SEND acquire the knowledge and skills they need to succeed in life. | Yes / No |
| Your staff ensure that, as well as end-point assessments and examinations, assessment of learners' work can demonstrate what knowledge, skills and behaviours have been developed. | Yes / No |
| Your curriculum ensures that all learning builds towards an endpoint. | Yes / No |
| Through your curriculum, all learners are being prepared for their next stage of education, training or employment at each stage of their learning. | Yes / No |

DEVELOPING AN APPRENTICESHIP STANDARD PROGRAMME

When developing an apprenticeship programme (standards), providers need to consider their curriculum intent, implementation and impact. As well as the detailed questions set out in the sections above, there are general questions that should be asked throughout the curriculum design process:

- What is going to be taught, how will it be taught, and how will it be sequenced over the course of the apprenticeship?
- How are learners going to acquire the necessary knowledge, skills and behaviours?
- How will learners be supported in their progression, and are they being provided with knowledge and skills which will benefit them in the future?
- What elements of the programme give learners transferable skills and knowledge?
- How are the apprentices' existing skills and knowledge going to be built on?

IDENTIFYING WHAT APPRENTICES NEED TO KNOW AND CAN DO

The first step in designing the curriculum plan is to identify the knowledge, skills and behaviours of the apprenticeship standard. These standards set out the endpoints that learners will be working towards.
However, when designing the curriculum, the intent, implementation and impact cannot be addressed in isolation. In order to be able to demonstrate impact, providers will first need to determine what the performance measures are for each outcome. This should be a four-step process: state what will be delivered or what the apprentices will need to do, identify the measure type that will be used, define the performance benchmark, and finally schedule when the outcome will be implemented or completed. For each knowledge, skill or behaviour (KSB) in the apprenticeship standard, providers should be able to show what they are delivering to develop the KSB and how they will determine whether the learners have absorbed and retained what has been taught.

What types of measures can providers use? There is a wide range of performance measures that providers can use, such as:

- Grades achieved by learners on assessed work
- Products or work
- Portfolio submissions

Once education and training providers have identified the outcomes, what will be delivered and how learning will be assessed, the next step in the process is to create the curriculum sequence. Inspectors will make a judgement on how carefully leaders have thought about the sequence of teaching knowledge and skills to build on what learners already know and can do. Like curriculum intent statements, the curriculum sequence cannot just be a vague timetable added to a document or web page. There needs to be detail about what is being delivered, when it is being delivered and why it is being delivered. Inspectors will check to see how "leaders have ensured that a subject curriculum includes content that has been identified as most useful and that this content is taught in a logical progression, systematically and explicitly for all learners to acquire the intended knowledge, skills and behaviours." From an apprenticeship perspective, it makes sense to plan the curriculum chronologically.
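The four-step performance-measure process described above (state what will be delivered, identify the measure type, define the benchmark, schedule the review) can be sketched as a simple record. This is only an illustrative sketch; the class and field names are my own, not part of any OfSTED or provider specification:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PerformanceMeasure:
    """One performance measure attached to a KSB outcome (illustrative only)."""
    delivery: str        # Step 1: what will be delivered / what apprentices will do
    measure_type: str    # Step 2: e.g. graded assessment, product, portfolio submission
    benchmark: str       # Step 3: the performance benchmark that counts as success
    scheduled_for: date  # Step 4: when the outcome will be implemented or reviewed

# Hypothetical example for a single knowledge outcome in a standard
measure = PerformanceMeasure(
    delivery="Workshop on safe working practices, followed by a written test",
    measure_type="graded assessment",
    benchmark="80% of apprentices achieve a pass or above",
    scheduled_for=date(2020, 3, 1),
)
```

Recording each measure in a structure like this makes it straightforward to review every outcome against its benchmark at the cyclical review point.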
It can be broken down on a week-by-week basis, but generally it is more sensible to do this month by month. However, it cannot be something as vague as this:

| Month 1 | Workplace Shadowing |
| Month 2 | Workshop 1 |
| Month 3 | Block Course 1 |
| Month 4 | Workshop 2 |
| Month 5 | Block Course 2 |
| Month 6 | Workshop 3 |

There must be detail: What is the activity? What will apprentices learn? How does it build on previous learning? Is the learning in a logical order? Which outcomes or endpoints are being covered? What learning material will be used? What assessment is planned?

DEVELOPING CURRICULUM MATERIAL

Inspectors will also focus on how the curriculum is taught at subject, classroom or workshop level. They will want to see examples of teaching, but will also expect to see examples of teaching materials: PowerPoints, eLearning, assessments and workbooks. The development of learning and teaching material is key to successful delivery. It has to be relevant to the curriculum activity and the intended learning outcomes, and its quality has to meet learner expectations. If staff really cannot create a PowerPoint presentation that is fit for purpose, that is a clear indicator that additional CPD might be needed. How well do learners enjoy lessons or workshops? Do you get learner reviews at the end of a session? Do you perform observations of teaching, learning and assessment? How do you make sure that staff have the skills to deliver workshops?

Providers also need to review the quality of eLearning, and in some cases actually introduce it to their programmes. What sort of eLearning system do you use? Is content SCORM or Tin Can (xAPI) compliant, and how can you track learning and progress? Do you need a Learning Record Store to dive deeper into apprenticeship learning and activities? Can it be used on mobile devices? Have you considered gamification to encourage learner engagement? How do you feed back to learners? How does staff feedback support learner development?
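For context on the Tin Can (xAPI) tracking mentioned above: xAPI works by sending JSON "statements" of the form actor-verb-object to a Learning Record Store (LRS). The sketch below builds a minimal statement; the learner, activity URL and LRS endpoint are placeholders, not real services:

```python
import json

# A minimal Tin Can / xAPI statement of the kind a Learning Record Store
# collects. The learner details and activity URL below are placeholders.
statement = {
    "actor": {
        "name": "Example Apprentice",
        "mbox": "mailto:apprentice@example.com",
    },
    "verb": {
        # "completed" is a standard ADL verb identifier
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/workshops/workshop-1",
        "definition": {"name": {"en-US": "Workshop 1"}},
    },
}

payload = json.dumps(statement)
# In practice this JSON would be POSTed to the LRS's statements endpoint
# (e.g. with the requests library), and the LRS would store it for reporting.
```

An LRS accumulating statements like this is what lets a provider report on which activities each apprentice has completed and when.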
What do you use to award grades to learners?

THINKING ABOUT CURRICULUM IMPACT

When inspectors evaluate the impact of the education provided to learners, they will focus on what learners have learned, and the skills they have gained and can apply. If the curriculum intent planning has been done correctly then the delivery staff will already have identified the outcomes (endpoints) that need to be achieved as well as the performance measures that will be used to determine success. To measure the curriculum impact, staff need to review performance on a cyclical basis. This tends to be done annually. Where apprenticeship providers run rolling programmes they will need to determine a suitable cut-off point for their reviews. The review process should follow the format of:

The review of the performance measures has to have a purpose in the improvement of the curriculum. Each performance measure needs to be worked through and staff need to add their findings and recommendations. Where findings show that something needs to be improved, the reviewers should add this to a programme action plan. The action plan is the key document for making improvements to the curriculum and overall delivery.

HOW DOES STEDFAST SUPPORT APPRENTICESHIP CURRICULUM DEVELOPMENT?

Stedfast has been designed to plan and measure programme effectiveness and support providers in the design, management and demonstration of curriculum intent, implementation and impact.

Step 1 Create an assessment plan

The first step in the curriculum planning process in Stedfast is to create the apprenticeship assessment plan. Assessment plans have two purposes:
- Identify the knowledge, skills and behaviours and set out what will be delivered / taught and how learning will be assessed or demonstrated (Intent)
- Assessment plan reviews to add findings and determine if knowledge, skills and behaviours have been developed (Impact)

It should be noted that assessment plans are not limited to just apprenticeship programmes.
Providers are also able to conduct self-assessment reporting, strategic planning and accreditation reviews in the assessment plan module.

Step 2 Working with Outcomes & Measures

Each programme is made up of outcomes / endpoints (standards). If staff need to demonstrate cross-curriculum links then this can also be done. It helps to demonstrate how activities map to other delivery plans such as maths, English, Prevent, Safeguarding etc. Outcomes then need to have one or more measures.

Step 3 Create Curriculum Plans

Adding Curriculum Activities

The key feature in the curriculum planning module is the activity planner. Providers can decide if they want to create curriculum plans based on units, modules, courses, themes, topics, workshops etc. In this example, the curriculum has been created around standalone modules and topic workshops.

Adding an activity

Adding a curriculum activity is a three-step process:
1 Activity Options – Explain what the activity is, when it will be scheduled to be delivered and whether it will require Learning, Practice and/or Assessment.
2 Add Outcomes – Identify the outcomes that the activity will cover – these are directly linked to the outcomes assessment plan.
3 Delivery Details – Add the delivery details for learning, practice and assessment depending on what has been selected in Step 1. Staff are also able to add resource documents, lesson plans, PowerPoints, handouts etc.

Curriculum Mapping – Check all outcomes have been covered by curriculum activities

Curriculum Sequence – The curriculum sequence is a key requirement when demonstrating curriculum intent

20% Off the Job Training

In the Stedfast Curriculum Planning module there is a useful feature for providers to plan their 20% off-the-job training. This is added after an activity has been created.
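As a rough illustration of the hours planning behind the 20% figure, the headline calculation is simply a fraction of the apprentice's total contracted hours. This is a simplified sketch (actual funding rules carry additional caps and exclusions; the function name and numbers are invented):

```python
def off_the_job_hours(contracted_hours_per_week: float,
                      programme_weeks: int,
                      fraction: float = 0.20) -> float:
    """Headline planned off-the-job training hours for a programme.

    A simplified sketch of the 20% rule: total contracted hours
    multiplied by the off-the-job fraction. Real funding rules add
    caps and exclusions not modelled here.
    """
    return contracted_hours_per_week * programme_weeks * fraction

# e.g. a 30-hour week over a 52-week programme
print(off_the_job_hours(30, 52))  # 312.0
```

The planned hours can then be spread across the scheduled activities, which is what the planner feature supports.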
Providers are able to add the number of expected learning hours against the scheduled weeks or months that have been set for the activity, and against the learning, practice or assessment activities, again dependent on what has been selected.

Measuring Curriculum Impact

At a specific point decided by the organisation, curriculum plan owners will conduct a review of the assessment plan. These will typically be one-year cycles but can be shorter or longer depending on the provider's requirements. Each measure is allocated to a member of staff, making the review process collaborative. Staff will add their findings and recommendations. Staff are then able to add actions against each of the findings.

Providers who are interested in Stedfast can contact us via the website https://stedfast.io We run online demos for providers who would like to see the features. For users of Stedfast, we run onboarding sessions for the system admin and there are comprehensive support pages.

Other Modules in Stedfast
- Staff Performance Management
- CPD Management
- Policy and Procedure Management
- Risk Assessment

Apprenticeship Curriculum Planning Checklist

We have also launched a free and easy-to-use Apprenticeship Curriculum Planning Checklist which can be accessed using the button below:
Above: Dawlish in 1970, with a 'Western' heading west with an express from Paddington. Left: Dawlish Station in 1861 with a west-bound express.

Proof of a settlement in Dawlish did not come until 1044, when King Edward the Confessor, the last Anglo-Saxon king of England, granted the parish of Dawlish to his then Chancellor and chaplain, Leofric, on the condition that he built bridges and supplied soldiers to construct defences in time of war. The charter recording this donation is kept in the archives of Exeter Cathedral and is well preserved. Much of it was written in Latin, although the boundaries of the land donated were given in English (Anglo-Saxon). This is the oldest record of Dawlish in history. However, it is quite possible that a community existed centuries before, even back to the period of the Saxon invasions in the 5th and 6th centuries AD, as the parish church of Dawlish was dedicated to St. Gregory the Great. When Leofric arrived, the manor of Dawlish extended from roughly Teignmouth in the south to Cofton and Cockwood in the north, and the top of Haldon Hill in the west. It was a large area, mostly uninhabited and covered in thick forest, as the poor soil would have made it difficult to cultivate. The main reason that settlers established a 'village' in the area was that it was protected, sheltered on three sides by hills and on the remaining face by the sea. The sea provided limited food, while the wooded area harboured animals which gave meat, and wood for burning to provide heating and a means of cooking, as well as for building. Several freshwater streams gave drinkable water. Work was provided by cultivating the salt marshes, which also gave a method of preserving food and, above all, trade with other communities.
At this time the settlement which grew into Dawlish was not on the coast. The sea was feared by most, for nobody knew what was 'over the water's edge', and locals knew that damaging storms emerged from the sea and flooded the land, so they kept clear of the coastline. Evidence of early farming settlements was found at Aller Farm, Smallacombe, Lidewell and Higher and Lower Southwood. The name of Dawlish has developed over the years; the earliest spelling recorded is 'Doflisc' (Anglo-Saxon) or 'Dolfishe' (Latin). The exact meaning or derivation is unknown, although it is thought to have meant 'a fruitful mead in a bottom, or on a river's side'. Over the centuries Dawlish was referred to by many names; early on the name had been synonymous with 'Devil Water' and later with 'Meadowland by Running Water', the latter being the motto adopted in the 20th century by the local council. Other local history will tell that the name 'Devil Waters' came from the red waters which flow from the hills after heavy rain. This still happens today, and it is quite common to find the stream running through the town bright red in colour after heavy winter rains. At the time Deawlisc (Dawlish) was a poor community with virtually no means of travel; the only ways in or out would have been over very rough cart tracks on to Haldon. Early maps show one track heading towards Luscombe Hill and on in the Teignmouth direction, and another going in the direction of Ashcombe and towards Exeter. A more substantial track existed between the port towns of Exeter and Teignmouth and was met by the tracks out of the Dawlish community. When Leofric died in 1072 he gave his Manor of Dawlish lands to the Dean and Chapter of Exeter Cathedral. The area then remained under Church control until 1807. After Leofric's death in 1072, Dawlish is mentioned in the Domesday Book, outlining the land and property owned by Bishop Osbern, Leofric's successor.
It records that the bishop had 30 villeins (a villein was a person bound to the land and owned by the feudal lord), eight bordars, three serfs, three cows, two swine, 100 sheep, a coppice three furlongs in length and one in breadth, six acres of meadowland and 12 acres of pasture. It was valued at just £8 a year. The Domesday entry shows that Dawlish had cultivated land, with sheep as its main wealth. The villeins would have lived in cob houses clustered around the church and worked the Bishop's land. They would have lived mainly off beans, fruit and hard bread, with lard whenever they could get hold of it. Local cider and beer would have been produced, providing a safer source of liquid than the water of the time. The population of around 400 in 1080 grew only slowly; sickness was rife, and the only real developments came as land was improved and better food became available. Records show that the Black Death, or bubonic plague, came to Dawlish in the 1340s and 1350s and almost wiped out the entire local population. By this time some wealthier gentry were starting to emerge, and these people were able to escape the effects of the plague as they seldom left their estates and rarely came into contact with the sick working classes. The plague returned to Dawlish again in 1629. The Industrial Revolution, when it arrived in Dawlish, made significant changes to life, and the village quickly developed into a small town. The first industrial change was the operation of two flour mills, powered by water wheels fed from the water course through the town. One, built in the late 1600s, was located in what is now Brunswick Place; the other, built around 1730, was in Church Street. A further mill was located near Ashcombe. By the end of the 18th century life in coastal towns such as Dawlish was starting to change for ever: the fear of the sea was receding, and people started to extol the virtues of fresh sea air and the possible healing qualities of sea water.
Dawlish found itself fashionable with the well-off gentry. At the time travel was virtually impossible for most, and the gentry were the only group who could afford to travel by private coach. Few records exist, but it is widely considered that Dawlish did not have a regular (if you could call it that) stagecoach service until around 1812. The new wealthy visitors changed the face of life in the area for ever. With transport so difficult, visitors arrived for long stays, often with extended families complete with servants, typically for an entire summer season. In terms of town development this was fruitful, for many enjoyed the area, purchased land and built new property. The cob-built cottages in the village at the time were not what the gentry sought, and so the area of residence spread further, especially along the banks of the Brook. Soon a number of fine houses and even villas were built using new and improved methods of construction, allowing the previously unthought-of position for many adjacent to the sea. Indeed, some early documentation actually refers to 'sea views' and bathing potential! Even after sea bathing was recognised as a healthy and pleasurable pastime, it remained very much a gentleman's 'hobby'; ladies seem not to have been welcome to sea bathe until the latter part of the 19th century. By 1803 Dawlish's development was moving forward fast. Local man John Manning masterminded the improvement of the land either side of the stream, or Brook, which ran right down the middle of the community, eventually allowing modern houses to be built closer to the sea front. His work physically straightened the town's water course, while embankments were built. This work led to the development of a new street, Pleasant Row, which is now known as The Strand. Many people often wonder why the two main roads of Dawlish, either side of the town, are so far apart, even with the relatively narrow water course down the middle.
The reason lies in the problem of flooding, which, although now somewhat controlled, has never gone away, even today. Waters from heavy rain on the hills at the back of the town build up both volume and speed as they rush downhill towards the open sea. If a high tide, compounded by a south-westerly wind, hits at the same time, the water from the hills has nowhere to go and floods the lower part of the town. One of the first reported incidents of serious flooding causing major damage was in 1810, when fast-flowing waters washed away eight new bridges, much of the then newly created public lawns and embankments, and two residential properties in what is now Brook Street. After this disaster the Brook was altered and weirs were built to prevent a recurrence. At this time, the grass area in the middle of the town was still grazed by sheep. As Queen Victoria arrived on the throne, early plans were being drawn up to bring the railway to the town. A number of propositions were put forward in the 1830s, which led to the building and opening of Isambard Kingdom Brunel's Atmospheric Railway in 1846. The railway suddenly brought new life to the town. Many hundreds of 'navvies' worked on construction of the line, digging cuttings and boring tunnels. One of the most momentous days in Dawlish history was Saturday 30 May 1846, when the first passenger train operated; by today's standards it was slow, but the newspaper of the period hailed the train as "taking only 40 minutes to reach Dawlish from Exeter". It is fascinating that the opening of the railway made Dawlish the first seaside resort west of Weston-super-Mare to be served by railway. At the time, long-distance transport was very much the preserve of the upper classes. The majority of people toiled six days a week and had to attend church once or twice on Sundays. Little time existed for people to visit the seaside except at Bank Holidays.
Development of Dawlish slowed towards the end of the 19th century, but increased wealth for the town meant that living standards improved, with the introduction of gas, a usable water supply, sewerage systems and even street lighting. Household electricity was also laid on for those with sufficient funds. Protection and safety of the townsfolk also improved, with a police office opening in 1857 (which is more than the town has today). A coastguard look-out was opened in 1868 to provide some protection for mariners. In 1906 a New Zealander introduced the now famous black swans to Dawlish Water, or The Brook. John Nash, a Dawlish-born man who emigrated during adulthood but paid frequent visits to the town, decided that the town needed some form of uniqueness. The black swans are still to be found on Dawlish Water, though today they are supplemented by dozens of other species. In the early 1900s, some workers from bigger businesses and employees of the gentry began to receive paid holidays. This saw a major upturn in visitor numbers to the town, with some deciding after a couple of visits to settle in the area. These increases saw some smaller housing erected (quite large by today's standards) in streets such as Luscombe Terrace and Hatcher Street, while open spaces in other 'older' roads were built over with quality housing. World War I stopped most further building until the early 1920s. After the First World War was over, Dawlish became even more established for the day tripper. This had an adverse effect on town life, which became less gentrified and more suited to the lower classes. The one- and two-week annual paid holiday became the norm, and more and more people wanted to travel to the seaside town, mainly by train. At around the same time wealthy folk from industrial areas, especially London, Birmingham and Liverpool, started to visit or retire to the area, all of which led to the once elegant villas being turned into hotels and guest houses.
The area just east of Dawlish, which became known as Dawlish Warren, owes its success to the Great Western Railway, which first built a station known as Warren Halt close to Langstone Rock in 1905. Prior to this, only a few large houses and mansions were to be found on the hill behind the Warren. By 1929, with the introduction of air travel, Dawlish even had an airport! The Great Western Railway built a small aerodrome on Haldon to serve the greater Torbay area on a Cardiff to Plymouth route. The Second World War saw an end to the poorly patronised service; however, the airport remained in use for many years under military control, and remnants of the old airfield can still be seen today. By the 1930s Dawlish had become popular as a low-budget holiday resort, with holiday camps and caravans turning up. The railway played a major role in this, bringing literally thousands upon thousands of people to the area every summer, with through trains from most corners of the UK. The outbreak of world hostilities again in 1939 brought further development to a halt and considerably slowed down holiday travel. The Second World War also ended a plan which could have seen the railway disappear as we know it from Dawlish sea front, with plans put forward by the Great Western Railway to build a Dawlish 'cut-off' from Powderham via Gatehouse and Weech Road, leaving Dawlish served by a branch line. If this plan had been furthered, it would have gone right through the author's house! In 1953, the year Queen Elizabeth II took the throne, the town of Dawlish adopted the Latin phrase 'Pratum Juxta Rivos Aquarum' as its motto, which translates (literally) as 'Meadowland by Running Waters'. The heraldic emblem of the town incorporates the arms of Edward the Confessor (top left: a cross patonce between five martlets, blue in colour), those of Leofric (top right: a dark cross with a bishop's mitre at the centre), and of the See of Exeter.
Holiday travel started to resume after world hostilities ended in 1945, with massive growth in holiday trade in the 1950s. This generated much business for the newly formed British Railways and continued through the early 1960s. By the 1970s, the town and surrounding area were starting a major change; the annual UK holiday was becoming a thing of the past, with low-cost, easily accessible air travel tempting the previous UK holidaymaker to seek pastures new. The hotels started to close and be demolished, to be replaced by retirement and second-home accommodation, to such an extent that by 2012 Dawlish had just one large hotel of merit! The guesthouse market was also adversely affected by the changing holiday patterns, with properties sold off for cheap one-room housing, while others have been demolished and rebuilt as high-price nursing homes. The vast majority of remaining holiday accommodation in the area now concentrates on huge camp sites offering a range of packages, from self-catering holidays to sites to pitch your own van or tent. It is said that the camp sites in and around Dawlish, Dawlish Warren, Starcross and Teignmouth offer a staggering 20,000 beds. Dawlish station from the Coastguard's bridge in 1951.
Ball mill. A typical type of fine grinder is the ball mill. A slightly inclined or horizontal rotating cylinder is partially filled with balls, usually stone or metal, which grind material to the necessary fineness by friction and impact with the tumbling balls. Ball mills normally operate with an approximate ball charge of 30%.

Ball Size as Initial Charge. Commercial ball sizes range from 10 to 150 mm. The number, size and mass of each ball size depends on mill load and whether or not the media is being added as the initial charge. For the initial charging of a mill, Coghill and DeVaney (1937) defined the ball size as a function of the top size of the feed, i.e., d_V = 0.40 K √F.

Ball mills designed for long life and minimum maintenance: overflow ball mill sizes range from 5 ft. x 8 ft. with 75 HP to 30' x 41' and as much as 30,000 HP. Larger ball mills are available with dual pinion or ring motor drives. Our mills incorporate many of the qualities which have made the Marcy name famous since 1913. Laboratory planetary ball mills are also available, including dual, cryogenic, vertical (for glove box use) and heavy-duty full-directional planetary models.
For ball mills, Bond (1958) proposed an empirical formula giving the top size of make-up balls, D_M, as a function of the feed size x_G (80% passing size, in µm), the ore true specific gravity sg, the ore Bond Work Index Wi (kWh/st), the fraction of critical speed f_c and the inside mill diameter D (m).

Grinding in ball mills is an important technological process applied to reduce the size of particles which may have different natures and a wide diversity of physical, mechanical and chemical characteristics. Typical examples are the various ores, minerals, limestone, etc. The applications of ball mills are ubiquitous in mineral processing.

Calculate and Select Ball Mill Ball Size for Optimum Grinding. In grinding, selecting (calculating) the correct or optimum ball size that allows the best and optimum/ideal or target grind size to be achieved by your ball mill is an important thing for a Mineral Processing Engineer (AKA Metallurgist) to do.

The mill product can either be finished size ready for processing, or an intermediate size ready for final grinding in a rod mill, ball mill or pebble mill. AG/SAG mills can accomplish the same size reduction work as two or three stages of crushing and screening, a rod mill, and some or all of the work of a ball mill.

Our ball mills are industrial grade and designed for continuous operation, equipped with oversize roller bearings and a complete drive system. All wear parts are highly abrasion resistant and replaceable. The capacity, or throughput, of a ball mill is directly linked to the particle size of the ball mill discharge.
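The Coghill and DeVaney (1937) initial-charge rule quoted earlier, d_V = 0.40 K √F, can be expressed directly in code. This is a sketch only: the text does not give the value of the constant K or the working units, so both are left as caller-supplied assumptions:

```python
import math

def initial_ball_size(feed_top_size: float, k: float) -> float:
    """Coghill and DeVaney (1937): d_V = 0.40 * K * sqrt(F).

    F is the top size of the feed and K is an ore/mill-dependent
    constant; neither its value nor the units are specified in the
    source text, so both are the caller's responsibility.
    """
    return 0.40 * k * math.sqrt(feed_top_size)

# With K = 1 and a feed top size of 400 (arbitrary units):
print(initial_ball_size(400, 1.0))  # 8.0
```

Bond's 1958 make-up ball formula plays the same role for an operating mill, but its full expression is not reproduced in the text, so it is not coded here.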
Wide Application of Ball Mills. The ball mill, as the key grinding equipment for materials, is widely used for mineral grinding such as cement, lime, quartz, slag, silica, iron ore, copper ore, gold ore, bauxite, calcite, barite, gypsum and other minerals in the mining, quarrying, chemical, cement and other industries. The ball mill is a necessary piece of equipment in an ore beneficiation plant.

Working Principle & Operation. The apparent difference in capacities between grinding mills (listed as being the same size) is due to the fact that there is no uniform method of designating the size of a mill. For example, a 5′ x 5′ ball mill has a working diameter of 5′ inside the liners and has 20 per cent more capacity than all other ball mills designated as 5′ x 5′ where the ...

DOVE is a manufacturer of ball mills, grinders, crushers, and grinding/crushing equipment for gold mining, gemstone mining, metal mining and mineral mining (DoveMining.com). DOVE ball mills are size reduction machines, designed for grinding applications where fine material is required.

A ball mill is a type of grinder used to grind, blend and sometimes mix materials for use in mineral dressing processes, paints, pyrotechnics, ceramics and selective laser sintering. It works on the principle of impact and attrition: size reduction is done by impact as the balls drop from near the top of the shell. A ball mill consists of a hollow cylindrical shell rotating about its axis.

Ball Mill For Gold. The ball mill is the most widely used kind of grinding equipment. Zenith ball mills are widely used in the beneficiation of various types of ores, and in the electricity, cement and chemical industries. With a high comminution ratio, they can carry out dry or wet pulverising and can meet the demand for sustainable large-scale production.
Ball-Mill Base with Ventilated Motor-Cover Installed. Ball-Mill Base with Ventilated Motor-Cover Removed. The Ball-Mill Drive System: looking inside the mill's motor compartment, you can now see the motor, with a small fan and pulley on its shaft. That small pulley is connected by a drive belt to a large pulley which is mounted on the unit's drive ...

Our Gold Stryker® GS 4000HD is a high-quality flail impact gold mill made here in the USA and can process and crush up to 2-3 tons of rock a day, all the way down to #300 mesh through the mill to release the gold. Perfect for the small gold mining operation.

A wide variety of gold mining ball mill options are available. There are 7,189 gold mining ball mill suppliers, mainly located in Asia. The top supplying countries or regions are China, India and South Korea, which supply 99%, 1% and 1% of gold mining ball mills respectively.

Often, the ball size used in ball mills is oversize "just in case". This safety factor can cost you much in recovery and/or mill liner wear and tear.

Video showing our ball mills for 1 and 2 tons per hour. These mills can crush quartz ore and liberate the gold and sulphides for concentration with our shaker tables. Check out our other videos of ...
Introduction: Ball mills are used primarily for single-stage fine grinding, regrinding, and as the second stage in two-stage grinding circuits. According to customer needs, ball mills can be of either wet or dry design. Ball mills have been designed in standard sizes with final products between 0.074 mm and 0.4 mm in diameter.

Small Capacity Ball Mill For Gold Ore Grinding, from Mine Mill supplier and manufacturer Henan Baichy Machinery Equipment Co., Ltd.

Gold mini ball mill – Small-scale gold mining and… GRINDING MILLS-BALL MILLS-New & Used Mining & … The ball mill is a key piece of equipment.

Overflow Type Ball Mill: a ball mill with a simple structure. Production capacity: 0.17–170 t/h. Product improvement: the wet-type overflow ball mill is lined with Xinhai wear-resistant rubber sheet with excellent wear resistance, long service life and convenient maintenance.

We sent a substantial amount of quartz gold ore to a mill for processing. After a couple of hundred tests on the material an average grade was calculated. After milling for several days the mill says there is way less gold in the ore than what had been estimated.
Once milling began, samples were taken.

Used Mining Equipment: ColesMining.com provides comprehensive used equipment solutions – used SAG mills, ball mills, rod mills and tower mills, including equipment from Svedala, Allis Chalmers, Krupp Polysius, Dominion Engineering, Hardinge, Denver, Outotec, Symonds, Fuller Traylor, Kemco and Dorr …

Small Capacity MQG1500x3500 Gold Mining Ball Mill exported to Tanzania for a 100tpd gold CIL plant, from Mine Mill supplier and manufacturer Henan Xingyang Mining Machinery Manufactory.

Ball Mills: Steel Ball Mills & Lined Ball Mills. Particle size reduction of materials in a ball mill with the presence of metallic balls or other media dates back to the late 1800s. The basic construction of a ball mill is a cylindrical container with journals at its axis.

The ball mill is very similar to the SAG mill, except it has a larger proportion of steel balls to assist in the grinding process. Following grinding in the ball mill, the material is returned via the sump (8) to the secondary hydrocyclone (9) for resizing.
President Barack Obama’s aggressive new federal transgender rules require 55 million K-12 kids and teachers to speak in a new government-approved dialect, to ignore what science says about sex, and to comply with privacy-violating orders whenever even one teen in their school claims to have a “gender identity” problem. Obama described the new federal agency directive and instructions, which were announced May 9, as “here’s how there are schools that have been wrestling with this problem, and have, we think, done a good job in accommodating them in a way that is good for everybody.” The documents cite examples from far-left, pro-transgender schools in New York, D.C., California and elsewhere, which Obama is now imposing on all 100,000 K-12 schools and their 55 million American kids and teenagers. The italicized paragraphs are quotes from the 25-page instruction book.

- First, all normal kids, teenagers and teachers must shut up, salute, and get with the government’s program or be punished, even for questioning the biological sex of a “transgender” youth during science class. They also have to learn a new language of novel pronouns — “ze” and “they” instead of “he” and “she” — for transgenders who declare themselves as something other than male or female.

The [D.C. Public Schools] Guidance provides examples of prohibited harassment that transgender students sometimes experience, including misusing an individual’s preferred name or pronouns on purpose, asking personal questions about a person’s body or gender transition, and disclosing private information.

- If a girl is uncomfortable with a teenage boy in the girls’ locker room, too bad. She needs to follow orders, shut up and get out so the transgender can get in.

The Washington State Guidelines provide that any student who wants increased privacy should be provided access to an alternative restroom or changing area.
The guidelines explain: “This allows students who may feel uncomfortable sharing the facility with the transgender student(s) the option to make use of a separate restroom and have their concerns addressed without stigmatizing any individual student.”

- If the girl refuses to comply with the new gender-identity, anti-sexes orthodoxy, then transgender allies, AKA the “LGBTQ liaison,” can police the conflict. The DCPS Guidance recommends talking to students to come up with an acceptable solution: “Ultimately, if a student expresses discomfort to any member of the school staff, that staff member should review these options with the student and ask the student permission to engage the school LGBTQ liaison or another designated ally in the building.” As President Obama told Buzzfeed on May 16, “you can learn from these best practices, this is what we are advising.”

- All officials must believe any student when he or she claims to have a novel “gender identity,” and cannot ask for evidence of actual feelings or related actions. The Departments interpret Title IX [law] to require that when a student or the student’s parent or guardian, as appropriate, notifies the school administration that the student will assert a [unusual] gender identity that differs from previous representations or records, the school will begin treating the student consistent with the student’s gender identity.

- Science and data do not matter, nor does the fact that transgender adults comprise perhaps 0.03 percent of the population, or one in every 2,400 Americans. Schools generally rely on students’ (or in the case of younger students, their parents’ or guardians’) expression of their gender identity.
Although schools sometimes request some form of confirmation, they generally accept the student’s asserted gender identity. As President Obama told Buzzfeed, “you can learn from these best practices, this is what we are advising.” If some of the nation’s 100,000 public K-12 schools don’t accept Obama’s rules and impose them on their 55 million enrolled children and teenagers, those schools can lose federal funding or get sued, and there’s nothing the GOP-majority Congress can do about it before the November election. Obama’s new sex-regulations are intended to promote the far-left idea that people’s feelings about their own “gender identity” are legally and morally more important than normal Americans’ evolved civic respect for the different needs and feelings of the Americans in both of the two different, complementary and equal sexes. In effect, Obama’s rules put government power behind the sex-hating “gender identity” claim, and government power against normal people’s preference for the competing-and-fraternizing sexes.

- When a youth declares himself or herself to be transgender, the sexual privacy of all other students in bathrooms and locker rooms must be subordinated so even a single mixed-sex youth can feel comfortable. The Washoe County Regulation provides: “Students shall have access to use facilities that correspond to their gender identity as expressed by the student and asserted at school, irrespective of the gender listed on the student’s records, including but not limited to locker rooms.”

- Sports leagues need to get with the new “gender identity” orthodoxy, so boys’ tough contact sports must accept physically smaller girls, and girls’ leagues must accept physically larger juniors and seniors, or else the lawsuits will be filed. The NYSED Guidance explains that “physical education is a required part of the curriculum and an important part of many students’ lives.
Most physical education classes in New York’s schools are coed, so the gender identity of students should not be an issue with respect to these classes. Where there are sex-segregated classes, students should be allowed to participate in a manner consistent with their gender identity.” As President Obama told Buzzfeed, “you can learn from these best practices, this is what we are advising.”

- Under Obama’s new sex rules, loving parents can be kept in the dark while school officials let even troubled youths begin life-changing sexual experiments on themselves. Parents are often the first to initiate a conversation with the school when their child is transgender, particularly when younger children are involved. Parents may play less of a role in an older student’s transition. Some school policies recommend, with regard to an older student, that school staff consult with the student before reaching out to the student’s parents … California’s El Rancho Unified School District issued a regulation … [that] reminds school personnel to be “mindful of the confidentiality and privacy rights of [transgender] students when contacting parents/legal guardians so as not to reveal, imply, or refer to a student’s actual or perceived sexual orientation, gender identity, or gender expression.”

- School counselors, and the government, can become substitute parents for kids who declare themselves to be transgender, for example, when loving parents are encouraging the child to delay a transgender claim until they’re older. School counselors can help transgender students who may experience mental health disorders such as depression, anxiety, and posttraumatic stress … Schools will be in a better position to support transgender students if they communicate to all students that resources are available, and that they are competent to provide support and services to any student who has questions related to gender identity.
Obama’s rules create a new schoolhouse social structure, where even a single transgender teenager gets the government’s full power to help him or her rewrite the sex and privacy and social rules for all other kids and adults in the school. In practice, of course, that policy can only be imposed if there’s a police force of teachers, guidance counselors, “LGBTQ liaisons,” lawyers, plus some volunteer parents and cheerleading media, to actually enforce the youth’s wishes against all of the unwilling kids, teenagers and parents in the school. That’s OK with Obama — and his many political allies who will be paid to enforce his new national rules in 100,000 schools. Throughout his half-hour interview with BuzzFeed, Obama focused on the wishes of the sexually-conflicted less-than-1-percent, while ignoring the preferences and sexual-privacy needs of the normal 99 percent of schoolkids and parents.

“We’re talking about kids, and anybody who’s been in school, been in high school, who’s been a parent, I think should realize that kids who are sometimes in the minority — kids who have a different sexual orientation or are transgender — are subject to a lot of bullying, potentially they are vulnerable. I think that it is part of our obligation as a society to make sure that everybody is treated fairly, and our kids are all loved, and that they’re protected and that their dignity is affirmed.”

Obama has declared that his gender-identity rules are an issue of “discrimination” for judges to decide, not a democratic issue for voters and politicians to decide. That’s a powerful political strategy, in part, because it allows a few liberal judges to quickly impose national changes. So, for example, just two judges recently backed a legal claim by Gavin Grimm in Virginia, above, which may impose major changes to all the students in his high school.
Obama’s discrimination strategy also encourages his political allies to label their normal opponents bigots, and it pushes his media allies to portray opponents as racists. The discrimination strategy often intimidates politicians and normal people, even though the polls show that the opponents are a majority in the nation, and that most are willing to take some steps to help sex-confused youths and adults. Generally, polls show the public is strongly opposed to new sex rules, and even Obama’s supporters don’t like his one-size-fits-all federal rules. After reading the polls, Donald Trump is adjusting his policy to urge the federal government to allow state and local governments — not judges — to work out reasonable political compromises that help both ordinary kids and the few transgender kids. Already, some kids in Vermont and Missouri are rebelling against the pro-transgender, anti-sexes policy. Obama’s transgender ideology is now spilling beyond schools that get federal funds and into private businesses. In New York City, new directives from the Human Rights Council require employers, landlords and all businesses to accept the “gender identity” of anyone who claims one. The law requires that if someone wants to be known as “ze” rather than “he” or “her,” they have to call them that. The government forces people to call them that novel pronoun. The government compels speech even if that speech is something people and scientists do not believe. If a person violates the new trans-speech law, they can be fined $125,000. If they persist, they can be fined $250,000. UCLA law professor Eugene Volokh, writing in The Washington Post, postulates that someone could insist upon being called “glugga” instead of “he” since pronouns are now totally open-ended. The New York law also requires people to use a person’s preferred “title.” No longer just Mr., Mrs., Miss, or Ms. Even that is now open-ended.
Volokh suggested that someone might insist on being called “Milord” or “Your Holiness,” and Americans would be required by law to do it. Volokh explains that the Supreme Court in Wooley v. Maynard (1977) decided that people can’t be compelled to display “Live Free or Die” on their state license plate because that would be compelled speech. “But New York is requiring people to actually say words that convey a message of approval of the view that gender is a matter of self-perception rather than anatomy, and that, as to ‘ze,’ were deliberately created to convey that message.”
Marchantia polymorpha L., a common thalloid liverwort, is a significant weed species in nursery and greenhouse operations across North America and Europe, being particularly problematic in propagation houses where the environmental conditions maintained for newly established potted plants are ideal for rapid liverwort establishment (Svenson et al., 1997) (Note: for the purposes of this article, the term liverwort refers only to the species M. polymorpha). Liverwort reproduces sexually through spore formation and asexually through tissue fragmentation and the production of gemmae, clonal fragments produced in specialized structures called gemma cups (Altland et al., 2003; Svenson et al., 1997). Combined, these reproductive strategies enable the rapid distribution and development of liverwort on the surface of nursery container growth substrates (Fig. 1). In potted plant production, liverwort infestations present a clear impediment to water and nutrient infiltration (Fig. 1), thereby reducing the growth and value of the crop (Svenson et al., 1997). This interception of water and nutrients results in higher water and fertilizer demands, which translates to greater production costs, reduced productivity, and environmental impacts in the form of excessive water taking and increased nutrient discharge from the production facility. A heavy liverwort infestation also provides a habitat for other pests and potential pathogens such as fungus gnats (Bradysia spp. Sciaridae), snails (e.g., Helix spp.), slugs (e.g., Deroceras spp.), and a host of microbial threats such as Fusarium spp. and Pythium spp. (Svenson et al., 1997). Additional costs to control these pests, combined with production losses resulting from their activity, further erode profit margins. Impacts of a significant liverwort infestation (on profit margins) continue to be realized once a potted crop reaches marketable size.
The presence of liverwort is considered unsightly and often taken as an indication of reduced quality or plant vigor, all of which impact the final valuation of the crop. A significant amount of research has been conducted to evaluate chemical compounds for the control of liverwort (Newby, 2006). Svenson et al. (1997) provide a list of compounds purported to have some efficacy in the control of liverwort. Although potentially effective under prescribed conditions, many of the listed chemicals are not registered for liverwort control. Lack of registered control products leaves growers with few options beyond hand removal. Hand removal is a costly method of weed control by any measure and can increase the unit cost of production dramatically. Estimates put the cost of supplemental hand weeding (not exclusive to liverwort) at $1235–$9880 per ha (Case et al., 2005; Judge et al., 2004). In addition to the direct labor costs associated with hand removal, the physical removal of weeds also removes a portion of the upper layer of substrate (including surface-applied slow-release fertilizer), thereby damaging roots in the upper segment of the pot. The cost of hand removal and the impacts that the practice has on substrate structure and root vigor necessitate continued effort to develop alternative control strategies. Dissolving highly reactive ozone (O3) gas in irrigation solutions (ozonation) is an emerging agricultural water remediation technology that has garnered favor on both environmental and operational efficacy grounds. Ozone is a highly effective antimicrobial agent while also being reactive with many chemical contaminants that may be present in irrigation source water. Furthermore, in a time when organic markets are outpacing traditional agricultural commodity markets, with organic products commanding significant price premiums (Kendrick, 2008), ozone is one of the few disinfection options compatible with organic production methods and certification bodies.
Ozone's acceptance as an organically compatible intervention technology is based largely on the fact that there are no ozone residuals remaining on the crop after application. Residual ozone (not consumed as a part of the treatment) spontaneously reverts to diatomic oxygen (O2) in a complex process that further enhances the antimicrobial effect. This study has focused on aqueous ozone [O3(aq)] (in the context of this study, aqueous ozone refers to water that retains a residual ozone concentration) as a potential component of an overall liverwort management program when the technology (ozonation system) is already used as an irrigation water remediation tool. Aqueous ozone has a long history of water and wastewater treatment applications and in recent years has also gained some momentum as an irrigation water treatment technology in nursery and greenhouse production (Ehret et al., 2001; Graham et al., 2009; McDonald, 2007). Operators that use ozonation as a component of their irrigation water treatment system tend to use it in batch format. The water is treated with ozone and stored in tanks to allow the residual ozone to revert to O2. Alternatively, the solutions are passed through filters that break down the residual ozone. The removal of the ozone before distribution to the crop provides an opportunity for re-inoculation of the solution from biofilms found on the distribution system hardware. The removal of the disinfecting agent also disallows any potential for in situ pathogen control through direct ozone contact with pathogen vectors on the plant or growth substrate surfaces. Justifiable prudence prompts the removal of ozone from irrigation solutions as ozone (gas) phytotoxicity is well established (Ashmore, 2005). Tropospheric ozone enrichment (photochemical smog) elicits phytotoxic reactions in a wide array of plant species over a range of concentrations (Bell and Treshow, 2002). 
Although phytotoxic, recent studies suggest that under conditions of controlled application in aqueous solution, ozone can be safely applied (foliar and directly to substrate) to select horticultural crop species (Fujiwara and Fujii, 2002; Graham et al., 2009, 2011; Ohashi-Kaneko et al., 2009; Sloan and Engelke, 2005). There is also limited evidence that ozone application to the root zone can improve some plant performance metrics (Graham et al., 2011; Sloan and Engelke, 2005). The capacity to safely retain residual ozone in the irrigation solution during distribution to the crop is significant in that it may allow for the control of pests/pathogens throughout the irrigation system and may in fact have some efficacy in the control of pests at the plant/pot level (Fujiwara and Fujii, 2002). Marchantia polymorpha (and thalloid liverworts in general) are distinctive within the plant kingdom in that they do not possess the stomatal machinery (guard cells) to actively regulate gas exchange between the bulk atmosphere and the thallus interior (Green and Snelgar, 1982). In place of a functional stomatal complex, liverwort has a pore structure that is a largely unregulated diffusion pathway. It is reasonable to surmise that this restricted capacity to regulate gas exchange would result in a greater flux of pollutant gases relative to plants fully capable of regulating gas exchange. If this is the case, then liverwort should exhibit a greater negative growth response to an application of ozone (gaseous or aqueous) with all else being equal (i.e., no species-specific unique antioxidant systems). It is on this premise that these studies were based. 
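The dose metric used in studies of this kind, contact time (CT), is the product of residual ozone concentration and exposure duration. A minimal numerical sketch follows; the concentrations and the first-order decay constant are illustrative assumptions, not values taken from these studies:

```python
import math

def ct_constant(concentration_mg_l: float, minutes: float) -> float:
    """CT for a constant residual: concentration x exposure time (mg*min/L)."""
    return concentration_mg_l * minutes

def ct_first_order(c0_mg_l: float, k_per_min: float, minutes: float) -> float:
    """CT when the residual decays first-order, C(t) = C0*exp(-k*t):
    the integral of C(t) from 0 to t, i.e. (C0/k)*(1 - exp(-k*t))."""
    return c0_mg_l / k_per_min * (1.0 - math.exp(-k_per_min * minutes))

# Hypothetical example: a 1.5 mg/L residual held constant for 10 min
# delivers CT = 15 mg*min/L, while the same initial residual decaying
# at an assumed k = 0.2 / min delivers a substantially smaller dose.
print(ct_constant(1.5, 10))                     # 15.0
print(round(ct_first_order(1.5, 0.2, 10), 2))   # 6.48
```

The second function illustrates why retaining residual ozone through the distribution system matters: if the residual decays before reaching the pot surface, the effective CT delivered to the liverwort thallus is well below the nominal concentration-times-time product.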
The objectives of the presented studies were to: 1) examine contact time (concentration multiplied by the time of exposure) as a process control parameter for liverwort management; 2) establish initial aqueous ozone toxicity thresholds for liverwort; and 3) evaluate the response of liverwort to aqueous ozone at exposure levels and application frequencies that are consistent with the tolerance thresholds of select woody perennial nursery species established previously (Graham et al., 2009).

Bell, J.N.B., and M. Treshow. 2002. Air pollution and plant life. 2nd ed. Wiley, West Sussex, U.K.
Ehret, D., B. Alsanius, W. Wohanka, and J. Menzies. 2001. Disinfestation of recirculating nutrient solutions in greenhouse horticulture. Agronomie 21:323–339.
Fujiwara, K., and T. Fujii. 2002. Effects of spraying ozonated water on the severity of powdery mildew infection on cucumber leaves. Ozone Sci. Eng. 24:463–469.
Fujiwara, K., and T. Fujii. 2004. Effects of ozonated water spray droplet size and distance on the dissolved ozone concentration at the spray target. Ozone Sci. Eng. 26:511–516.
Graham, G.T. 2002. The effects of nutrient solution ozonation on nutrient balance and lettuce production in a recirculating hydroponic system. MSc diss., University of Guelph, Guelph, Ontario, Canada.
Graham, T., P. Zhang, E. Woyzbun, and M. Dixon. 2011. Response of hydroponic tomato to daily applications of aqueous ozone via drip irrigation. Sci. Hort. 129:464–471.
Judge, C.A., J.C. Neal, and J.B. Weber. 2004. Dose and concentration responses of common nursery weeds to Gallery, Surflan and Treflan. J. Environ. Hort. 22:106–112.
Kendrick, J. 2008. Organic: From niche to mainstream. Statistics Canada, Catalogue no. 96-325-XIE2007000, Canadian Agriculture at a Glance. 6 Dec. 2011. <http://www.statcan.gc.ca/pub/96-325-x/2007000/article/10529-eng.htm>.
McDonald, G.V. 2007. Ozone (O3) efficacy on reduction of Phytophthora capsici in recirculated horticultural irrigation water. PhD diss., Texas A&M, College Station, TX. 121 p.
Newby, A. 2006. Liverwort control in container-grown nursery crops. MSc diss., Auburn University, Auburn, AL.
Ohashi-Kaneko, K., M. Yoshii, T. Isobe, J.-S. Park, K. Kurata, and K. Fujiwara. 2009. Nutrient solution prepared with ozonated water does not damage early growth of hydroponically grown tomatoes. Ozone Sci. Eng. 31:21–27.
Rasband, W.S. 1997–2011. ImageJ. U.S. National Institutes of Health, Bethesda, MD.
Caregiving can be a full-time job, but help is available for this very important role. If you are helping to care for a loved one with cancer, you are a “caregiver.” You may see what you’re doing as something natural: taking care of someone you love. Still, for many people, caregiving isn’t easy. But there are many things you can do to make it less difficult. This e-booklet is designed to help you, the caregiver. It is filled with tips from the professional oncology social workers at CancerCare, a national nonprofit organization that has helped people with cancer and their caregivers for more than 70 years. Our social workers are specially trained to help people cope with the emotional and practical challenges of cancer. Read this e-booklet straight through, or refer to different sections as you need them. Some sections may not apply to your situation. Use this booklet in whatever way works best for you. Be sure to talk with your loved one often about what they feel would be most helpful.

The Role of the Caregiver

Caregivers provide important emotional, practical and physical care for a person with cancer. Often, caregivers are family members or friends. They may live with, nearby or far away from the person they care for. There are many different ways to be a caregiver. Caregiving can mean helping your loved one with daily activities, such as getting to the doctor or preparing meals. It can also mean helping the person cope with feelings that come up during this time. The kind of support that a caregiver provides will be different for each person. In general, caregiving tasks fall into three categories: medical, emotional and practical. This e-booklet provides many examples of things in each of these categories that caregivers can do to help.

Helping to Manage Your Loved One’s Treatment

Sometimes, a person diagnosed with cancer feels overwhelmed and may need someone to help them sort through treatment options.
Or, they may want someone there to help listen to the doctor’s instructions. A person receiving treatment might need a caregiver’s help in managing side effects or taking medication. Here are some ways to help manage your loved one’s treatment:

Gather information. Learn about your loved one’s diagnosis and possible treatment options. One good place to start is by asking the doctor or nurse what resources they recommend. There are also many reliable websites and cancer organizations that can provide accurate, up-to-date medical information. Please see the ‘Introduction’ tab for a list of reliable websites.

Go to medical appointments together. Before a visit with the doctor, write down any questions the two of you would like to ask. Bring a notebook or portable voice recorder, so you can keep track of the doctor’s answers and refer to them later. If you need to speak with the health care team without your loved one present, find out about the rules of the Health Insurance Portability and Accountability Act (HIPAA). This law gives patients greater access to their own medical records and more control over how their health information is used. Your loved one will need to give written permission, by signing a consent form, before doctors can share information with you about their medical treatment.

Learn how to help with physical care. Depending on how they are feeling, people going through cancer and treatment may need help with a wide range of activities they would normally do themselves, such as bathing or dressing. Ask your loved one to let you know how they want you to help with these tasks.

Ask about special instructions. Check with the doctor or nurse to find out if there are any specific instructions you should be aware of. For example, are there any tips for managing a particular side effect, or does a special diet need to be followed during treatment? Keep the doctor’s phone number in a place that is easy to find in case you have questions.
Learn about organizations that help with medical care. If you need help managing some of your loved one’s medical needs, ask your doctor or hospital social worker about local home health agencies. These agencies may send nurses to the home to give medications, monitor vital signs or change bandages, for instance. Home health agencies can also send care providers who attend to other personal needs such as bathing, dressing, cooking or cleaning.

Providing Emotional Support

Going through cancer is often described as an emotional roller coaster, with many ups and downs. As a caregiver, you may see your loved one go through a wide range of emotions. While this can be difficult for both of you, your willingness to listen and offer support will make a difference. It is hard to watch someone you care about go through so many difficult emotions. There are things you can do, however, to help both of you cope:

Listen to your loved one. It is important to listen without judging or “cheerleading.” We are often tempted to say “you will be fine” when we hear scary or sad thoughts. But simply listening to and validating those feelings can be one of the most important contributions you make.

Do what works. Think about how you’ve helped each other feel better during a difficult time in the past. Was a fun outing a helpful distraction? Or do the two of you prefer quiet times and conversation? Do whatever works for you both, and don’t be afraid to try something new or make modifications to plans that you enjoyed before.

Support your loved one’s treatment decisions. While you may be in a position to share decision making, ultimately it is the other person’s body and spirit that bear the impact of the cancer.

Get information about support groups. Joining a support group gives your loved one a chance to talk with others coping with cancer or caregiving and learn what they do to manage difficult emotions. Sometimes, support groups are led by social workers or counselors.
Ask a hospital social worker for a referral, or contact CancerCare. We offer face-to-face, telephone, and online support groups for people with cancer.

If it’s needed, continue your support when your loved one’s treatment is over. This can be an emotional time for many people. Despite being relieved that the cancer is in remission (stopped growing or disappeared), you and your loved one may feel scared that it will return. The end of treatment also means fewer meetings with the health care team, on which you and your loved one may have relied for support. You may also have questions about how treatment ending impacts your role as a caregiver, so getting support during this transition can be helpful.

Recommend an oncology social worker or counselor specially trained to offer advice. If you think your loved one may need additional support coping with his or her emotions during this time, suggest speaking with a professional who can help, such as an oncology social worker.

Helping Your Loved One with Practical Matters

In addition to helping with medical and emotional concerns, caregivers often help by taking on many practical tasks. Some day-to-day activities caregivers can do include running errands, pitching in with household chores, preparing meals and helping with child care. Because cancer can also place a tremendous strain on a family’s finances, caregivers are often left with the task of managing financial issues, too. Fortunately, there are many resources available to help. Here are some tips for finding financial help for costs related to cancer:

Review your loved one’s insurance policies to understand what’s covered. Your insurance company can assign a case manager who can explain what services and treatments the plan does and doesn’t cover and answer any questions. Case managers work for insurance or other types of agencies. They help clients gain access to resources and services.
He or she can also help explain any out-of-network benefits the policy may offer, such as medical services from doctors not on your insurance plan.

Understand what your loved one is entitled to. Some types of aid for people with cancer are required by law. These programs are called entitlements—government programs that give financial and other aid to people in certain groups such as those with cancer. A hospital or community social worker can direct you to the governmental agencies that oversee these programs.

Ask for help. If you need help with hospital bills, speak to a financial counselor in the hospital’s business office. He or she can help work out a monthly payment plan. If your loved one expects to run out of money, or has already, talk to his or her creditors. Many landlords, utilities and mortgage companies are willing to work out a payment plan before a crisis develops. Reaching out for help early on is most helpful.

Apply for financial help. For many people, expensive cancer medicines pose a financial challenge. Fortunately, there are many programs to help qualified individuals get medications for free or at a low cost. For more information, contact the Partnership for Prescription Assistance, listed among the resources. CancerCare also provides financial help. We provide limited grants for cancer-related costs such as transportation and child care. We also provide referrals to other organizations that can provide assistance. Call us at 800-813-HOPE (4673) to learn more.

Taking Care of Yourself

Taking care of a loved one can be a positive experience. For example, some people say that caregiving strengthened their relationship. But it can also be very stressful. Many caregivers say it often feels like a full-time job. Caregiving can be even more challenging if you have many other responsibilities, like working, raising children, or caring for your own health. Sometimes, caregivers tend to put their own needs and feelings aside.
It is important, though, for you to take good care of yourself. This will make the experience less stressful for you. Caregivers spend a lot of time looking after the health of their loved ones. This often means that the caregiver spends less time focusing on his or her own needs, such as eating well and exercising. Yet taking care of your own physical health is an important part of caregiving. Here are some tips for caring for your health:

Stay active. Experts recommend exercising for at least 30 minutes each day. Activities can include walking quickly, jogging, or riding a bike. Keep in mind that you don’t have to set aside a lot of time to exercise—you can work it into your day. For example, take the stairs instead of the elevator, or park your car farther away than you normally do. Some exercises can also be done in the home, such as yoga.

Pay attention to what you’re eating. Keeping a balanced diet is an important part of taking care of yourself. Include fruits and vegetables in your meals. Nuts, yogurt, and peanut butter sandwiches are easy snacks with lots of protein that will keep your energy level up. Pack snacks if you know you will be with your loved one at the doctor’s office or the hospital all day.

Get enough sleep. Caregiving can be emotionally and physically draining. You may find yourself more tired than usual. Try to get enough sleep—the Centers for Disease Control and Prevention (CDC) recommends at least seven hours per night for adults. Also, take naps if you need them.

Rest regularly. As a caregiver, you may find that it is hard to relax, even if you have time for it. Deep breathing, meditating, or gentle stretching exercises can help reduce stress. CancerCare offers a meditation app that can help with these exercises.

Keep up with your own checkups, screenings, and medications. Your health is very valuable. Stay on top of your own medical appointments, and have a system for remembering to take any medicines you need to stay healthy.
Getting Emotional Support Caregiving is hard work that can affect your emotional well-being. Taking care of yourself includes coping with many of your own feelings that come up as you care for your loved one. Many people feel more emotional than usual when they are coping with a loved one’s cancer. This is normal. You cannot make difficult feelings go away, but there are things you can do to feel better. Here are some tips for coping with the emotional impact of your loved one’s cancer: Take a break. If possible, take some time out for yourself regularly. Even if it’s just for a few minutes, doing something you enjoy can help you recharge. For example, listening to relaxing music or going for a walk might help you clear your head. Be aware of your limits. Remember that there are only so many hours in a day. Feel free to say “no” when people ask you to take on tasks you don’t have the time or energy to complete. Keep a journal. Writing sometimes helps people organize their thoughts and come up with practical solutions. Writing about your thoughts, feelings, and memories can also strengthen your spirit. Open up to friends and family. Ask friends or family members if they would be willing to be “on call” in times of stress. Or plan a regular “check-in” time when you can get together or call each other. Consider developing your spiritual side. For some people, this means participating in religious activities. Others find spirituality in art or nature. No matter what your beliefs are, developing your spiritual side could provide comfort during this time. Talk to a helping professional about your feelings and worries. Many caregivers feel overwhelmed and alone. You may need more than friends or family members to talk to. Speaking with a counselor or oncology social worker may help you cope with some of your feelings and worries. CancerCare’s oncology social workers are just a phone call away. Join a support group for caregivers. 
Talking with other caregivers can also help you feel less alone. CancerCare offers free face-to-face, telephone, and online support groups for caregivers. These groups provide a safe haven where you can share your concerns and learn from others who are going through similar situations. Go easy on yourself. Sometimes, you may feel you could have done something differently. Try not to be too hard on yourself. Focus on all the positive things you are doing for your loved one.
This tutorial shows how to use Max lists and the jit.fill object to fill all or part of a matrix, and how to retrieve all or part of a matrix's contents as a list with jit.spill. We will also demonstrate the use of matrix names to access the contents of a matrix remotely, a concept that will be demonstrated further in Tutorials 12, 16, and 17.

At the left of the patch, you'll see a blue jit.matrix object. The first argument gives the matrix a specific name. The remaining arguments say that the matrix will have one plane of data, and that the matrix will have only one dimension with 256 cells. In Tutorial 2 we explained that every matrix has a name. If we don't give a matrix a name explicitly, Jitter will choose a name arbitrarily (usually something strange like "u040000114", so that the name will be unique). The name is used to refer to the place in the computer's memory where the matrix's contents are stored. So, why give a name of our own to a matrix? That way we'll know the name, and we can easily tell other objects how to find the matrix's contents. By referring to the name of a matrix, objects can share the same data, and can access the matrix's contents remotely, without actually receiving a jit_matrix message.

In Tutorial 2 we showed how to place a numeric value in a particular matrix location with the setcell message, and how to retrieve the contents of a location with the getcell message. Now we will show how to use the jit.fill object to place a whole list of values in a matrix. (Later in this chapter we'll also show how to retrieve many values at once from a matrix.)

In this example, the list was exactly the right length to fill the entire matrix. That need not be the case, however. We can place a list of any length in any contiguous portion of a 1D or 2D matrix.

The offset attribute

By default, jit.fill places the list of values at the very beginning of the matrix. You can direct the list to any location in the matrix, though, by setting jit.fill's offset attribute.
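To make the idea of the offset attribute concrete outside of Max, here is a minimal sketch in plain Python (not Max code; the fill function and names are invented for illustration): it copies a list of values into a one-dimensional "matrix" starting at a given cell index, clipping at the end of the matrix just as jit.fill stops at the matrix boundary.

```python
def fill(matrix, values, offset=0):
    """Copy `values` into `matrix` starting at cell `offset` (a rough
    analogy to jit.fill with its offset attribute)."""
    end = min(offset + len(values), len(matrix))  # clip at the matrix boundary
    matrix[offset:end] = values[: end - offset]
    return matrix

# A 16-cell, one-plane "matrix" of char (0-255) values.
cells = [0] * 16
fill(cells, [255, 128, 64], offset=4)
print(cells)  # -> [0, 0, 0, 0, 255, 128, 64, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

A list longer than the space remaining after the offset is simply truncated, which mirrors the way a too-long list cannot write past the end of a 1D matrix.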
A subpatch demonstrates the use of the offset feature. This example chooses a cell index at random, uses that random number as the argument to an offset message to the jit.fill object, then sends a 16-element list to be stored starting at that index in the matrix.

So far we've shown how to put a predetermined list of values into a matrix. When you want to generate such a list of numbers interactively in Max and place them in a matrix in real time, you'll need to use a Max object designed for building lists. We'll look at two such objects: multislider and zl.

The multislider object displays a set of individual sliders, and it sends out the positions of all of its sliders at once as a list of values. (The sliders can be as small as one pixel wide, which can make it look more like a graph than a set of individual controls.) It sends out the whole list when you click in the window to move any of the sliders, and it sends the list again when you release the mouse button. In the draw_list subpatch, we've set up a multislider to contain 256 sliders that send values from 0 to 255, so it's just right for sending a list of 256 char values to the jit.fill object. As soon as jit.fill receives a list in its inlet, it writes the values into the named matrix (at the position specified by the offset attribute). As soon as this is done, jit.fill sends a bang out its left outlet. You can use that to trigger another action, such as displaying the matrix.

In some situations you might want to use a matrix to store numeric messages that have occurred somewhere in the patch: MIDI messages, numbers from a user interface object, etc. The setcell and getcell messages to jit.matrix are useful for that, but another way to do it is to collect the messages into a list and then place them in the matrix all at once with jit.fill.

The zl object is a versatile list-processing object with many possible modes of behavior, depending on its first argument.
When its first argument is group, it collects the messages received in its left inlet until it has amassed a certain number of them, then sends the numbers out as a single list. (The values are grouped in the order in which they were received.) So, in the subpatch, we have placed a zl group object that will collect 256 values in its left inlet, and when it has received 256 of them it will send them out its left outlet as a list (and clear its own memory). You can change the length of the list that zl collects by sending a new list length in the right inlet from the List Length number box. And you can say where in the matrix you want to put it by sending an offset message to jit.fill from the Location number box. By varying the list length and location, you can put any number of values into any contiguous region of the matrix.

jit.fill with Multiple-plane Matrices

jit.fill works fine with multiple-plane matrices, but it can only fill one plane at a time. The plane that jit.fill will access is specified in its plane attribute. In the subpatch, we've created another matrix, this time with four planes of char data. We've set up three multisliders and three jit.fill objects, each one addressing a different color plane of the matrix. This is a convenient way to generate different curves of intensity in the RGB planes of a matrix. The jit.pwindow that's showing the matrix is actually 256 pixels wide, so each of the 64 cells of the matrix is displayed as a 4-pixel-wide band. If you turn on the interp attribute of the jit.pwindow, the differences between adjacent bands will be smoothed by interpolation.

jit.fill with 2D Matrices

So far, all of our examples have involved one-dimensional matrices. What happens when you use a list (which is a one-dimensional array) to fill a two-dimensional matrix via jit.fill? The jit.fill object will use the list to fill as far as it can in the first dimension (i.e.
it will go as far as it can in the specified row), then it will wrap around to the next row and continue at the beginning of that row. We've made it possible for you to see this wrapping effect in action.

The complementary object to jit.fill is jit.spill. It takes a jit_matrix message in its inlet, and sends the matrix values out its left outlet as a Max list. You may have noticed that while you were using the red multislider, the jit.spill object below was sending the values of plane 1 (red) out its left outlet and setting the contents of a message box. If you need to have the values as an immediate series of individual number messages rather than as a single list message, you can send the list to the Max iter object.

For times when you need to retrieve every value in a matrix, there is an object called jit.iter. When it receives a jit_matrix message in its inlet, it sends out an as-fast-as-possible sequence of messages: the cell index (out its middle outlet) followed by the value(s) in that cell (out its left outlet) for every cell of the matrix, in order. For a large matrix, this can be an awful lot of Max messages to try to send out in a single tick of Max's scheduler, so when it's done reporting all of the values in a matrix, jit.iter sends a done message out its right outlet.

In the subpatch there is a jit.iter object which receives the matrix information from the jit.matrix object. We use a swap object to switch the order of the cell index (coming out the middle outlet of jit.iter) and the cell value (coming out the left outlet of jit.iter). We then use the value of that cell as the y-value we want to store in a table object, and we use the cell index as the x-axis index for the table.

Note that this technique of using jit.iter to fill a table works well with a modest-sized one-dimensional one-plane matrix because a table is a one-dimensional array.
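The iteration pattern just described (report each cell's index and value in order, then signal completion, and use the index/value pairs to fill a one-dimensional table) can be sketched in plain Python. This is an invented analogy, not Max code; the function and callback names are hypothetical:

```python
def iter_matrix(matrix, on_cell, on_done):
    """Rough analogy to jit.iter on a 1D, one-plane matrix: report each
    (cell index, value) pair in order, then signal that iteration is done."""
    for index, value in enumerate(matrix):
        on_cell(index, value)  # middle outlet: index; left outlet: value
    on_done()                  # right outlet: end-of-matrix notification

# Fill a dict-as-table the way the subpatch fills a table object:
# index becomes the x-axis entry, value becomes the stored y-value.
table = {}
iter_matrix([7, 3, 9],
            lambda i, v: table.__setitem__(i, v),
            lambda: print("done"))
print(table)  # -> {0: 7, 1: 3, 2: 9}
```

The swap object in the patch corresponds to reordering the two callback arguments here: the table wants the index first and the value second.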
However, the matrix of a jit.movie object, for example, has two dimensions and four planes, so in that case the output of jit.iter's middle (cell index) outlet would be a two-element list, and the output of the left (value) outlet would be a four-element list. Still, for one-dimensional matrices, small 2D matrices, or even for searching for a particular value or pattern in a larger matrix, jit.iter is useful for scanning an entire matrix.

For placing individual values in a matrix, or retrieving individual values from a matrix, you can use the setcell and getcell messages to jit.matrix (as was demonstrated in Jitter Tutorial 2). For placing a whole list of values in a matrix, or retrieving a list of values from a matrix, use the objects jit.fill and jit.spill. These objects work well for addressing any plane of a 1D or 2D matrix, and they allow you to address any list length at any starting cell location in the matrix. You specify the starting cell location in the matrix by setting the offset attribute of jit.fill (or jit.spill).

The multislider and zl objects are useful for building Max list messages in real time. With multislider you can draw a list by dragging on the sliders with the mouse. With zl you can collect many individual numeric values into a single list, then send them all to jit.fill at one time.

The jit.fill object requires that you specify the name of the matrix it will fill, either by sending it a message or by typing in an argument. It accesses the matrix using this name, and sends a bang out its outlet whenever it has written a list into the matrix. You can use that to trigger other actions. In Tutorials 12, 16, and 17 we show some practical uses of accessing a matrix by its name.

To output every value in an entire matrix, you can send the matrix to jit.iter.
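The list-collecting behavior summarized above (zl in group mode gathers individual values until a set length is reached, then emits them as one list and clears its memory) can be mimicked in plain Python. The ZlGroup class below is an invented illustration under that assumption, not real Max code:

```python
class ZlGroup:
    """Rough analogy to `zl group N`: collect values one at a time and
    emit them as a single list once N have arrived, then start over."""

    def __init__(self, length):
        self.length = length  # like setting the list length in zl's right inlet
        self.buffer = []

    def receive(self, value):
        """Accept one value; return the full list when complete, else None."""
        self.buffer.append(value)
        if len(self.buffer) >= self.length:
            out, self.buffer = self.buffer[: self.length], []  # emit and clear
            return out
        return None

group = ZlGroup(4)
result = None
for v in [10, 20, 30, 40]:
    result = group.receive(v)
print(result)  # -> [10, 20, 30, 40]
```

In the patch, the emitted list would then be sent on to jit.fill to be written into the named matrix in one step.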
ctlin: Output received MIDI control values
jit.fill: Fill a matrix with a list
jit.iter: Iterate a matrix as lists or values
jit.matrix: The Jitter Matrix!
jit.print: Print a matrix in the Max Console
jit.pwindow: Display Jitter data and images
jit.movie: Play a QuickTime movie
jit.spill: Unroll a matrix into a list
metro: Output a bang message at regular intervals
multislider: Display data as sliders or a scrolling display
prepend: Add a message in front of input
random: Generate a random number
slider: Move a slider to output values
zl: Process lists in many ways