Can science and technology not only change a person's life, but also contribute to their self-improvement? What are the pros and cons of artificial intervention in the evolutionary process? These questions are at the core of the concept of transhumanism.

Transhumanism is a contemporary philosophical concept whose ideas launched the eponymous social movement. Adherents of this movement believe we should look at humans and their future from a fundamentally different perspective. They believe that human evolution has not come to its end; instead, it is only starting. Transhumanism can be viewed in both a narrow and a broad sense.

Implementation of modern technologies

In a narrow sense, transhumanists champion the implementation and widespread use of modern technologies. Their goal is to improve our physical and mental abilities. There is great potential in this area: preventing diseases, delaying the aging process (ideally, achieving immortality), and bringing new superhuman abilities to the human body. Transhumanist ideas are aimed at overcoming the limitations of the human body by using scientific developments in the fields of genetic engineering, robotics, biotechnology and others.

A new step in evolution

In a broad sense, transhumanism marks the start of a new step in evolution, a qualitative shift and transformation of the human body. The modern person is moving towards the next step in development thanks to science and technology. This is a fundamentally new step, because it would allow humans to completely overcome all physical ailments and eventually achieve immortality. Scientific immortalization (using science to delay the time of death) can be considered one of the primary goals of transhumanism.

Posthumanists also believe that human evolution is not complete and will continue progressing. This concept has much in common with transhumanism, which is why many experts share the opinion that transhumanism is just one of the types of posthumanism.

The essential ideas of transhumanism

The World Transhumanist Association was founded in 1998. In 2008, its name was shortened to Humanity+. The association supports projects and research geared towards the development and improvement of humans using technology, while popularizing transhumanist ideas and dispelling myths about the movement. The core goals and values of the movement are described in a declaration issued by Humanity+.

The main aspect of transhumanist philosophy is its approach to technological achievements. Transhumanism takes a rational approach to them, understanding that any technology can either help or harm humans, the environment and the future evolution of life on our planet. Transhumanist ideologists proclaim a responsible attitude towards scientific innovations, warn about the importance of evaluating potential outcomes in advance, and carefully weigh the positives against the negatives.

Humanity+ adherents insist that our innate human potential, characteristic of all of humanity, is not fully actualized: people have opportunities for further development and improvement. Transhumanist philosophy is clearly articulated to ensure its core values are not violated. Transhumanists must respect the rights of individuals, strive for widespread (as opposed to elite) access to technology, and care for all highly functioning beings, including animals.

In mass culture, partly under the influence of the movement's critics, there is a popular image of transhumanists as people obsessed with turning humans into robots. This is a huge misconception.
In reality, transhumanist philosophy is far from these aspirations. As mentioned earlier, transhumanists believe their mission is to improve the human body and mind, minimize suffering, and overcome our natural limitations. Modern technologies and scientific discoveries are just tools that can be used to achieve these goals, not the goals themselves. We can already witness some successful attempts to improve the quality of life: for example, bionic prosthetics. Some less obvious examples from the medical field can also be attributed to transhumanism. For instance, antidepressants or neural stimulants (nootropics) developed by medical scientists can help people overcome their suffering and expand the range of their abilities.

The development of new technologies is the cornerstone of transhumanism. Its adherents pay particular attention to the development of ultramodern fields like information technology, biological engineering, genetic engineering, nanotechnology, AI development and many others.

Trends and branches

Transhumanism only started actively developing quite recently, in the 1980s. It developed at such a rapid pace that several branches have formed within this relatively young movement. Each of them has its own methods of achieving the final result.

Radicalism and common sense

One of the most radical branches allows for the option of completely transferring human consciousness into the virtual world, thus merging it with a computer. Identity becomes similar to artificial intelligence, with no reliance on a physical body. Both temporal and spatial limitations no longer matter: this entity achieves immortality and freedom from physical boundaries. But this view of the human body is an exception rather than a rule for transhumanism. Most of its branches allow for the idea of keeping a physical body (at least for now), while improving and developing it in various directions.

Humanity has some astonishing prospects for future development. We will gain full control over our appearance and mood, we will be able to manage our thought processes, expand the boundaries of sensory experience, and gain access to fundamentally new ways of understanding the world. The problems attributed to modern civilization, such as epidemics and famine, will be resolved.

The main branches include libertarian transhumanism, technogaianism, anarcho-transhumanism and communist transhumanism. Each of them is based on an existing social ideology and establishes a tight link with transhumanism.

For example, libertarian transhumanism, uniting libertarianism and transhumanism, aims to remove any restrictions on expanding the potential of the human body. Its proponents believe that our bodies belong to us, so any ban on body modification is an infringement of civil rights and freedoms. The government should not interfere in any way.

The main objective of technogaianism is environmental preservation. The movement's adherents urge everyone to use new technologies to restore and protect the environment. The use of natural resources should be limited, with technology aiding humans in obtaining everything they need from alternative sources.

Anarcho-transhumanism is a cross between transhumanism and anarchism. Its proponents rally against government tyranny, capitalism, and even their own genes. They believe that people should be completely free: not just politically and economically, but biologically as well.

Communist transhumanism is a mix of humanitarianism, scientism and rationalism.
Its followers believe that transhumanism should help humanity achieve communism.

Transhumans and posthumans

Some followers of transhumanism believe that, as a result of rapid technological development, a qualitative shift in life and an accelerated evolutionary process, we will see the emergence of posthumans: people who have gone through so many changes that they can no longer be considered human. Posthumans, qualitatively different from the modern human, would become the next step in evolution. But evolution will be gradual, which is why a transition stage is needed. Within this philosophy, the transhuman (who is not yet prepared to become a posthuman) will bridge the evolutionary gap. The mental and physical capabilities of a posthuman will be incomparably more advanced than those of the average human. Disease and aging won't affect their body, and death will no longer touch them. Although it's too early to make precise predictions, there is some likelihood that advanced artificial intelligence will serve as a basis for the posthuman.

Transhumanism in Russia

Today, interest in transhumanism is at a record high. Presenters at scientific events and conferences across the globe are actively discussing transhumanist issues (philosophy, objectives, difficulties, etc.). In Russia, this is mostly done by the Russian Philosophy Society and the RAS Institute of Philosophy. The main organization uniting transhumanists in Russia is the Russian Transhumanist Movement, founded in 2004. On its website, you can find detailed information about transhumanism and immortalism and discover facts about Russian and global projects and events dedicated to this movement.

Transhumanism in Russian education

Articles on transhumanism are regularly published in many Russian philosophy journals. Russian transhumanists support online communities of interest, including the popular Life Extension Party. The 'Russia 2045' movement, which emerged in 2011, is trying to actively promote the ideas of transhumanism within contemporary society. Transhumanism and posthumanism are described in detail in scientific texts by many noteworthy Russian philosophers, including V.S. Stepin, B.G. Yudin, I.V. Vishev and I.V. Artyukhov. The most interesting foreign philosophical research comes from Francis Fukuyama and Nick Bostrom.

Transhumanism is a revolutionary movement, so it's no wonder that it has had a mixed reception so far. Many people are vehemently opposed to the movement; it is the source of many discussions and never-ending debates. The technological improvement of humans raises concerns among laypeople and scientists alike. For example, the American philosopher, political scientist and writer Francis Fukuyama believes that transhumanism is an extremely dangerous idea that puts the entire world at risk.

Types of criticism

All criticism of transhumanist ideas can be grouped into two categories (which are often combined):

- 'Practical' criticism (doubts about the possibility of achieving transhumanist objectives).
- 'Ethical' criticism (disagreement with the transhumanist worldview and ethical beliefs).

Critics of transhumanism believe that the movement poses a real threat to the existing values of all of humanity; they also worry about violations of human rights and freedoms. In reality, many of these accusations lack objectivity. Today, most transhumanists:

- Support social programs geared towards improving the education system and developing information technology.
- Rally to support the protection of human rights and freedoms.
- Adhere to democratic traditions.
- Participate in the development of technologies to resolve the ecological and poverty crises, improve living standards, etc.

Loss of humanity

Most critics of the movement tend to highlight the fear that transhumanism will strip us of humanity: humans will lose the traits that define our species. Here, it's worth noting that some transhumanists really do believe that their objective is to move to a new stage of evolution, for the posthuman to emerge. The trait that is actively criticized as a potential threat becomes, for them, the main objective.

Interference with the natural way of life

Critics of the movement often wonder how it's possible to interfere with natural processes in such a bold manner: what about the risk of technological improvement bringing discord into the existing balance and leading to irreversible consequences? Won't the posthuman project turn out to be a great tragedy as well as a pointless sacrifice in the eternal quest for perfection?

There is a prevailing worry that new technology will be elitist, unavailable to most people. Many believe that this will lead to elite members of society using these technologies to turn the rest of the population into their personal workforce, or even guinea pigs for their experiments. They even draw a parallel between transhumanism and eugenics, the movement that advocated propagating the 'fittest' members of society and eliminating the weakest.

Posthumans: a replacement for humans?

Humanitarianism, transhumanism and posthumanism are usually compared with each other. Many critics are certain that transhumanism is the first active expression of posthumanism. These two movements share a common goal of creating a new type of intelligent being, which begs the question: will this new species replace humans, destroying them or bending them to its will? Criticism of transhumanism can also be encountered in contemporary culture, for example in science fiction films and literature. However, these works are more focused on fiction than on objective analysis of real problems. In short, critical experts and ordinary observers agree that the movement offers ambiguous prospects for development, and humanity should seriously consider all the implications before setting off on the enticing high-tech journey.

What do religions think of transhumanism?

Representatives of the world's major religions mostly hold negative attitudes towards transhumanism, noting that followers of the movement concentrate on physical improvement without considering the human soul and its needs. According to most religions, transhumanists mistakenly try to take on the role of the Creator. In Russia, Patriarch Kirill expressed the view of the Orthodox Church on this matter, stating that transhumanism threatens to completely eradicate humanity: people will lose the qualities that make them human. From the Christian perspective, transhumanism and immortalism are based on a materialistic, atheistic core, or are even openly against God. By trying to create Heaven on Earth and achieve immortality, transhumanists base their objectives on a purely scientific and rational foundation, without considering Christian doctrine about God, humans, the mortality of the body and the immortality of the soul. Islam holds similarly critical views on transhumanism: followers of the religion believe that transhumanism is attempting to lead people away from God and the true meaning of life.
They are opposed to technocratic development, in which the idea of living comfortably with no suffering reaches its peak in the attempt to get rid of the greatest discomfort of all: death. Both Islam and Christianity agree that life without death is unnatural, with the potential to lead humanity towards catastrophe. It appears that only Eastern religions take an interest in transhumanism, despite their cautious attitude towards the movement. For example, the spiritual leader of Buddhism, the 14th Dalai Lama, believes that science should be intertwined with spiritual traditions. He allows for the connection of a robotic body with human intelligence, and he believes that the creation of artificial intelligence represents a natural step in human evolution.

Famous representatives of the movement: in Russia and abroad

The most famous and active representatives of transhumanism in Russia include Valerya Pride and Danila Medvedev, the creators of the Russian Transhumanist Movement and members of its Coordinating Council today. Valerya Pride, one of the most prominent futurologists, sociologists and transhumanist theorists in Russia, created the first cryonics company in Eurasia, KrioRus. Cryonic technology keeps deceased people and animals in a state of deep cooling, in the hope of reviving them in the future. Danila Medvedev is a Russian social activist, philosopher and futurologist. He is the Chairman of the Board of Directors of KrioRus and the creator of a fascinating project titled 'Systemic framework of human aging'.

Contemporary foreign activists include Nick Bostrom, co-founder of the World Transhumanist Association (now Humanity+); Eric Drexler, whose book on molecular nanotechnology, 'Engines of Creation', had an enormous impact on the emergence of transhumanism; robotics scholar Hans Moravec; transhumanist and media artist Natasha Vita-More; and philosopher and Extropy Institute founder Max More.

What does it all add up to?

Despite attracting various kinds of criticism and skepticism, transhumanism continues to gain popularity rapidly, and today it represents a powerful international movement united by common intellectual, cultural and ideological values. The movement's supporters strive to improve human anatomy and the ways we learn about the world. They mostly have good intentions, because they promise to relieve people of the suffering linked to old age and disease. Transhumanist ideologists promise to use technical modifications to transform ordinary people into superhumans with amazing potential and superpowers.

We can only guess what the world will be like if transhumanist objectives are met. But even today, we can safely say that this idealistic image of the future has a dark side with many problems. In their quest for perfection, transhumanists offer a path that might be dangerous for humanity at large. Over the past century, many dystopian writers have described a future world divided into castes: those who have access to a 'happy' life, and those who do not. What was once mere fiction can be considered a real threat today. Will people be happy in this new reality? Will each person be able to tap into the infinite potential promised by transhumanism, or will we be defeated by technology and transformed into mere cogs in the machine of the new world? Will there be conflicts and wars between ordinary humans and the new posthumans? Or is it all a blessing? Representatives of Humanity+ try to think positively, claiming that these fears are unfounded and pointless.
They promise that new scientific discoveries and opportunities for innovation will bring the next generations of humans a society where peace and kindness rule supreme. The new people won't have to deal with disease, aging and death, and they will have a large set of powers that they will use exclusively for the good of society.

Time will tell

Although we may hope that things will turn out well, we should remain rational when observing the gradual changes in our civilisation. In the end, we are all responsible for the future of our world.
With the holidays upon us, a question that often comes up for those with diabetes is, "Can I drink alcohol?" It's a perfectly reasonable question: with the merriment of the season, it's only natural that you might wonder if you, too, can join in and imbibe! The good news is that most people who have diabetes can drink alcohol. But there are some caveats and recommendations to heed in order for you to stay safe.

Understanding alcohol's effects

You may have had a glass of wine or beer without giving too much thought to it. Interestingly, alcohol is classified as a sedative-hypnotic drug, which is a drug that depresses or slows the body's functions. That's because alcohol works on the central nervous system. However, alcohol can affect every organ system in the body. When you drink alcohol, some of it is quickly absorbed into the small blood vessels in the mouth and tongue. The rest of it reaches the stomach, where about 20% moves into the bloodstream. Depending on the amount of food in the stomach, some of the alcohol stays in the stomach, where it's broken down by enzymes. If your stomach is empty, the majority of the alcohol is absorbed through the small intestine into the blood. Once in the blood, alcohol travels to all parts of the body, including the kidneys, liver, pancreas, brain, lungs and skin. In pregnant women, alcohol passes through the placenta to the baby. The liver is the "gatekeeper" in that it's charged with breaking down 90% of alcohol into water and carbon dioxide, at a rate of about one standard drink per hour (the rest is excreted in the urine, through the lungs, and through the skin).

Gender differences in alcohol's effects

Alcohol can affect people differently: some people become the life of the party after a drink or two; others seem to withdraw and become quiet. Part of this is due to how alcohol is processed, or metabolized. Men have more, and a faster-acting form, of an enzyme called alcohol dehydrogenase (ADH) in their stomachs and livers than women; this reduces the amount of alcohol absorbed by 30%. Women have no ADH in their stomachs, and the form of ADH in their livers is less active. This leads to a higher blood alcohol concentration (BAC) compared with men and helps to explain why women can become more intoxicated than men when consuming the same amount of alcohol.

Effects of alcohol on blood sugar

Not surprisingly, alcohol can affect blood glucose levels. Drinking alcohol (especially on an empty stomach) can cause a drop in blood glucose. There are a couple of reasons why. First, alcohol stops the liver from releasing glucose into the blood. That's usually a good thing if you have diabetes, but it's not so good if you've had a couple of drinks and haven't eaten anything for a while. Second, the liver's priority is to break down alcohol so that it's rendered harmless. While it's busy doing that superhero work, it isn't paying much attention to what's happening to your blood sugar. Also, if someone drinks a lot on a regular basis, glucose stores in the liver are quickly depleted, and if blood sugar levels drop, the liver is basically powerless to release glucose to help: the well has gone dry, so to speak.
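To put rough numbers on the gender difference described above, here is a minimal Python sketch based on the classic Widmark formula for estimating blood alcohol concentration. The formula, its body-water distribution ratios (about 0.68 for men and 0.55 for women) and the elimination rate of roughly 0.015% per hour are standard toxicology rules of thumb, not figures taken from this article, so treat the output as an illustration rather than medical advice.

```python
# Rough sketch of the Widmark estimate of blood alcohol concentration (BAC).
# The constants below are common toxicology rules of thumb (an assumption on
# our part; the article itself does not give a formula).

STANDARD_DRINK_GRAMS = 14.0    # grams of ethanol in one US standard drink
ELIMINATION_PER_HOUR = 0.015   # typical fall in BAC (% per hour), in line
                               # with "about one standard drink per hour"

def estimate_bac(drinks: float, weight_kg: float, is_male: bool,
                 hours_elapsed: float = 0.0) -> float:
    """Estimate BAC (%) after a number of standard drinks."""
    r = 0.68 if is_male else 0.55          # Widmark distribution ratio
    alcohol_g = drinks * STANDARD_DRINK_GRAMS
    peak_bac = alcohol_g / (weight_kg * 1000.0 * r) * 100.0
    return max(0.0, peak_bac - ELIMINATION_PER_HOUR * hours_elapsed)

# Same two drinks and the same 70 kg body weight, but the lower distribution
# ratio for women yields a noticeably higher BAC, as described above.
print(f"man:   {estimate_bac(2, 70, is_male=True):.3f}%")    # ~0.059%
print(f"woman: {estimate_bac(2, 70, is_male=False):.3f}%")   # ~0.073%
```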
If you take insulin, or certain types of diabetes pills called sulfonylureas (for example, glyburide, glipizide or glimepiride) or meglitinides (for example, repaglinide or nateglinide), your risk of low blood sugar (hypoglycemia) from drinking alcohol is greater than for someone who takes other medications such as metformin, sitagliptin or dapagliflozin. That's because, with insulin or a sulfonylurea, you are automatically at risk of hypoglycemia.

Is alcohol harmful or helpful?

The pros and cons of drinking alcohol continue to be debated. There's no real argument that drinking too much alcohol can lead to serious consequences. Short term, these include:

• Loss of coordination, leading to injury
• Poor judgment
• Difficulty concentrating
• Higher blood pressure
• Loss of consciousness
• Harm to an unborn child

Longer term, excessive alcohol intake can lead to:

• Heart disease
• High blood pressure
• Liver disease
• Cancer of the liver, breast, head and neck, esophagus, colon and rectum
• Weight gain
• Higher blood glucose and A1C levels
• Worsening of diabetic neuropathy
• Diabetic retinopathy

Yet a "moderate" amount of alcohol (more on that in a moment) may provide health benefits, such as a lower risk of certain conditions. Alcohol can increase HDL ("good") cholesterol and may improve insulin sensitivity in some people. However, while alcohol may provide a handful of health benefits, it's never a good idea to start drinking alcohol, or increase your intake of alcohol for that matter, in order to reap those benefits. There are other, safer steps that you can take to lower your risk of health problems without relying on alcohol.

What does "drinking in moderation" mean?

One of the best pieces of advice when it comes to alcohol is to talk with your healthcare provider. He or she should take into consideration a number of factors, such as how well your diabetes is managed, the presence and risk of diabetes complications, other health conditions you may have, any family history of alcohol abuse, and the medications you take. Women who are pregnant, planning a pregnancy or breastfeeding will likely be advised to abstain from drinking alcohol altogether. If your healthcare provider gives you the green light to drink alcohol, they'll probably advise you to drink "in moderation." But what does that mean? According to the federal government's Dietary Guidelines for Americans, along with the American Diabetes Association and the American Heart Association, moderation means:

• Up to one drink per day for women
• Up to two drinks per day for men

For reference, heavy drinking is defined as:

• More than three drinks in a day, or more than seven drinks per week, for women and for men older than 65
• More than four drinks in a day, or more than 14 drinks per week, for men 65 and younger

Binge drinking is four or more drinks within two hours for women and five or more drinks within two hours for men, according to the Substance Abuse and Mental Health Services Administration.

Best choices of alcoholic drinks

One drink is defined as:

• 12 ounces of beer
• 5 ounces of wine
• 1 1/2 ounces of distilled spirits (gin, rum, vodka, whiskey)

Compared with most cocktails, beer (while containing some carb), wine and distilled spirits are generally better choices due to their lower carbohydrate content. And while it may seem like common sense to reach for a sugary cocktail in order to prevent hypoglycemia, the reality is that the carb in those drinks will be absorbed quickly, offering little or no protection against hypoglycemia.
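Because the serving sizes and daily limits above are simple arithmetic, a short sketch can make them concrete. This assumes the common US convention that one standard drink contains about 0.6 fluid ounces of pure alcohol, which is consistent with the three servings listed (12 oz of roughly 5% ABV beer, 5 oz of roughly 12% ABV wine, 1.5 oz of roughly 40% ABV spirits); the function names are illustrative, not from any published library.

```python
# Convert a beverage serving to US standard drinks and check a day's total
# against the moderation limits quoted above. The 0.6 fl oz figure is an
# assumption consistent with the serving sizes in the text.

PURE_ALCOHOL_OZ_PER_DRINK = 0.6

def standard_drinks(volume_oz: float, abv_percent: float) -> float:
    """Number of US standard drinks in a serving."""
    return (volume_oz * abv_percent / 100.0) / PURE_ALCOHOL_OZ_PER_DRINK

def within_moderation(drinks_today: float, is_male: bool) -> bool:
    """Daily limits quoted above: up to 1 drink for women, 2 for men."""
    return drinks_today <= (2.0 if is_male else 1.0)

beer = standard_drinks(12, 5)        # ~1.0 standard drink
wine = standard_drinks(5, 12)        # ~1.0 standard drink
spirits = standard_drinks(1.5, 40)   # ~1.0 standard drink
print(f"beer={beer:.2f}, wine={wine:.2f}, spirits={spirits:.2f}")
print("two beers moderate for a man:", within_moderation(2 * beer, is_male=True))
print("two beers moderate for a woman:", within_moderation(2 * beer, is_male=False))
```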
To give you a sense of how much carb is in different alcoholic beverages, check out the table below.

As far as drink mixers go, think twice before you toast the New Year in with an alcoholic beverage laden with regular soda, tonic water or juice. These mixers contain a significant amount of carbohydrate and calories. Instead, choose diet soda, diet tonic water, club soda or seltzer water. Or, skip the mixers altogether and go "neat," meaning no ice or water, or "on the rocks," meaning with ice.

Handling hypoglycemia from alcohol

As noted above, your risk of hypoglycemia increases when you drink alcohol if you take insulin or certain types of diabetes pills (sulfonylureas and meglitinides). It's important for everyone to be smart when it comes to drinking alcohol, but especially so if you take these particular medications. You should also keep in mind that alcohol can be somewhat unpredictable in terms of if, when and to what extent it can affect your blood glucose. So how do you stay safe while enjoying some holiday cheer? Here's how:

Be aware of your medications. This means knowing if your diabetes medications put you at risk for hypoglycemia. Ask your doctor, pharmacist or diabetes educator if you're not sure. Other medications may raise your risk of lows, too; these include beta blockers, some heart arrhythmia drugs and even some dietary supplements, such as fenugreek.

Line up your wingman/wingwoman. In other words, make sure someone you're with knows that you have diabetes, can recognize signs and symptoms of hypoglycemia, and can help you (or get help for you) if needed. On a side note, if you're making merry with alcohol and happen to have hypoglycemia, others may think you've had too much to drink; that's because symptoms of a low can mimic signs of being intoxicated.

Wear or carry medical identification. In the event of a medical emergency, a medical ID gives healthcare professionals a heads-up that you have diabetes.

Eat when you drink. Make sure to eat something that contains carbohydrate if you're sipping on a libation. Doing so will help minimize the chance of hypoglycemia. It's also a good idea to include some protein and fat in your snack or meal to help sustain your blood glucose level. And forget about substituting alcohol for carbs at your holiday feast; doing so is a recipe for low blood sugar.

Know your glucose level. Check your blood glucose before the festivities start, midway through and then before you go to bed. Alcohol may cause hypoglycemia 12 to 24 hours later. If you use a continuous glucose monitor (CGM), pay attention to the low glucose alerts.

Be careful mixing exercise and alcohol. If skiing, skating or snowball fighting is part of the holiday festivities, have fun! Just don't overdo the alcohol afterward. Exercise combined with alcohol greatly increases the risk of hypoglycemia. Keep tabs on your blood sugar by checking regularly.

Stop and treat hypoglycemia. Any time your blood glucose is low, stop and treat the low with 15 grams of carbohydrate, such as 4 ounces of juice or regular soda, or 4 glucose tablets. Make sure you have treatment for lows with you at all times.

The holidays are often a time to celebrate, and it's easy to overindulge in both food and drink. Know your limits with alcohol, and be especially sure not to drink and drive.

More words of wisdom

More often than not, people with diabetes can enjoy alcoholic beverages in moderation (as defined above).
Remember to discuss the use of alcohol with your healthcare provider to be on the safe side, especially if you take insulin, a sulfonylurea or a meglitinide. There are other reasons why alcohol may not be a good idea. These include:

• Being pregnant
• Difficulty limiting alcohol intake
• Trying to lose weight
• Having high triglycerides (blood fats)
• Having complications of diabetes, such as kidney or heart disease, eye problems or nerve damage
• Taking non-diabetes medications that may interact with alcohol
• Having other health conditions, such as liver disease

Here's to a happy and healthy holiday season. Cheers!
When astrologers use the word 'planet' in their interpretations, they know full well they're not referring to those huge rocks in the heavens. Rather, they are discussing an archetype, a timeless aspect of reality. Venus on a birth chart, likewise, is not really the Venus in the sky; it is the power, or principle, of love and the urge for relationship. Planets and signs are the intentions of the cosmos. They represent the hidden blueprints, or potential characteristics, of the Universe itself, and their true home is that higher dimension of Eternal Images and Forms to which the great Plato once drew our attention. These are, essentially, Universal Archetypes.

And so, there is a goddess representing the archetypal principle of love in whatever culture you care to mention. There was the Aztec Xochiquetzal, patroness of beauty, pregnancy, prostitutes, and even women's crafts. There was Hathor, Egyptian divinity of the sky, love and music; Freya, the love deity from Norse myth; and the Syrian Atargatis, a goddess of fertility and protectress of her people. And, as anthropologists and mythographers have seen, each culture tended to import these archetypal characteristics from earlier ones. The Roman goddess Venus, for instance, was derived more or less wholesale from the Greek Aphrodite. In turn, as Wikipedia states:

"The cult of Aphrodite in Greece was imported from, or at least influenced by, the cult of Astarte in Phoenicia, which, in turn, was derived from the cult of the Babylonian goddess Ishtar, which itself was largely derived from the cult of the Sumerian goddess Inanna."

However, the ancients had not quite distinguished between the Feminine as sexuality and attraction, and the Feminine as Mother, or the Matrix (the moon in astrology). This is why many representations of the Goddess of Love were simultaneously mature Earth Mothers presiding over agriculture, say, and at the same time over courtship, love and sex. In short, the border between lover and mother in these ancient pagan myths is blurred. For example, there is Cybele, 'an Anatolian mother goddess [who] … was partially assimilated to aspects of the Earth-goddess Gaia, her Minoan equivalent Rhea, and the harvest-mother goddess Demeter', and whose 'major mythographic narratives attach to her relationship with Attis, who is described by ancient Greek and Roman sources and cults as her youthful consort' (Wikipedia).

Cybele was known as Magna Mater (the Great Mother) just as the Greek Gaia symbolised 'Mother Earth'. And yet they mated with other nature deities: Cybele with her youthful consort Attis, Gaia with Ouranos (the Sky god), and Rhea with her brother Kronos (Saturn). If these primal goddesses were literally mothers to the world, whichever god they mated with afterwards (logically) resulted in an act of incest!

It is this primitive and earthy aspect of these ancient female deities that has left us with the association between Taurus and Venus in astrology. In the Roman world of the 2nd century AD, the goddess was known as Venus Caelestis (Heavenly Venus) and presided over the taurobolium, when bulls would be sacrificed to her. Despite this, Venus is a bad 'ruler' for Taurus. The correlation is much better exemplified by Libra: all of the virtues (and vices!) we associate with this sign are also symbolised by Venus.

Venus (a Latin name meaning 'sexual love/desire') was, as we've seen, the Roman counterpart of the Greek Aphrodite, 'the foam born', who came to life from the severed genitals of Uranus.
This was the consequence of a violent act by Kronos, who had dismembered his father Uranus and thrown the genitalia into the sea. Already there is a metaphor at work here: the power to transform something ugly into something beautiful. Indeed, the function of Venus is to harmonise, and it is based on archetypal forces present in creation: the powers of both attraction and equilibrium.

Venus in the Natural World

In the physical world, for example, we have the natural pull of masculine to feminine (think of the poles of a magnet), or the seemingly magical way that pairs of human chromosomes 'line up' in an emphatic way in the middle of the cell, whereupon each partner migrates either 'north' or 'south'. This co-ordinated movement has been termed the 'dance of the chromosomes', revealing a holistic, self-regulating harmony that living entities always seek.

One could cite this self-regulating function in a global context. Think of Gaia theory and its self-sustaining feedback system between the earth and the life forms which inhabit it. This promotes a state of balance in nature, and it's just the same in the psychological world, too. Jung discovered a counterbalancing function at work in the human psyche: often, our dreams 'fill the gap' (with fulfilled wishes) of what we lack in our conscious existence:

'For instance, it is clear when one works with dreams that they regularly find a way to provide balance, support, and correction to the particular conscious attitude of the dreamer. This undeniable "compensatory" function provided by the Self proves its role as the central guiding force in an ongoing urge to realize the individual's potential.'1

Indeed, nature compensates. Think of the way a blind person's hearing or sense of smell is enhanced, as if to make up for the sense that is missing. The overall psyche seeks a kind of balance, too; it's why we're attracted to our opposite. The stable and earthy type is attracted to the volatile and fiery; deep and emotional is drawn to light and airy (and vice versa, of course). Unconsciously, we're trying to become 'whole' human beings.

This is the essential function of Venus: the instinctive 'balancing out' function we find in nature (clearly related to the Balance symbol of Libra, the sign Venus 'rules'). And what we feel we lack in ourselves, we look for elsewhere. This, of course, is what drives most human relationships: whether from loneliness, sexual desire or a less clearly defined want. We may call it emotional need, or elevate it to a more spiritual realm and call it the quest for the soul mate, but none of us are self-contained islands. It is the simple drive towards relationship of some kind.

At its core, Venus is a Feminine 'yin' energy: receptive rather than assertive, passive rather than active. If we contrast it with its polar opposite, fiery Mars, we see that it is the exact complement. If Mars is action, individuality and selfhood, Venus is passivity, duality and compromise: the need to relate and keep things in their necessary balance.

If we consider the kabbalistic system of manifestation in the Universe, the one-dimensional energy point at the source of creation has now extended to become a line: it now has two ends. In a very simple sense, we may now say there's an awareness of polarity and opposition (either/or, north/south, inner/outer, me/you, and so on). The act of comparison has been created. There is consciousness of the Other. One can now relate to things.
Even so, this is a double-edged sword, for we project on to others qualities that really exist within. This we may call the Law of Attraction in its psychological sense. The phenomenon of attraction is not simply about Venusian harmony and togetherness; it is there to wake us up to what is within, what lies in the Unconscious.

Take the phenomenon of falling in love, or 'hero-worship', for example. These are periods where we find nothing but the Good, the True and the Beautiful in the object. Some powerful factor in the Unconscious seems to overtake us; judgement is suspended and we become totally and magically besotted by someone. In love, we are at the mercy of all kinds of strange emotions. Our heart strings seem more finely tuned than ever. What has gotten into us? Why do we feel so good? Why do we swoon at the mere mention of our lover's name? Isn't this a kind of insanity? The French have a phrase for it: la folie, roughly translated as the 'madness of love'.

But, as we all know, that intense and overwhelming feeling never lasts, and either we settle down into a more realistic kind of relationship, or we become disenchanted to find that the person before us is not quite love's young dream, but mortal after all. Most of us are able to take the realistic option, but what has happened to cause this change in our beloved? Why, nothing! The psychological projection, that powerful ideal-image we projected on to them, has begun to thin out, and we're now seeing more of the real person.

However, one never attracts another person unless there is something similar within oneself: 'like attracts like' is always the general rule. The Attraction of Opposites phenomenon is really Like Attracts Like, but inside out. That is, what appears to be very different at the conscious level (like the fact that one's wife is obviously not like you) is in fact quite similar in the Unconscious, at the level of psychological energy. In other words, we contain our opposite within us.

This is the real lesson of Venus on your chart: the power of attraction which draws things to us with seemingly little effort on our part. This is why older astrologers called it the 'minor' benefic, responsible for small strokes of luck and good fortune. Wherever you find her on a chart, this ability to relate, harmonise and compromise is evident. It has to operate through a certain sign, too, and so your power to attract is coloured very much by those energies. Your style of seduction, courtship and emotional 'communication' (how you express feelings) is there in your Venus sign and its aspects.

It's a two-way street, of course: you are attracted by the things represented by Venus on your chart. Simply put, if you have Venus in Scorpio or Aries, then Scorpios and Ariens turn you on! If you have Venus in Gemini or Sagittarius, Geminians and Sagittarians are endlessly fascinating. Why? Because these qualities, these aspects of Being, exist somewhere within you, too! Like attracts like.

1. Polly Young-Eisendrath and Terence Dawson (eds), The Cambridge Companion to Jung, New York: Cambridge University Press, 1997.
Disability Discrimination Act

The Federal Disability Discrimination Act (DDA) 1992 aims to protect people with a disability from discrimination. Each state and territory of Australia also has complementary legislation, such as state Anti-Discrimination or Equal Opportunity Acts, that prohibits discriminatory treatment of people with disabilities. The DDA and each of the state acts aim to protect people with disabilities from discriminatory treatment in a range of areas, including employment, education and access to services, facilities and public areas. While this resource focuses on the obligations under the Federal DDA, the provisions of state legislation are very similar. For further information about state legislation, refer to the 'Resources' section of this document.

Disability Discrimination Act (DDA) 1992

"The objects of this Act are:

(a) to eliminate, as far as possible, discrimination against persons on the ground of disability in the areas of:
(i) work, accommodation, education, access to premises, clubs and sport; and
(ii) the provision of goods, facilities, services and land; and
(iii) existing laws; and
(iv) the administration of Commonwealth laws and programs; and

(b) to ensure, as far as practicable, that persons with disabilities have the same rights to equality before the law as the rest of the community; and

(c) to promote recognition and acceptance within the community of the principle that persons with disabilities have the same fundamental rights as the rest of the community.(1)"

What is disability under the legislation?

The definition of disability under the Act is very broad, encompassing physical, sensory, mental and intellectual disability:

"disability, in relation to a person, means:

(a) total or partial loss of the person's bodily or mental functions; or
(b) total or partial loss of a part of the body; or
(c) the presence in the body of organisms causing disease or illness; or
(d) the presence in the body of organisms capable of causing disease or illness; or
(e) the malfunction, malformation or disfigurement of a part of the person's body; or
(f) a disorder or malfunction that results in the person learning differently from a person without the disorder or malfunction; or
(g) a disorder, illness or disease that affects a person's thought processes, perception of reality, emotions or judgment or that results in disturbed behaviour;

and includes a disability that:

(h) presently exists; or
(i) previously existed but no longer exists; or
(j) may exist in the future; or
(k) is imputed to a person.(2)"

What is disability discrimination under the Act?

The DDA covers discrimination and harassment on the grounds of disability. Discrimination includes both direct and indirect forms.

Direct discrimination is where someone receives less favourable treatment than a person without a disability in similar circumstances.

"Jacinta was denied an interview for a position for which she applied, even though she demonstrated in her application and resume that she could meet the selection criteria. She was denied the interview because the selection panel decided her disability may result in her needing more time off work than her peers."

This is a form of direct discrimination, which the legislation aims to address. Nobody should be denied an interview on the basis of a disability. The assumptions made by the panel in relation to Jacinta's application were based on preconceived beliefs about people with a disability.
Jacinta was not given the opportunity to demonstrate, at interview, her capacity to fulfil the requirements of the job in the same way as other applicants.

Indirect discrimination is where a policy, practice or requirement is applied equally but has a discriminatory outcome for those with a disability. The policy, practice or requirement may appear fair and neutral, but the effect is that a person with a disability is unable to meet it compared with someone without a disability.

"Hari is unable to participate in a field trip that has been planned for his course because of his disability. The course coordinator has stated that anyone who can't attend the first field trip can attend later, and that Hari also has this option. In Hari's case he can't participate at any time, and so he will fail this unit."

This is indirect discrimination because, even though Hari is offered the same opportunity as his peers to attend the field trip now or later, he is not able to attend at all. If Hari were offered a reasonable adjustment in his course, such as an alternative assessment, he might successfully complete the unit.

Discrimination in education

An educational institution cannot discriminate against a person on the ground of the person's disability or a disability of any of the person's associates:

"by refusing or failing to accept the person's application for admission as a student; or in the terms or conditions on which it is prepared to admit the person as a student."

It is also unlawful to discriminate against a student on the ground of the student's disability or a disability of any of the student's associates:

- "by denying the student access, or limiting the student's access, to any benefit provided by the educational authority; or
- by expelling the student; or
- by subjecting the student to any other detriment.(3)"

The Disability Discrimination Act applies to all students with a disability, including international students, part-time students and full-time students. It includes all processes related to undertaking study, including:

- applying for admission to a course or institution
- enrolling in a course
- while studying
- examinations and assessments
- participation in student activities

What are 'reasonable adjustments' or 'education related adjustments' in education?

The Act requires educational institutions to put in place actions to help ensure equal opportunity for people with a disability, commonly referred to as "reasonable adjustments" or "education related adjustments". The legislation does not specify the types of adjustments required to remove discrimination; each case needs to be considered in its own circumstances and in the light of previous case law. Some examples of education related adjustments in the educational environment include:

- changes to the physical environment, such as modified physical spaces or provision of equipment
- modifying communication systems or information provision
- providing course materials in alternative formats
- provision of interpreters, readers etc.
- alternative assessments and/or examinations
- provision of a private room for undertaking exams

What are the academic requirements of a course?

The academic requirements of a course are determined by identifying the academic achievement reasonably required in the course, including the skills and abilities required, and considering whether those requirements can be met in another way by making education related adjustments.
"Joshua is doing his primary teaching course, but because of his disability he is unable to write in freehand on the board. However, through negotiation he has been able to use an overhead projector and other assistive technology to overcome this issue." The academic requirement of the course is not to write in freehand, but to have the material available to the students in an accessible manner. This is achieved with the overhead projector and other technologies. Further information may be obtained by referring to the Disability Standards for Education website. What is 'unjustifiable hardship'? For the purposes of this Act, in determining what constitutes unjustifiable hardship, all relevant circumstances of the particular case are to be taken into account including: (a) the nature of the benefit or detriment likely to accrue or be suffered by any persons concerned; and (b) the effect of the disability of a person concerned; and (c) the financial circumstances and the estimated amount of expenditure required to be made by the person claiming unjustifiable hardship; and (d) in the case of the provision of services, or the making available of facilities-an action plan given to the Commission under section 64 (8). An employer is responsible for thoroughly assessing the applicant's request for work related adjustments before claiming 'unjustifiable hardship'. This includes assessing: - direct costs - any offsetting tax, subsidy or other financial benefits available in relation to the adjustment or in relation to the employment of the person concerned - indirect costs and/or benefits, including in relation to productivity of the position concerned, other employees and the enterprise - any increase or decrease in sales, revenue or effectiveness of customer service - how far an adjustment represents any additional cost above the cost of equipment or facilities which are or would be provided to an employee similarly situated who does not have a disability - how far an adjustment is required in any case by other applicable laws, standards or agreements - relevant skills, abilities, training and experience of a person seeking the adjustment(9) . Disability Action Plans A Disability Action Plan is a strategy that is developed by an organisation to identify issues and develop strategies to eliminate discriminatory practices against people with disabilities. Any 'service provider' may develop action Plans. This term includes anyone who, or any institution which, provides goods or services or makes facilities available, with or without cost. It applies to: - educational institutions; - commonwealth and state government departments and agencies; - local government; - organisations and businesses in the public and private sectors; and An Action Plan must include certain components - these are listed in section 61 of the DDA (see Appendix 4): - a review of current activities; - devising of policies and programs; - goals and targets; - evaluation strategies; - allocation of responsibility; and - communication of policies and programs(6). Action Plans can be lodged with the Australian Human Rights Commission (AHRC) if these particular components are included in the Action Plan. A number of Universities and TAFE Institutes across Australia have developed, or are in the process of developing, Disability Action Plans. Action Plans are freely available, either through the education institution or the Australian Human Rights Commission (AHRC) website. 
If a Disability Action Plan has been successfully lodged with the AHRC, this will be taken into consideration if a complaint is made against the institution. However, the Plan has to be seen to be implemented within the institution before favourable consideration will be given.

Disability Standards for Education

Under the Disability Discrimination Act (DDA), the Attorney-General may make Disability Standards to specify rights and responsibilities about equal access and opportunity for people with a disability, in more detail and with more certainty than the DDA itself provides. Standards can be made in the areas of:

- public transport services
- access to premises
- accommodation; and
- the administration of Commonwealth laws and programs.(7)

A set of Education Standards has been developed that specifies how education and training are to be made accessible to students with disabilities. They cover the following areas:

- curriculum development, accreditation and delivery
- student support services; and
- elimination of harassment and victimisation.

The development of the Standards for Education is intended to clarify how the DDA should apply to education, both for a student with a disability and for the organisation. For further information on the Disability Standards for Education, refer to the Australian Human Rights Commission website.

Discrimination in employment

The Disability Discrimination Act applies to employees, contract staff, commission agents, agency workers and partnerships of three or more people. It includes all processes related to employment, including:

- recruitment (incorporating advertising, providing information about jobs, application forms, interview arrangements, selection tests or examinations, etc.)
- staff selection
- conditions of employment (salary, duties, leave entitlements, superannuation, etc.)
- opportunities for training and promotion
- trade or professional registration
- membership of unions or professional associations

What are reasonable adjustments or work related adjustments?

The Act requires employers to put in place actions to help ensure equal opportunity for people with a disability, commonly referred to as "reasonable adjustments" or "work related adjustments". The legislation does not specify the types of adjustments required to remove discrimination; each case needs to be considered in its own circumstances. Some examples of reasonable adjustments and work related adjustments include:

- changes to the physical environment, such as modified work stations
- provision of equipment
- modifying communication systems or information provision
- provision of interpreters
- flexibility around hours of work

What are the inherent requirements of a job or course?

The inherent requirements of a job are determined by identifying the work reasonably required in the job, including the skills and abilities required, and considering whether the inherent requirements can be met in another way by making reasonable adjustments.

"Juanita's disability means that she finds it difficult to work in a room with artificial lighting for more than a couple of hours at a time. Her work involves machine operating which cannot be relocated to a lighter environment. Juanita's employer has agreed to her having shorter shifts at this work and taking on other tasks in between to ensure she is able to maintain her position."
Although the inherent requirements of the job involve machine operation, they do not involve working on the machines for long periods each day; the adjustment put in place by the employer ensures that Juanita is able to continue her employment.
The story of railway development in the north-west of the American continent. This chapter is by Harold Shepstone and is concluded from part 11. It is the third article in the series Railway Engineers at Work.

For more than four years thousands of engineers and labourers were engaged on the building of an enormous power scheme by which the waters of the River Shannon, in the Irish Free State, have been harnessed to drive massive turbines. The River Shannon, with a length of 240 miles, is the longest river in the British Isles. Above the well-known city of Limerick, water from the Shannon has been diverted by a huge weir into a headrace, or large canal, which is seven and a half miles long. At Ardnacrusha the headrace is dammed by the intake building, in which is incorporated a navigational lock. Through enormous pipes known as penstocks the water flows, with a head of 100 feet, to the turbines. Having passed through the power-house, the water is returned to the Shannon through a tailrace which is more than a mile long. This chapter is by Peter Duff and is the third article in the series Wonders of Water Power.

Hurricane Gulch Bridge

"HURRICANE GULCH BRIDGE, on the Alaska Railroad, 281½ miles from Seward, the terminus on the Gulf of Alaska. The Alaska Railroad runs from Seward, through the McKinley National Park, to Fairbanks, on the Tanana River, a tributary of the Yukon River. The length of the railway is 470 miles. The main object of the builders was to provide access to the important coal-mining centres at Matanuska (150 miles inland) and elsewhere. Trains normally run once a week in either direction, stopping for the night at Curry, 248½ miles from Seward. During the navigation season the trains connect at Nenana, 58½ miles short of Fairbanks, with Yukon River steamers."

Situated 1,292 feet below the level of the Mediterranean, the Dead Sea has a concentration of salts about eight times greater than that found normally in other seas. The extraction of these salts, which have great commercial value, called for many years of pioneering and experimental work. The Dead Sea itself is more than 1,000 feet deep and forms part of a great natural rift that extends from Syria right into the heart of Africa. The waters of the Dead Sea are impregnated with salts such as potash, which are of great importance to various industries and, indeed, to our everyday life. Plant has been installed on the shores of this natural chemical store to extract the wealth from the waters. This chapter is by Harold Shepstone, who describes how a great pipe line was sunk to the bottom of the Dead Sea so that its waters could be pumped into huge evaporating pans to extract the salts.

The Headrace at Ardnacrusha

"THE INTAKE WORKS across the headrace at Ardnacrusha form a gravity dam 405 feet long, 23 ft 6 in wide and 24 feet high. Across the dam is a long machine-room which contains the machinery that operates the sluices and penstock valves. In the background is the structure which contains the sluice of the upper navigation lock."

The Mouth of the Jordan

"MOUTH OF THE JORDAN and northern end of the Dead Sea. The large white patches on either side of the river are the immense evaporating pans of the potash works. The pans cover an area of 1,000 acres.
Besides potash, the waters of the Dead Sea are impregnated with bromine and magnesium chloride.”

Although Trevithick is now known as the “father of the steam locomotive”, his work on the development of steam engines, stationary as well as locomotive, was unrecognized until long after his death in 1833. A few years ago - and a hundred years after his death - tardy recognition was made of the genius of a great engineering pioneer, Richard Trevithick, by the unveiling of a statue of him at Camborne, Cornwall. In one hand of the figure is a model of what was really the first locomotive to be built, that is, the first steam engine which could haul a load along a built track. Before Trevithick, such tentative machines as had been made were only steam-driven carriages for travel on roads.

Trevithick was born on April 13, 1771, at Carn Brea, not far from Camborne, his father being manager of the Dolcoath Mine and other mines. He was the only boy of a family of five, and when he reached his schooldays he seems to have been anything but a model scholar. The letters of his manhood show that he had not mastered spelling, but he was good at arithmetic and so quick and observant in practical matters that he rapidly picked up mining methods. At the age of twenty-one his reputation for practical engineering was such that mineowners trusted him to report on the comparative merits of the different kinds of pumping engines in Cornwall.

At that time the efficient Boulton and Watt engine had a number of imitators, but the patent protecting it could not be upset. In 1795 the activities of Trevithick and of another engineer named Bull were checked by an injunction for infringement. This, however, merely turned Trevithick’s inventiveness into another channel, and the next year or two saw his invention of a water-pressure pumping engine and the building of his first steam carriage. The steam carriage appeared in 1801. The Boulton and Watt patent had now expired and Trevithick was free to develop his ideas. In 1802 he patented steam engines for stationary and for locomotive use, which had pressures as high as 145 lb, as against the normal Watt pressure of about 5 lb per square inch. This increase was a marked advance in steam engineering. It was, indeed, before its time, as later engineers reverted to lower pressures, primarily because boiler making was for long carried on in a primitive way. One of the difficulties in the way of making sound boilers was that of obtaining suitable plates. Another was that the correct principles of design were barely understood. Trevithick used cast iron at first but later developed a simple and safe type made of plates. This, under the name of the Cornish boiler, is still manufactured.

The most decisive step was taken in 1804, when the first real locomotive was built. This was made for a colliery tramroad at Penydarran, near Merthyr Tydfil, South Wales. On one of its first trials it went at a speed of five miles an hour, hauling 10 tons of iron and seventy men in five wagons, a feat far in excess of what could be done with the horse haulage it displaced. This engine was ultimately discarded, as the cast-iron rails on which the train ran had not been built for such a heavy load and kept breaking. Trevithick made a second locomotive for a Newcastle colliery, and it was a poor copy of this that started the famous George Stephenson on his successful career of locomotive building.
Trevithick’s final locomotive was exhibited on a circular track in London, but it attracted little attention. The inventor then devoted his time to improving the stationary steam engine. The date of the London locomotive was 1808, and from then till 1810 Trevithick remained in that city, full of energy and ready to turn his hand to anything in the way of engineering. He fitted his engines to dredgers on the River Thames and was concerned with an attempt to drive a tunnel under the river between Rotherhithe and Limehouse (see The First Thames Tunnel). In this again he was before his day. The list of patents he took out while in London is striking. They covered machinery for towing ships and discharging cargo, iron tanks for cargo storage - a sound scheme, as the leaky wooden hulls of the time often ruined a cargo - iron floating docks, iron masts, iron ships and iron buoys. To a more worldly-minded man than Trevithick fortune would now have come, but he seems to have been habitually indifferent in monetary affairs, and imprisonment for debt and a serious illness followed.

Having returned to Cornwall, Trevithick remained there until 1816, still actively inventive and energetic. Notable inventions of this period were steam-driven agricultural machines and a screw propeller. Then, having superintended the building of high-pressure winding engines and pumping engines for a Peruvian mine, he set out with them - and with high hopes - for Lima. Eleven years of frustration and wandering followed, civil war having brought mining to an end. Back in London, Trevithick became a consulting engineer, drawing up schemes for reclaiming part of the Zuider Zee, and investigating mechanical refrigeration, superheating and surface condensers - all to become the subjects of attention by others in later days. Plans for a tower 1,000 feet in height, in commemoration of the passing of the Reform Bill, anticipated Eiffel in intention though not in performance. Trevithick’s last days were again clouded by poverty. He died at Dartford (Kent) in the year 1833 and lies there in a lonely, unknown grave.

“A SPILLWAY CHANNEL alongside the navigational locks at Ardnacrusha is designed to take an emergency flow of water from the headrace. Thus when a turbine is shut down a valve opens the spillway sluice and the release of water compensates for the sudden stoppage of the flow through the penstocks. On the left is the special sluice which empties the water from the lock into the tailrace.”

By making it possible to control the temperature and humidity of air supplied to factories, houses and transport vehicles, the engineer has added materially to the efficiency of many industries, and also to personal comfort. The science of air conditioning is receiving a great deal of attention nowadays. It has considerable influence not only on health and comfort, but also on industrial conditions. The various systems which engineers can now install in buildings, ships, factories and so on are described in this chapter by Sidney Howard.

Coventry Colliery, Warwickshire

“AT THE PITHEAD of Coventry Colliery, Warwickshire, a fine modern building houses the baths which are a feature of the improved conditions of an up-to-date coal mine. The pithead baths at Coventry Colliery accommodate 1,890 men and cost £14,200 to build. Miners coming off duty undress at one side of the baths and put on their ordinary clothes at the other side, so that the dirt of the mine is left completely behind.
Every man has two lockers, one for his pit clothes and one for his ordinary clothes.”

Continual experiment and research are being carried out by engineers to find new methods of mining for coal and to minimize the many dangers which miners incur in their work underground. Coal mining is one of the most important of engineering subjects, one in which the human element is a big factor. This chapter describes the marvellous improvements which are now used to aid the miner in his work and to minimize the danger in which he toils. This chapter, by David Masters, also describes machines such as the automatic cutters which are replacing, where possible, the more laborious method of hand cutting at the coal face. The author describes, too, how the coal is graded and brought to the surface from these underground works of man. The article is concluded in part 13.

The Shannon Power Scheme: “GENERATORS at Ardnacrusha Power Station are rated at 30,000 kVA. The generator in the foreground is partly dismantled. To the generators are coupled turbines, each having a maximum output of 38,600 horse-power.”

The Shannon Power Scheme: “THE FISH LADDER at Parteen Villa is one of the largest in the world, having a length of more than 600 feet. It consists of a series of steps and pools designed to allow salmon to pass the weir on their way to and from their breeding places upstream.”

The Shannon Power Scheme: “THE ENORMOUS PENSTOCKS at Ardnacrusha have a diameter of 19 ft 8 in. They are 131 feet long and are laid on a slope of 31 degrees. They connect the headrace with the spiral casings of the turbines.”

An Air Conditioning System

“A CENTRAL STATION AIR CONDITIONING SYSTEM is one in which the air for the whole of a building is controlled by one set of machines. Sometimes all the plant is installed in the basement; sometimes, as shown in the above diagram, the conditioning apparatus is apart from the refrigerating plant. The refrigerant used in this system is a chemical known as “Freon 12”. Conditioned air enters rooms at ceiling level and leaves at floor level, to be returned to the apparatus by a duct from the ground floor.”

No. 4482, Golden Eagle, is one of a class of seventeen A4 Pacifics built in 1936-7 to haul the fastest expresses on the LNER. Four earlier engines, of the Silver Link class, were built in 1935. Their duties include the haulage of the Silver Jubilee express between Newcastle-on-Tyne and King’s Cross, London. This train, whose standard make-up is seven coaches, weighing 220 tons tare, is booked to cover the 268.3 miles between the two cities in four hours, with an intermediate stop at Darlington. Engines of the Golden Eagle class differ from the four of the Silver Link class in that they are painted in the standard LNER colours, whereas the earlier engines were finished in grey to tone with the colour of the Silver Jubilee train.

Locomotives of the A4 class have three cylinders 18½ in diameter by 26 in stroke. The boiler has a total heating surface of 3,325.2 sq feet, to which the tubes contribute 2,345.1, the firebox 231.2 and the superheater 748.9 sq feet. The working pressure is 250 lb per sq in. The cylinder diameter is half an inch less and the working pressure 30 lb more than in the A3 Pacifics. The grate area is 41¼ sq feet. Tractive effort, at 85 per cent working pressure, is 35,455 lb. The driving wheels are 6 ft 8 in, the bogie wheels 3 ft 2 in, and the trailing wheels 3 ft 8 in diameter.
The engine weighs, in working order, 102 tons 19 cwt, of which 66 tons are available for adhesion. The eight-wheeled tender has a coal capacity of 8 tons and a water capacity of 5,000 gallons. The weight of the tender, in working order, is 64 tons 3 cwt.

Particular attention has been given to the streamlining. From the buffer beam a casing, extending across the width of the locomotive, rises in a curve to the top of the boiler and merges at the rear into the wedge-shaped front of the cab. A lower casing on either side of the engine covers the cylinders and sweeps back in a curve to the base of the cab. Thus the streamlining forms a horizontal wedge, which tends to lift the exhaust steam clear above the cab.

This is the fifth article in the series on Modern Engineering Practice.
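As a side note on the arithmetic, the two most checkable figures quoted above can be verified with standard formulas. The short Python sketch below is not part of the original article: the 85 per cent pressure fraction comes from the text, while the 90 per cent turbine efficiency used for the Ardnacrusha estimate is our assumption, not a quoted figure.

```python
# A quick check of two figures quoted above; not part of the original article.

def tractive_effort_lb(cylinders, pressure_psi, bore_in, stroke_in, driver_dia_in, frac=0.85):
    """Nominal tractive effort: n * frac * p * d^2 * s / (2 * D)."""
    return cylinders * frac * pressure_psi * bore_in ** 2 * stroke_in / (2 * driver_dia_in)

# A4 Pacific: three 18.5-in x 26-in cylinders, 250 lb per sq in, 6 ft 8 in (80-in) drivers.
print(round(tractive_effort_lb(3, 250, 18.5, 26, 80)))   # -> 35455, as quoted

# Flow implied by one Ardnacrusha turbine (38,600 hp under a 100-ft head),
# assuming roughly 90 per cent turbine efficiency (our assumption).
power_w = 38_600 * 745.7                           # horsepower to watts
head_m = 100 * 0.3048                              # feet to metres
flow_m3s = power_w / (1000 * 9.81 * head_m * 0.9)  # from P = rho * g * Q * H * eta
print(round(flow_m3s))                             # -> about 107 cubic metres per second
```

Both results are consistent with the article: the tractive effort comes out at exactly the quoted 35,455 lb, and a flow on the order of 100 cubic metres per second per turbine is plausible for a low-head scheme of this size.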
Please check out the following doctors and websites for great information on healthy diets, and on how a plant-based diet can improve your health.

The Physicians Committee for Responsible Medicine (PCRM) is a non-profit organization based in Washington, D.C., which promotes a vegan diet, preventive medicine, alternatives to animal research, and encourages what it describes as "higher standards of ethics and effectiveness in research." Its primary activities include outreach and education about nutrition and compassionate choices to healthcare professionals and the public; ending the use of animals in medical school curricula; and advocating for legislative changes on the local and national levels. PCRM was founded in 1985 by Neal D. Barnard.

Neal D. Barnard is an American physician, author, clinical researcher, and founding president of the Physicians Committee for Responsible Medicine (PCRM), an international network of physicians, scientists, and laypeople who promote preventive medicine, conduct clinical research, and promote higher standards in research. An advocate of a low-fat, whole-foods, plant-based diet, he has also conducted research into alternatives to animal experimentation and has been active in the animal protection movement. As of 2013, he is an adjunct associate professor of medicine at the George Washington University School of Medicine and Health Sciences. Barnard is the author of more than 50 published research papers on nutrition and its impact on human health, and more than 15 books, including Power Foods for the Brain (2013), The 21-Day Weight Loss Kickstart (2011), Dr. Neal Barnard’s Program for Reversing Diabetes (2007), and Breaking the Food Seduction (2003). http://www.nealbarnard.org/

Prevent and Reverse Heart Disease, by Caldwell B. Esselstyn, Jr., M.D.

This groundbreaking program is backed by the results of Dr. Esselstyn’s 20-year study, which he presents as proof that changes in diet and nutrition can actually cure heart disease. Heart disease remains the leading cause of death in the United States for men and women. But, as Dr. Caldwell B. Esselstyn, Jr., a former internationally known surgeon, researcher and clinician at the Cleveland Clinic, explains in this book, it can be prevented, reversed, and even abolished. Dr. Esselstyn argues that conventional cardiology has failed patients by developing treatments that focus only on the symptoms of heart disease, not the cause. Caldwell Blakeman Esselstyn Jr. is an American surgeon and former Olympic rowing champion. He is a "leading proponent" in the field of plant-based diets and starred in the 2011 American documentary Forks Over Knives.

Dr. John McDougall

A physician and nutrition expert who teaches better health through vegetarian cuisine, John A. McDougall, MD has been studying, writing, and speaking out about the effects of nutrition on disease for over 30 years. Dr. John and Mary McDougall believe that people should look and feel great for a lifetime. Unfortunately, many people unknowingly compromise their health through poor dietary habits. Dr. McDougall is the founder and director of the nationally renowned McDougall Program: a ten-day residential program that he and Mary McDougall host at a luxury resort in Santa Rosa, CA, where medical miracles occur through diet and lifestyle changes. In addition to her formal training as a nurse, Mary McDougall provides many of the delicious recipes that make the McDougall Program not only possible, but also a pleasure.
Dr. McDougall has cared for thousands of patients for almost three decades. His program not only promotes a broad range of dramatic and lasting health benefits but, most importantly, can also reverse serious illnesses including high blood pressure, heart disease, diabetes and others, all without the use of drugs. A graduate of Michigan State University’s College of Human Medicine, Dr. McDougall performed his internship at Queen’s Medical Center in Honolulu, Hawaii, and his medical residency at the University of Hawaii. He is certified as an internist by the Board of Internal Medicine and the National Board of Medical Examiners. He and Mary are also the authors of several nationally best-selling books, as well as the co-founders of Dr. McDougall’s Right Foods, which produces high-quality vegetarian cuisine to make it easier for people to eat well on the go.

Joel Fuhrman, M.D. is a board-certified family physician, New York Times best-selling author and nutritional researcher who specializes in preventing and reversing disease through nutritional and natural methods. Dr. Fuhrman is an internationally recognized expert on nutrition and natural healing, and has appeared on hundreds of radio and television shows including The Dr. Oz Show, The Today Show, Good Morning America, and Live with Kelly. Dr. Fuhrman’s own hugely successful PBS television shows, 3 Steps to Incredible Health and Dr. Fuhrman’s Immunity Solution, bring nutritional science to homes all across America. Dr. Fuhrman’s #1 New York Times best-selling book, Eat to Live, originally published in 2003 (Little Brown), has sold over 1,000,000 copies and has been published in multiple foreign-language editions. In October 2012, Super Immunity (HarperOne) reached the New York Times best-seller list, and in January 2013, The End of Diabetes (HarperOne) became his third New York Times best seller. His latest book, Eat to Live Cookbook (HarperOne), debuted at #1 on the New York Times best-seller list in October 2013, one week after its release. In addition, Dr. Fuhrman has written several other popular books on nutritional science, which include Eat for Health (Gift of Health Press), Disease Proof Your Child (St. Martin's Griffin), Fasting and Eating for Health (St. Martin's Griffin) and Dr. Fuhrman's Nutritarian Handbook and ANDI Food Scoring Guide (Gift of Health Press).

PLANT BASED DOCTORS & NUTRITION EXPERTS

Dr. Michael Klaper is an experienced clinician who practices preventative and nutrition-based medicine and teaches his patients that “health comes from healthy living.” He graduated from the University of Illinois College of Medicine in Chicago in 1972. He served his medical internship at Vancouver General Hospital in British Columbia, Canada, and undertook additional training in surgery, anesthesiology, and orthopedics at the University of British Columbia Hospitals in Vancouver, and obstetrics at the University of California Hospitals in San Francisco. As Dr. Klaper’s medical career progressed, he began to realize that many of the diseases his patients brought to his office - clogged arteries (atherosclerosis), high blood pressure (hypertension), obesity, adult-onset diabetes, and even some forms of arthritis, asthma, and other significant illnesses - were made worse, or actually caused, by the high-fat, high-sugar, overly processed Standard American Diet (S.A.D.).
Dr. Klaper believes strongly that proper nutrition and a balanced lifestyle are essential for health and, in many cases, make the difference between healing an illness and merely treating the symptoms. He is currently working at True North.

Dr. T. Colin Campbell is an American biochemist who specializes in the effect of nutrition on long-term health. He is the Jacob Gould Schurman Professor Emeritus of Nutritional Biochemistry at Cornell University. Campbell has become known for his advocacy of a low-fat, whole-foods, vegan (plant-based) diet. He is the author of over 300 research papers on the subject, and two books, Whole (2013) and The China Study (2005, co-authored with his son), which became one of America's best-selling books about nutrition. Campbell featured in the 2011 American documentary Forks Over Knives. Campbell was one of the lead scientists in the 1980s of the China-Oxford-Cornell study on diet and disease, set up in 1983 by Cornell University, the University of Oxford and the Chinese Academy of Preventive Medicine to explore the relationship between nutrition and cancer, heart and metabolic diseases. The study was described by The New York Times as "the Grand Prix of epidemiology."

Dr. Greger is licensed as a general practitioner specializing in clinical nutrition and was a founding member of the American College of Lifestyle Medicine. He was featured on the Healthy Living Channel promoting his latest nutrition DVDs and teaches part of Dr. T. Colin Campbell's nutrition course at Cornell University. Dr. Greger's nutrition work can be found at NutritionFacts.org, a 501(c)(3) nonprofit charity. As the director of Public Health and Animal Agriculture for The HSUS and Humane Society International and a physician specializing in clinical nutrition, Dr. Michael Greger focuses his work on the human health implications of intensive animal agriculture. His work involves examining the routine use of non-therapeutic antibiotics and growth hormones in animals raised for food, and the public health threats of industrial factory farms. Dr. Greger plays a vital role in The HSUS's efforts to shape public policy on agriculture and nutrition. In 2011, he launched NutritionFacts.org to profile the latest news in nutrition research. He also works on food safety issues, such as bovine spongiform encephalopathy (mad cow disease). Dr. Greger has been an invited lecturer at universities, medical schools, and conferences worldwide. He has lectured at the Conference on World Affairs, the National Institutes of Health, and the International Bird Flu Summit, among countless other symposia and institutions; has testified before Congress; has appeared on shows such as The Colbert Report and The Dr. Oz Show; and was invited as an expert witness in defense of Oprah Winfrey when cattle producers sued her for libel at the infamous "meat defamation" trial.

GABRIEL COUSENS, M.D. - Reversing Diabetes Naturally

Gabriel Cousens M.D., M.D.(H), D.D. (Doctor of Divinity), Diplomate of the American Board of Integrative Holistic Medicine, Diplomate Ayurveda, is considered one of the leading live-food medical doctors and spiritual nutrition experts in the world, and is recognized as "the fasting guru and detoxification expert" by the New York Times.
He is also a psychiatrist, family therapist, Ayurvedic practitioner, homeopath, acupuncturist, medical researcher, ecological leader, and best-selling author of books such as Spiritual Nutrition, Conscious Eating, Rainbow Green Live-Food Cuisine, and There Is a Cure for Diabetes. He received his M.D. degree from Columbia Medical School in 1969 and completed his psychiatry residency in 1973. Dr. Cousens was the Chief Mental Health Consultant for the Sonoma County Operation Head Start and a consultant for the California State Department of Mental Health. He is listed in Who's Who in California, Who's Who among Top Executives, Strathmore's Who's Who, and National Register's Who's Who, and is a former member of the Board of Trustees of the American Holistic Medical Association (AHMA). Gabriel Cousens, MD has developed the first live-food, vegan Masters program in the world in conjunction with the University of Integrated Science California. He is the director and founder of the Tree of Life Foundation and the Tree of Life Rejuvenation Center, called by Harper's Magazine "one of the world's 10 best yoga and detoxification retreats". Dr. Cousens has taught about live foods, spirituality, and diabetes prevention in countries all over the world.
What is DNA?

Several types of DNA exist in all of us: autosomal DNA (from the 22 pairs of non-sex chromosomes), X- and Y-chromosome DNA, and mitochondrial DNA (mtDNA), which sits outside the chromosomes. Individual tests can be done on each of these types. Deoxyribonucleic acid (or DNA) is the magical ingredient found in all living organisms. It carries instructions for the growth, development, reproduction, and functioning of our bodies and the bodies of all living things. That is why some people have blue eyes, while others have green or brown.

Did You Know?

- No two people have the same DNA (except identical twins)
- Half of your DNA comes from your mother, and half comes from your father
- If all the DNA in your body was unraveled it would reach the sun and back over 600 times
- 99.9% of all human DNA is identical; it's the 0.1% that makes us unique
- Your DNA can link you to locations and ethnicities you'd never have thought possible
- There are different types of DNA in all of us, and you can test each of them

Now that we know more about DNA, let's find out about the different types and what they can tell us.

Autosomal DNA Tests

Wondering where in the world you come from? If so, then this is the test for you. Autosomal testing will give you the ability to trace your ancestry back some four or five generations, letting you know where your ancestors lived hundreds, even thousands of years ago.

How Does it Work?

In the nucleus of every cell in your body there are 23 pairs of chromosomes. 22 of these pairs are autosomes, which carry a complete genetic record, while the 23rd determines your sex. Autosomal DNA testing can survey over 700,000 locations in your DNA and discover genetic information passed down through the generations. This is the most common type of DNA testing completed by people all over the globe.

Mitochondrial (mtDNA) Tests

Because many women change their surnames when they get married, it can be difficult to trace which women you might be related to. Thankfully, mitochondrial (mtDNA) testing changes this completely. This special type of DNA testing tells the story of your mother's heritage, going all the way back to the woman from whom we all descend, "mitochondrial Eve". It will not, however, give you any information about your paternal lines. This DNA testing is more limited than the autosomal test, but it can be useful in other ways. If you were separated from a female family member or have no contact with your mother, this is the test that will provide you with the most information.

Y-DNA Tests

Y-DNA tests are often used to investigate paternal lines and can be useful for understanding complicated situations like illegitimacy or adoption. Because this test examines the Y chromosome, which is only found in men, it is only available to men. Y-DNA tests will show you how closely you might be related to a person with the same surname, going back many generations.

How to Find the Right DNA Test Method for Your Needs?

To determine the right method, you first have to consider your needs. If you're looking for an overview of your family history, go for the autosomal test, as it covers both sides and looks at your gene base as a whole. If you want to learn more about a particular racial heritage or merely explore where your ancestors came from, it's the top choice. However, if you're looking for more specific results, the mtDNA and Y-DNA options have the potential to uncover up to 10,000 years of your ancestry.
Perhaps you want to know more about a particular ancestor or gain a detailed understanding of your health and its potential risks. If this is the case, then the latter options are the best choice.

Getting Started with a DNA Test – How Does DNA Testing Work?

It all depends on what you want to find out. Below you will find a quick and easy guide showing you which type of test will be the right one for your needs.

Ancestry DNA Tests

Have you always suspected you have Viking ancestry running through your veins? Ancestry DNA tests reveal European, African, Asian, Jewish and Native American roots using a simple saliva or swab test.

Price: Depending upon which service you choose, prices range from $69-$99.

Pet DNA Tests

Do you have a sneaky suspicion that your feline was part of an Egyptian royal family? Pet DNA tests will show if your favorite four-legged friend is a pure breed, along with their history and origin.

Price: Pet DNA test kits retail at around half the price of a standard ancestry test.

DNA Paternity Tests

DNA paternity tests are now available at a fraction of the cost you might once have paid. Although these results might not be recognized by the government, finding out the truth about your paternal heritage is now easier than ever before.

Price: An at-home testing kit will set you back as little as $30.

DNA Heritage Tests

Wanting to understand where your heritage stems from is part of human nature. DNA heritage tests will break things down for you into percentages of your ethnic background.

Price: Depending on which provider you choose, this DNA testing kit can cost from $50-$100.

Sibling DNA Tests

Think you might have a long-lost twin somewhere in the world? A sibling DNA testing kit will reveal both whether or not two people are related and whether they share the same parent. Combining the sibling DNA test with a heritage or ancestry DNA test will give you a more comprehensive result.

Price: Prices vary, but are generally around the same as you might pay for an ancestry DNA test.

DNA Health Testing

This is a popular choice for those wanting to find out risk factors for certain health issues. Discover your chances of genetic mutations, food sensitivities or predispositions to certain cancers and diseases.

Price: The health DNA testing kit might be a little pricier than the others, but it usually comes in combination with ancestry results, so you're getting two for the price of one.

DNA Fitness Testing

Discover your genetic fitness blueprint, created to show you why you might have a slow metabolism. This type of DNA testing is ideal for people hoping to improve their physical fitness based on their specific DNA analysis.

Price: Costs range from $100-$300 depending on how much information is analyzed.

Prenatal DNA Testing

This non-invasive test gives accurate paternity answers during pregnancy. It can be completed as early as 8 weeks into the pregnancy and can even reveal the gender of the baby.

Price: Prenatal DNA testing is somewhat pricier, costing around $300, but peace of mind has no price.

How to Order a DNA Test

- Visit your chosen DNA test company's website
- Create an account
- Add a DNA test to your account
- Fill in payment and delivery details
- Kick back, relax and wait for your package to arrive in the mail

DNA Sample Collection Methods

Have a fear of injections and faint at the sight of blood? Luckily for you, none of these elements are involved in providing a test sample.
Each DNA testing kit will have its own set of instructions, which you need to read carefully. Generally, DNA test samples come from saliva taken from the inside of your mouth, via a simple cheek swab or a spit sample.

- Follow the postage instructions that come with the test
- Depending on the service you use, the company doing the testing may provide a self-addressed envelope to make it that much easier to send your sample back
- Activate your kit on the corresponding website so that your sample can be matched to your online account

How Long Does a DNA Test Take?

At-home DNA testing kits can take from 4-8 weeks to come back with results. Make sure to check with your chosen provider for a more accurate time frame.

Where Can I See My Results?

Usually, you will get notified when your test results are ready. You can then simply log into your account and see your DNA results online. Some providers will even send your information by post, but that process takes longer. If curiosity gets the better of you, it is much quicker to check everything online.

Are DNA Test Results Easy to Read?

Think you have to have a science degree to read DNA results? No way! It's very simple, so let's break it down. Your test results are divided into a few sections so they are easier to interpret. If you had an ancestry test done, it will break down the main regions your past ancestors came from, specifying the regions and percentages. Secondly, it will go into more detail about the specific ethnicities in your genetic code. If you ordered any additional testing, like a health report, that will come at the end.

How Accurate are DNA Tests?

DNA tests measure your DNA at around 700,000 locations across your genome and compare it to populations spread across more than 350 regions of the world. Your DNA test will be at least 98% accurate. Although DNA testing kits will not be able to pinpoint the exact town your ancestors came from, they will tell you their approximate geographical locations. You will be able to trace your heritage back to Eastern Europe, Scandinavia or other parts of the world.

How Much Does a DNA Test Cost?

It all comes down to which test you would like to take. There are some ancestry subscription packages going for $20-$40 per month. If you take the one-time offer, ancestry DNA testing kits generally cost under the $100 mark.

How Do I Pay For My DNA Testing Kit?

Many services offer various ways of paying for your DNA test kit, including PayPal, credit cards, bitcoin, or personal checks.

Are My Results Confidential?

Yes! There is no need to worry, as your test results are strictly confidential and are stored in a secure database. Also, the lab that processes your DNA results does not have access to your name or contact information. Your DNA data can only be shared if you choose to make it public, and this is entirely your own decision. This means that only people with similar DNA profiles will be able to view you as a DNA match, but they will only see an anonymous ID number, not your name or any personal information. Additionally, you can choose to delete your DNA test results. Be warned that by doing this you will no longer be able to recover your data.

Can My Home DNA Testing Kit Be Used in Court?

There is a difference between legal testing and home DNA testing. Your home DNA testing results will generally not be admissible in court, but they will be able to give you peace of mind for personal reasons.
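To give a rough feel for what "comparing your DNA to reference populations" means, here is a toy Python sketch. It is an illustration only: the marker names, genotypes and panels below are invented, and real services score hundreds of thousands of markers with statistical admixture models rather than the raw match counts shown here.

```python
# Toy illustration only -- not any testing company's actual method.
sample = {"rs1": "A", "rs2": "G", "rs3": "T", "rs4": "C"}  # hypothetical SNP calls

reference_panels = {  # hypothetical "typical" alleles for two regions
    "Eastern Europe": {"rs1": "A", "rs2": "G", "rs3": "C", "rs4": "C"},
    "Scandinavia":    {"rs1": "A", "rs2": "A", "rs3": "T", "rs4": "C"},
}

for region, panel in reference_panels.items():
    matches = sum(sample[marker] == panel[marker] for marker in sample)
    print(f"{region}: {matches / len(sample):.0%} of tested markers match")
```

The real comparison works on around 700,000 locations at once, which is why the percentage estimates in your report can be fairly fine-grained.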
Can I Purchase a DNA Gift Kit?

Did you know that the more people in your family who get tested, the more accurate your results will be? The more tested family members, the more distant relatives you will be able to locate! You can purchase a gift kit and give your loved one a DNA test as a birthday or holiday present. Some companies even give discounts on orders of more than one DNA testing kit at a time. Brilliant!

What is the Best DNA Kit for Me?

The best answer to this question lies in one important element: what it is that you really want to find out about yourself. Once you have answered this question, you will find it much easier to understand which testing kit is the perfect one for you. You might want to:

- Know more about your ancestors
- Learn about your heritage
- Confirm paternity
- Find relatives
- Improve your health and fitness
- Find out about medical predispositions

For further information, check out our best DNA testing kit reviews.

Is it Worth it?

In short, definitely. DNA testing kits have become a powerful tool not only for researching your family tree but also for finding out information about your medical health. Considering that these testing kits have become easily attainable and much cheaper than they once were, there is little reason not to expand your knowledge of your ancestors and your ethnicity. You never know, you might even get the chance to uncover family members you never knew existed and connect with people all around the globe.
This weekend is a special time of remembrance for the Christian faith. It is the time when we reflect on the crucifixion and celebrate the resurrection of Jesus Christ. In light of that, it's a good idea to remember what makes this particular trial and execution so significant – significant enough to alter the history of mankind.

Although scoffers try to claim Jesus never existed and that this never happened, we know from ancient documents outside the Bible that Jesus did exist and was crucified. Tacitus, a Roman historian who lived from about AD 56 to 117, relates this about Jesus: "Christus, from whom the name had its origin, suffered the extreme penalty during the reign of Tiberius at the hands of one of our procurators, Pontius Pilate." The historical accounts from Tacitus and other ancient texts confirm that this Jesus was a real person who was crucified by Pontius Pilate during the reign of Tiberius.

But if this Jesus who is called the Christ were just a man, what significance would there be in his execution? Why would this be noteworthy to Tacitus, or to history in general? It wasn't just because he was an innocent man. If Jesus were just an innocent man, convicted and executed for a crime he did not commit, then his execution would be shameful and sad. But that would not really matter to the rest of the world.

So one may say it was significant because of the reason behind his execution. Jesus was not convicted for something He did but for who He claimed to be. He claimed to be God, which the high priest determined to be blasphemy punishable by death. (Under Jewish law the penalty would have been stoning, but since the Jews were under Roman authority at the time, the punishment was crucifixion.) However, in studying Acts 5, we see that even that was nothing of importance. Gamaliel, the well-respected Pharisee, said that many had claimed to be the Messiah and thus sealed their fate to be executed as blasphemers. A man named Theudas made that claim and even had 400 followers. When he was executed, his disciples scattered, and it came to nothing. Then Judas of Galilee rose up in the days of the census with many followers. When he was executed, his many followers dispersed, and it came to nothing.

So why is it that we remember the crucifixion of Jesus of Nazareth? Because of what happened after His execution. See, when Theudas and Judas of Galilee were executed for claiming to be the Messiah, their followers scattered. Why? Because their leader, a mere mortal man, was now dead. But when Jesus of Nazareth was crucified for claiming to be the Messiah, He died and was buried. But He did not stay that way. He resurrected. And his followers did not scatter and disperse; they became bolder. They traveled far and wide preaching in the name of this Jesus of Nazareth – and they did so in the face of intense persecution.

As Gamaliel had predicted, this was the evidence that this incident with Jesus was not like the others. When the Jewish council was debating what to do with Peter and John, who were still preaching the name of Jesus, Gamaliel advised, "So in the present case I tell you, keep away from these men and let them alone, for if this plan or this undertaking is of man, it will fail; but if it is of God, you will not be able to overthrow them. You might even be found opposing God!" For Gamaliel, it was the after-effects of the crucifixion that would prove whether this thing was from God or man.
So it is the response to this claim of the resurrection that gives us the evidence of its truth. Consider the responses of those who loved and followed Jesus. The disciples' response to the news of the empty tomb was at first skepticism – until they saw it for themselves. They were in hiding, mourning the loss of their beloved Messiah and fearing their own fate at the hands of the Pharisees. But once they witnessed the resurrection, their focus changed from their own security to the urgency of sharing the Gospel of salvation. The truth of what they had seen with their own eyes and touched with their own hands made a drastic impact on their lives. And they devoted the remainder of their lives to sharing that good news of the resurrection with others around the world.

But the news of the resurrection also impacted those who were not followers of Jesus. His ministry, crucifixion, and resurrection were all done publicly, out in the open for friend and foe to see. Peter reminds the Jewish people of that in his first sermon, delivered at Pentecost. He said, "Men of Israel, hear these words: Jesus of Nazareth, a man attested to you by God with mighty works and wonders and signs that God did through him in your midst, as you yourselves know— this Jesus, delivered up according to the definite plan and foreknowledge of God, you crucified and killed by the hands of lawless men. God raised him up, loosing the pangs of death, because it was not possible for him to be held by it."

Peter spoke on these things again after healing the lame man on the temple steps: "whom you delivered over and denied in the presence of Pilate, when he had decided to release him. But you denied the Holy and Righteous One, and asked for a murderer to be granted to you, and you killed the Author of life, whom God raised from the dead. To this we are witnesses." As Paul would say later, those things were "not done in a corner."

All of those people had seen the work Jesus had done, His miracles, His power over sickness, nature, and demons. They had all witnessed, no, not just witnessed, but participated in his crucifixion. They had all cried out, "Give us Barabbas!" They had jeered and mocked Him at His death. And they had witnessed His resurrection.

The response to note here, though, is that of the Pharisees to Peter's statements. Did they say to the crowds that Peter and John were just as crazy as Jesus? Did they go to the tomb and produce the body of Jesus to shut them up? No. It says they were astonished at Peter and John's boldness and wisdom – until they recognized they had been with Jesus. The Pharisees were unable to refute that the lame man had been healed in the name of Jesus. And they were afraid of the spread of Jesus' name. The response of the Pharisees shows us the veracity of the claims made by Peter and John. They could not refute the power done in the name of Jesus. And they could not refute the claims of the resurrection. They could only make futile attempts to stop the spread of these things.

The reaction of Paul to the resurrection gives evidence of its occurrence as well. Paul's encounter with the resurrected Jesus drastically changed his life. It changed him from persecuting those who claimed the name of Jesus to being persecuted for proclaiming the name of Jesus. He changed from speaking against Jesus to speaking for Jesus. He went from being feared by the other apostles to being accepted into their fellowship.
He transformed from a Jewish Pharisee despising Gentiles into the apostle preaching salvation to the Gentiles. Paul himself credited such a complete transformation to the one event of meeting the resurrected Jesus. To change that much from one event tells us that the event did occur.

There is also the reaction of James, the brother of Jesus, who was not a disciple or follower of Jesus. In fact, none of His brothers believed (John 7:5). They even tried to pull Jesus out of ministering to the crowds when the crowds became too great (Mark 3). Their unbelief is quite astonishing when you consider the testimony of their mother as to the conception of Jesus! Not much else is said about the family of Jesus throughout the Gospel accounts. However, the next time we see James he is presiding over the Jerusalem Council in Acts 15. James led the other apostles in determining the guidelines for new believing Gentiles. He became the leader of the church in Jerusalem (Acts 21). Paul referred to James as an apostle in his letter to Galatia. And James was eventually martyred for his faith by the Jewish leaders there in Jerusalem.

What would have caused such a difference? How did James grow up in the same house as Jesus, witnessing His ministry from the very beginning, and not believe, yet after Jesus' crucifixion become the leader of the church in Jerusalem, even dying for his belief? For that answer we go to 1 Corinthians 15:7. Paul reminded the Corinthians of the core doctrinal truths of Christianity that he had already taught them: that Christ died for our sins; that he was buried and raised again; and that many witnessed His resurrection. But look at the list of names Paul provided of those who saw the resurrection. Jesus appeared to Peter, the twelve disciples, more than 500 at once, James, the apostles, and Paul himself. Jesus specifically appeared to His unbelieving brother James. So the drastic change in the life of James is directly attributed to his witnessing the resurrected Jesus.

The reactions of friend and foe are what give us, far removed in time and place, the confidence that it did indeed occur. It is the reaction to any historical event that confirms its veracity. For example, even if we didn't have a single document remaining from the Revolutionary War, we would know what truly happened because we have evidence of the reaction to it – the United States is functioning as a separate nation from England. Likewise, the ongoing reactions of both friend and foe to the resurrection of Jesus give us the confirmation that it really occurred.

And the importance of that fact cannot be stated enough. It is the resurrection of Jesus that affirms His deity. It is that resurrection that conquers sin and death for those who believe. Without the resurrection, our faith is meaningless. But with the resurrection comes eternal hope and salvation for all mankind. This is how those living alongside Jesus reacted to the resurrection. What will your reaction be?

Tacitus, Annals 15.44

All scripture quotes are from the English Standard Version (ESV)

With Easter coming this Sunday, I wanted to talk about the significance of the crucifixion. Crucifixion was the Roman method of executing criminals. They crucified people on a regular basis. So for this particular crucifixion to be significant, it depends on who was being crucified. If Jesus were just a man, then this crucifixion may have been a sad travesty that someone so kind and nice, who did nothing wrong, would meet such a tragic end.
But then we must ask why he was crucified in the first place. If he were just a nice man who did nothing wrong, what could account for him being crucified like a common criminal? That question ultimately leads us to who this man called Jesus really was. It was the most unusual trial and execution in all of history – because it wasn't for what he did but for who he claimed to be. He was executed for making the claim of being God.

Now, this is oftentimes a point of contention with critics, because many say that Jesus never claimed to be God. They think the claim of deity was something added generations later. However, the fact that the crucifixion happened at all flatly disproves this assertion, for it was his assertion of being God that led to his crucifixion. If Jesus didn't claim to be God, then the crucifixion would not have taken place, because that claim was the sole reason for it.

In case that argument isn't sufficient for some, let's look at how Jesus really did make the claim to be God. First, Jesus exhibited characteristics that only God could have. He was all-knowing about the past, present and future. When he met the woman at the well in John 4:16-20, Jesus told her everything about her life. And it was from his intimate knowledge about her life that she knew there was something different about Jesus, that he might be the Messiah. He predicted his own death. He told Peter to get a coin out of a fish's mouth to use for taxes in Matthew 17. He was all-powerful. Throughout the New Testament, He healed the lepers, raised Lazarus from the dead, brought sight to the blind, cast out demons, and caused the lame to walk again. He had power over the sea and the storms in Matthew 8. And in Matthew 14 he walked on water.

Second, he received praise from those around him as though he were God. If Jesus didn't intend to be worshiped as God, then he certainly would have stopped the people from doing so. We know that was how Paul and Barnabas responded to being worshiped like gods at Lystra in Acts 14. The people saw what Paul and Barnabas could do through the power of the name of Jesus and fell down to worship them like gods. But Paul and Barnabas stopped that by insisting that they were just men. They rejected the idea of being worshiped like gods. Granted, having others worship you like a god does not in fact make you a god. The Roman emperors and Egyptian pharaohs liked to be worshiped like gods, but they certainly were not. But it tells us about what they intended. If the people wanted to worship Jesus as God and he did not stop them, then it tells us he intended for people to worship him as God.

But we also must take into consideration what kind of people were doing the worshiping. When those at Lystra or Rome or Athens worshiped something as a god, it was just adding one more name to a list of hundreds of gods they already worshiped. But when the Jewish people began to worship someone as God, a people who were unique in this time in having only one God, it meant what they were worshiping was part of that one true God. Jesus even says himself in Matthew 4:10 that we are to worship the Lord God only, yet Jesus openly and readily received worship as that God. His followers were devout Jews who believed in only one true God, yet they all confessed Jesus to be God.

Third, Jesus makes the confession himself about his deity, which is really the whole point here. In John 5:16-18, the Jewish leaders confronted Jesus for healing a man on the Sabbath, because that violated the law of resting on the Sabbath.
Jesus' response was "My Father has been working until now, and I have been working." He referred to God as His own personal Father, not as "our" Father. He put His work on par with God's work, making Him equal to God. The Jewish leaders clearly understood Jesus was claiming to be God. It says they "sought all the more to kill him because He not only broke the Sabbath, but also said that God was His Father, making Himself equal with God."

In John 10, Jesus was approached by the Jewish leaders, who questioned Him about being the Christ. His response was "I and My Father are one." At this, the Jews picked up stones to stone Jesus. Jesus asked them for which miracle, which deed, they were stoning him, and they replied, "For a good work we do not stone You, but for blasphemy, and because You, being a Man, make Yourself God." The Jewish leaders understood exactly what Jesus was claiming, and they were ready to execute Jesus on the spot for that claim.

In Mark 2, Jesus demonstrates his power and his deity by healing a paralytic, but also by forgiving the paralytic's sins. The scribes ask by what authority He is able to forgive sins. The scribes understood that only the one injured by someone's sins can be the one to offer forgiveness. If you steal my money, I can forgive you. But I can't announce that I forgive you for stealing someone else's money. This man's sins were against God, so the only one who can forgive those sins is God Himself. Therefore when Jesus said that He could forgive sin, He was claiming to be God. He was forgiving sins as though He was the main person offended by those sins. He could only forgive those sins if He really was the God whose laws are broken and whose love is wounded in every sin.

But the most definitive evidence of Jesus' claim to be God is in His trial. In Mark 14:60-64, the high priest directly asks Jesus, "Are you the Christ, the Son of the Blessed?" And Jesus answered, "I am." This is exactly the statement the Sanhedrin was waiting for. Jesus claimed to be God, the Christ, the Son of the Most High God. He was tried and convicted for this claim and this claim alone. So there is no mistaking that Jesus clearly claimed to be God and knew the full weight of that claim. It was the very claim that cost him his life.

But why does this matter? What if Jesus were not God? Then his death on the cross was insufficient to pay for our sins. If Jesus were just a nice, innocent man wrongfully convicted, then our sins are still upon us. It was only through the sacrifice offered by God Himself that we can have forgiveness. See, we all sin. And the punishment for those sins is death and separation from God. The only way to escape that punishment is by maintaining perfection and holiness – a standard that none of us can meet. Except God. Only God Himself can maintain that holiness and therefore provide the atonement for our sin-stained lives. If Jesus were not God, then our sins are left upon us. It is so critical to understand that Jesus was God. It is why He was crucified and it is how we have forgiveness. Otherwise, His death is insufficient and the wages of sin are still due us.
Don’t short-shrift cleaning programs

Studying the efficacy of detergent and quaternary ammonium sanitizer on the reduction of Shiga toxin-producing E. coli attached to stainless steel.

Editor’s Note: This research was originally published in Food Safety Magazine’s April/May 2015 issue, a portion of which is reprinted here with permission. To read the entire article, please visit http://bit.ly/MeatSciReview0715.

Shiga toxin-producing Escherichia coli (STEC) are pathogens of concern across various food products, as they have been connected to a wide variety of outbreaks and recalls. Most of the scientific literature concerning the removal of attached STEC cells focuses on E. coli O157:H7, as it was the first STEC to be considered an adulterant in non-intact beef products in the United States after a large outbreak from undercooked ground beef patties in 1982 (6). Worldwide, non-O157 STEC strains are estimated to cause 20 to 50% of STEC-related infections (5). A review of outbreaks from 1983 through 2002 found six serogroups (O26, O111, O103, O121, O145, and O45) to be the most common non-O157 STECs, causing an estimated 70% of non-O157 STEC infections in the United States (1). The United States Department of Agriculture Food Safety and Inspection Service (USDA-FSIS) has included these serogroups, along with E. coli O157:H7, as adulterants in non-intact beef products (9).

Biofilms are communities of microorganisms that can form on both living and non-living surfaces, including those found in food-processing plants. Biofilm formation depends on the microorganisms present and can be affected by a variety of environmental conditions, including nutrient availability, temperature, the cleanliness of the surface and the presence of other microorganisms (4, 7, 8, 10). Previous studies have determined that E. coli O157:H7 can attach and form biofilms on surfaces such as stainless steel and plastic (2, 7, 8). A series of studies, including two conducted in our laboratory, have shown that STEC attachment is strain-dependent (9), meaning that assumptions cannot be made about an entire serogroup in terms of attachment to and biofilm formation on surfaces.

A complete sanitation program, including the removal of solids and the use of both detergents and sanitizers within a food-processing environment, is essential to producing safe, wholesome products for consumers to enjoy. However, only a few studies have used a combination of detergents and sanitizers to determine their effectiveness against biofilms containing pathogens like STECs attached to commonly used surfaces like stainless steel. Mimicking food-processing environments where STEC cells could be found was an important aspect of this study. The objective of this study was to determine the effectiveness of a detergent and a quaternary ammonium sanitizer in removing STEC cells attached to stainless steel. Quaternary ammonium is a commonly used sanitizer within the food industry that is effective in killing pathogens but does not corrode equipment.

In previous studies, multiple strains from all seven STEC serogroups (O157:H7, O26, O45, O103, O111, O121, and O145) were screened for their ability to attach to stainless steel in full and minimal nutrient media over time at 25°C. Attachment to stainless steel was strain-dependent, and we found that attachment of STEC strains was higher under minimal nutrient conditions (data not shown).
One strain from each serogroup that showed a high affinity for attaching to stainless steel in minimal nutrient media was used. For each strain (n=7), five pieces (coupons) of stainless steel were incubated in minimal nutrient media for 24 h at 25°C to allow the STEC to attach to the surface. After 24 h of attachment, the loose cells were gently removed by rinsing with water. The stainless-steel coupons were then subjected to one of five treatments: detergent only (detergent/water), sanitizer only (water/sanitizer), detergent/sanitizer combination (detergent/sanitizer), control (water/water), or untreated control (inoculated with no treatment). Each combination was tested separately for each strain, and replications were conducted in triplicate.

Detergent and sanitizer were prepared according to the manufacturer’s instructions, with a target sanitizer concentration of 200 ppm. Treatment solutions were put into separate foaming hand-soap dispensers to simulate foaming application of the chemicals in a food-processing environment. All coupons were in contact with the treatments for 5 min, then rinsed with water and transferred to a clean well for the colorimetric assay, preventing continued contact with the previous treatment. Coupons were exposed through immersion only, and no mechanical action was applied upon application of treatment. The colorimetric assay was used to determine the amount of STEC remaining on the coupons after treatment by measuring the absorbance of the solution at 590 nm. Statistical analysis was performed to determine the least squares means (LSMs) with an α of 0.05.

Significant (p < 0.001) differences were found among treatments as well as strains. Untreated stainless-steel coupons had a significantly (p < 0.0002) higher OD590 absorbance value compared with the other treatments, indicating that the treatments removed a large number of attached bacteria, as noted in Table 2. The most effective treatment was the detergent and sanitizer combination, with an overall reduction of over 0.023 in absorbance from the untreated stainless-steel coupons, although the reduction was not significant (p > 0.05) when compared to the control (water only) and detergent-only treatments. The differences can be visually noted in Figure 1.

A complete cleaning and sanitation program, including the application of both detergent and sanitizer at manufacturer-recommended concentrations, can significantly reduce the amount of STEC bacteria attached to stainless steel. Because the STEC populations were not enumerated, we cannot confirm that all of the attached bacteria were removed. However, a study to determine the reduction of attached STEC bacteria using a complete sanitation program is currently in progress. Others have found a complete cleaning and sanitation program to be more efficient in removing bacteria and attributed the findings to the action of the detergent (3). These conclusions were made in part because the sanitizer became less effective as the soil residue increased over time within their testing system. In our study, the STECs were allowed to attach in laboratory media, so no food residue was present to reduce the effectiveness of the sanitizer, but our results still found decreased bacterial removal for sanitizer-only applications as compared to applying both detergent and sanitizer.
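For readers who want to see the shape of the comparison, here is a minimal Python sketch with hypothetical OD590 readings. These numbers are invented, not the study's data, and the study itself compared least squares means from a statistical model rather than the simple means shown here.

```python
# Minimal sketch (hypothetical numbers, NOT the study's data) of the kind of
# treatment comparison described above: mean OD590 absorbance per treatment.
from statistics import mean

od590 = {  # three hypothetical replicate readings per treatment
    "untreated":           [0.062, 0.058, 0.060],
    "water/water":         [0.041, 0.043, 0.040],
    "detergent/water":     [0.039, 0.040, 0.038],
    "water/sanitizer":     [0.044, 0.046, 0.045],
    "detergent/sanitizer": [0.036, 0.037, 0.038],
}

baseline = mean(od590["untreated"])
for treatment, readings in od590.items():
    m = mean(readings)
    print(f"{treatment:20s} mean OD590 = {m:.3f}  "
          f"reduction vs untreated = {baseline - m:.3f}")
```

A lower mean OD590 indicates fewer attached cells remaining; in the study, the detergent/sanitizer combination showed the largest reduction relative to the untreated coupons.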
Because research on the removal of STECs from equipment is limited, we also chose to use the manufacturer’s recommended sanitizer concentrations. Further research, including enumeration of these STEC strains after treatment, is warranted to understand the total number of viable cells left on equipment surfaces after the cleaning and sanitation program is complete. Additional research is also needed to determine how these bacteria act when exposed to food residues and other microorganisms, and how those residues may affect the efficacy of cleaning and sanitation programs in removing STECs from equipment surfaces. In conclusion, our study shows that a complete cleaning and sanitation program administered within food production facilities, with chemicals applied according to manufacturers’ recommendations, is more effective than detergent or sanitizer alone at removing STEC bacteria from stainless steel in laboratory media.

References
1. Brooks, J. T., E. G. Sowers, J. G. Wells, K. D. Greene, P. M. Griffin, R. M. Hoekstra, and N. A. Strockbine. 2005. Non-O157 Shiga toxin-producing Escherichia coli infections in the United States, 1983–2002. J Infect Dis. 192:1422-1429.
2. Dewanti, R., and A. C. Wong. 1995. Influence of culture conditions on biofilm formation by Escherichia coli O157:H7. Int J Food Microbiol. 26:147-164.
3. Dunsmore, D. 1981. Bacteriological control of food equipment surfaces by cleaning systems. I. Detergent effects. J Food Prot. 44:15-20.
4. Farrell, B. L., A. B. Ronner, and A. C. Lee Wong. 1998. Attachment of Escherichia coli O157:H7 in ground beef to meat grinders and survival after sanitation with chlorine and peroxyacetic acid. J Food Prot. 61:817-822.
5. Mathusa, E. C., Y. Chen, E. Enache, and L. Hontz. 2010. Non-O157 Shiga toxin-producing Escherichia coli in foods. J Food Prot. 73:1721-1736.
6. Rangel, J. M., P. H. Sparling, C. Crowe, P. M. Griffin, and D. L. Swerdlow. 2005. Epidemiology of Escherichia coli O157:H7 outbreaks, United States, 1982-2002. Emerg Infect Dis. 11:603-609.
7. Simpson Beauchamp, C., D. Dourou, I. Geornaras, Y. Yoon, J. A. Scanga, K. E. Belk, G. C. Smith, G.-J. E. Nychas, and J. N. Sofos. 2012. Transfer, attachment, and formation of biofilms by Escherichia coli O157:H7 on meat-contact surface materials. J Food Sci. 77:M343-M347.
8. Skandamis, P. N., J. D. Stopforth, L. V. Ashton, I. Geornaras, P. A. Kendall, and J. N. Sofos. 2009. Escherichia coli O157:H7 survival, biofilm formation and acid tolerance under simulated slaughter plant moist and dry conditions. Food Microbiol. 26:112-119.
9. Wang, R., J. L. Bono, N. Kalchayanand, S. Shackelford, and D. M. Harhay. 2012. Biofilm formation by Shiga toxin-producing Escherichia coli O157:H7 and non-O157 strains and their tolerance to sanitizers commonly used in the food processing environment. J Food Prot. 75:1418-1428.
10. Wang, R., N. Kalchayanand, J. L. Bono, J. W. Schmidt, and J. M. Bosilevac. 2012. Dual-serotype biofilm formation by Shiga toxin-producing Escherichia coli O157:H7 and O26:H11 strains. Appl Environ Microbiol. 78:6341-6344.
However, Pomeranz, Lee, and Li maintain that the lower Yangzi River basin was not characterized by a Malthusian crisis. The Eurasian demography project is an approach to historical demography intended to provide a substantially more detailed understanding of the historical trajectory of demography in different parts of the Eurasian land mass: population size, nuptiality, fertility, mortality, etc. The authors find that their results cast doubt on the Malthusian conclusions and generalizations about positive and negative checks and Europe versus Asia. They find that family practices, demographic institutions, and economic settings vary sufficiently across the map of Eurasia as to make it impossible to arrive at grand differentiating statements about European and Asian demography or English and Chinese demography. In particular, they find that the evidence shows that Chinese demographic behavior resulted in fertility rates broadly comparable to those of Western Europe.

The behavior of agricultural productivity is crucial to this debate. How are we to attempt to resolve the disagreements involved in this debate? Since there is a substantial range of empirical disagreement between the two perspectives, it is logical to hope for some degree of resolution through more detailed factual and empirical research. Here the careful empirical work provided by Robert Allen and Bozhong Li is crucial to the debate. According to Li, the Chinese farm economy experienced steady labor productivity and rising land productivity, resulting in a level standard of living for rural workers and farmers. Finally, Li and Pomeranz observe that the paths of England and Jiangnan separated in the mid-eighteenth century, with sustained productivity increases in manufacturing and agriculture in England, and static or worsening productivity in Jiangnan.

Robert Allen contributes to the debate by assembling a detailed and historically rigorous framework for aggregating costs in historical farming systems (England and the lower Yangzi), and arriving at estimates of labor and land productivity, farm wage incomes, and farm family incomes (Allen). His farm model permits a consistent framework for estimating costs and outputs of Yangtze farming. His analysis supports detailed comparison of labor productivity in England and the lower Yangzi Delta, and his findings are two-fold. First, he finds that the overall level of farm labor productivity in the Yangzi Delta is a bit lower than that of England, but higher than in several other regions of Europe; and second, he finds that this level of labor productivity is roughly constant across the period he studies (Allen, table 5). There was significant change in the intensity of agriculture and fertilizer use (beancake); these changes led to rising output; and the cost of new inputs kept overall labor productivity roughly constant. And, most significantly, he finds that labor productivity was roughly unchanged through these two centuries, a finding that contradicts the expectations of the involution theory. Thus Allen finds that neither the involutionary nor the revolutionary model is adequate to the Chinese data. This supports the view that Chinese agriculture was neither leading to sustained per-capita growth, nor experiencing a long-term trend towards involution.
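To make the cost-aggregation idea concrete, here is a toy version of that kind of farm accounting in code. Every crop, price and input figure below is an invented placeholder, not one of Allen’s estimates; the sketch only shows how valuing output, netting out input costs, and dividing by labor days and land area yields the labor and land productivity measures discussed here.

```python
# Toy version of a farm accounting model: value total output, subtract
# input costs, and express the result per labor day and per unit of land.
# All figures are invented placeholders, not historical estimates.

farm = {
    "output": {             # crop: (quantity in kg, price per kg in silver g)
        "rice":   (3000, 0.05),
        "cotton": (200,  0.30),
    },
    "inputs": {             # input costs in grams of silver
        "seed": 15.0,
        "beancake fertilizer": 25.0,
        "tools and upkeep": 10.0,
    },
    "labor_days": 400,
    "land_mu": 10,          # farm size in mu (a Chinese land unit)
}

gross = sum(q * p for q, p in farm["output"].values())
net = gross - sum(farm["inputs"].values())

print(f"net output per labor day: {net / farm['labor_days']:.3f} g silver")
print(f"net output per mu of land: {net / farm['land_mu']:.2f} g silver")
```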
The central question here is: how did rural real wages compare in England and China? Allen’s farm model incorporates data on crops, prices, and labor expenditures for Yangzi and English Midlands farms, and he is able to calculate estimates for family incomes in the two settings. These data indicate that Yangzi family income fell during these centuries but remained slightly higher than rural English family income at the end of the period. And based on trends in English rural wages reported by Allen, we can infer that the Yangzi family income was measurably higher than its English counterpart at the start of the period.

His real wage index is based on a wage basket of staple food and clothing, for which there are very good price data in England and sporadic price data in China. He also provides a simpler index based on the price of a calorie of the basic foodstuff in each country. He then converts money wage data from several countries into a common real wage, and uses these estimates for England, India, Japan, and China to provide a quantitative answer to some of the most basic issues in the involution debate. This estimate is for a time period that falls within the period of dispute between Pomeranz and Huang, and it clearly favors the Pomeranz position.

Throughout his writings Robert Brenner attempts to make a causal argument about differences in the profile of economic development, based on the two kinds of differentiation noted here; he argues that high and low economic developers correspond to differences in social-property systems (Brenner). This is a simple causal argument with two foundations: first, an analysis of co-variation between outcomes and institutional settings, and second, an account of a possible social mechanism that shows why social-property systems of a certain sort should be expected to result in sustained economic growth. Brenner brings this perspective to bear in his contribution to the involution debate (Brenner and Isett). Among the factors he cites, implementation of technological innovation was rapid in England as a result of the incentives facing capitalist farmers. The result of this combination of factors is a steady increase in productivity in England, sustained improvement in the standard of living, and the gathering financial capacity of elites to invest in modernizing technologies in manufacturing. By contrast, Brenner characterizes China as witnessing erosion of the standard of living and a failure to introduce modern technologies and agricultural improvements; by inference, the explanation of this outcome is the less favorable institutional setting that Chinese society created for innovation and investment in agriculture.

Pomeranz takes issue with both aspects of this theory. He disputes the premise that Chinese agriculture failed to make progress in implementing new technologies of irrigation, cropping, and fertilizers. Instead, he argues that England shoots forward because of resources from the Americas, cotton and agriculture imports, extension of land in the Americas, and the exploitation of slave labor in the Americas.
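Before turning to the environmental side of the debate, it may help to make the basket-based real wage idea concrete. The sketch below computes a "welfare ratio" style measure (annual earnings divided by the annual cost of a subsistence basket); the basket contents, prices and wages are invented placeholders rather than Allen’s figures.

```python
# Sketch of a bare-bones consumption-basket real-wage ("welfare ratio")
# comparison in the spirit of Allen's method. Every number here is a
# placeholder for illustration, not a historical estimate.

# Hypothetical annual basket per adult: (quantity, unit price in grams of silver)
basket = {
    "staple grain (kg)": (180, 0.05),
    "beans/meat (kg)":   (25,  0.12),
    "cloth (m)":         (5,   0.60),
    "fuel (MBTU)":       (2,   1.50),
}

def basket_cost(items):
    """Annual cost of the basket in grams of silver."""
    return sum(qty * price for qty, price in items.values())

def welfare_ratio(annual_wage_silver, items):
    """Earnings relative to bare-bones subsistence: >1 means above subsistence."""
    return annual_wage_silver / basket_cost(items)

# Hypothetical annual earnings, in grams of silver.
for region, wage in [("English Midlands", 22.0), ("Yangzi Delta", 21.0)]:
    print(f"{region}: welfare ratio = {welfare_ratio(wage, basket):.2f}")
```

The design point is that the same physical basket is priced in each region, so the ratio compares purchasing power directly rather than nominal wages.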
Elvin introduces this concept as an alternative way of assessing the degree of intensity with which the Chinese farming system had developed in its use of labor and environmental resources; extremely high environmental pressure would imply something very similar to the high-level equilibrium trap he had hypothesized earlier in his writings (Elvin). Elvin observes that innovations in technology, or the discovery of new external sources of resources, can dramatically change the degree of pressure experienced by a given economy; so a new water control technology can potentially greatly reduce the costs of restoration of the water system at the end of the production period. That said, the judgment that a given environment is under severe environmental pressure appears to represent an alternative basis for arguing that this economy is undergoing involution.

Elvin then asks whether there is a basis for comparing China and Europe according to this measure (Elvin). He notes that the decisive empirical basis for establishing this conclusion is currently unavailable, but he argues that the evidence of contemporary observations and comparisons offered by Jesuit observers permits some preliminary conclusions. Significantly, Elvin counts the cost of hydraulic maintenance work as a large component of the renewal cost for resources; other large components include the intensity of Chinese farming and the need for annual labor to replace soil fertility because of the lack of fallow. Sustainability requires restoration of the production system to its initial level of productivity.

If producers choose not to invest the full amount needed for restoration, then the production system will have lower productivity in the next cycle, with the consequence, once again, of involution in the technical sense (declining labor productivity). But the connection is not always so tight. For, as Elvin notes, there are multiple ways of dealing with environmental pressure. As he emphasized in his earlier work on the high-level equilibrium trap (Elvin), innovations in technology and technique provide the means for pushing back the productivity frontier.

Consider briefly the treatment that Pomeranz provides of resources and environment. Pomeranz makes a great deal of the fact that European exploration and colonialism brought vast sources of natural resources under the control of European nations, including England. So it would appear that Elvin is providing a conceptual basis for a new line of criticism of the thesis that England and China were in comparable economic situations at the beginning of the modern era. This approach is worthy of further empirical and historical investigation.

It is now possible to delineate some areas of best judgment with respect to the primary disagreements involved in the involution debate. Thanks to detailed and rigorous empirical work by Bozhong Li and Robert Allen, the situation of agricultural productivity and the real wage in England and the Yangzi delta is somewhat clearer today than it was when this debate originated.
It appears reasonable to conclude with Robert Allen that the real wage for Yangzi peasants was roughly equal to that of English farm laborers in the seventeenth and eighteenth centuries. This finding supports Pomeranz and Lee in their assertion that conditions for ordinary people in England and China were roughly comparable. Second, it seems reasonable to conclude, on the basis of work by Bozhong Li and Robert Allen, that agricultural labor productivity was roughly comparable in these two regions as well. Third, the substantial progress that has been made in Chinese historical demography in the past decade effectively eliminates the crude Malthusian interpretation of Chinese population behavior. There was no unconstrained tendency towards population increase up to the carrying capacity of the land; instead, fertility rates and rates of population increase were essentially comparable to those of European populations. This finding too casts doubt on the involution hypothesis, since unrestrained population increase is the central causal mechanism that was hypothesized to push the process of involution.

The best evidence available today supports the summary conclusions rehearsed above; but it is also possible that subsequent research will call some of these specific findings into doubt. Here the most promising perspective is that of R. Bin Wong: instead of seeking grand single-factor contrasts, we need to attempt to identify the conjunction of circumstances in Western Europe and East Asia (environmental, international, political, demographic) that created the characteristic patterns of development in the two settings.

Let us turn now to a related debate that focuses on the status of the Chinese rural economy at the end of the Qing and into the Republican era. This debate raises some of the same issues, but in a later and shorter period of Chinese economic history: the transition from the final years of the Qing empire into the early decades of the Republican period. Many observers have regarded this period as one of agricultural stagnation, falling real rural incomes, worsening tenancy relations, and increasing rural inequalities. These unfavorable economic developments are often taken as preparing the ground for the successful peasant revolution in China: increasing rural misery gave peasants a strong motive to support a party that promised land reform and a program aimed at improving the lot of the rural poor.

In the 1980s several economic historians offered substantial criticism of this prevailing wisdom. Arguing from a neoclassical economic perspective, Thomas Rawski (Rawski), Ramon Myers (Myers), and Loren Brandt (Brandt) have argued that the early Republican economy was more dynamic and forward-moving than this interpretation would suggest. According to these historians, agricultural productivity was rising, rural incomes were improving, and labor markets permitted a degree of social opportunity to the rural poor. These are important and controversial claims; if sustained, they require a significant reevaluation of the state and direction of change of the Chinese rural economy in the early twentieth century. There was growth of output, but it occurred at essentially the rate of population increase, resulting in stagnant per capita incomes (Perkins). Perkins acknowledges that there was sustained growth in certain modern sectors.
The benefits of modern-sector growth would only be realized in living standard improvement in later decades. Perkins also makes an effort to assess the direction of change in land concentration, tenancy and income distribution during the period. He holds that tenancy rates remained approximately the same during the period, and he denies that there was an abrupt increase in tenancy or landlessness during the early twentieth century (Perkins). Feuerwerker maintains that agricultural techniques remained roughly unchanged throughout the period, with output increasing in pace with population growth through small increases in cultivated acreage (Feuerwerker). He takes it as certain that rural living standards did not improve throughout the period, but doubts that evidence exists to demonstrate a significant decline in living standards. Feuerwerker believes that tenancy rates probably did not increase in the early decades of the twentieth century, and he doubts that effective rent levels increased during the period. He thus adopts roughly the same view as Perkins: that output approximately kept pace with population increase, with the result that average rural welfare remained about constant.

Scholarship in the 1970s focused more attention on distributive issues in the rural economy: the status of tenancy, landlessness, wage labor, peasant welfare and rural inequalities. Mark Selden emphasizes the deterioration of living conditions in Shensi. He details the destructive effects of warlordism and famine in Shensi, and he argues that tenancy in Shensi increased substantially in these decades, accompanied by increasing landlessness (Selden). Likewise, Carl Riskin emphasizes the significance of income and land inequalities in the Chinese rural economy (Riskin). And Victor Lippit focuses attention on the disposition of the rural surplus: through rent, taxation and usurious interest rates the peasant was separated from the surplus available within the rural economy (Lippit). In short, the received view represents the Chinese rural economy as largely stagnant during the early Republican period, with living standards for peasants stagnant or falling.

One school of thought, the technological school, held that the chief obstacles to development were technological and demographic; population pressure on resources led to an economy in which there was very little economic surplus available for productive investment. The other theory was the distributional school, which held that the traditional Chinese economy generated substantial surpluses that could have funded economic development, but that the elite classes used those surpluses in unproductive ways.

Brandt and Rawski focus their work on Chinese economic development in the late Qing and early Republican periods. They disagree about some issues, but they agree in rejecting many features of the received view. Rawski argues that economic growth was significant and sustained in pre-war China, driven by modernization of transport, factory industry and commercial banking (Rawski). He estimates that agricultural growth averaged over 1 percent per year. This process of growth led to sustained increase in output and income per capita (Rawski), and this increase led to rising living standards.
He argues that there is good evidence of rising consumption of cotton cloth, which he takes to support the conclusion that living standards were rising (Rawski). Brandt holds that commercialization progressed rapidly during this period, bringing greater integration between domestic and international markets in rice, cotton, and other important commodities; and that commercialization in turn induced growth in agricultural output, improvement in the agricultural terms of trade, rising real incomes for farmers and laborers alike, and a probable overall reduction in the range of income inequalities in the countryside of central and eastern China. In fact, Brandt draws a parallel between the performance of the Chinese rural economy during this period of rapid commercialization and its performance during the period of the post-Mao rural reforms; in each case, he asserts, the gains were the result of greater market activity and specialization. Brandt uses his conclusions about real wages to argue that labor productivity increased between 40 and 60 percent during the time period (Brandt), suggesting that the rural economy was improving rather respectably during the period. And he argues that commercialization of the rural economy had the effect of significantly narrowing income inequalities in rural China (Brandt), by increasing the demand and opportunities for labor. Finally, he denies that land concentration was increasing during this period, arguing that the relative share of income flowing to the bottom of the income distribution (tenant farmers, small owner-farmers, landless workers, peddlers, handicraft workers) improved during this period relative to landlords (Brandt).

Here I will maintain that the evidence that Brandt puts forward, while suggestive, falls far short of clinching his case, and the interpretation of the early twentieth century rural economy as static or worsening continues to be more credible. Surveying rice price data for South China, Siam, Burma, India, and Saigon (the latter being the chief rice-exporting markets in Asia), he finds that there are high and rising price correlations between South China and each of the major exporting markets (Brandt). And he finds, further, that the interior Chinese economy showed similar integration with respect to rice prices. Without providing comparable detail from other locations, Brandt suggests that these results obtain as well in markets for cotton and wheat, supporting the contention that the Chinese rural economy was highly commercialized, reasonably competitive, and extensively integrated into the international economy. At the same time, this is the least novel portion of the argument; few would disagree with the conclusion that the Chinese rural economy was price-responsive and competitive in the period in question. And the well-documented shock to the Chinese economy produced by the Great Depression, through its disruption of cotton prices, would be unintelligible except on the assumption that Chinese cotton markets were integrated with international prices. So this line of thought is reasonably well grounded, but does not provide much support for the view that conditions in the countryside were improving. Is this a credible conclusion?
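Brandt’s integration evidence is, at bottom, a correlation exercise on commodity price series. The sketch below shows that style of test on simulated data; the series, the twenty-year monthly structure and the use of correlations on log price changes are illustrative assumptions, not Brandt’s actual procedure.

```python
# Sketch of a market-integration test in the style of rice-price
# correlation studies. The "price data" here are simulated; a real
# test would use historical monthly price series for each market.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = 240  # twenty years of monthly observations

# A common regional shock plus market-specific noise: integrated markets
# share the common component, so their price changes correlate highly.
common = rng.normal(0, 0.03, months)
prices = pd.DataFrame({
    "South China": np.exp(np.cumsum(common + rng.normal(0, 0.01, months))),
    "Saigon":      np.exp(np.cumsum(common + rng.normal(0, 0.01, months))),
    "Rangoon":     np.exp(np.cumsum(common + rng.normal(0, 0.02, months))),
})

# Correlate month-to-month log price changes rather than price levels,
# to avoid spurious correlation from shared long-run trends.
log_changes = np.log(prices).diff().dropna()
print(log_changes.corr().round(2))
```

Correlating changes rather than levels matters here: two markets with unrelated but trending prices can show a spuriously high correlation in levels.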
Music psychology is a branch of musicology and psychology. These branches help to understand and explain the musical experience in relation to musical behavior. The basic interpretations from the primary sources support a systems observation approach and the study of how such systems interact in and with humans. This research paper seeks to explore the questions and issues that surround musical activities and experiences. Scientific research of this kind thus engages human nature, human evolution, human identities and human values.

Many things come in through a human being's senses. However, this does not mean that there should be difficulty in focusing attention on one stimulus and ignoring another, because human beings have an exceptional ability to screen out unwanted stimuli. Cognitive psychology is a fascinating sub-discipline that explores mental processes and tries to understand how people think, remember, perceive, solve problems and speak. This branch of psychology is also related to other disciplines such as linguistics and neuroscience. It focuses mainly on how human beings acquire, store and process information. Research in the field has many practical applications, which focus on enhancing learning, increasing decision-making accuracy, improving memory and structuring the educational process (David, 1996).

Historically, the behaviorist school of thought was dominant in psychology; after the cognitive revolution, the field came to focus on attention, memory and problem-solving. It is significant to note that behaviorism focuses on observable behaviors while cognitive psychology focuses on the internal mental states of human beings. The vital question is, "why is it necessary to study cognitive psychology?" Cognitive psychology is a branch of psychology that informs many fields. It is studied by teachers, educators, scientists, architects, artists, designers and many other professionals. It revolves around the notion that understanding the mind's internal processes is the key to understanding what people are made of, or what makes people tick. It applies a nomothetic approach while also adopting ideographic techniques in case studies. However, the earlier approach centered on external behavior was not received favorably; concerns were raised about its emphasis on observable behavior alone, and many people were dissatisfied with it. The approach was therefore revised and improved. In recent times, the cognitive approach is the most popular and most effective framework used in the study of the mind's internal processes (Eslie, 1995).

What is attention? Attention is the ability to concentrate, to focus on a task, and to allocate processing resources. There are three different types of attention: selective attention, divided attention and automatic attention. Selective attention concerns what happens when a person tries to attend to two different things at the same time and must select one task to attend to. Divided attention, on the other hand, concerns the difficulty of trying to do two different tasks at the same time. In the neurosciences and philosophy, the question of how separate features are combined into a unified experience is known as the binding problem, and it will be useful at several points in this discussion.
The process by which the brain segregates and binds elements is not only far from unique but also extremely complex. The first problem faced in this type of questioning is the neural mechanism that distributes most of the activity across the central nervous system. Visual perception is a process that separates information and sends it to many different regions of the brain, which helps to process color, motion and shape. In view of the above, this synthesis features the synchronization of different neurons found in the cortex (Dale, 2008). Memory also depends largely on the binding problem: it is associated with many different elements that help to create and maintain associations in the brain. The binding problem also applies to the unity of consciousness; in other words, the brain is supported by limited domains that can access sensitive cells.

Most research carried out in music psychology has shown that music performers and listeners respond emotionally during and after listening to music. This has been attributed to the fact that, after a person experiences an emotion related to music, the music emotions become recognizable. This simply means that these emotions are a behavioral change towards a specific stimulus. The listener's expression in the course of the music determines music emotions and experience. According to one researcher, people respond to music because they have many different musical responses (Waterman, 1992).

While choosing the best method to use in the research study, one should consider cognitive neuroscience and contemporary cognitive psychology. The methods chosen should focus on auditory attention, meaning that each method should assume a limited capacity of attention. The first method should try to understand human behavior and how music is processed: how humans interpret, remember, perceive and sense music. The method should be based on cross-cultural and developmental perspectives on music, performance anxiety, music therapy and neurological aspects. The selective method should use Broadbent's filter model of selective attention. It is necessary to note that Broadbent's method grows out of the information-processing approach and contributes vital insights to it. Broadbent argues that information presented at any time enters a sensory buffer, and that a filter prevents an overload in the processing of the information. This method entails examining how people listening to a particular type of music manage this information load and how such a signal is read in the information process. It also uses repetition: people make fewer mistakes if the music is repeated back.

The other effective method to use is the dichotic listening task (Eslie, 1995). This method tries to determine why it is difficult for humans to listen to two different channels at the same time. Human beings can only listen to one type of music at a given time; the music from the unattended ear is lost, since it is held only in extremely short-term memory. In this particular method, one type of music is introduced to each ear.
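As a concrete illustration of the dichotic listening logic, here is a toy simulation. The trial structure, the six-letter stimulus set and the all-or-nothing "filter" rule are assumptions standing in for Broadbent's account, not a model fitted to any real experiment.

```python
# Toy simulation of a dichotic listening task under a Broadbent-style
# filter: only the attended channel passes into further processing,
# so recall of the unattended channel is near chance. All numbers
# are illustrative assumptions.
import random

random.seed(1)

def run_trial(attended: str) -> dict:
    """Present one item per ear; the filter passes only the attended ear."""
    items = {"left": random.choice("ABCDEF"), "right": random.choice("ABCDEF")}
    report = {}
    for ear, item in items.items():
        if ear == attended:
            report[ear] = item                     # filtered in: reported correctly
        else:
            report[ear] = random.choice("ABCDEF")  # filtered out: a guess
    return {"presented": items, "reported": report}

def accuracy(trials: int, ear: str, attended: str = "left") -> float:
    hits = 0
    for _ in range(trials):
        t = run_trial(attended)
        hits += t["presented"][ear] == t["reported"][ear]
    return hits / trials

print("attended ear accuracy:  ", accuracy(10_000, "left"))   # ~1.0
print("unattended ear accuracy:", accuracy(10_000, "right"))  # ~1/6 (chance)
```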
It is necessary to note that this method takes into consideration the fact that there is a limited capacity for processing this information, or the music. Many disciplines and mutual aspects help in the research methodologies of music psychology. Another method involves identifying a single emotion while the listener is listening to different types of music; emotions such as joy, pain and sadness will be recognized (Gabrielson, 2001). In this case, a questionnaire is used for the experiment. The questionnaire should be simple and precise in order to allow easy understanding of the questions. It should also be addressed to a large group of people and should not be limited to a specific group.

The research method should also explore the relationship between music psychology and music cognition. This means that cognitive arrays can be transformed into multi-track musical sequences. While constructing the questionnaire, the researchers must consider the fact that questions in music psychology are extremely difficult to answer; they should therefore put quality control procedures in place in order to get expert review (Patel, 2010). The research study must cover the perception of sound patterns, perception of musical sounds, music memory, absolute pitch, musical gatherings and rituals, the processes of learning musical instruments, emotional and musical behavior, the personal role of music, social influences on music, rhythm, tone, harmony, phrasing and meter in the music, and the psychological processes involved in musical performance.

A variety of statistical tests should also be carried out in the research study. This will help to understand and explain the interlocking sets of music in relation to music psychology. The research method should draw on approaches such as computational modeling, which can reflect many of the properties present in human cognition. In this case, a specific musical task should be used in order to determine actual relations between the theories used and the statistical tests involved (Patel, 2010).

Results from the Broadbent filter model showed that this method was more effective when operated as a test of selective attention: human beings can only pay attention to one specific song, or type of song, at a time. The results from the music psychology research method led to the conclusion that musical elements are related to emotional experiences depending on the type of music. However, several methodological problems were observed. Because emotions last only for a short duration of time, the questionnaires do not provide enough insight to help the researchers understand other, more complex emotions experienced while listening to the music (Sun, 2008).

The results show that the musical mind can perform, listen to and compose music, and that the mind is linked to body language and bodily responses to music. The results show that music therapy lowers the heart rate, anxiety, and diastolic and systolic blood pressure. It was also discovered that music therapy helps to improve the quality of life among certain groups of people, and is associated with improving stride symmetry, stride length and gait velocity.
The study also showed that understanding a person's music choice is critical to understanding that person's responses to music. Most of the people had no control over the music played, and thus the effects were related to the particular type of music played. The challenge, however, comes from controlling these trials in order to understand the effects on people from different environments and medical settings (Eysenck, 1990). This would significantly help to optimize the use of music, relieve anxiety and increase comfort in the people involved. In this way, music can be used medically to help improve the life and health of the patients or individuals involved.

The main aim of conducting this research is to study and understand the sound and verbal domain, a particularly contemporary field, and how it works. In this sense, the cognitive framework represents specific properties of the abstract musical sound environment: the sensory information, or frequency, is ultimately transformed into a pitch, and the input-output relationship represents the transformation performed by the algorithms used. This research also explores the biology of the human brain in order to provide a functional base for neuropsychologically oriented work. As sound is produced from different sources, the behavior can change or remain constant through time. However, using a limited number of cues can give conflicting evidence or reinforcement. This means that the sound sources present in the environment depend on the acoustic information available; the type of music or the arrangement of notes can significantly affect the perceptual results (Honning, 2011). Once the music is derived from the perceptual source, music perception begins. These attributes or sources help to activate a knowledge structure that represents long-term memory. For example, if a song begins from a particular pitch and in a certain key, then the future information in the listener's mind will conform to that specific key. It is necessary to note that a person's ability also limits the long-term effects of the music on particular persons or groups.

Music features rich melodic and rhythmic structures, as does spoken language. Language and music largely depend on the functions of the brain. The relationship between speech intonation perception and musical deafness (tone deafness) is among the behaviors illuminated by work on neural foundations and cognitive domains. In this case, perception is not passive registration but an active interpretation that involves a constructive process (Sun, 2008). In other words, the brain has a remarkable ability to support music and rhythm perception, drawing on auditory systems, motor systems and other beat-perception mechanisms. Understanding a musical beat is a phenomenon that illustrates both the mechanisms of the brain and human culture.

Temporal Dynamics of Brain Activity

To fully understand the temporal details of the brain, one should understand how the brain responds to sound. Frequency tagging is a method used to study how the activity of the brain evolves over time; it also helps in understanding auditory processes and the brain mechanisms behind them. A stimulus affects brain activity in a significant way.
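Since frequency tagging comes down to reading out spectral power at the stimulation frequency, a minimal version of that readout can be sketched in a few lines. The 6 Hz tag, sampling rate, recording length and noise level below are invented for illustration and are not taken from any study cited here.

```python
# Sketch of a frequency-tagging readout: present a stimulus modulated at
# a known rate, then look for elevated spectral power at that frequency
# in the recorded signal. The "EEG" below is synthetic.
import numpy as np

fs = 250.0          # sampling rate in Hz (assumed)
tag_freq = 6.0      # stimulation ("tag") frequency in Hz (assumed)
t = np.arange(0, 60, 1 / fs)  # one minute of recording

rng = np.random.default_rng(42)
signal = 0.5 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(0, 1.0, t.size)

# Power spectrum via FFT of the full recording.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Compare power at the tag frequency with the average of nearby bins.
tag_bin = np.argmin(np.abs(freqs - tag_freq))
neighbors = np.r_[tag_bin - 5:tag_bin - 1, tag_bin + 2:tag_bin + 6]
snr = spectrum[tag_bin] / spectrum[neighbors].mean()
print(f"power at {freqs[tag_bin]:.2f} Hz is {snr:.1f}x the neighboring bins")
```

Comparing the tagged bin with its neighbors, rather than with the whole spectrum, is a common way to control for the broadband background in this kind of analysis.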
This is because the brain activity related to the stimulus becomes temporally correlated with it (Patel, 2010).

Music and Body Language

Music is a part of culture that has survived for many generations, from historical times to date. Music not only has an effect on society but also on modern life, and the body generally responds to it. The many research studies carried out on this topic have shown that music is a unique form of thought related to the physical, spiritual and emotional aspects of the world. This means that a person's mood can easily be changed by music. Music can also cause simultaneous physical responses in human beings. Others have perceived music as a way to weaken or strengthen a person's emotion depending on the circumstances or environment, for example a wedding or a funeral. However, this does not mean that a group of people will all feel the same emotions after hearing a particular piece of music (Broadbent, 1958), because some types of music may be too advanced for some people to understand or follow. It has also been shown that classical music relaxes the pulse and heartbeat, so that the body becomes both alert and relaxed. Music has many functions apart from relaxation: it also decreases blood pressure, affects the breathing rate, changes the heart rate and enhances a person's ability to learn.

Music's Health Effects

The health effects of music are related to the power of music. As mentioned before, music can lower blood pressure and slow the breathing rate, among many other functions; in this way, a person is able to live a healthier life. Different types of music from different classical periods act on the brain in distinct ways. The brain can respond to changes and repetition, contrasts of mood and pitch, and patterns of rhythm. Playing two different musical rhythms at the same time, or changing the musical theme, leads the brain to respond in different ways. However, the mind can shut down if a particular song or rhythm is repeated more than three times; repetition can also cause a person to surrender to their emotions (Jourdian, 1997).
This study finds that girls were "invisible" to many professionals and to policy-makers and that initiatives and resources to address school exclusion are largely targeted at boys. The report takes a fresh look at school exclusion, focusing in particular on the experiences and accounts of the girls themselves, as well as those of the teachers and other professionals who work with them. It looks at school exclusion in its broadest sense, addressing disciplinary exclusion, truancy, withdrawal and self-exclusion (while girls are actually present in class). The numbers of pupils excluded from school have been steadily increasing over recent years. Attention has focused on boys, who form the vast majority of those formally excluded. This study, carried out by the New Policy Institute and the Centre for Citizenship Studies in Education, University of Leicester, examines girls' perceptions of school life and of the use of exclusion in its various forms, both official and unofficial. Interviews with girls and a wide range of professionals revealed a complex picture of concerns. The research found that:
- Girls are generally not a priority in schools' thinking about behaviour management and school exclusion. Even when concerns were recognised, they were often over-shadowed by the difficulties of managing the much greater numbers of boys.
- The 'invisibility' of girls' difficulties has serious consequences for their ability to get help. Since the problem is seen as so small compared with boys, resources are targeted at the latter. However, the nature of the support on offer to girls and their own responses when in difficulty can also lead to them not receiving help.
- The nature of help on offer assumes that provision is equally available for both boys and girls. However, many girls are unwilling to take up current forms of support and many providers do not refer girls because they believe provision is inappropriate for girls.
- Identification of girls' needs and the subsequent provision of services are compartmentalised. This applies particularly to girls who are pregnant or who have other health or childcare needs. Poor co-ordination of services can leave girls at risk of no one assuming responsibility for their support.
- 'Self-exclusion' and internal exclusion (for example, truancy or being removed from class) appear to be widespread.
- Many girls interviewed felt that schools use exclusion inconsistently, with clear differences between what teachers classed as acceptable behaviour from boys and girls. Professionals also reported differences in the way boys and girls are disciplined.
- Bullying is a serious problem and appears to be a significant factor in girls' decisions to self-exclude. However, bullying amongst girls is not easily recognised and there is often an institutional failure to tackle bullying among girls effectively.
Nationally girls comprise just 17 per cent of permanent exclusions. As a consequence, girls have been largely overlooked in school exclusion prevention strategies and research. Yet in 1998/99 around 1,800 girls were permanently excluded from school. These recorded permanent exclusions are a small proportion of the total number of girls excluded. Many more girls are excluded either informally or for a fixed period. There has been little research focusing on the experiences and specific needs of girls in relation to their disaffection with education.
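As a rough check on scale, the two figures quoted above imply a national total: if about 1,800 permanently excluded girls represent roughly 17 per cent of all permanent exclusions, the overall figure is on the order of 10,500 pupils. The snippet below only makes that arithmetic explicit; the derived totals are inferences from the report's quoted figures, not numbers the study itself reports.

```python
# Back-of-envelope implication of the two figures quoted above.
girls_excluded = 1800      # permanent exclusions of girls, 1998/99
girls_share = 0.17         # girls' share of all permanent exclusions

total_exclusions = girls_excluded / girls_share
boys_excluded = total_exclusions - girls_excluded
print(f"implied total permanent exclusions: ~{total_exclusions:,.0f}")
print(f"implied permanent exclusions of boys: ~{boys_excluded:,.0f}")
```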
There are three additional reasons for the focus on girls: - There is growing evidence of unofficial and informal exclusions and girls appear more vulnerable to these types of exclusion than boys. Unofficial exclusions remain largely hidden and are absent from official statistics. As a consequence, policy fails to address the problem and few resources are allocated to it. - There is concern that current exclusion prevention and support strategies do not recognise the particular emotional and developmental needs of girls. Girls' needs, their experiences in school and their aspirations for the future may differ quite significantly from those of their male peers and may result in different behaviour and problems. - A number of experiences affect girls disproportionately or exclusively and may adversely affect their ability to attend and achieve in school, placing them at greater risk of exclusion. These include pregnancy and caring responsibilities. Girls' experiences of school Many girls perceived a lack of consistency in relation to formal exclusion. They also suggested that pupils' gender influenced teachers' management of behaviour and the choice of sanctions, including exclusion. They thought that varying rates of exclusion between schools were more to do with how the school managed and supported students than with the students themselves. The girls also perceived gender differences in responses to authority and experiences of bullying. Girls perceived boys to be more frequently subject to disciplinary sanctions because they tend to present a more direct challenge to authority by engaging in forms of behaviour that are more difficult to ignore in the school setting, such as fighting and overtly physically or verbally aggressive behaviours. Girls' friendships with each other were a source of support but also a potential source of tension and conflict that sometimes hindered learning or resulted in non-attendance. There was evidence that schools have greater difficulty addressing the psychological bullying that is more typically engaged in by girls. "I was bullied at this school for three years ... and the teachers ... I did go to them and my parents as well ... and, like, it helped a bit, but they couldn't suspend her or nothing because she hadn't physically touched me but to me, it wasn't about what she was doing physically ... she was just destroying me mentally." (Nina, Year 11, mainstream school, on fixed-term exclusion) Girls reported a range of ways in which they coped with difficulties at school but not all of these were helpful. The use of avoidance strategies, such as feigning illness and truancy, is problematic. Clearly, and as the girls themselves appeared to recognise, the long-term consequences of the resulting loss of education are likely to outweigh any immediate benefits. The girls' accounts suggest that they value education and that they do not want to miss out through disaffection and self-exclusion. The evidence suggests that the official statistics concerning girls' absences from school underestimate the extent of truancy. A considerable number of the girls who were interviewed reported truancy which was unknown to the school. "Sometimes, I would go in and get my mark so I'd get a full attendance but after I got my mark I'd go home and I'd come back at lunchtime and get my mark and go back home." 
(Nadine, Year 11, pupil referral unit, permanently excluded pupil) In combination with evidence of the widespread use of 'internal exclusion' (exclusion from particular classes or subjects), these findings suggest that the needs of a significant number of girls are not being adequately met within current systems. Girls are not seen as a priority in schools' thinking about behaviour management and exclusion. Throughout the study, a typical response was that girls were 'not a problem'. Such a viewpoint was also evident in many Local Education Authorities (LEAs). Only by exploring a little deeper did widespread concerns begin to emerge. Professionals suggested that girls' greater adaptability to the academic routines of school, conscious use of social skills and different teacher perceptions of similar behaviour based on gender, contributed to the lower permanent exclusion rates of girls and the view that girls are 'not a problem'. The link between criminality and boys' exclusion from school, as well as the widespread perception that girls are doing well academically in school in comparison to boys, may also be contributing. Whilst the research identified a diverse range of strategies to keep pupils in education, including greater use of further education, provision is largely dominated by boys. As a consequence, not only do many girls feel unwilling to take up the help on offer but many providers do not refer girls since they feel that the provision will be inappropriate for girls. This results in further male over-representation and makes it even more unlikely that girls will get support. "I think the biggest issue for girls in our centres is that they are largely male environments. If we didn't have our school refusers who are predominantly girls, we would have some centres where it was almost all boys." (Member of behaviour support team) Girls' needs and difficulties are often less visible and more likely to be overlooked than those of their male classmates. Faced with a range of competing pressures, many teachers focus their attention on those whose needs are overt and who present an immediate challenge in the classroom. Girls experiencing difficulties are less likely to engage in behaviour that attracts the attention of school authorities and support systems. Internalised responses such as anxiety, depression, eating disorders, and self-harming behaviour can be overlooked or assumed to relate to problems beyond, rather than within, school. Physical and emotional withdrawal is also less likely to be responded to immediately. "The difficulties faced by girls are due to them not acting out that much. ... They are not 'in your face'... They are quieter, they tend to stop attending and they often disengage from school... they may only come to attention if they turn to bullying." (Deputy head, mainstream school) The pressure of teaching and administration duties as well as the complexity of some difficulties means that, even when they do recognise that a girl is in difficulties, teachers are often not sure of the best way of supporting her. When a student is referred to other agencies, these agencies may only respond to an aspect of the problem, thereby compartmentalising it. Professionals recognise the importance of inter-agency work in tackling school exclusion and the wider problem of social exclusion, but are still encountering some challenges in this way of working. Service providers identified a number of factors that limit the chances of some girls succeeding at school. 
These included limited access to educational alternatives, a lack of Emotional and Behavioural Difficulty (EBD) provision for girls, parentally condoned absences, low aspirations, pregnancy, subtle forms of bullying, caring responsibilities and sexual exploitation, for example, pressure to become involved in escort agencies and prostitution. Many of these manifest themselves in non-attendance and the issue of self-exclusion and girls' 'opting out' was identified as a particular concern. Professionals also suggested that difficulties are occurring at an earlier age than in the past. Professionals and girls themselves generally identified similar problems. Nevertheless, service providers did not recognise girls' concerns about bullying, and the links they made between bullying and exclusion from school, as being particularly significant. There appears to be relatively little consideration of how school and LEA pastoral support systems are meeting the specific needs of girls. A recurring theme throughout the research was the way in which girls' needs are overlooked. While in principle girls and boys have equal access to pastoral support and educational alternatives, resources for disaffected pupils are largely directed towards boys. This is explained partially in terms of the less visible nature of some girls' problems, but also reflects how girls manage problems, some of which may go unnoticed within schools. It may be difficult to detect the stressful circumstances (for example, peer relationship difficulties) but it is also more difficult to detect that a student has withdrawn from learning. Support for vulnerable girls will help avoid school exclusion which often leads to subsequent social exclusion. This will require both a broadening of our views of exclusion to incorporate a wider range of factors that effectively exclude a pupil from learning and full participation in school life and also a commitment to keeping girls' needs on the education policy agenda. About the project This was an in-depth largely qualitative study focusing on six areas in England. The research involved: - Focus group and individual interviews with 81 girls of secondary school age drawn from schools and colleges in three LEAs and three Education Action Zones (EAZs). They included girls who were not causing concern in school as well as those who were at risk of exclusion and those who had experienced exclusion in the past. The sample included girls looked after by local authorities and girls from minority ethnic communities. Ten parents were also interviewed. - Face-to-face interviews with fifty-five service providers across the six areas. These included school, LEA and EAZ personnel and staff working within health, social services and voluntary sector agencies. Information was also sought from a range of alternative education providers including FE colleges, education facilities for pregnant schoolgirls and teenage mothers, and a number of special projects. - A review of relevant research and literature from government, academics and voluntary organisations working in this area and analysis of documents such as EAZ Action Plans, Education Development Plans, Behaviour Support Plans and Social Services Children's Services Plans. Girls who are not attending school as a result of pregnancy, caring duties or other reasons, were included in this study, whether or not they are recorded as truants. 
The underlying rationale is that individual students are not simply in one of two camps, that is to say, either excluded or included. Exclusion and inclusion need to be seen as part of a continuum, and an individual may move along that continuum at different points in her school life.
Futility in present days has been invoked as an idea to guide physicians in avoiding the provision of unsuitable care. The idea is value-laden and complex: it deals with several different concerns that are often confused, exploiting a word that is ambiguous or else misleading (Anabaptist.org). Historically, a considerable number of surrogate decision-makers have insisted that health care providers use medical technology to extend a patient's life to a length considered medically suitable. These decision-makers would like to continue life-sustaining treatment that providers want to end. While these futility arguments are usually resolved informally in hospitals, more have recently been taken to United States courts, and the judicial treatment of these arguments will cast a long, dark shadow over all the other, informal decisions (Larue, 1999).

Internationalisation has spread modern medicine, ethical values and public health. Attempts to extend the life of extremely ill patients frequently create an extended and painful dying (Bruckman, 2009). These new problems surround euthanasia, the situation in which a health care professional intentionally takes steps to terminate a patient's life. Physician-assisted suicide is related, although in this situation the physician offers the means by which the patient can terminate their life, and the patient performs the final act. In spite of opposition from religions, the right-to-die movement has emerged globally. Ethical standards should be created to cope with informed consent and choice, patient autonomy, quality and sacredness of life, treatment futility, and patients' rights (Larue, 1999). The following sections show different perspectives on euthanasia and their implications.

Christians believe that God gave man the gift of life and that He is the only one who can sustain or terminate that life. To a Christian, simple biological existence is less than life; in addition, a human being has an everlasting soul residing in the mortal body, so that death of the mortal body sets the soul free to its unending destiny (Anabaptist.org, 1999). Because life is by God's sanction, Christians consider that termination of life ought to be within His design. According to the Christian holy book (The Bible: Exodus 20:13 and 1 John 3:15), "it is morally wrong for one person to take the life of another." On this view, society is becoming morally careless in its regard for the sacredness of life. Through the legalisation of abortion, the resulting low view of life is entering other areas of human life as well. Concepts of the value and quality of life such as "dying with dignity," "the right to die," and others are spreading through society's philosophy (Anabaptist.org, 1999).

Euthanasia is one of these concepts. It is the practice of intentionally relieving into death a person who is suffering pain, is handicapped, or has an incurable disease. The demand for "mercy killing" may be made voluntarily by the person who is suffering or by the person legally responsible for them. On this view, either way is morally unacceptable, the former being equivalent to suicide and the latter to murder (Luna, 2006). An interconnected aspect is passive euthanasia, or denying life-giving nourishment, for example letting a newborn with a congenital defect go hungry or withholding reasonable life support from an extremely ill patient (Anabaptist.org, 1999).
From the approval of "mercy killing" in medical practice, it is a short step to rationalising the elimination of financially or socially burdensome persons in order to be relieved of the duty of caring for them. Death with such intent is murder and must be an unthinkable option for a morally guided society or individual (Cirone, 2011). Besides the moral implications of futility there are also social implications to consider, such as disrespect for the elderly, the sick, the handicapped, and above all for life itself. Furthermore, the health care provided to such people would without doubt deteriorate. Society would therefore degenerate into "survival of the fittest," in which the strongest are counted the most valuable (Anabaptist.org, 1999). On this view the nature of life is far too sacred to be handed over to erratic human control; it ought to be left in God's hands, with the accompanying acceptance that all things done by God are done perfectly. According to 1 Corinthians 6:19-20, "Or do you not know that your body is the temple of the Holy Spirit who is in you, whom you have from God, and you are not your own? For you were bought at a price; therefore glorify God in your body and in your spirit, which are God's."

Supporters of physician-assisted suicide and euthanasia argue that extremely ill patients should be given the right to end their pain with a dignified, compassionate, and quick death. They also contend that the right to die is protected by the same constitutional guarantees that secure rights such as procreation, marriage, and the termination or refusal of life-preserving treatment (Procon.org, 2011). On the other hand, opponents of physician-hastened death and euthanasia argue that physicians have an ethical responsibility, expressed in the Hippocratic Oath, to keep their patients alive. They contend that there is a "slippery slope" from euthanasia to killing (murder), and that legalising euthanasia would unjustly target the disabled and the poor and create incentives for insurance companies to end lives in order to save money.

There are various versions of utilitarianism, and they differ on a number of features of euthanasia. Under act-utilitarianism, a type of consequentialism, the correct action is the one among the alternatives open to the agent whose outcomes are superior to those of the other actions. Under rule-utilitarianism, the correct action is the one that complies with rules which, if followed in the appropriate situations, produce the best outcomes. When the question is whether to change the law to allow voluntary euthanasia, rather than whether to make a personal decision to assist an individual to die, this distinction matters little: both act and rule judgments turn on whether altering the law would lead to better consequences than leaving it unchanged.

Medical ethics is a system of moral principles that applies values and judgments to the practice of medicine. As a scholarly discipline, medical ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, theology, and sociology.

Values in medical ethics

Six values are commonly applied in medical ethics: autonomy, non-maleficence, beneficence, justice, dignity, and truthfulness (honesty). These values do not give an answer on how to handle a particular situation, but they provide a useful framework for understanding conflicts (Ryan, 2010).
When moral values are in conflict, the result may be an ethical dilemma or crisis, and at times no good solution to a dilemma in medical ethics exists. Sometimes the values of the medical community, that is, the hospital and its staff, conflict with the values of the individual patient, the family, or the larger non-medical community; conflict can also arise between health care providers or among family members (Lakhan, Hamlat, McNamee, & Laird, 2009).

During the last decade, the debate about legalising euthanasia has grown in many developed countries such as France, and the medical journals have reflected this: surveys have assessed doctors' attitudes towards euthanasia, and bioethicists have discussed the pros and cons. Supporters of legalisation argue that euthanasia is a continuation of palliative care and that doctors must respect patients' autonomy, including a wish to die (Lakhan, 2009). A further argument suggests that cultural differences shape opinions about euthanasia, because the emphasis on autonomy is greater in English-speaking countries than in other developed countries (Bruckman, 2009). In 2002, the Regional Center for Disease Control of South-Eastern France and the National Institute for Medical Research conducted a telephone survey of a sample of doctors stratified by specialty, selecting general practitioners, oncologists, and neurologists randomly from all French doctors kept on file by the National Health Insurance Fund. The survey investigated respondents' involvement in end-of-life care and palliative care, their attitudes to and communication with terminally ill patients, and their opinion on whether euthanasia should be legalised as in the Netherlands (Kimmelman, Weijer, & Meslin, 2009).

Is euthanasia ethical or unethical? This is an individual decision, to be made by the patient and the family while the patient is still in good condition. A will or "living will" is useful in a situation where you are unable to speak for yourself (Kimmelman, 2009). Although individuals frequently judge euthanasia from various perspectives and thus arrive at varying conclusions, they return again and again to reliable themes to ground their arguments (Khushf, 2004). The difference between what will happen now and what would happen if euthanasia were legalised, the need to account for realistic consequences, and the impossibility of ignoring the specific circumstances of each case are all real and relevant concerns of everyday life (Anabaptist.org).

The growth of medicine has changed the subject of death. In reality, the means that may be used to extend the life of a patient are practically infinite these days, and this raises issues that health care professionals should not ignore. Euthanasia, withdrawal of excessive treatment, and assisted suicide are entirely different acts, yet encountering such circumstances produces a confusion harmful to any reflection. It is very important to move past the quarrel and to uncover the legal, ethical, and medical explanations that allow the key challenges to be confronted (Cirone, 2011).
“Any time not spent drinking Port is a waste of time.” (Percy Croft)

Port is regarded historically as a British drink, especially as a traditional British Christmas drink. Surprisingly, for a country that has embraced drinking Port since the 17th century, most English households buy just one bottle of Port a year, mainly at Christmas! Do you see the French or Italians buying just one bottle of wine a YEAR??? Or Russians buying just one bottle of vodka per annum? Still, most Brits drink Port as a festive drink in a festive season: Christmas.

Christmas, or “Christ’s Mass,” is actually Jesus Christ’s birthday. It is a Christian holiday, celebrated in the old Jewish tradition of marking an occasion or holiday on the eve of the day (from sunset to sunset). As the “official” birthday falls on December 25th, millions of people around the world celebrate evening and midnight masses on a very important date on the Christian calendar, Christmas Eve, December 24th. The reason celebrations occur on Christmas Eve is that the traditional Christian liturgical day starts at sunset, an inheritance from Jewish tradition based on the story of Creation in the Book of Genesis: “And there was evening, and there was morning – the first day.” (Genesis 1:3-5) – “ויהי ערב ויהי בוקר יום אחד”

Christian religious, old pagan, and secular themes are mixed in this holiday, which falls near or on the days of the winter solstice. The ancient Egyptians decorated palm branches during the winter solstice to symbolize resurrection. Ancient Greeks decorated evergreen trees in worship of Adonia, who was resurrected by a serpent. Romans covered their trees with metallic decorations and candles to honor Bacchus (our wine god), and the ancient Germans decorated evergreen trees in worship of Woden.

Apart from the Christmas tree and the giving of presents (of course), the holiday includes a special meal. Christmas dinner is the primary meal, traditionally eaten on Christmas Eve or on Christmas Day at lunch. Christmas dinners around the world may differ: local traditions affect the content, the traditional products used, and even the colours of the dishes, which usually serve as symbols commemorating aspects of life from religious and cultural to local and even climatic points of view.

In the United Kingdom the main Christmas meal is usually eaten as lunch on December 25th. The dinner usually consists of roast turkey or, better still, goose, duck, pheasant, or other roasts; in medieval England the main course was either a peacock or a boar, and in fact King Henry VIII was the first English king to have turkey for Christmas. It is served with stuffing, gravy, and cranberry sauce or redcurrant jelly, plus bread sauce. The side vegetables are traditionally Brussels sprouts, parsnips, and carrots (as if there was any other choice at the time the tradition was laid down, and throughout the centuries up to the late 1980s). The dessert is Christmas pudding (or plum pudding), sometimes mince pies or trifle, with brandy butter. Port of different qualities may be served with each of the traditional dishes, including the roasts, but the best and most classic combination is Vintage Port served with a good-quality Stilton or other blue cheese, even a well-matured Cheddar, or Parmesan, accompanied by nuts, mainly walnuts, chestnuts, cashews, and hazelnuts, which bring out the best in Port.

One story regarding the appearance of Port is that a wine merchant from Liverpool sent his sons to Portugal in 1678 to find a source of wine, owing to a shortage of Bordeaux wines (clarets).
They came upon a monastery in Lamego in the Douro Valley where the abbot was adding brandy to the wine during fermentation rather than after, producing a Port-type wine with higher sugar and alcohol content. It is a nice legend, set one hundred years after the first recorded shipment of Port in 1588, and so it remains a legend; but certainly sometime towards the end of the 1600s or the beginning of the 1700s, someone came up with the idea of stopping the fermentation with brandy while the wine was still sweet, fruity, and strong, a wine of which the English aristocracy could not have enough.

British wine merchants moved to Vila Nova de Gaia, a suburb that lies opposite the town of Oporto, and made it the true home of Port. The wine used to be shipped down the river from the spectacular terraced hillsides of the Douro Valley to Oporto, which lies at the mouth of the Douro River, in distinctive-looking boats called ‘barcos rabelos’. Vila Nova de Gaia became dominated by British Port wine lodges, with over fifty wine companies based in its narrow, twisting streets, many still carrying British family names: Barros Gilberts, Sandeman, Robertson’s, Graham’s, Warre’s, Smith Woodhouse and Cockburn’s, the Symington family, Croft, Taylor, etc. It is here in Vila Nova de Gaia that the aging and blending of most of the world’s supply of Port wine takes place.

One of these firms, known today as Croft, was founded over three hundred years ago as Phayre & Bradley, named after its founding partners. The earliest evidence of the firm’s activity as a Port shipper dates from 1588, coincidentally the year of the first ever recorded shipments of Port wine. The firm took its present name, Croft, in 1736 when it was joined by John Croft, a member of an old and distinguished family of Yorkshire wine merchants. Although well established in Oporto, the Crofts never lost touch with their Yorkshire origins. In his treatise, John Croft describes himself as ‘Member of the Factory at Oporto and Wine Merchant of York’. The family returned to England in the nineteenth century, after the Peninsular Wars, and there are no longer any Crofts in the firm. The family maintained its affection for the fortified wines of the Douro, and the late Percy Croft, who died in 1935, is credited with the famous words: “Any time not spent drinking Port is a waste of time.” (I just can’t resist reciting this wonderful saying again.)

In 1911 the House of Croft was acquired by the Gilbeys, the distinguished English wine trade family. It is now owned and run by descendants of two old Port wine families, the Yeatmans and the Fladgates. It is the wondrous vineyards of the famous Quinta da Roêda, one of the finest estates of the Douro Valley, that largely sustain the place of distinction earned over centuries by Croft and its wines.

The best and most classic combination is Vintage Port served with a good-quality Stilton, but also consider another blue cheese, a Cheddar, or Parmesan, accompanied by walnuts or chestnuts; in fact, nuts of all sorts bring out the best in Port.

‘Declared’ Vintages are the best vintage years, averaging 2-4 each decade, which produce wines of great concentration and longevity. They are usually blended from the best produce of more than one estate. Croft’s declared Vintage Ports, although based on the wines of Quinta da Roêda, sometimes also contain wines from other top estates.
Croft is one of the most famous Vintage Port houses, and its declared Vintage Ports, such as the legendary 1945 and, more recently, the award-winning Croft 1994, are among the most sought-after Ports. (From the Croft web site.)

This is all written on account of my recent acquisition of one case (of 12 bottles) of Croft 1970. I have had a few bottles of this Port in the past (from the Cambridge University cellars’ yearly “clear out”) and they were simply divine. The Croft 1970 has incredible structure and good acidity, and enough residual tannins to make it last, even beyond 2020. It has a tawny red colour with a touch of deep purple, and the nose is delicate and refined. The fruit is still apparent, with flavours of dried fruits, sultanas and prunes, a touch of tobacco, and a mixture of dried exotic sweet spices. Some bottles are better than others; depending on the quality and endurance of the cork, they tend to start seeping at a certain stage. I am sure it will give me pleasure in the coming years.

But the cellar has more to offer. One of my favourites is a non-“pedigreed” LBV (late bottled vintage) from a co-op called Porto Vilanova, vintage 1977 (a great Port vintage year), which was bottled for me in different years from 1987 onwards. It got better each time I tasted it as the years went by, because the wine was still “brewing” and maturing in its original barrel (I shared a whole barrel), and I still have some for special occasions. It has a deep scent of spiced coffee and chocolate and a concentrated black prune (Powidl-style) taste; simply delicious. Powidła (or Powidło) in Polish is a plum stew. Unlike jam or marmalade, Powidl is prepared by cooking prunes for hours without additional sweeteners or gelling agents, by sheer dehydration, concentrating the fruit’s natural sweetness. The plums used should be harvested as late as possible, ideally after the first frosts, in order to ensure they contain enough sugar.

1937 and 1963 Barros Port: the 1937 Port has a brownish-red appearance with a deep amber hue and seductive aromas of vanilla-scented tobacco, golden raisins, and chocolate gingerbread; the rich palate offers hazelnuts, dark raisins, and a touch of fig. There are also Graham’s 1994 LBV, cases of 1994 Vintage Warre’s and Dow’s, and many more, mainly 1977 and 1994 Ports, to sweeten our lives for years to come. Not to forget the young but beautiful Niepoort Vintage Port 2009 and Niepoort 2001 Colheita we had recently (the complete story: https://wine4soul.com/2012/05/15/the-magnificence-of-the-douro/).

Now I have to choose my “Festive Port” for this year’s celebrations. I have this tingle at the tips of my fingers and tongue to “go for” the 1963 Barros. This is an LBV in the best British tradition, as Yair says, and LBVs always surprise you for the better; it is a good Barros vintage. We’ll see; after all, I always choose by the guests, the atmosphere, and the meal.

In the meantime I will leave you with warmest season’s greetings and a delightful version of this Christmas classic, vintage 1954, by the Drifters: a sweet animation with Santa and a reindeer singing White Christmas (animated version), of this Colheita-quality song. Judge for yourselves! Cheers and a very HAPPY NEW YEAR from the very Holy Land…
Let us examine the portrait of an ideal woman designed by God and revealed some 3,000 years ago in Solomon’s book of Proverbs.

It seems that women are very much in the news these days:
- Leading marches in order to demonstrate their political power and influence.
- Standing up to and denouncing predators and abusers.
- Championing the rights of women to compete fairly in every area of business, politics, sports, entertainment, etc.

On the face of it, it would be hard to find fault with any of these individual initiatives and objectives. Women’s votes should be considered crucial by politicians, and women should be judged on their skills and training, not their gender, when it comes to employment and opportunity for advancement in any area of endeavor. And we, as a society, should never enable, ignore, or defend predators or abusers, no matter how rich or famous or talented they are.

All these issues are logical and just, but I can’t help thinking that the ultimate goal of these and other movements headed by women is to erase any difference that may exist between the sexes. I would go one step further and suggest that there may be some who hope the women’s movement will ultimately lead to a society where women dominate men. I have no idea exactly how this would work, but I am fairly confident that if dominance is the goal, women will eventually be guilty of the same kinds of cruel and unjust actions as the abusive men who sought the same kind of power.

In today’s society it seems that men are encouraged to become more like women and women are demanding to be treated more like men. In addition, young people are told that they can explore every shade of gender identity until they find a sexual personification they feel comfortable with. And we wonder why, according to Psychology Today, the suicide rate among young adults (millennials) has tripled since the 1950s, and why suicide is the second leading cause of death among college students.

In answer to this worrying trend and to the confusion over what is male and female, the Bible makes a clear and defining statement: “God created man in His own image, male and female He created them” (Genesis 1:27). There are only two sexes; they are different, and they are meant to be different. As the French say concerning men and women, “vive la différence!” (long live the difference).

Since I began this lesson referring mainly to women, I’d like to focus on the female gender in defining some of the important characteristics of, not just a woman, but a Godly woman. You see, there is nothing wrong with a woman who desires political and economic opportunity and refuses to be victimized by some abuser. These goals are all well and good; it’s just that they belong to the world and are appreciated only here below. What I desire for women is that they aim higher, for goals that are above, that belong to the Kingdom of God, not the kingdom of darkness here below. For this reason, I’d like to share with you the portrait of an ideal woman designed by God and revealed some 3,000 years ago in Solomon’s book of Proverbs. In this passage, Solomon indicates some of the qualities possessed by the ideal woman who is pleasing to God.

Description of an Ideal Woman – Proverbs 31:10-31

At the end of the book of Proverbs there is a beautiful acrostic poem extolling the virtues of the ideal woman. Acrostic poems are those where each line of poetry begins with a subsequent letter of the alphabet.
In this poem the writer begins his description by saying one thing about the virtuous woman: she is rare.

10 A wife of noble character who can find? She is worth far more than rubies.

Not every woman is like this, he says, just as not every piece of jewelry is precious; pearls are precious because they are rare and hard to find (all jewelry shines, but not all of it is valuable). A virtuous woman (a woman of inner strength) is hard to find, even harder to find than precious jewels.

What makes her so valuable? – vs. 11-12

11 Her husband has full confidence in her and lacks nothing of value. 12 She brings him good, not harm, all the days of her life.

The writer summarizes her value in describing her relationship to her husband: she is trustworthy. The author tells us that the innate quality this woman possesses is her trustworthiness, not just towards her husband but as an essential quality she has as a person (with or without a husband, she is trustworthy). When you have found a woman like this, you have found a precious stone.

Outward Signs of Inward Qualities – vs. 13-24

In the following verses the author goes on to describe the outward signs that reveal that precious inward quality of trustworthiness.

She is a good manager and hard worker

13 She selects wool and flax and works with eager hands. 14 She is like the merchant ships, bringing her food from afar. 15 She gets up while it is still night; she provides food for her family and portions for her female servants. 16 She considers a field and buys it; out of her earnings she plants a vineyard. 17 She sets about her work vigorously; her arms are strong for her tasks. 18 She sees that her trading is profitable, and her lamp does not go out at night. 19 In her hand she holds the distaff and grasps the spindle with her fingers. 20 She opens her arms to the poor and extends her hands to the needy. 21 When it snows, she has no fear for her household; for all of them are clothed in scarlet. 22 She makes coverings for her bed; she is clothed in fine linen and purple. 23 Her husband is respected at the city gate, where he takes his seat among the elders of the land. 24 She makes linen garments and sells them, and supplies belts to the tradesmen.

The author gives several examples of her hard work and good management.
- 13 – She is cheerful in her work. She doesn’t complain or see her work as a burden.
- 14 – She uses imagination in preparing food and is a wise shopper, careful with her money.
- 15 – She manages her responsibilities well in her home. She is “on top” of the situation concerning her affairs.
- 16; 24 – She has good business sense and knows how to turn a profit. Without sacrificing her home, she is able to use her business talents to its advantage. She doesn’t ruin her home with outside work; she builds it up.
- 17-19 – She is not afraid of hard work and does not waste her time at home. This is a woman who knows the difference between leisure and laziness. She demonstrates that a well-managed home is a profitable enterprise. She understands that “time” is “money,” even for the woman who is at home, and uses her time at home profitably. A well-managed home is like a second income.
- 21-23 – By her work at home she contributes to her family’s and her husband’s reputation in the community. Her children are clean, well fed, and well mannered, as is her husband, and this is a reflection of their home, of which she is the manager.

If marriage is a partnership, the woman the author describes here is a good partner to have.
So, in describing the outward signs that point to the inward quality of the ideal woman, the author begins with the things that make her a good manager and hard worker.

Good Character and Reputation – vs. 25-27

25 Strength and dignity are her clothing, And she smiles at the future. 26 She opens her mouth in wisdom, And the teaching of kindness is on her tongue. 27 She looks well to the ways of her household, And does not eat the bread of idleness.

The second outward sign that reveals this trustworthiness is her good character and reputation within her community. The passage says four things about her character:
- 25 – She is kind and generous. James tells us that benevolence to the poor and homeless is the sign of true piety (James 1:27). She is truly a spiritual woman with a Godly character. She has confidence. She is not afraid of the future (near or far) because her faith and good works cover her with honor and power. She is a person who is at ease in her conscience because her heart and hands are busy doing what is right. She is not guilt-ridden or depressed, because she is busy giving herself away to the others she loves.
- 26 – She is wise. Her tongue is not for gossip but rather for edification. This is one of my own mother’s qualities and one I have also found in my wife. Neither ever uses her words to destroy, always to build others up, beginning with myself and our children and then others. This is wisdom from above, and the woman of the poem demonstrates that she has it.
- 27 – She is concerned, but her first and primary concern is her home and family. It is not that she isn’t concerned with the problems of her society (she does help the poor, etc.), but the concerns of her home come first. When we take care of our own homes first, there are usually fewer problems in the world. She is aware of the needs of her family and the community and is concerned about fulfilling them, using all of her skills and qualities refined through years of service and practice.

Paul says in I Corinthians 11:3 that the man is the head of the woman and consequently the head of the home, but Lemuel, the writer of this material, balances out this picture by showing us that the woman is the heart of the home. When the head and the heart are in union with Christ as the Lord of the home, what a wonderful place that home is.

The Rewards of the Ideal Woman – vs. 28-31

In the last few verses the author describes the rewards awaiting such a person, clear signs that she is a virtuous woman. She has this trustworthiness, demonstrated by good stewardship of her home and a Godly character, and these bring her rewards:
- Her family praises her

28 Her children rise up and bless her; Her husband also, and he praises her, saying: 29 “Many daughters have done nobly, But you excel them all.”

Her children are thankful that they have a mother like her; what a reward for a mother, grateful children. Her husband sees her as the best of all women, which suggests his absolute fidelity and devotion.
- Her community praises her

Her neighbors, friends, and community see her as a woman of value and character. In the end the author summarizes the true essence of the value of this person.

30 Charm is deceitful and beauty is vain, But a woman who fears the Lord, she shall be praised. 31 Give her the product of her hands, And let her works praise her in the gates.

Her motivating factors are not beauty or charm (social acclaim). She is a person who fears (respects and obeys) the Lord; this is what motivates her.
Her desire to work well, to serve others, and to develop a good character is inspired by her basic faith and desire to obey God, who wants all of His daughters to become women of value.

Notice some of the things that were not mentioned here:
- Her looks (skin, hair, weight, height, figure)
- Her independence (not even a question for her)
- Her knowledge / education

These were not mentioned, not because they are unimportant in themselves, but because they did not make her more or less valuable. Notice, however, what was mentioned as important:
- Her work concerning her responsibility towards her husband, family, and community (in New Testament times, the church)
- Her attitude of kindness and wisdom
- Her confidence and lack of guilt
- Her reward of praise from the groups that she serves: family and community, and of course God Himself, who praises her because she serves Him and He wrote this poem in her honor.

We have extremes in recognizing women in our society: either we have a day that honors only those women who have children (Mother’s Day), or we have the various organizations that promote those women who see themselves as feminists. I want to encourage those women who work hard at raising children, but I want to include all those women who are striving to become women of valor in our society, regardless of their status. And who are these women in our day?

Women who are resisting the pressure from the media and society to work only on their outward beauty but who, through patient obedience to Jesus Christ, are creating a beautiful inward person. Women who, in a thousand ways, every day serve their husbands and/or families, church, school, and community, and do so with a smile, sincerity, and diligence. Women whose strongest desire is not to be free and independent but rather to be useful, kind, and generous to those in need. Women who are keeping themselves pure and ready for the return of Jesus Christ.

For these women, whether they are married, widowed, or single, with or without children: I pray that God will bless you as true women of valor. I also pray that, as the precious jewels you are, you will shine forth among all others and receive the reward of praise that you so richly deserve.

For those women who want to become the virtuous woman spoken of here: the first step is to give your life to Jesus in repentance and baptism. In so doing you become pure again, no matter what you’ve done, and special in God’s sight. If you’ve gone away from Him and not been the kind of woman God wants you to be, repent and come back to Him for forgiveness and restoration.
In this report, the topics of nonproliferation (Chapter 2), advancement of science (Chapter 3), and applications of science in the private and public sectors (Chapter 4) have encompassed a number of bilateral projects with impacts beyond the borders of the United States and Russia. Set forth below are a few additional examples of bilateral efforts with particularly pronounced regional or global reaches. The activities that are described have been generally successful in terms of achieving scientific objectives, thereby eliciting significant regional and, at times, global attention. While some programs are likely to continue for the next several years, the longer-term financial outlook for bilateral cooperation that contributes directly to international science is uncertain. As underscored in the Introduction of this report, both governments have made substantial financial contributions to joint efforts. These activities have often intersected with programs of international organizations, such as the World Health Organization, the Food and Agriculture Organization, and the United Nations Educational, Scientific, and Cultural Organization. At times, bilateral efforts have added momentum to more broadly based international programs with similar goals (e.g., HIV/AIDS programs). Also, bilateral initiatives can be important in jump-starting programs that had been developed within international or regional organizations (e.g., interest of the Arctic Council in black carbon effects on global warming). At other times, international organizations may be well positioned to encourage continuation of efforts rooted in joint U.S.-Russian initiatives. While individual projects that are cited have been implemented bilaterally, the coordination of these bilateral projects with multilateral activities that address global or regional issues with closely related objectives has generally been quite good. Indeed, frequently the same national officials have responsibilities for both bilateral and multilateral activities with similar objectives. Also, at times, the U.S. and Russian governments have decided to highlight their bilateral activities at international meetings. Then they usually take steps to ensure that other interested parties are aware of their activities before they publicly announce success stories. Set forth below are seven examples of bilateral activities with regional or global impacts. 1. Leading the world in space biology. The global leadership of the U.S. and Soviet-Russian manned-space programs is unquestionable. The two countries have been pioneers in developing space biology for the past 50 years. Lessons learned from U.S.-Russian efforts are gradually spreading to other countries interested in exploration of space. During the past decade, considerable attention has been focused on a future manned mission to Mars. At the same time, the immediate challenges of operating the international space station have required the constant attention of Russian and American doctors, researchers, and other medical professionals. Several joint activities being planned for the near future are set forth in Box 5-1. 2. Addressing HIV/AIDS. Formal U.S.-Russian cooperation in addressing HIV/AIDS began in 1989 with a bilateral agreement between the U.S. Institute of Medicine and the Russian Academy of Medical Sciences. 
Box 5-1. Planned Joint Space Research Programs
• Isolation and confinement studies as analogs for long-duration crewed missions. Research topics include crew behavior, group interactions, crew performance, microbiological and immunological investigations, and clinical-psychological studies.
• Space radiation health studies, including risks of cancer, chronic tissue effects, acute radiation sickness, and changes in central nervous system functions.
• Analyses of robotic precursor missions to address toxicity issues that could affect human health.
• Russian free-flyer mission to address partial gravity and long-duration effects of microgravity on living systems.
SOURCE: NASA Headquarters, 2011.

Shortly thereafter, the program was taken over by the National Institutes of Health and the Soviet Ministry of Health (now the Russian Ministry of Health and Social Services). The two governments have worked together in this field ever since. In the 1990s, the U.S. Agency for International Development (USAID) initiated an important component of the overall HIV/AIDS effort focused on raising public awareness of the problems and advocating measures for combating the disease. (See Box 5-2.) This activity is now a component of the global effort of USAID to address HIV/AIDS issues in selected countries worldwide. The investment by USAID in this effort has been several million dollars per year for more than a decade. However, this level of investment has been small in comparison with the Russian investments in the overall effort. Also, international programs such as UNAIDS and programs of other governments have long supported significant efforts in Russia, and coordination with activities of others has been an essential dimension of the joint efforts of Russia and the United States. At the request of the Russian government, USAID is terminating its overall program based in Russia. Thus, continuation of a significant U.S.-Russia bilateral effort to address HIV/AIDS in Russia is uncertain. Perhaps some aspects of USAID’s global efforts will continue in Russia under the leadership of Russian counterparts.

Box 5-2. Reducing HIV/AIDS Problems in Russia
For more than 15 years, USAID provided financing and expertise for selected aspects of the large Russian-led effort to help control the level of HIV-infected patients. During the 1990s, the emphasis was on raising awareness of the problem, particularly among the Russian youth, and on training medical professionals to provide advisory services to vulnerable populations. More recently, emphasis continued to be on counseling services targeted on the most vulnerable populations, with special attention to infected prisoners and injection drug users.
SOURCE: USAID Moscow, February 2012.

3. Responding to outbreaks of infectious diseases across international borders and containing their spread. For many years, the U.S. Centers for Disease Control and Prevention (CDC) has teamed with a number of Russian institutions in responding to outbreaks of diseases in Russia and other areas that have had the potential for spreading across international borders. Particularly important training programs for Russian epidemiologists have been held, usually in Atlanta, Georgia. In 2012, CDC and the Federal Service for Surveillance on Consumer Rights Protection and Human Well-being signed a Protocol of Intent of indefinite duration, which will continue joint efforts to address key concerns of the two governments to the extent that funding is available.
(See Appendix C.6 for additional information on CDC collaboration with Russian partner organizations.) An important example of collaborative efforts was the response to the outbreak of avian influenza in 2007, which is described in Box 5-3.

Box 5-3. Response to Outbreak of Avian Influenza, 2007
Russia is crossed by two major migratory flyways. Influenza A/H5N1 and other variants of avian influenza not previously found in Russia were isolated. There were two important tasks. Measures were taken to contain the spread of influenza A/H5N1, particularly through control of poultry. Research was initiated that quickly determined that one variant, influenza A/H4N6, had expanded its host range and that aquatic mammals, mainly muskrats, were involved in maintenance of the virus in nature. Russian specialists coordinated their efforts closely with related activities of U.S. specialists, particularly colleagues at CDC.
SOURCE: NRC, Biological Research in Russia, 2007, cited in Appendix A.2.

4. Preserving biodiversity. Both Russia and the United States have long histories of investigating the status of biodiversity resources throughout vast geographical areas, including areas outside their borders, such as tropical regions of South America and South Asia. Much of the interest of the two countries focuses on medicinal and food uses of plants that have been neglected in the past. An area of cooperation that has often been emphasized is inventorying species of concern and implementing practical steps to help prevent the near-term loss of important species. Activities of two key institutions in preserving biodiversity of global interest are set forth in Boxes 5-4 and 5-5.

Box 5-4. Preservation of Botanical Resources
The herbarium and library of the V.L. Komarov Botanical Institute in St. Petersburg are among the world’s most significant global botanical facilities, containing key specimens of plants not only from throughout the territory of the former Soviet Union but also from many areas of China and other Asian countries. The herbarium and library were repaired extensively with help from American colleagues in the early 1990s. As a result, they have maintained their status as world centers for botanical investigations, and their research materials are widely used. During the past decade, an extensive program of preparing digital images of critical specimens in the herbarium has been supported by the Andrew W. Mellon Foundation in New York. The institute will undoubtedly continue to provide an important site for facilitating cooperative botanical investigations.
SOURCE: V.L. Komarov Botanical Institute, September 2011.

Box 5-5. Maintaining a Repository for Agricultural Seeds
The N.I. Vavilov Institute of Plant Industry in St. Petersburg is a large repository for seeds of agricultural and scientific interest throughout the world. It preserves extensive samples of crop plants and their wild and weedy relatives while mounting expeditions in the former Soviet Union and beyond. The U.S. Department of Agriculture (USDA), which maintains a similar facility in Fort Collins, Colorado, has cooperated in many activities. For example, 60 Russian scientists from the Vavilov Institute, St. Petersburg State University, All-Russia Institute of Plant Protection, and USDA prepared an AgroAtlas that documents the distribution of 100 species of crop plants, 560 species of their relatives, and 640 species of crop pests, weeds, and diseases in Russia and neighboring states.
SOURCE: N.I. Vavilov Institute, 2011.

5. Addressing the scientific aspects of genetically modified organisms (GMOs). This area is often plagued by arguments over health and environmental safety issues when formulating public policy. In 2010, the Russian Academy of Sciences and the National Academy of Sciences appointed a leading specialist from each of the two academies to prepare a joint assessment of the scientific basis for decision making concerning the ecological and food safety aspects of the introduction of GMOs in agriculture. A summary of that assessment is included in Appendix F.4. The assessment can help officials and scientists worldwide to separate the scientific issues from the many other factors that influence decisions of governments concerning whether and under what circumstances to permit the use of this rapidly advancing technology. The academies have sent the scientific assessment to the International Research Council for consideration.

6. Addressing polar interests. Even during the darkest days of the cold war, U.S. and Soviet specialists worked together to investigate conditions in Antarctica and occasionally coordinated investigations in the Arctic region. Both the United States and Russia now support research programs in these polar areas, even in times of tight budgets. The Arctic Council provides an intergovernmental framework for addressing issues, such as search-and-rescue operations, responding to oil spills, and licensing of exploration activities that target natural resources. A variety of governmental and nongovernmental research centers in the United States, Russia, and elsewhere help coordinate biological research activities of various countries in the Arctic and in Antarctica.

Box 5-6. Circumpolar Scientific Observations in the Arctic
Building on a number of international projects carried out during the International Polar Year (2007–2009), the Arctic countries are now operating the Circumpolar Coastal Observatory Network with established reporting requirements. This network of institutions from all of the Arctic countries provides a framework for up-to-date observations of changes in the region due to climate shifts and more direct effects.
SOURCE: National Science Foundation, 2011.

Box 5-7. Assessing Effects of Black Carbon in the Arctic
Understanding and reducing the impacts of black carbon emissions that affect climate change and also the health of people in Arctic regions is a growing international concern. In response to the interest of the Arctic Council, the U.S. government has taken the initiative to engage Russian institutions in joint assessments of the emissions, circulation, and effects of black carbon. Inventories of sources, assessments of atmospheric transport and changes in the chemical composition of black carbon, and engineering approaches to mitigate emissions are among the many topics of interest. Current interest focuses on near-term assessments of the role of black carbon, with plans for long-term joint efforts in this field still evolving.
SOURCE: Department of State, March 2012.

7. Carrying out joint efforts in third countries. Both Russia and the United States have outreach programs to engage other countries in selected aspects of the biological sciences. Set forth in Boxes 5-8, 5-9, and 5-10 are examples of opportunities for the two countries to work together in supporting the development of biology-related activities in third countries.

Box 5-8. Eradicating Polio in Uzbekistan
Russian and American scientists played leading roles in the extensive efforts of the international community two decades ago to rid the world of polio. Unfortunately, polio still remains in small pockets of the world. The United States and Russia have committed to work together toward eradication of polio in Uzbekistan, although to date on-the-ground activities have been limited.
SOURCE: U.S.-Russia Protocol of Intent, 2011, and discussions with senior scientists in Russia, May 2012.

Box 5-9. Enhancing Public Health Cooperation in Central Asia
The U.S. and Russian governments are interested in strengthening biological research capabilities of the countries of Central Asia, and most of these countries are currently expanding their research activities. With support from the international community, the countries are giving concomitant attention to biosafety procedures that are consistent with international standards that are evolving rapidly. U.S. and Russian biological scientists are beginning to work together in engaging counterparts in these countries. This is a useful step in establishing regional approaches that are carried out in a manner consistent with related efforts throughout the world.
SOURCE: Russian senior scientist participating in government-sponsored cooperation, May 2012.

Box 5-10. Global Fight against Malaria
In June 2012, the United States and Russia signed a Protocol of Intent to work together to help end preventable child deaths from malaria in Africa. Cooperation will entail training, capacity building, and operations research. The U.S. Centers for Disease Control and Prevention and the Russian Martsinovsky Institute of Medical Parasitology and Tropical Medicine will lead the effort.
SOURCE: U.S. Embassy, Moscow, June 2012, http://moscow.usembassy.gov/pr_062712.html.

Organizations that provide financial support for U.S. and Russian scientific efforts are increasingly aware of the rapid growth of global interests in biological research and biotechnology that have the potential for increasing the standard of living. Thus, in the years ahead, interest in bilateral cooperation on projects of global or regional significance should increase. Indeed, financial resources to support joint U.S.-Russian efforts may be more accessible if bilateral approaches to high-visibility topics are cast within a global framework, while retaining an emphasis on investigations of localized problems that are important components of overall international concerns.

Russian and U.S. institutions have worked well together in recent years in combating outbreaks of human and animal diseases, addressing the spread of health-threatening pollution that crosses international borders, and beginning the development of programs to adapt to climate change. Joint efforts to further strengthen the research, surveillance, institutional, and regulatory infrastructures in the two countries that can respond to these and other cross-border problems are important. Three conclusions in this regard follow:

1. Coordination of research and development efforts to improve the diagnostic capabilities of regional and global disease surveillance systems can be significantly improved with only modest financial investments by both sides. Of particular interest is reducing delays and uncertainties in the international reporting of outbreaks within the framework of the International Health Regulations. The Russian government proposed a major initiative in express diagnostics in 2008 during preparations for the G-8 Summit in St. Petersburg.
Unfortunately, other governments, including the U.S. government, were preoccupied with addressing issues concerning HIV/AIDS and tuberculosis, and they did not give the attention to the Russian proposal that it deserved. Nor have they given sufficient attention to the broadly based declaration concerning cooperation in disease surveillance that was adopted at the summit. Nevertheless, as both countries focus on upgrading their own diagnostics capabilities, progress in infectious disease surveillance that is relevant outside their borders is being recorded. Of particular importance is the need to reduce the times required to (a) recognize outbreaks that may cross international borders, (b) ascertain the causes of the outbreaks, (c) increase the number of disease agents that can be simultaneously detected and characterized, and (d) link detection and characterization determinations to global surveillance systems. These steps in turn contribute to efforts to constantly update assessments of global health conditions, relying on electronic networks that produce various types of up-to-date health maps of the world. As an important example, growing interest in improved surveillance is reflected in the increasing investments in improving influenza test systems and diagnostic tools in both the United States and Russia. These efforts focus on many topics, including the following:
• Rapid influenza diagnostic tests, and particularly point-of-care diagnostics.
• Methods and materials for respiratory specimen collection.
• Respiratory pathogen tests on existing platforms.
• Advanced sequence detection methods for novel influenza strains.
• Identification of influenza strains that are resistant to antiviral drugs.
• Identification of immunological responses to influenza.

2. The two governments are well positioned to assume broader regional leadership roles in their areas of special competence—independently and jointly—in addressing scientific challenges in the biological sciences. Central Asia and the Arctic are regions where joint efforts can pay off in the near term. The two governments have demonstrated that they can effectively work together, in cooperation with local authorities, in addressing broad public health and related biosafety issues throughout Central Asia. Both countries have extensive contacts in the region. Specialists from both countries are respected for their competence in the biological arena. Joint efforts can forge relationships between Russian and American specialists while also developing coherence of approaches within the region. As to the Arctic, many common concerns provide a strong basis for cooperation in the area near the Bering Straits. Also, as climate change increasingly is recorded across the Arctic, the opportunities for expanding cooperation along the northern coastline of Russia are particularly important. Of special interest are technologies for effectively and economically converting biomass to new sources of energy, thereby reducing reliance on coal and other heavy polluting energy sources in snow-covered regions.

3. The two governments have made a good start in joint efforts to limit the spread of tuberculosis and other devastating diseases in Russia and neighboring areas. An important framework for promoting joint research and development efforts devoted to multidrug-resistant tuberculosis and other difficult diseases was established in November 2011, with a forum in Moscow involving key agencies from the two countries. The U.S.
private sector also played an unusually active role in promoting cooperation. The seriousness of many of the problems in Russia—and indeed throughout the world—is widely recognized. Now there is a considerable need for more aggressive collaborative research efforts. (See Appendix F.5.)
Volume 22, Number 7—July 2016

Hepatitis E Virus Infection in Dromedaries, North and East Africa, United Arab Emirates, and Pakistan, 1983–2015

A new hepatitis E virus (HEV-7) was recently found in dromedaries and 1 human from the United Arab Emirates. We screened 2,438 dromedary samples from Pakistan, the United Arab Emirates, and 4 African countries. HEV-7 is long established, diversified, and geographically widespread. Dromedaries may constitute a neglected source of zoonotic HEV infections.

Hepatitis E virus (HEV) is a major cause of acute hepatitis worldwide (1). Four HEV genotypes belonging to the species Orthohepevirus A are commonly found in humans (HEV-1 through HEV-4). Genotypes 1 and 2 seem to be restricted to humans, whereas genotypes 3 and 4 also occur in domesticated and wild animals. Zoonotic transmission by ingestion of contaminated meat, mainly from pigs, is the most likely zoonotic source of infection (1). Recently, HEV sequences were reported from 3 dromedaries sampled in the United Arab Emirates (UAE) in 2013 and were classified as a new Orthohepevirus A genotype, HEV-7 (2,3). Afterward, a human patient, also from the UAE, who had chronic hepatitis after liver transplantation was shown to carry HEV-7 (3,4). Until now, knowledge of HEV-7 and its zoonotic potential relied on these 2 studies, which provide no insight into the prevalence and distribution of HEV-7. To determine the geographic distribution of HEV-7, we conducted a geographically comprehensive study of HEV-7 prevalence in dromedaries by testing 2,438 specimens sampled in 6 countries during the past 3 decades.

Serum and fecal samples were collected from dromedary camels in the UAE, Somalia, Sudan, Egypt, Kenya, and Pakistan during 1983–2015 (5–7). A total of 2,171 serum samples and 267 fecal samples were tested for HEV RNA by using reverse transcription PCR (RT-PCR) as previously described (8). Seventeen samples were positive for HEV RNA: 12 (0.6%) of 2,171 serum samples and 5 (1.9%) of 267 fecal samples (Table). Positive samples originated from the UAE, Somalia, Kenya, and Pakistan and dated back to 1983 (Figures 1, 2). Viral loads were measured by using real-time RT-PCR (9) calibrated on the basis of the World Health Organization International Standard for HEV RNA (10). Viral RNA concentrations ranged from 3.2 × 10⁴ to 3.6 × 10⁷ IU/g in feces and 6.2 × 10² to 8.3 × 10⁶ IU/mL in serum.

We sequenced a 283-nt fragment of the RNA-dependent RNA polymerase gene of all positive samples for phylogenetic analyses. All camel HEV sequences clustered in a monophyletic clade with the human HEV-7 sequence (Figure 2), supporting the classification of camel-associated HEV as a separate Orthohepevirus A genotype (11). Distances based on nucleotide identities were calculated for all sequences from this study and 1 reference strain from each Orthohepevirus A genotype as defined by Smith et al. (11). This subset of references comprised GenBank accession nos. M73218 (HEV-1), M74506 (HEV-2), AF082843 (HEV-3), AJ272108 (HEV-4), AB573435 (HEV-5), AB602441 (HEV-6), and KJ496143 (HEV-7). Nucleotide diversity was remarkable among viral sequences from dromedaries, reaching a maximum distance of 22.7%, compared with a maximum distance of 29.9% among all genotypes. The internal distance among the African viruses was 14.2%, compared with 17.4% within the viruses from the UAE and Pakistan. The African viruses were 16.7%–22.7% distant from the UAE and Pakistan viruses, which corresponds to the 22%–25% distance threshold that separates the prototype HEV-4 sequence from the HEV-5 and HEV-6 prototype sequences. This finding suggests that HEV-7 is a strongly diversified clade of viruses that might need to be further subclassified.
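The distance figures above are, in essence, nucleotide-identity comparisons over aligned sequence fragments. As an illustration only, and not the authors' actual analysis pipeline, a minimal sketch of an uncorrected pairwise p-distance computation might look like the following; the sequence names and short 10-nt fragments are hypothetical placeholders, not the study's 283-nt RdRp alignment.

```python
# Minimal sketch: uncorrected pairwise p-distances between aligned sequences.
# Illustrative only; names and fragments below are hypothetical placeholders.
from itertools import combinations

aligned = {
    "camel_UAE_2013": "ATGGCGTCAC",
    "camel_Kenya_1992": "ATGACGTCAT",
    "camel_Pakistan_1983": "ACGGCGACAC",
}

def p_distance(a: str, b: str) -> float:
    """Fraction of differing positions, ignoring alignment gaps."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    if not pairs:
        raise ValueError("no comparable (ungapped) positions")
    return sum(x != y for x, y in pairs) / len(pairs)

for (n1, s1), (n2, s2) in combinations(aligned.items(), 2):
    print(f"{n1} vs {n2}: {100 * p_distance(s1, s2):.1f}% distance")
```

In practice such distances are computed over full alignments, often with evolutionary corrections, but the identity-based threshold logic used to separate genotypes is the same.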
HEV-7 was recently shown to belong to the same serotype as HEV-1–4 (12). Therefore, we conducted a preliminary serologic analysis with a subset of 210 specimens (35 per country) by adapting a human HEV ELISA (EUROIMMUN, Lübeck, Germany) for application with camel serum. Serum was tested at a 1:100 dilution. The signal-to-noise ratio was optimized by normalizing the optical density (OD) of test samples against the ODs of a reference serum included in every run (Technical Appendix Figure). For confirmation of ELISA results and to determine an appropriate ELISA cutoff, we tested 56 samples covering the complete range of OD ratios by adapting the recomLine Immunoblot (MIKROGEN, Neuried, Germany). Thirty-two samples reacted against >2 of the presented antigens and were therefore ranked positive in the immunoblot. All tested samples with ELISA OD ratios >0.46 were positive by immunoblot, whereas only 7 of 31 tested samples below this value were positive by immunoblot (Technical Appendix Figure). Subsequently, we set an ELISA cutoff of 0.46. Using this cutoff, we found that 96 (46%) of the 210 serum samples originating from all 6 countries were positive (Table), which is comparable with the seroprevalences typically observed in pigs, known zoonotic reservoirs for HEV-3 in developed countries (13). The percentage of ELISA-positive serum samples ranged from 31% in Kenya to 63% in Egypt but did not differ significantly among the 6 countries (p = 0.1, Yates' χ² test). These results suggest a wide occurrence and high prevalence of HEV in dromedaries.
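The two statistical steps just described, cutoff-based ELISA classification and a chi-square comparison of seroprevalence across countries, can be expressed compactly. The sketch below is not the study's analysis code: only the 0.46 cutoff, the 35-samples-per-country design, and the overall total of 96 positives come from the text; the per-country counts are hypothetical values chosen to be consistent with the reported extremes (31% in Kenya, 63% in Egypt).

```python
# Minimal sketch of cutoff-based ELISA classification and a chi-square
# comparison of seroprevalence across countries. Per-country counts are
# hypothetical placeholders consistent with the reported totals.
import numpy as np
from scipy.stats import chi2_contingency

CUTOFF = 0.46  # OD-ratio cutoff derived from the immunoblot comparison

def elisa_positive(od_sample: float, od_reference: float) -> bool:
    """Normalize a sample's OD against the run's reference serum and
    call it positive at or above the cutoff."""
    return (od_sample / od_reference) >= CUTOFF

countries = ["UAE", "Somalia", "Sudan", "Egypt", "Kenya", "Pakistan"]
positives = np.array([17, 15, 14, 22, 11, 17])  # hypothetical; sums to 96
negatives = 35 - positives

# 2 x 6 contingency table; note scipy applies the Yates continuity
# correction only to 2 x 2 tables, so this is a plain chi-square here.
chi2, p, dof, _ = chi2_contingency(np.vstack([positives, negatives]))
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")
```

With these placeholder counts the test is non-significant, mirroring the paper's conclusion that seroprevalence did not differ significantly among the 6 countries.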
Thus, most HEV infections in the Middle East are assumed to be caused by nonzoonotic genotypes 1 and 2. However, our study and previous studies (12) showed that HEV-7 and the other human genotypes form 1 serotype, suggesting a lack of discrimination in seroprevalence studies. The human HEV seroprevalence in the Middle East region might in fact be caused by HEV-7 infection. Furthermore, human HEV-7 infections might contribute to the HEV prevalence in all studied areas, where camel products are frequent parts of the human diet (15). A foodborne transmission scenario is further suggested by the fact that 1 of the 12 RNA-positive serum samples in the study was actually collected in a slaughterhouse, documenting that meat from infected animals can enter the food chain (6). Detections of HEV-7 RNA in feces in this and a previous study (2) point to feces or feces-contaminated camel products, such as milk, as putative additional sources of human infection. Considering the importance of dromedaries as livestock animals (15), risk groups, such as slaughterhouse workers, should be screened for HEV-7 infection.

Mrs. Rasche is a doctoral student at the Institute of Virology, Bonn, Germany. Her primary research interests include detection and characterization of novel zoonotic hepatitis viruses.

We thank Monika Eschbach-Bludau, Sebastian Brünink, and Tobias Bleicker for providing excellent technical assistance. This study was supported by the European Commission (project COMPARE) and the German Research Foundation (project DR772/12-1). A.L. and V.C.M. were supported by the Centrum of International Migration and Development (Contract No. 81195004). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

- Aggarwal R. The global prevalence of hepatitis E virus infection and susceptibility: a systematic review [cited 2016 May 16]. http://whqlibdoc.who.int/hq/2010/WHO_IVB_10.14_eng.pdf
- Smith DB, Simmonds P; International Committee on Taxonomy of Viruses Hepeviridae Study Group: Jameel S, Emerson SU, Harrison TJ, et al. Consensus proposals for classification of the family Hepeviridae. J Gen Virol. 2014;95(Pt 10):2223–32.
- Kadim IT, Mahgoub O, Faye B, Farouk MM, editors. Camel meat and meat products. Wallingford (UK): CAB International; 2013.
Understanding EN 81-77: 2013 & prEN 81-77: 2017. Lifts Subject to Seismic Conditions

Rory S. Smith, University of Northampton

This paper was presented at The 7th Symposium on Lift & Escalator Technology (CIBSE Lifts Group, The University of Northampton and LEIA) (2017). This web version © Peters Research Ltd 2018

Keywords: Lifts, Standards, Earthquakes

Abstract. Around the Pacific Rim, the potential for earthquakes to severely damage lifts has been recognized for decades. EN 81-77: 2013, enacted in November 2013, now brings seismic standards to the rest of the world. This standard addresses the seismic risks to lifts and establishes standards for mitigation. European Standard prEN 81-77: 2017 makes changes to the existing standard. These standards are explained in practical terms; examples of seismic damage, particularly in California, are explored; and the reduction in damage that has occurred in subsequent earthquakes as a result of new codes enacted after each major earthquake is examined.

EN 81-77: 2013 states the aims of the standard, describes the hazards to lifts caused by seismic accelerations, defines protective measures that can be taken to deal with the hazards, and quantifies anticipated accelerations at a specific site. Before reviewing EN 81-77: 2013, it is important to have an overview of how earthquakes are caused and where they can be expected to occur.

The outermost shell of the earth is made up of tectonic plates. These individual plates are in contact with each other and in constant motion relative to each other. There are two types of plates: continental and oceanic. The major landmasses are part of the continental plates, while most of the ocean's floor is made up of oceanic plates. Oceanic plates are thinner and denser than continental plates.

The motion between plates is not smooth. The plates are often bound together at a location known as an asperity and remain bound until there is sufficient stress to cause a sudden movement of the plates. These sudden movements are known as earthquakes.

When an oceanic plate and a continental plate converge, the dense oceanic plate is driven under the less dense continental plate. This action is known as subduction. Friction between the plates causes intense heating, which melts the rock; the molten rock, being less dense than the continental rock, rises through it and causes volcanoes to appear on land. Subduction is not smooth, and so the movement causes earthquakes. The Pacific Northwest of the USA, home of the Mt. St. Helens volcano, is an example of this type of convergence.

When two oceanic plates converge, the underwater convergence also involves subduction of the denser of the two plates. Earthquakes are always a part of the subduction process. The friction between the two oceanic plates also melts the rock and creates volcanoes that rise above the surface of the sea in the form of island arcs. The Japanese Islands are one such island arc.

When two continental plates converge, mountain ranges are formed. The convergence of the Asiatic plate and the India plate has formed the Himalayan Mountains and resulted in earthquakes.

When two plates slide past each other, they form a transform boundary. The two plates grind against each other, creating earthquakes. The San Andreas Fault in California is an example of a transform boundary.
3 The Aim of EN 81-77: 2013

The Introduction of EN 81-77: 2013 states the following aims:

- Avoid loss of life and reduce the extent of injuries
- Avoid people trapped in the lift
- Avoid environmental problems related to oil leakage
- Reduce the number of lifts out of service

4 Hazards Identified in EN 81-77: 2013

The hazards to lifts identified in EN 81-77: 2013 that can be caused by seismic activity include the following:

- Ropes, belts, chains, and traveling cables can get snagged by components in the hoistway.
- Car frames can become separated from the rails. This can result in collisions with building elements and other lift components.
- Counterweight frames leaving the rails. This has resulted in counterweights colliding with cabs, potentially at rated speed.
- Counterweight filler weights leaving the frame. Falling filler weights can cause damage. A reduction in counterweight mass can result in a loss of traction.
- Hydraulic pipe rupture. Unchecked, pipe rupture can cause a car to fall. Hydraulic fluids, depending on their type, can pollute.
- Hydraulic tank rupture. Hydraulic fluids, in addition to having a potential to pollute, can constitute a fire hazard.
- Guide rail deflections that let the car or counterweight leave their guides. This creates a collision hazard.
- Machinery anchorage. Poorly anchored machinery has been known to "dance" across the machine room floor during earthquakes. Such machinery will not be able to function after an earthquake.
- Landing switches and final limit switches that need to be able to withstand the accelerations associated with an earthquake and be guarded against impact by ropes.
- Loss of electrical power. An automatic rescue device can avoid entrapments.
- Car doors can come open, permitting passengers to be injured. Car door locks can prevent this condition.

5 Design Acceleration

The accelerations that act on the lift as a result of an earthquake are directly related to the damage that the earthquake can produce. The greater the acceleration, the greater the effort required to mitigate the risk. For this reason, the standard requires that the potential accelerations at the installation site be calculated. EN 81-77: 2013 provides the following two formulas, drawn from EN 1998-1: 2004, for calculating the design acceleration:

αd = (g · Sα · γα) / qα

where:
αd represents the design acceleration in meters per second squared.
g represents the gravitational acceleration, 9.81 m/s².
Sα represents a non-dimensional seismic coefficient.
γα represents an importance factor for a building. The minimum value is 1 but could be higher for buildings such as hospitals.
qα represents the behavior factor of an element and has a value of 2.

Sα = α · S · [3 · (1 + z/H) / (1 + (1 − Tα/T1)²) − 0.5]

where:
α represents the ratio ag/g, where ag represents the ground acceleration expected for a particular location with Type A soil.
S represents the soil factor given in Table 1.
Tα represents the fundamental vibration period, expressed in seconds, of the non-structural element. Tα = 0 if the lift does not affect the fundamental vibration period of the building.
T1 represents the fundamental vibration period, expressed in seconds, of the building.
z represents the height, in meters, of the non-structural element above the application level of the seismic action.
H represents the building height in meters above the application level of the seismic action.

The values for local ground accelerations are given in documents published by the individual countries.
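A short Python sketch of this calculation follows, under the assumption that the two formulas are those of EN 1998-1, 4.3.5.2 as reconstructed above (that standard also floors Sα at α · S). The example inputs are hypothetical; for design work, consult the standards themselves.

```python
# Sketch of the EN 81-77 design-acceleration calculation, assuming the
# reconstructed EN 1998-1 (4.3.5.2) formulas above. Illustrative only.

G = 9.81  # gravitational acceleration, m/s^2

def seismic_coefficient(a_g, S, z, H, T_a=0.0, T_1=1.0):
    """Non-dimensional seismic coefficient S_alpha for a non-structural
    element; floored at alpha * S as required by EN 1998-1."""
    alpha = a_g / G  # ratio of design ground acceleration to g
    bracket = 3 * (1 + z / H) / (1 + (1 - T_a / T_1) ** 2) - 0.5
    return max(alpha * S, alpha * S * bracket)

def design_acceleration(a_g, S, gamma_a, z, H, T_a=0.0, T_1=1.0, q_a=2.0):
    """Design acceleration alpha_d in m/s^2 per EN 81-77: 2013."""
    return G * seismic_coefficient(a_g, S, z, H, T_a, T_1) * gamma_a / q_a

# Example: hospital (gamma = 1.4, Table 2) on dense soil (S = 1.25, Table 1),
# lift machinery at the top of the building (z = H), a_g = 2.0 m/s^2:
print(design_acceleration(a_g=2.0, S=1.25, gamma_a=1.4, z=30.0, H=30.0))
# -> about 4.4 m/s^2
```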
The values of S for the various ground types are shown in Table 1 below, taken from EN 1998-1: 2004:

Table 1 Ground types and S values

|Ground type|Description|S|
|A|Rock or rock-like formation|1.0|
|B|Very dense sand, gravel, or clay|1.2|
|C|Dense sand, gravel, or clay|1.25|
|D|Loose to medium cohesionless soil or soft to firm cohesive soil|1.35|
|E|Surface alluvial layer of C or D, 5 to 20 meters thick, over a much stiffer material|1.4|

The values of γα are shown in Table 2, also taken from EN 1998-1: 2004:

Table 2 Building types and importance values

|Importance Class|Building Type|γα|
|I|Buildings of minor importance for public safety (agricultural buildings, etc.)|0.8|
|II|Ordinary buildings, not belonging in other categories|1.0|
|III|Buildings whose seismic resistance is of importance in view of the consequences associated with a collapse (schools, assembly halls, cultural institutions, etc.)|1.2|
|IV|Buildings whose integrity during earthquakes is of vital importance for civil protection (hospitals, fire stations, power plants, etc.)|1.4|

The design acceleration formulae can be simplified. In the first formula, qα is a constant with a value of 2. Therefore the formula can be restated as follows:

αd = (g · Sα · γα) / 2

The bracketed term in the formula for Sα commonly takes a value of 2.5, for example when Tα is zero and the element is mounted at the top of the building (z = H). Therefore, Sα is as follows:

Sα = 2.5 · α · S

Combining the simplified formulae into one formula yields the following:

αd = 1.25 · ag · S · γα

It is now possible to understand how the various parameters affect the design acceleration as follows: The value of α increases in proportion to the magnitude of accelerations at a particular site. The value of S increases as the soil becomes less solid. The value of γα increases with the importance of the building.

6 Seismic Lift Categories

EN 81-77: 2013 (Table A.1) establishes seismic lift categories based on the design acceleration: the higher the design acceleration, the higher the category. Table 3 defines those categories.

Table 3 Seismic lift categories (EN 81-77: 2013, Table A.1)

|Seismic lift category|Comment|
|0|The requirements of EN 81-20 are adequate. No further actions required|
|1|Minor corrective actions required|
|2|Medium corrective actions required|
|3|Substantial corrective actions required|

7 Corrective Measures for Categories 0, 1, 2, and 3

The corrective measures for each category include the corrective measures for categories of a lower number. For example, Category 3 must address the corrective measures for Categories 0, 1, 2, and 3, while Category 2 must only comply with the requirements for 0, 1, and 2. Likewise, Category 1 must only comply with Category 0 and 1 requirements. The corrective measures must be based on the design accelerations for the particular category. In most cases, design documents must be prepared.

7.1 Category 0

The lift must only comply with EN 81-20.
7.2 Category 1

The following preventive measures are required:

- Prevention of snag points
- Machinery spaces and hoistway located on the same side of an expansion joint
- Counterweight retaining devices
- Protection of traction sheaves
- Compensating chain guides
- Precautions against environmental damage
- Guide rail system
- Electrical installations in the hoistway
- Information for use

7.3 Category 2

The following additional preventive measures are required for Category 2:

- Car retaining devices
- Car door locking devices
- Special car behavior in case of power failure

7.4 Category 3

Category 3 requires the following measures in addition to those required for Categories 1 and 2:

- Seismic detection system
- Seismic operation mode
- Primary wave detection system (optional)

8 The California Experience

Three major earthquakes in California caused serious lift damage. The earthquakes are known as the 1971 San Fernando Earthquake, the 1989 Loma Prieta Earthquake, and the 1994 Northridge Earthquake. Each earthquake revealed areas that needed protection and caused California code changes to be adopted.

8.1 1971 San Fernando Earthquake

At the time this magnitude 6.6 earthquake struck on February 9, 1971, the lift code in place did not address seismic events. In all, 674 counterweights came out of their rails. As a result of the lift damage, the lift code was modified in 1975 and required modifications to virtually all existing lifts.

8.2 1989 Loma Prieta Earthquake

This magnitude 6.9 earthquake struck 70 km south of the San Francisco Bay area on October 17, 1989. The electrical grid serving the San Francisco Bay area failed near the earthquake's epicenter. As a result, most lifts were stopped due to lack of power before the seismic waves reached them. Only 98 counterweights came out of their guides. However, there were 6 car and counterweight collisions that occurred when power was restored. Although these lifts had seismic switches installed, the switches were not battery backed up. When power was restored, the cars were free to run with counterweights out of their guides. Codes were changed requiring battery back-up or latching contacts on seismic switches.

8.3 1994 Northridge Earthquake

Although the earthquake that struck on January 17, 1994, was only magnitude 6.7, sensors recorded the highest ground accelerations ever observed in North America. In all, 688 counterweights left their guides [7, 8]. As a result of the experience gained by analyzing the damage caused by this earthquake, seismic codes were established not just in California, but in all of the USA.

9 The Updated Standard

European Standard prEN 81-77: 2017 makes changes to the existing standard. The changes are summarized as follows:

- EN 81-20: 2014 and EN 81-50: 2014 are referenced in lieu of EN 81-1 and its revisions.
- Additional references to EN 81-72, Safety rules for the construction and installation of lifts – Particular application for passenger and goods passenger lifts – Part 72: Firefighter lifts.
- Reference is made to EN 81-73, Safety rules for the construction and installation of lifts – Particular application for passenger and goods passenger lifts – Part 73: Behavior of lifts in the event of fire.
- Section 5, Protective Measures, has some modifications.
- Section 6, Verification of safety requirements and/or protective measures, has changes in Subsections 6.1 and 6.2.
- Annex C, Primary Wave detection, has changes in trigger level and frequency response.
- Annex D, Proof of guide rails, uses additional parameters in the calculations.

Earthquakes are a serious problem in seismically active areas. Addressing the problem carries serious costs, but the consequences of failing to address it are more serious still. EN 81-77: 2013 addresses this problem. The standard at first seems complex; however, in its simplified form one can assess its impact on most projects.

- European Standard EN 81-77: 2013. Safety rules for construction and installation of lifts – Particular applications for passenger and goods passenger lifts – Part 77: Lifts subject to seismic conditions.
- Spooner, A. Geology for Dummies. Wiley, Hoboken (2011).
- Earthquake Glossary. Available from: https://earthquake.usgs.gov/learn/glossary/?alpha=ALL Last accessed: 20 June, 2017
- European Standard EN 1998-1: 2004. Eurocode 8. Design of structures for earthquake resistance. General rules, seismic actions and rules for buildings.
- 1971 San Fernando Earthquake. Available from: https://en.wikipedia.org/wiki/1971_San_Fernando_earthquake Last accessed: 22 June, 2017
- 1989 Loma Prieta Earthquake. Available from: https://en.wikipedia.org/wiki/1989_Loma_Prieta_earthquake Last accessed: 22 June, 2017
- 1994 Northridge Earthquake. Available from: https://en.wikipedia.org/wiki/1994_Northridge_earthquake Last accessed: 22 June, 2017
- FEMA (Federal Emergency Management Agency). Reducing the Risks of Non-structural Earthquake Damage. Washington, DC (1994).
- European Standard prEN 81-77: 2017. Safety rules for construction and installation of lifts – Particular applications for passenger and goods passenger lifts – Part 77: Lifts subject to seismic conditions.
- European Standard EN 81-72. Safety rules for the construction and installation of lifts – Particular application for passenger and goods passenger lifts – Part 72: Firefighter lifts.
- European Standard EN 81-73. Safety rules for the construction and installation of lifts – Particular application for passenger and goods passenger lifts – Part 73: Behavior of lifts in the event of fire.

Rory Smith is Visiting Professor in Lift Technology at the University of Northampton. He has over 48 years of lift industry experience, during which he held positions in sales, research and development, manufacturing, installation, service, and modernization. His areas of special interest are machine learning, traffic analysis, dispatching algorithms, and ride quality. Numerous patents have been awarded for his work.
Oliver Twist: Against all Odds

Oliver Twist has received its fair amount of critical attention for over a century, yet it is always a lovely text to explore at any level. Charles Dickens was perhaps the greatest fictional chronicler of Victorian England this country has ever seen, and in his time his writings taught and inspired many: from the upper and middle classes of Victorian society, right down to those fortunate enough to be able to read in the poverty-stricken areas of London town. He chose the London poor as the subject matter for much of his work, and he chronicled the huge, poverty-stricken, urban growth of the city in a way that prepared his readers to engage with these problems. The interest of Oliver Twist lay not so much in its revelation of Dickens' literary genius as in its revelation of the moral, personal, and political instincts which made up his character and which supported that genius. He offered not only great stories, but also feasible solutions to London's malady by making its problems understandable in human terms. His works appealed to many because of their subtle and satirical onslaught against the establishment and the many failed attempts at social reform and improvement, such as the poor law reforms of 1834.

London Town is the focus of this introduction, and Oliver Twist is a novel that shows London in all its foggy nastiness. London is the true centre of power in Oliver Twist, as it is in many of Dickens' novels. In fact, London is the centre of overwhelming influence and power in many Victorian novels, not only those written by Dickens. Here we see a description of London taken from the novel The Secret Agent (1907) by Joseph Conrad, more or less a contemporary of Charles Dickens:

The vision of an enormous town presented itself, of a monstrous town more populous than some continents and in its man-made might as if indifferent to heaven's frowns and smiles; a cruel devourer of the world's light. There was room enough there to place any story…darkness enough to bury five millions of lives

The Secret Agent was published in 1907, and yet even then London remained to many as a 'monstrous town more populous than some continents'. London is the catalyst of many characters' downfalls in Oliver Twist and, in many cases, their eventual destruction. There are two ways to read this fact. We can look upon the text as a grimly painted horror story about Victorian London and its unforgiving nature, or we can see it as a story about how the character Oliver lives within this horror and makes the best of a bad situation; it would be naïve to think Dickens enough of a pessimist this early in his career to choose the former. Despite some of the dreadful goings-on in his fiction, in Oliver Twist Dickens refuses to be brought down to the same level as the city. This could be wholly attributed to his youth and early work, but Dickens was an optimist and a social reformist. He was also a realist, and knew too well that he could not deliver his social critique without showing London in all its terrible glory. This is the reason he opens our eyes to the awful workhouse and Oliver's arduous journey through the unsuccessful governing systems in place. Oliver only really succeeds in the end through sheer resilience and good luck. Due to the huge population increase in London, the vast numbers of poor were compelled to seek work in conditions of great hardship, and this very often led to no work at all.
As a result of this, thousands turned to crime, especially theft. Dickens was really quite accurate in Oliver Twist: in the 1860s, police believed that around fifteen to twenty thousand children were being trained in the art of thieving, just as Dickens described those boys in Fagin's lair practicing their pickpocketing skills. In all of his work Dickens describes for his readers the societies and parts of London they never knew existed, thus succeeding in his first objective: to open the upper and middle classes' eyes to what was going on around them. The upper and middle classes did not often see the goings-on in the East End of London, in the Whitechapel district for example. The differences between the West and East Ends of Victorian London were so very many that they were almost like different countries. In Oliver Twist Dickens tells of the dreadful, dark places along the river bank where Fagin dwells and trains his boys as pickpockets; where:

…the old smoke-stained storehouses on either side rose heavy and dull from the dense mass of roofs and gables, and frowned sternly upon water too black to reflect even their lumbering shapes.

Here Dickens captures the very essence of Victorian overcrowding, filth and overall grimness in just one sentence. Before arriving in London, Oliver Twist has already resided in many of the filthiest places a young man could barely imagine, and endured hardship at the mercy of the workhouse; yet when he enters the city, even under darkness, Dickens describes Oliver's first impressions as: 'A dirtier or more wretched place he had never seen. The street was very narrow and muddy and the air was impregnated with filthy odours.'

London is an overpopulated place where Dickens forces all of his characters to come together, and it is this city that all of them must come to terms with in order to survive in one way or another. In order to show the city's disjointing effects upon its inhabitants, he makes London the only thing all characters have in common. The very detailed social life of the criminals and reprobates of London's East End consists of a bewildering assortment of eccentrics, grotesques, amiable idiots and moral villains, who do not even share a language in common; each has his or her own mode of speech. As well as showing the city's ability to stifle communication, Dickens tells us how powerful this foul city is over its inhabitants, sadly by presenting us with characters who cannot avoid the tide of social injustice that grows in such a place and cannot even gain support from each other to counter its effect; only by packing together as wolves do in a cold forest can the destitute exist. There is no other way a vagrant boy or any other poor unfortunate will be able to survive in London town, particularly one such as Oliver Twist, unless he takes to thieving with the likes of characters such as Fagin and Bill Sikes. As Jack Dawkins (The Artful Dodger) explains to Oliver:

"Fagin will make something of you, though, or you'll be the first he ever had that turned out unprofitable. You'd better begin at once; for you'll come to the trade long before you think of it; and you're only losing time, Oliver."

When Oliver Twist was first published, many criticised Dickens for introducing criminals and prostitutes to the general readership, but to him this was of paramount importance; how else would he show this city for what it really was?
In his preface to the library edition of Oliver Twist in 1858 he writes:

I saw no reason, when I wrote this book, why the very dregs of life, so long as their speech didn't offend the ear, should not serve the purpose of a moral, at least as well as its froth and cream

Nancy is a prime example of one of Dickens' 'dregs of life' and an important character for showing the overpopulated city defeating yet another individual. Nancy cannot avoid the overwhelming grasp of London; it is all she has known. All she has accomplished is becoming one of Fagin's youths, now grown into the dreadful criminal society she occupies; she laments at length, quite hysterically, that she was put to work by Fagin when she was much younger than Oliver. In one such outburst she tells Fagin that '…the cold, wet, dirty streets are my home; and you're the wretch that drove me to them long ago…'

Nancy is manipulated and kept in her place 'by dint of alternate threats, promises, and bribes…' from Bill Sikes and Fagin, and also because she knows nothing else. However, she is the character on which much depends in the novel, and, knowing her own chances are few, she realises her own dashed hopes and aspirations through young Oliver; this is the primary reason she helps him to escape the dreadful city and be reunited with those who can love and care for him. Rose pleads with Nancy to stay with her and not to return to her life among the 'most noisome of the stews and dens of London.' But Nancy cannot leave Bill Sikes, or any of the others for that matter, for they are all she knows. Her pain at betraying Bill fully reveals itself in her 'confession' to Rose, who cannot understand Nancy's compulsion to return to Sikes any more than Nancy can herself:

"I don't know what it is," answered the girl; "I only know that it is so, and not with me alone, but with hundreds of others as bad and wretched as myself. I must go back. Whether it is God's wrath for the wrong I have done, I do not know; but I am drawn back to him through every suffering and ill usage; and I should be, I believe, if I knew that I was to die by his hand at last."

This grimly prophetic statement is very telling of Nancy's position in the story. In many respects she is like Sikes' dog: she cannot get away because there is perhaps a mutual dependence between her and Sikes that pulls the two of them together, primarily because, like Bullseye, she knows no better; it is fitting, then, that in the end both of them should feature in Sikes' demise.

Dickens' London was a place where the sufferings of human beings needed remedy, and this is one of the most fundamental reasons Dickens had for writing his fiction. Perhaps better presented in a novel the size of Bleak House, although very well done in Oliver Twist, Dickens creates these dreadful little scenes of London's underbelly, and then multiplies the effect by showing them through the eyes of many characters from all walks of life. In Oliver Twist the effects of the city are originally seen from young Oliver's perspective, but the novel continues to expand and turn into a quintessential Dickens work where all characters are equally represented yet none of their plights are any more resolved. In early work like Oliver Twist this is Dickens' way of displaying the severity of the situation in London town; in his later work it becomes a way of demonstrating the futility of any quick remedies; there are no rich, bumbling old gentlemen to rescue the innocent in his later novels.
In Bleak House, for example, there are so many characters that none of them are focused upon for long, and every one of them fails in their attempts to succeed; some (not just the antagonists) even die along the way, like Jo the crossing sweeper, who resided occasionally at Tom-all-Alone's in the city's slums. Jo is similar in many respects to Oliver, only much less lucky:

Jo lived – that is to say, Jo has not yet died – in a ruinous place, known to the like of him by the name of Tom-all-Alone's. It is a black, dilapidated street, avoided by all decent people; where the crazy houses were seized upon, when their decay was far advanced, by some bold vagrants, who, after establishing their own possession, took to letting them out in lodgings. Now, these tumbling tenements contain, by night, a swarm of misery. As on the ruined human wretch, vermin parasites appear, so these ruined shelters have bred a crowd of foul existence that crawls in and out of gaps in walls and boards; and coils itself to sleep, in maggot numbers, where the rain drips in; and comes and goes, fetching and carrying fever, and sowing more evil in its every footprint than Lord Coodle, and Sir Thomas Doodle, and The Duke of Foodle, and all the fine gentlemen in office, down to Zoodle, shall set right in five hundred years – though born expressly to do it.

The character Oliver Twist, however, represents Dickens' will to survive against all odds. Indeed, Oliver Twist was only Dickens' second novel, and he was still very much a young man on a mission to save the world, or at least Victorian society. Dickens believed that by making the general reading public care about Oliver Twist, he could make them care about a good many boys in his protagonist's predicament. Despite all the characters falling around him, Oliver is the one beacon of light that Dickens and the reader cling to. We cannot help but have the feeling, while we read the novel, that if Oliver can make it through then so can London, so can the populace; it can improve, and life will become better. After what we have read so far, a survivor is a reason for optimism in Dickens' work. As I mentioned earlier, as much as Dickens was a sentimentalist, he was also a hardened realist, and knew only too well that despite the laughs throughout his novels, he had to tell it like it was. Dickens allows us this beacon of optimism throughout, and it is somewhat comforting to read the novel, because even if by some small miracle a reader should come along who had never heard of Oliver Twist, they would be quite assured throughout that a protagonist such as this boy could not fail, primarily because of all the hardships he overcomes. Despite all of his harsh treatment, we can see right from the start that Oliver has a certain charm about him that makes many characters sympathise with him, even though they are on occasion subsequently left unnerved by their own feelings. We have seen how, without trying, Oliver wins over Nancy, who is a hard and most destitute character. It is almost as though there is some kind of hope about the boy that other characters see clearly, and they don't really want to see him come to any harm, despite their selfish endeavours and London's attempts at breaking them. It is a part of Oliver's luck that he always seems to find the right person at the right time. When he is first to be removed from the workhouse to go with the dreadful Mr.
Gamfield, the old magistrate happens by chance to look at Oliver before he signs the release papers:

…his gaze encountered the pale and terrified face of Oliver Twist: who despite all the admonitory looks and pinches from Bumble, was regarding the repulsive countenance of his future master, with a mingled expression of horror and fear, too palpable to be mistaken, even by a half-blind magistrate.

Of course this stroke of luck saves him from that particular distasteful future, because the magistrate refuses 'to sanction these indentures.' Luck is predominant in Oliver's fate throughout the novel, and for Dickens it was important to emphasise this, for what he tells his readers is that without miraculous luck a boy such as Oliver Twist would have died very early on in the adventure. Throughout his dreadful journey Oliver encounters one or two people along the way who help him, like the 'good-hearted turnpikeman, and a benevolent old lady…' who gave him food and shelter on his way to London, and Dickens makes sure to add that had it not been for these kindly souls:

Oliver's troubles would have been shortened by the very same process which had put an end to his mother's; in other words, he would most assuredly have fallen dead upon the king's highway.

Even after Oliver's terrible first encounters in London, he is not entirely forsaken. Dickens keeps the reader on the edge of their seat by providing the boy a reprieve at the very last minute in many cases. When he is chased across the streets for a theft against Mr. Brownlow (actually committed by the Artful Dodger), and even under the scrutiny of Mr. Fang, a dreadful magistrate if ever there was one, he is found not guilty by virtue of the bookstall owner, who arrives on the scene at the very last moment to clear his name:

"This," said the man: "I saw three boys: two others and the prisoner here: loitering on the opposite side of the way, when this gentleman was reading. The robbery was committed by another boy. I saw it done; and I saw that this boy was perfectly amazed and stupefied by it."

These brief moments of optimism allow us to recall one of the most significant, and often quoted, passages of the novel: the one in which Oliver Twist asks for more. These days a social realist with a bee in his bonnet about reform, or perhaps an older Dickens, describing the goings-on of the workhouse, would have made all the children pathetic and crushed, without one of them daring to speak at all. The children would not expect anything, they would not hope for anything, and they would not get anything. The realist would have done his job well here and made his point. But Oliver Twist is not pitied because he is pessimistic and pathetic; he is pitied because he is an optimist, and this is the true tragedy of the story. The other boys expect nothing, but Oliver does. He expects the world to be kind to him with all the innocence a child has, and he firmly believes that he is living in a just world. Like Dickens himself, Oliver asks for more knowing well the wrongs that have befallen him, but he also asks for more because he innocently believes he deserves more. Dickens' achievement in Oliver Twist is fundamental to its consistent appeal and numerous adaptations to stage, television and film. The novel captured the dreadful position of the poor in the Victorian era and in so doing detailed universal truths and evils which continue to dwell within mankind today.
As twenty-first-century readers, many of us cannot imagine the possibility of such social iniquity as is realised in Oliver Twist. Oliver does exactly as Dickens intended; he represents the small voice of the innocent against the power and invidious nature of the city and its uncaring masses. For future generations it is important that this lonely child Oliver Twist be remembered as a character that stands as a reminder to all of how cruel society can become against its innocents, and, perhaps more importantly, stands as a symbol of profound hope against those evils.

by: Ian Fenwick
fertile territory, more especially in France and in the North of Italy, has been devastated; while in France also industrial centres and mining areas, of vital importance to her industries, have been completely destroyed and will not be able to resume production for years to come. In Belgium, similarly, the national industries suffered greatly during the period of occupation. Germany, on the other hand, has its industrial establishments intact, but is paralysed by lack of capital and credit, and by the disorganisation bred of defeat; while in the case of Austria these conditions have led to the complete breakdown of her economic life. Russia has passed through all the throes of civil strife and is still the victim of confusion and anarchy. Each country suffers from a different difficulty, but each contributes its share to the common deficit.

In agriculture, Russia, which before the war was the most important granary of Europe, and of whose products Europe is in such need, either has not been producing at all or has not been able to exchange with her neighbours such products as she has. Roumania, which before the war exported annually over six million quarters of wheat, has altered her system of land tenure, and is now ceasing to produce more than suffices for the immediate needs of her own population; indeed, on the 1st December last, it was stated that only 530,000 hectares had been sown as compared with an annual average before the war of 1,000,000 hectares, though some improvement has since taken place. Other countries again, such as France and Germany, which were largely self-supporting, are unable at the present moment, owing to the devastation of the land, the destruction of buildings and machinery, or the lack of capital and fertilisers, to produce more than a fraction of what is required for their own needs, and have been increasingly driven to compete in the world market for the limited supplies now available.

Again, in regard to coal, production in every country has been decreased, the approximate figures of output in metric tons for 1913 and 1919 respectively being as follows:-

United Kingdom: 292,000,000 (1913); 234,000,000 (1919)
France (including Saar and Lorraine): 44,000,000 (1913); 22,000,000 (1919)
Germany* (excluding Saar and Lorraine)
United States of America

*Exclusive of lignite.

Although detailed statistics are not available, such information as we have goes to show that the output of factories and manufacturing industries throughout the world is below the standard which prevailed before the war, and far below the demands now made upon them. The net result of under-production arising from these various causes is an acute shortage of the essential supplies on which the economic life of Europe depends.

This situation requires to be met with the same courage as was displayed on both sides during the war. The energy which was then thrown into the production of foodstuffs must be revived and redoubled in order to restore the situation. It must be made a point of honour with the tillers of the soil in every country to show that peace can extract from nature more than war. Europe must take measures to provide herself more largely with the food she requires in order that she may resume her full activities, and much can be effected if the necessary preparations are made without delay.
In regard to industry generally, each Government must take steps to impress on its people that the limitation of production directly assists the upward movement of prices, and that it is by increasing production that they can best help to solve the problem. Every proposal which may assist in this direction deserves the closest attention. Governments must co-operate in the reconstruction of the common economic life of Europe, which is vitally interrelated, by facilitating the regular interchange of their products and by avoiding arbitrary obstruction of the natural flow of European trade. The Powers represented at the Conference reaffirm their determination to collaborate with a view to the execution of these aims.

4. Increase of Consumption.-Meanwhile, instead of restricting the standard of consumption in view of this shortage of supplies, there is a general tendency to make heavier and heavier demands for the limited quantities of goods that are available. The increase of consumption takes the form of an intensified demand for commodities of every description. The demand not only for foodstuffs, but for clothing, boots and other manufactured articles, is in most countries far in excess of the supply, while luxuries of every kind command a readier sale than at almost any previous period. The general extravagance now observable throughout the world is a phenomenon which has almost invariably followed in the footsteps of every great human catastrophe. It is well known to those who have lived in a district which has suffered from earthquake, and the history of the great plagues of Europe amply illustrates it; and the results have always been economically disastrous for the populations affected. It must be one of the first aims of each government to take such measures as appear appropriate to the circumstances of its own people to bring home to every citizen the fact that for the time being, until supplies are increased, it is by diminished consumption and unselfish denial that they are best able to help themselves and the world, and that extravagance increases the national difficulties and perils.

5. Credit and Currency Inflation.-The immense increase in the spending power of Europe which is reflected in this extravagance has been brought about by credit and currency inflation during the war. Broadly speaking, the general level of prices may be said to be the expression of the ratio between spending power on the one hand and the volume of purchasable goods and services on the other. In order to prosecute the war, particularly in European countries, every Government found it necessary to increase the amount of currency in circulation. Unable to raise sufficient funds by taxation and by loans from real savings, they were compelled to resort to borrowing from the banks and the use of the printing press. Additional spending power was thus placed in the hands of the public at a time when the volume of purchasable goods was being reduced. For example, the note circulation has grown approximately as follows:-

In the United Kingdom, from £30,000,000 in 1913 to nearly £450,000,000 at the end of 1919 (about £120,000,000 of the latter figure takes the place of gold coins in circulation in 1913);
In France, from £230,000,000 in 1913 to £1,500,000,000 in 1919;
In Italy, from £110,000,000 in 1913 to £700,000,000 in 1919;
In Belgium, from £40,000,000 in 1913 to £200,000,000
in 1920;

while the war debts (which are closely connected with inflation) amount, in the case of the United Kingdom, to over £7,000,000,000; in France, to £6,750,000,000; in Italy, to £2,750,000,000; in Germany (apart from liabilities for reparation), to £9,500,000,000; in the United States, to £5,000,000,000. The total war debt of the world is approximately £40,000,000,000.*

*The national currencies have in each case been converted into sterling at approximately par of exchange.

Throughout Europe prices at present are with few exceptions paper prices. But gold prices have also risen; that is to say, gold has a lower purchasing power than it had before the war. This is the inevitable result of the many economies which have been effected in the use of gold for monetary purposes and, on the other hand, of the dispersal of stocks of gold previously held in Europe and their excessive accumulation in other countries. Thus, in the United States, although the gold standard remains effective, prices have advanced 120 per cent over the pre-war level. As the purchasing power of gold is ultimately the measure of price, it must be obvious that this change is itself responsible for much of the increase in the price of commodities, when expressed in terms of the currencies of all countries. A considerable part of the rise in prices in Europe is due to this depreciation of gold, but there is an additional depreciation due to excessive issues of paper currency. The continual expansion of paper issues, with its necessary consequence of continuously depreciating exchange, prevents the grant of the commercial credits required by the situation, and thus fatally hampers the resumption of international commerce. It is essential to the recovery of Europe that the manufacture of additional paper money and Government credits should be brought to an end, and this must be effected as soon as the war expenditure has been terminated.

6. Profiteering.-Excessive profit making, commonly known as profiteering, has resulted from the scarcity of goods. Deflation and a check upon the continuous rise of prices will do much in itself to end the conditions that make profiteering possible. But it is essential, in order to obtain the co-operation of all classes in the increase of production, that each government should take such steps as are appropriate to the circumstances of its own people to assure and guarantee to the workers that the burdens that they are called upon by their efforts to remedy are not aggravated by those who would exploit the economic difficulties of Europe for their own personal ends.

7. Restriction of Government Expenditure.-Demobilization has been effected by the Powers represented at the conference at a far speedier rate than could have been anticipated, but heavy abnormal expenditure resulting from the war still requires to be met (particularly in connection with the restoration of the devastated areas). Such charges must be regarded as part of the war burden, but in order to stop the process of inflation and to start the process of deflation, the necessary measures must be initiated by every country to balance recurrent government expenditure with national income and to begin at the earliest possible moment the reduction of the floating debts.
The best remedy of all is that debts should be reduced out of revenue, but in so far as this is not possible, floating debts should be consolidated by means of long term loans raised out of the savings of the people, and it is out of the savings of the people that any fresh capital expenditure must be provided. The governments here represented have undertaken the consideration of the measures required for this purpose.

8. Restriction of Private Expenditure.-But private economy is not less urgent than economy in government expenditure. It is only by means of frugal living on the part of all classes of the nation that the capital can be saved which is urgently required for the repair of war damage, and for restoring efficiency to the equipment of industry, upon which future production depends. It is of the utmost importance that it should be brought home to every citizen in each country that just as in the war their private savings made available for the government goods and services urgently needed for the prosecution of hostilities, so in the period of reconstruction, economy by individuals will reduce the cost of essential articles both for themselves and for their fellows and will set free capital for the reconstruction of their country and the restoration of the machinery of industry throughout the world.

9. Collapse of Exchanges.-Commercial intercourse, on the resumption of which the recovery of the world depends, is governed by the foreign exchanges, and most of the foreign exchanges have been to a greater or less extent disorganized during the past year. The discount of European currencies on New York approximately stands as follows:

Pound sterling: 30 per cent
Franc (Paris): 64 per cent
Franc (Brussels): 62 per cent
Lire: 72 per cent
Mark: 96 per cent

The state of the exchanges does not reflect the true financial situation of the countries concerned, provided their industrial life can be resumed. It is in part the result of depreciation in the purchasing power of the several currencies, but in part it results from the failure of exports. Many countries are temporarily dependent on the importation of food, raw material, and other necessaries, and are not in a position to export nearly sufficient to furnish the requisite means of payment. The result has been severe competition for the very limited supply of bills of exchange, which has forced down the rate of exchange beyond the point which properly represents the purchasing power of currencies in the buying and selling countries. In the degree in which rates of exchange are so forced down, the prices of imports are forced up and the prices of food and raw material increased. The ultimate cure is to raise exports to the requisite amount, and this should be impressed on the trading communities affected, but it is not im-
<urn:uuid:caf74dbc-b0ca-4e53-8132-25e379042cdb>
CC-MAIN-2020-16
https://www.lipad.ca/full/1920/05/04/1/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371858664.82/warc/CC-MAIN-20200409122719-20200409153219-00234.warc.gz
en
0.966342
2,547
2.625
3
Did you know that recommended vaccinations are among the most effective ways to safeguard your children's health? Under the expert care of our pediatricians, Dr. Dolly Ubhrani and Dr. Jag Ambwani, here at Longwood Pediatrics in Lake Mary, FL, we offer vaccines to ensure that your child will be protected from various preventable diseases. With this in mind, below are some vaccine facts that highlight how crucial they are for you and your family.

Vaccines are Extremely Effective and Safe

Did you know that vaccines are only made available to the public after they have undergone a thorough and extensive review by doctors, scientists, and relevant healthcare professionals? While vaccines, in some cases, can cause minor fevers and discomfort at the injection site, this is nothing compared to the trauma and pain of the various diseases that these vaccines are capable of preventing. More importantly, the preventive benefits that vaccines provide outweigh the potential side effects.

Vaccinations Could Save The Life of Your Child

Certain diseases that once took the lives of thousands of people have now been successfully eliminated (while others are close to being eliminated) mainly because of effective and safe vaccines. An excellent example of this is polio. What was once considered the most dangerous and frightening disease in the U.S. has now largely been eliminated thanks to the polio vaccine.

Vaccines Can Protect Others Who Can't Receive Vaccines

It's unfortunate that some kids can still contract diseases preventable by vaccines. Some kids might be too young, while others might not be ideal candidates for immunizations because of severe allergies or compromised immune systems due to conditions like leukemia. To help ensure the safety of these kids, it's vital that your kids get timely and complete vaccines from your pediatrician in Lake Mary, FL. This way, besides safeguarding your family, you also help prevent the possibility of spreading these diseases.

Vaccinations also Safeguard Future Generations

Immunizations have minimized, and even eliminated (in certain cases), plenty of diseases, such as smallpox, that severely disabled or killed many people a couple of generations past. Children these days don't need to get the smallpox vaccine since the disease has been completely eradicated. This means that continuing to vaccinate children will help ensure that future generations won't have to worry about certain diseases because they won't exist anymore.

Need More Information on Vaccines?

Arrange an appointment with your pediatricians, Dr. Dolly Ubhrani and Dr. Jag Ambwani, here at Longwood Pediatrics in Lake Mary, FL, by dialing (407) 644-9970 today.

How your pediatricians can help if your child has asthma: asthma can be more dramatic in children because a child's airways are smaller, so symptoms can be more severe. Your pediatrician can help your child feel better. The pediatricians at Longwood Pediatrics in Longwood, Florida, Dr. Jag Ambwani and Dr. Dolly Ubhrani, offer comprehensive care for children, including asthma treatment. They proudly serve residents of Lake Mary, Altamonte Springs, Sanford, and Maitland. It's common for children to have allergies, which can bring on asthma symptoms.
Exposure to common allergens like plant or flower pollens, pet dander, dust, and mold can bring on a variety of asthma signs and symptoms like these:

- Wheezing and severe coughing
- Breathing problems
- Chronic bronchitis
- A tightness in the chest

Infants and toddlers can't tell you what's wrong, so be on the lookout for signs and symptoms like these:

- Slow feeding
- Breathing difficulties while feeding
- Less running or playing due to breathing problems
- Tiring quickly or coughing while active
- Colds and other viruses lasting longer than usual

Asthma can also result in a life-threatening situation, so seek out emergency treatment immediately if your child is:

- Gasping for air or having severe trouble breathing
- Not speaking due to difficulty breathing

If you believe your child may be suffering from asthma, your pediatrician may recommend:

- Testing lung function with spirometry, peak flow measurement, and nitric oxide
- Using rescue inhalers for short-term relief
- Taking asthma medications for long-term relief
- Treating underlying allergies with sublingual immunotherapy and other medications

You should also monitor your child's environment and keep your child away from cigarette or cigar smoke and allergy triggers. Your pediatrician can help you and your child feel better about dealing with asthma. To find out more about asthma causes, symptoms, and treatments, call the pediatricians at Longwood Pediatrics at (407) 644-9970 in Longwood, Florida, serving residents of Lake Mary, Altamonte Springs, Sanford, and Maitland, Florida. Call today.

Does your child have allergies? An allergy involves an overreaction by the immune system, often to harmless substances such as pollen. The more you know about allergies—the causes, symptoms, and treatment options—the more prepared you will be to help your child. Led by Dr. Jag Ambwani, Dr. Dolly Ubhrani, and Dr. Nancy Zuker, Longwood Pediatrics in Longwood, FL, serving the Lake Mary, Altamonte Springs, Sanford, and Maitland areas, offers a range of treatments for allergies.

1. Allergy Medication: Our Lake Mary, FL, pediatrician may recommend medication to treat your child's symptoms. Several types of drugs are used to treat allergies. Allergy medications are available as eyedrops, liquids, pills, inhalers, and nasal sprays. They help ease and treat symptoms like a runny nose and congestion.

2. Allergy Injections: Immunotherapy, or allergy shots, is used to treat allergies. Immunotherapy involves injecting small and increasing amounts of allergens at regular intervals. Allergy shots help your child's body get used to the substance that causes an allergic reaction. They decrease sensitivity to allergens and often lead to lasting relief of allergy symptoms even after treatment is stopped.

3. Oral Immunotherapy: We may recommend oral immunotherapy to prevent allergy symptoms. Oral immunotherapy involves the regular administration of small amounts of an allergen by mouth. It gets your child's body used to an allergen so it doesn't cause a reaction. Oral immunotherapy works the same way that allergy injections do, except it doesn't require shots.

4. Nasal Rinses: Nasal rinses clean mucus from the nose and can ease symptoms there. Using a nasal rinse once daily can help clean bacteria from your child's nasal passages, curb postnasal drip, and wash out allergens that have been inhaled. After the patient's symptoms are gone, using a nasal rinse a few times a week should be enough to keep them symptom-free.

5. Emergency Epinephrine:
Epinephrine injection is used to treat life-threatening allergic reactions caused by medicines, food, insect stings or bites, latex, and other causes. Epinephrine works to reverse the life-threatening symptoms. Children with severe allergies may need to carry an emergency epinephrine auto-injector with them at all times. An epinephrine injection will reduce their symptoms until they get emergency medical treatment.

For some children with allergies, life can be difficult. Don't delay: call Longwood Pediatrics at (407) 644-9970 to schedule an appointment for your child. We proudly serve the Altamonte Springs, Sanford, Maitland, and Lake Mary, FL areas. Our treatments will help relieve your child's symptoms.

It's not always easy to spot whether your child is displaying symptoms of ADHD. While only a qualified pediatrician will be able to diagnose your child with attention deficit hyperactivity disorder (ADHD), often the only way that a child is diagnosed in the first place is because a parent notices changes in their child's behavior that affect their social, school, and personal lives. Learn more about the common signs and symptoms of childhood ADHD and when to see our Longwood pediatricians, Dr. Jag Ambwani and Dr. Dolly Ubhrani, for an evaluation.

Ask yourself this: Does my child…

- Have trouble staying focused or on task?
- Make careless mistakes?
- Jump from one activity to another, usually without finishing the first activity?
- Lose or misplace items like homework, books, and toys?
- Interrupt conversations and have trouble waiting their turn?
- Fidget and squirm when having to sit still?
- Often seem to be daydreaming or not paying attention when spoken to?
- Often forget to do things even after several reminders?
- Seem easily distracted most of the time?

Now tally up how many "yeses" you had. If you are noticing that your child displays many of these symptoms, then it might be worth talking with our Longwood pediatricians. While many of these symptoms are also normal childhood behaviors, it can be difficult to determine whether this is just something your child will grow out of or whether they may have ADHD. A good rule of thumb: if they are dealing with these symptoms regularly and the symptoms are impacting school performance, relationships, and home life, then it's time to talk to a pediatrician.

Since ADHD stands for attention deficit hyperactivity disorder, many people assume that children must have trouble with concentration as well as display signs of hyperactivity in order to have ADHD, but this isn't always the case. The three most common symptoms of ADHD are inattentiveness, hyperactivity, and impulsivity, and while some children may display all three behaviors, others may only have trouble with inattentiveness or hyperactivity.

Longwood Pediatrics prides itself on offering the very best in pediatric medical care, and is proud to serve those in Longwood, Altamonte Springs, Sanford, Maitland, and Lake Mary, FL. If you are concerned that your child may have ADHD, then give our office a call today.

It can be stressful if your child is not feeling well; however, even if this is the first time your child has gotten sick, it's important not to panic. After all, you have options. We know that it can be difficult to figure out whether to just monitor your child's symptoms to see if they improve, or bring them in to see their pediatrician right away. Here's when you may want to visit our pediatricians, Dr. Jag Ambwani, Dr. Dolly Ubhrani, and Dr.
Nancy Zuker at Longwood Pediatrics in Longwood, FL, serving the Lake Mary, Altamonte Springs, Sanford, and Maitland areas.

When to Seek Pediatric Urgent Care

Our children's doctors make it possible for kids of all ages to get the care they need for non-life-threatening injuries and illnesses without making an unnecessary trip to the hospital. Our clinic offers urgent care services: patients can walk right into our office during business hours without an appointment and get treated. We can treat a host of problems, including:

- Asthma or allergies
- Ear infections
- Abdominal pain
- Tonsillitis or strep throat
- Sprains and other sports injuries
- Respiratory infections
- Urinary tract infections
- Symptoms associated with chronic health issues
- Burns and lacerations
- Abscesses and skin infections

Just as it's important to know when to come into our office immediately for care, it's just as important to understand when you need to go to your nearest emergency room for more advanced and specialized treatment. An emergency room will be able to handle potentially life-threatening and serious conditions. It's time to head to the ER or call 911 if:

- Something is blocking your child's airway
- Your child's fever is accompanied by difficulty breathing, seizures, confusion, or vomiting
- They are wheezing or having trouble breathing
- Your child seems disoriented or lethargic, or doesn't respond to noise or visual stimulation

Longwood Pediatrics in Longwood, FL, serving the Lake Mary, Altamonte Springs, Sanford, and Maitland areas, wants to make sure that your child gets the proper care they need when time is of the essence. Call our office today or just walk in, and one of our pediatricians will provide your child with the care they require as soon as possible.
Image Credit: Klaus Roemer

Bitis nasicornis is a viper species known for its striking coloration and prominent nasal "horns". The head is narrow, flat, triangular, and relatively small compared to the rest of the body. The color pattern consists of a series of 15–18 blue or blue-green, oblong markings, each with a lemon-yellow line down the center. These are enclosed within irregular, black, rhombic blotches. A series of dark crimson triangles run down the flanks, narrowly bordered with green or blue.

- Scientific name: Bitis nasicornis
- Distribution: West-Central Africa
- Average size: 0.85 m (2.8 ft)
- Life span: 20 years or more
- Difficulty: Advanced

Bitis nasicornis is a problematic species that most keepers find difficult to maintain past about 2 years of age. While still an enigmatic species, we have been fortunate to have had some success keeping and repeatedly breeding this species over 3 generations, with longevities approaching 10 years of age. A lot of trial and error, frustrating failure, and building on the experiments and successes of others have allowed this. Caging, temperature ranges, and humidity are the 3 key factors.

Neonates – 12 months of age: At this age, Bitis nasicornis appear bulletproof and easy to maintain, assuming well-hydrated animals are available and selected (usually imports). They tolerate a wide range of temperatures, tolerate stress, and usually feed quite well. Rest assured, this is a short-lived illusion. We maintain snakes in this age range in a standard rack system, 0.25 x 0.45 x 0.15 meters (0.8 x 1.5 x 0.5 feet).

12 months – 2 years of age: This is where these snakes begin to get tricky, and many simply die for unexplained reasons. At this point, we move the snakes into custom rack systems, 0.6 x 1 x 0.2 meters (2 x 3.3 x 0.65 feet). These racks are kept very dark and cool (as outlined below), and we interact with the snakes as little as possible, with as little foot traffic through the room as possible. Low stress is key.

2 years – 5+ years of age: At this point Bitis nasicornis are very delicate snakes and do not tolerate even seemingly minimal stress. They prefer fairly tight, dark, dry enclosures. Each snake is maintained alone in a custom 1.2 x 0.6 x 0.6 meter (4 x 2 x 2 feet) enclosure, with heavy plantings of Pothos vines and dense logs. During breeding season (September to November in Florida, USA), we have a simple system of 0.15 meter (0.5 ft) PVC pipe that can be connected between enclosures, allowing the males' and females' cages to be accessed by each, as they wish. Originally, we believed that male-male combat was required for successful breeding, as is sometimes the case with Gaboons. We no longer subscribe to this theory, though it may be helpful to induce breeding if all else fails. These cages are kept cool and dark, with very minimal interaction: a quick glance twice per day to check status, spot cleaning, and water introduction, as outlined below. Again, we understand these protocols do not immediately appear logical for a rainforest-dwelling species, yet they work, whereas the common "Gaboon thinking" tends to kill these magnificent snakes by 24 months of age.

We do not provide hide boxes for Bitis nasicornis; instead we allow them to burrow into deep substrate, back up against wood and logs, and/or "disappear" within heavy plantings of Pothos vines. In our experience, this is not a species that uses standard hide boxes.
Substrate

Neonates – 12 months of age: We prefer dry cypress mulch at a depth of 0.07–0.1 meters (0.2–0.3 feet).

12 months – 2 years of age: At this point we change to a fast-draining substrate of 0.1–0.15 meters (0.3–0.5 feet) of organic potting soil/sand at a 50/50 ratio, topped by a loose peat moss/perlite plant mix, topped by large-chunk hardwood mulch, which dries very quickly and does not soak up excess water. Pothos vines are free to grow and proliferate within these cages.

2 years – 5+ years of age: Our substrate is as follows, from base to surface: a base layer of chicken-egg-sized rocks, pea gravel, a window-screen barrier, 0.02 meters (0.06 feet) of coarse sand, and 0.1–0.15 meters (0.3–0.5 feet) of organic potting soil/sand at a 50/50 ratio, topped by a loose succulent-type plant mix of 50/50 soil/coarse perlite and large-chunk hardwood mulch.

Lighting – Heating

Neonates – 12 months of age: Lighting consists of a 12/12 day/night cycle, with lighting coming from the indirect lighting of the room. The temperature is regulated for the entire room, with no supplemental heating for individual racks or boxes. The daytime high does not go above 24 °C (75 °F), with a night-time drop to 20.5–21 °C (69–70 °F).

12 months – 2 years of age: Cages are kept very dark; think pre-dawn or dusk light levels. Temperatures become critical at this point, going forward. The entire room is maintained on a 12/12 day/night cycle. Temperatures NEVER exceed 24 °C (75 °F), and dip as low as 20 °C (68 °F) during the night. It is reiterated that cages are kept dry, with no spraying or water introduction other than the large, shallow water bowl in each cage (watering is outlined below). Interaction and stress are kept to a bare minimum.

2 years – 5+ years of age: Lighting and heating are maintained as outlined above.

Watering

Bitis nasicornis originate from regions of frequent, heavy rainfall with high humidity. Logic would dictate that we, the collective keepers of this magnificent species, replicate the same conditions to induce breeding and for long-term maintenance of healthy captives. Decades of trial and error and continued failure frustratingly, and paradoxically, shunt us in another direction that has yet to become scientifically clear. All of the previously outlined parameters being met, we now proceed with a watering protocol well established with European keepers, yet frustratingly rejected by American keepers. It is our hope that this will change based on empirical, observational, and real-world experience and evidence.

A large, shallow water bowl should be maintained at all times, filled to the very top. With imported or newborn babies: soak, soak, soak daily in clean water. This cannot be emphasized enough. They can also be sprayed with commercial greenhouse sprayers to mimic rainfall and will drink for long periods. They can also be gently hooked over to a shallow water dish, with the head gently tipped into the water, where they will drink large amounts of water. After a series of exposures, most will eventually recognize standing water as such and seek it out. When in doubt, soak, spray, or tip into the water. Commonly cited literature suggests using an aquarium air stone to roil the water surface of a water dish to attract the snake and atomize water particles as an alternative. While we don't discount this, as it works very well for many tree viper species, we have never had success with this approach with Bitis nasicornis.
All of this should be performed with the least amount of stress possible, and may require watering outside of the primary enclosure to avoid excessive and potentially fatal wet conditions within the primary caging. For animals 12 months and older, we prefer to use a common greenhouse water sprayer to individually spray a stream of water over the mouth area of each animal. If they are thirsty, they will drink aggressively, and we continue until they are satiated and stop drinking. If they huff and puff and "run" from the water, we assume they are self-hydrating to an adequate level and discontinue efforts for that day. A thirsty Bitis nasicornis will drink no matter the noxious external stimuli.

Humidity

While we fully realize that it goes against logic for a rainforest animal to require anything but very high humidity, you will surely kill your Bitis nasicornis if it is kept at high humidity levels for an extended period of time. We are talking years here. These snakes are very slow to react to inappropriate husbandry (for the most part), and the failures of today may not manifest until 6–12 months down the line. Cage/room humidity of 60–70% is perfectly adequate, with elevations during shed cycles as outlined below.

Feeding

From day one, Bitis nasicornis, like Gaboons, will frequently eat as much food as is offered, as it is with many humans. The long-term ramifications of this practice will always prove catastrophic, much as with humans. Adopting a practice of feeding relatively small meals, never offered until the previous meal has been defecated, will ensure longevity. It's as simple as that, though many lack the patience to feed these snakes appropriately. Weekly soakings in tepid water may be required to induce defecation. Appropriately sized mice, rats, quail, chicks, etc., are usually accepted vigorously by healthy snakes and held firmly after the initial strike, as with Gaboon vipers. Resist the urge to feed large, frequent meals, and half of the battle is won. Bitis nasicornis make efficient use of prey items. Except immediately after a meal, or when gravid, you should not be able to see the skin between the scales while the snake is at rest.

Handling

As mentioned previously, Bitis nasicornis, especially from about 12 months onward, are extremely susceptible to stress and should be handled as gently and as infrequently as possible. While bites are rare, they possess an extremely potent venom, and envenomations are a life-altering emergency not to be taken lightly. Appropriate antivenin should be maintained on site at all times. Young animals are easily handled by standard methods. Large, heavy-bodied adults are best handled with wide hooks that spread the weight and bulk of a heavy animal over the largest surface area possible. While these snakes may appear slow and sedentary, they are capable of astoundingly fast movement and accurate strikes, usually when least expected.

Cleaning

Snakes should always be removed to a secure location prior to cage cleaning. Daily spot cleaning with regular deep cleaning is essential to long-term success with this species. We employ heavy plantings and micro-organisms such as springtails to establish a bioactive substrate that minimizes full cleanings. Anoles, skinks, etc., can be established for symbiotic fly control in large rainforest enclosures with excellent success. During shed cycles, misting with water may be increased, though soggy, wet conditions are to be avoided. Soaking may be required.
Potential Health Problems

Respiratory infections are occasionally an issue with high humidity and poor air circulation, though this is rare. More common are protozoan infections and proliferations, as well as roundworm and lungworm infections. While we will never advocate against routine fecal exams and wormings, we do not routinely worm imported Bitis nasicornis unless symptoms or problems present. This is for a few reasons: as mentioned previously, these snakes tend to come in dehydrated, and many wormers heavily tax the kidneys and/or liver. If the animal is already at a metabolic disadvantage, the addition of wormers may tip the snake over the edge and kill it. We prefer to set the snakes up as outlined above and observe. After a period of establishment, proper temperatures, and solid feeding, with a 6-month quarantine period away from the main collection, we then evaluate the need for worming. While not a popular opinion, we find that prophylactic worming is only necessary in about 25% of cases. If required, standard doses of Panacur (100 mg/kg, 2 doses, 14 days apart) and Flagyl (150 mg/kg, 2 doses, 1 week apart) are employed, per the standard doses and norms easily found in a Google search.

To recap: Bitis nasicornis is a magnificent species that does well in captivity if a small but seemingly incongruent set of parameters is followed over the long term.

It is highly recommended, for every venomous species that you keep or are interested in keeping, to have the bite protocol. Each species has a dedicated bite protocol that includes general information about the species, information about its venom, and the signs and symptoms of envenomation if bitten. It also includes detailed information about first aid (what to do and what not to do) and specific treatment recommendations for medical personnel to provide appropriate care, including the antivenom or antivenoms required for treatment. Finally, it includes a list of people who specialize in snakebites, with their contact information so they can be consulted to assist with care if needed, and a list of all the references used to create the protocol.

The information contained in this care sheet reflects the opinions and methods of the mentioned breeder, based on their expertise and long-established experience.
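As a footnote to the worming doses above: both drugs are dosed per kilogram of body weight, so a few lines of code can sanity-check the per-animal amounts before treatment. This is a minimal sketch with made-up example weights, not veterinary guidance; confirm any real dosing with a qualified reptile veterinarian.

```python
# Minimal sketch: per-animal worming doses from the weight-based
# figures quoted above (Panacur 100 mg/kg, Flagyl 150 mg/kg).
# The snake weights are invented examples for illustration only.

def dose_mg(weight_kg: float, mg_per_kg: float) -> float:
    """Total dose in milligrams for one administration."""
    return weight_kg * mg_per_kg

for name, weight_kg in [("adult female", 1.8), ("adult male", 1.2)]:
    panacur = dose_mg(weight_kg, 100)  # 2 doses, 14 days apart
    flagyl = dose_mg(weight_kg, 150)   # 2 doses, 1 week apart
    print(f"{name} ({weight_kg} kg): "
          f"Panacur {panacur:.0f} mg and Flagyl {flagyl:.0f} mg per dose")
```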
Alpharetta, GA – You've just taken your child to a pediatric dentist, only to find that he or she has a cavity in a baby tooth. A common question we get from parents is, "Since that tooth is just going to fall out anyway, can't you just let the cavity go, instead of going through all the trouble associated with filling it?"

"As I've discussed in other blogs, baby teeth are extremely important and should be cared for just like the permanent teeth," says Alpharetta children's dentist Dr. Nanna Ariaban. "They help your child eat and speak properly, and ensure the permanent teeth are healthy and can erupt properly."

So, is it necessary to have a cavity filled? Dr. Nanna says yes, with certain exceptions. For small cavities, there is a possibility that they can repair themselves through remineralization. If your pediatric dentist catches the cavity when it has just started, your dentist can give you tips to help better care for your child's teeth and hopefully prevent the cavity from growing larger. This will include a dental-friendly diet and proper oral hygiene habits.

Next, if the pediatric dentist determines that the tooth is close to falling out, it may not be necessary to fill it. If your child won't have the tooth for very much longer, the dentist may recommend simply allowing the tooth to fall out without repairing it.

"But it's important that parents follow the advice of a well-trained pediatric dentist, who has the unique knowledge of treating children," says Dr. Nanna. "Our opinions are informed by years of careful study and treatment, and we know what can happen when decay is left untreated."

Related Article: What are Cavities?

A 2014 report from the American Academy of Pediatric Dentistry revealed that by age 5, nearly 60% of children in the U.S. will have experienced some level of tooth decay. The same report stated that when left untreated, this decay can lead to infection, difficulty chewing, and even malnutrition. Other studies show that children who have dental decay often experience difficulty in school due to the pain associated with the problem.

"It wasn't that long ago that children didn't come for their first dental visit until the late toddler or early preschool years, when they had a mouthful of teeth," says Dr. Nanna. "But we saw the rate of childhood dental caries continue to increase, so it is now recommended that children see a pediatric dentist for the first time by their first birthday. This way, we can work with parents to develop good oral hygiene habits, help with dietary tips, and monitor the teeth so we can intervene before an issue becomes a big problem."

But why do parents need to take the time, and spend the money, to fix teeth that will eventually fall out anyway? Tooth decay is a disease, plain and simple. It's caused by specific germs, it can spread easily, and it can last a lifetime. And if the baby teeth have serious decay, the permanent teeth can become damaged even before they erupt.

Do cavities in baby teeth affect permanent teeth?

Some parents reason that treating a cavity in a baby tooth is a waste of money because baby teeth aren't the permanent teeth, but it's important to understand the long-term problems that come from a lack of treatment. Primary (baby) teeth have a different consistency and are thinner than adult teeth, so they require more attention when it comes to brushing, flossing, and oral care. Small cavities can quickly progress into very large ones and can create the need for baby root canals and crowns.
If untreated, this can develop into dental infections, causing pain and swelling. It's also important to help children keep their baby teeth as long as they can, because baby teeth help guide the development and positioning of the adult teeth. When primary teeth must be pulled, or are so infected they fall out, orthodontic problems can result, accentuating the need for braces or other orthodontic procedures. This can lengthen a child's orthodontic treatment, or even force children who wouldn't otherwise have needed braces to get them to correct their smile.

Other Problems that Cavities Can Cause when Untreated

Besides the impact that baby teeth can have on the placement of permanent teeth, there are other consequences of leaving cavities untreated in baby teeth. These include:

Impeded nutrition: Untreated cavities can make eating painful and uncomfortable. Children who experience pain when eating will avoid eating, which starts to affect their overall nutrition. Some healthy foods are naturally hard, including apples, carrots, and celery. If a child is unable to eat these hard foods because of pain in the mouth, they will start to lose essential nutrients that are important for their overall health.

Affected speech: Untreated cavities can also lead to problems with speech. Teeth play a part in speaking and affect the sounds children can make. If cavities cause teeth to rot and fall out, speech impediments can develop that affect the child's ability to speak properly, with lasting effects on their confidence in public speaking and in communicating with others.

Self-confidence and appearance: Discolored or missing teeth can lead to children developing a poor self-image. If they feel that their smile or teeth don't look healthy or white, they may begin to refrain from smiling and showing their teeth. This lack of confidence in their appearance can have a lasting effect on their belief in themselves and their social lives.

Spreading infection to other teeth: It is commonly thought that cavities, unlike other diseases or infections, cannot spread. In fact, cavities can and will spread to other teeth in the mouth if untreated, and they can even be spread to other people. It is important to treat cavities so that they not only don't destroy a tooth, but also don't set off other cavities in the mouth.

How do I spot the early signs of cavities in my child?

It's always harder to spot cavities early when it's not your own mouth. If your child complains about sensitivity or pain in their mouth, it may be caused by the formation of a cavity; sensitivity or pain when eating foods that are cold or hard can also signal cavities. Watch for:

- Toothache: spontaneous pain, or pain that occurs without any apparent cause
- Tooth sensitivity
- Mild to sharp pain when eating or drinking something sweet, hot, or cold
- Visible holes or pits in the teeth
- Brown, black, or white staining on any surface of a tooth
- Pain when biting down

You will want to check the tooth to see if there is plaque build-up or any discoloring. If so, it would be good to schedule an appointment for a cleaning and check-up so that the dentist can treat the problem early.
If your child visits a pediatric dentist every 6 months, the dental staff should be able to catch any forming cavities and stop further growth. The best advice for those who are worried about cavities is to double down on strong oral habits. Make sure your children are brushing and flossing regularly. Monitor their diet and look for anything that may have changed; for example, some kids may be receiving treats at school or from a friendly neighbor that are full of sugar.

What are my options if my child has cavities?

If your child has developed cavities, there are a number of different options. Common options include:

Remineralization: As mentioned above, if the cavity is small enough, the pediatric dentist may elect to allow the tooth to repair itself. This is possible because saliva can actually speed up healing: it contains proteins, enzymes, and compounds that help harden the enamel of teeth and can even "remineralize" it. If a cavity is small enough, a tooth can go through remineralization (self-repair) provided oral hygiene and diet support the saliva. If the diet is poor and filled with sugar and starch, it can and will overpower the saliva, causing the cavity to grow larger. Your pediatric dentist will discuss proper care and the course of action if they feel that remineralization is possible.

White fillings and restoration: If remineralization is not a viable option, a filling will need to be placed. This process is commonly known to both adults and children. Your pediatric dentist will drill away the cavity and decay; once everything has been removed, the tooth will be filled. At Polkadot we use white fillings, which harden in seconds and mimic the color and appearance of natural teeth. For children who are worried, Dr. Nanna will walk through the procedure with the child, going over each tool that she would use.

Crowns: If the tooth decay or cavity is large, a filling will not be able to restore the tooth. A crown creates a protective structure around the afflicted tooth and minimizes the risk of developing a new cavity or further tooth decay on that tooth. To place a crown, the tooth is shaved down, and then the prefabricated crown is fitted onto the tooth and cemented in place with dental cement. It's important to use the right size of crown, as a crown that is too large or too small will affect the child's bite.

Extractions or baby root canals: Once a cavity is so large that it starts to reach the nerve of a tooth, it will cause intense pain. The cavity can cause infection and inflammation of the nerve, and when this occurs, the only option is a baby root canal. This involves removing the infected pulp (nerve and blood vessels); after removal, medication is placed over the affected area. Baby root canals are quite different from adult root canals: they take only a few minutes to complete, and additional visits are not necessary.

Laughing gas (nitrous oxide): Depending on the number of procedures or the comfort level of the child, laughing gas, or nitrous oxide, can be used. It is a safe sedative and extremely effective in helping children reduce anxiety. Laughing gas creates feelings of happiness and relaxation, has a rapid onset, and is non-allergenic. It is given by placing a fitted mask over the nose.
Once the patient starts breathing through the mask, they will begin to feel the nitrous oxide. Laughing gas has no lingering effects and is perfectly safe. The use of this option will be at the recommendation of your pediatric dentist.

Sedation dentistry: In some extreme cases, the number or severity of dental treatments may be high. When it comes to dentistry for children, we have to balance the effectiveness of the procedure with their overall comfort around dentists. Children can sometimes need extensive work, but the stress and discomfort of all that work can create an anxious relationship in which getting them to the dentist becomes extremely difficult. Sedation dentistry is used to balance the need for dental treatment with a healthy long-term relationship with the pediatric dentist. Your pediatric dentist in Roswell will discuss the options for sedation dentistry with you and whether it is a good option for your child. At Polkadot we have two options: conscious oral sedation and general anesthesia. Factors that can lead to the need for sedation dentistry include your child's anxiety level, ability to cooperate, and the required amount of treatment.

"Have you ever heard a dentist tell you not to clean your child's pacifier off in your own mouth?" says Dr. Nanna. "This is because the bacteria that live in your mouth can be introduced into your child's. It's also why we say never to share toothbrushes, or even store toothbrushes where they can touch each other. Introducing new bacteria can lead to decay, especially in a child who has a diet high in sugar and who doesn't have proper oral care habits."

If your child complains of dental pain, or even headaches, schedule a dental appointment right away. This can be a sign of decay, and the issue should be addressed by a dentist before it progresses too far. Whether your dentist recommends filling the cavity or taking a more precautionary wait-and-see approach, you'll know you are making an effort to save your child from more pain and other problems.

Even though baby teeth will eventually fall out, it's important that you care for them just as you do permanent teeth. Baby teeth play an important role in a child's health and well-being. Brush twice a day, floss daily, and maintain regular check-ups with a pediatric dentist, starting around age one.

© 2018 Polkadot Pediatric Dentistry. Authorization to post is granted, with the stipulation that Polkadot Pediatric Dentistry in Alpharetta and Johns Creek, GA, is credited as sole source.
As humans, we rarely think of the negative unless we are very well aware of it having happened. We rarely let such thoughts cross our minds, especially when we are enjoying a meal or having a good time with friends.

That's why, when you enjoy the sandwich you carried to work for lunch or prepared for your kids for school, or buy that cold bottle of beer to take home, you rarely stop to think that someone might have fallen sick while preparing what you are enjoying. But how would you know? Of course, you don't, and that's totally fine. Still, the truth remains that many workers, especially in flour mills, food factories, commercial stores, breweries, and other manufacturing companies, run a considerable risk of occupational respiratory diseases, diseases that can become permanent, leave them in and out of hospital, or have devastating consequences such as death.

You must be asking yourself, "How come?" Don't worry; we'll explain shortly. It all comes down to what is known as grain dust. Grain dust is the dust generated by the harvesting, handling, mixing, drying, processing, or storage of maize, rice, wheat, barley, and oats, and it includes additives and contaminants that can be found in food dust, such as:

- Fungal spores
- Pesticide residue
- Insects and insect debris

Other types of grain that might give rise to dust exposure, with further health risks, include pulses (like soya beans), different types of oilseeds, sorghum, and peas.

What are the Effects of Grain Dust?

Until one becomes aware of the consequences of grain dust, most of us simply don't give it much thought. But grain dust should be a major concern for anyone: it takes hundreds of lives every year in both developed and developing countries, and it's vital to have the correct dust extraction equipment in place. Why? Because grain dusts are strongly linked to widespread occupational lung diseases like pneumoconiosis, coupled with systemic intoxications such as lead poisoning, especially where exposure is high.

Various symptoms result from the inhalation of such dust. They can be as mild as simple nose or eye irritation, or as severe as bronchitis, asthma, and chronic obstructive pulmonary disease. In some cases, the symptoms don't materialize until years later, when a lot of damage has already been done. Even at much lower levels of exposure, workers are prone to dust-related ailments such as cancer, allergic alveolitis, and other non-respiratory illnesses.

As a class of disease that mainly affects the breathing tubes and lungs, respiratory disease is one of the main occupational health risks. In agriculture, for instance, occupational asthma occurs at double the national average rate. Research has shown that grain dust exposure can have a significant impact on workers. Moreover, workers who suffer from occupational respiratory diseases can become disabled and develop permanent breathing problems, which may eventually leave them unable to work. And while the people directly affected may be the workers, the impact reaches much farther than you might imagine, with mammoth cost implications for both the grain industry and employers.
Explosion as an Effect of Grain Dust

Lurking in lots of manufacturing firms is a hazard that has been given the cold shoulder for far too long. It can't be seen, heard, or smelled. However, when it strikes (and it strikes without warning), the results can be devastating, leading to serious injuries and fatalities.

Fine particles suspended in an enclosed space are a major driver of explosions: collectively, the fine particles present a broad surface area that becomes available to oxygen at a much quicker rate. The heat generated by this rapid oxidation is what produces a flame. And grain dust is known to have caused deaths. A good case in point: in February 2008, 13 people were killed and others sustained serious injuries at the Imperial Sugar plant in Port Wentworth, GA. And that isn't the only case. In 2003, six people died and many others were injured when plastic powder that had accumulated over time above a suspended ceiling exploded at West Pharmaceutical Services in Kinston, NC. A more recent example was the Bosley Mill explosion in Cheshire on 17th July 2015. And these are not all the cases; there have been more over time. The biggest tragedy is that such explosions could have been prevented.

Dust never crosses anyone's mind when it comes to causes of explosions; most people only think of gasoline and other highly combustible substances. The hard truth, however, is that most organic materials, including food products, various metals, polymers, and wood (cellulose), can explode, especially when suspended as a dust cloud under favourable conditions.

The explosions are mainly caused by the simultaneous presence of three elements:

- An oxidizer, like oxygen
- Fuel in a gaseous or powder form, e.g., flour, sawdust, or any material that can burn
- A source of ignition of any kind, e.g., an electrostatic spark, open flame, or a friction-caused spark

If the three factors above all exist at the same time, an explosion is possible at any moment. And while rare, the impact of a dust explosion, as mentioned, can result in serious injuries or death. In the US, for example, when the Chemical Safety Board investigated industrial chemical incidents and issued a detailed report, the results showed that over a span of 25 years there had been 281 blasts and fires that had injured 718 people and killed 119 workers; 24 percent of these accidents had taken place in the food industry. In addition, an explosion can destroy equipment at a manufacturing plant, disrupting business operations. Regardless of whether the powder in question is a granular chemical product or wheat flour, the possibility of an explosion can't be ignored and should be identified and avoided at all costs.

There are various ways dust gets generated. Vegetable dust is mainly produced by dry treatment, while mineral dust comes from parent rocks or from the processes used to break those minerals down. The airborne concentration depends largely on the amount of energy put into the process. Air that moves in, out, and around a powdered or granular substance will emit dust. As such, handling techniques for bulk substances, such as filling and emptying bags or transferring materials from one place to another, contribute substantial dust sources. In most cases, attrition gives coarse substances a dust-sized component.
When there's a visible dust cloud in the atmosphere, it means a potentially hazardous particle size is present. However, visibility is not the only thing to look for: the absence of a visible cloud can still mean dust is present at a particle size invisible to the naked eye under normal lighting conditions. Dust needs to be removed from the air, or prevented from entering it; otherwise, air currents may carry it to people far from the source of generation, whose exposure goes entirely unsuspected. And although wet materials may not produce dust immediately, once they dry up the case may be different.

So, in which industry sectors is dust generation more likely to occur?

- food factories and flour mills;
- feed compounders and blenders, animal feed mills;
- breweries, distilleries, and maltings;
- grain and dock terminals;
- commercial storage areas; and
- grain transportation.

Some of the processes that lead to dust creation include:

- grain harvesting and transfer from combines into trailers;
- cleaning, dressing, and drying of the grain;
- movement of grain into a grain store;
- transfer of grain around, in, and out of grain stores to the relevant terminals;
- mixing and milling of dry grain;
- feeding of dry milled grain;
- maintenance of equipment and the plant in general;
- cleaning of vehicles, equipment, buildings, and plant by use of compressed air or by mechanical/manual sweeping; and
- silo cleaning.

Evaluating and Recognizing the Dust Problem

As mentioned earlier, dust can have fatal consequences that, with good measures, can be prevented. The first step is recognizing that there is a dust problem and evaluating its extent. If a company carries out any dust-generating processes, the first step is a LEV (local exhaust ventilation) dust assessment to check whether the people around, both employees and those living nearby, are at risk from dust exposure. This means taking a critical look at the site, identifying whether there is an issue, and determining what can be done to reduce the risk. The assessment should determine which hazardous substances are in use, in what amounts, and what fraction of the dust may become airborne and lead to exposure, among other elements.

First, a walk-through survey of the workplace should be done. This involves checking the controls that are in use and finding out whether they are effective or whether more controls are needed. Secondly, the cleaning and maintenance procedures need to be examined to confirm they are as effective as required and do not themselves result in excessive exposure. The positioning of workers and the organization of their tasks need to be appraised relative to the dust source and its location. Additionally, the level of training and workforce information should be assessed. The company's management should adopt work practices that not only reduce risks but eliminate them where possible. And when dealing with hazardous materials and complicated situations, the advice of skilled professionals should be sought.

What Can Manufacturers Do to Prevent Fatal Cases?

Manufacturers, or a company's management, have a role to play in this. The first step is for them to identify whether the material they deal with is combustible dust. This can be done by testing the materials according to OSHA-prescribed techniques.
Another known way to prevent or mitigate dust explosion hazards at the workplace is to develop proper techniques for handling and processing equipment. In such cases, employees should be involved in the review process, giving them room to challenge and refine the protection measures.

An effective yet simple step that manufacturing plants can take to prevent dust explosions, as well as occupational health risks, is good housekeeping. This simply means avoiding excessive shaking when emptying containers, cleaning up spilt materials immediately, and giving powder or dust solids no room to accumulate in the workplace.

Still, some companies don't have staff trained to run frequent training sessions, check for compliance, and implement solutions in line with the established engineering standards. The majority of these engineering standards have been published by the National Fire Protection Association (NFPA) and are well recognized by the Occupational Safety and Health Administration (OSHA). OSHA has also run a program requiring its field officers to survey facilities that handle and generate combustible dust, in order to mitigate the risks involved. However, it's up to the companies to be proactive. Companies that lack in-house experts who can do this should still hire an expert: start by looking for one with manufacturing experience who actively participates in industry seminars like those of ASTM and NFPA, and who works hand in hand with government-sponsored initiatives that assess reactive material hazards and dust explosions. This goes a long way toward helping manufacturers address potential hazards early, preventing costly citations as well as reducing the risk of business interruption.

While we cannot deny that the aftermath of accumulated dust at any workplace can be costly to both employees' health and the business itself, we can acknowledge that with the right measures in place, many occupational health risks, as well as serious injuries and fatalities, can be prevented. Employers have a responsibility towards their employees, but employees also have to ensure that their own workplace safety comes first. After all, prevention is better than cure. If you suspect that your workplace is a dust generation hazard, it's time to do something about it: get in touch with the Exeon dust extraction specialists.
How does the PCB assembly process work? How long does it take? What can go wrong? How much will I pay for the service? Unfortunately, it is hard to find all this information online. This led me to experiment myself with a real-life use case and share the results in this series of blog posts, so you can learn too.

Arduino with USB Type C 🤯

After being frustrated with the bulky USB Type A cables that are currently used for the Arduino Uno, I decided it was time to upgrade our little friend with a USB Type C connector. Sure, you can find Arduino Uno compatible boards with Micro USB, but wouldn't it be more fun if you didn't have to try at least 3 different times until the cable plugs in?

This board includes 42 SMT components (as well as some through-hole parts, but these are easier to do manually). The USB connector and ATmega16U2 chip are the most challenging to solder manually.

So, instead of getting a stencil, smearing the messy solder paste, manually placing all the components using tweezers, and baking on a hot plate, I decided to use this board as the test case for comparing different PCB assembly houses in China.

Printed Circuit Board Assembly 🏭

Before we dive into the specifics of my experience with ALLPCB, here is a quick review of how the PCB assembly process works:

- You design your PCB. I usually use KiCad for that, sometimes also EasyEDA.
- Once your PCB design is ready, you export it in a format called Gerber.
- You create a Bill of Materials (BOM) that contains a list of all the components that go on your board, their quantities, and supplier part numbers / links. Some assembly houses require a specific BOM file format, but most accept any kind of CSV / Excel file, as long as they can find the relevant information.
- You export a Pick-and-Place file from your PCB design software. This file contains the location and orientation of each component.
- You send all the files (Gerber, BOM, Pick-and-Place) to the factory. They review them, outline any issues, and eventually send you a quote. Most assembly houses can also purchase the components for you, in which case the quote also includes the price of the parts.
- You pay, and the assembly house orders the components and starts making the PCB.
- Once the components have arrived and the PCB is ready, they program the pick-and-place machine. At this stage, they will probably contact you if there is a problem (such as a component that doesn't fit well in the PCB footprint).
- If everything goes well, the PnP machine assembles the boards (you can ask for a photo to make sure all looks fine), and then the through-hole components are soldered.
- The boards can also be programmed with a HEX file you supply, for an extra charge (usually around $0.2-0.5/unit).
- The assembled boards are packed and shipped to you. The PCB factory may also send you a photo of the assembled board for confirmation just before shipping.

As you can see, this is a pretty long and involved process, and there is a lot of room for errors. So how did it go with ALLPCB?
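One class of errors is cheap to catch before you upload anything: mismatches between the BOM and the Pick-and-Place file (a lesson I learned the hard way later in this build). Below is a minimal sketch of a cross-check script; the filenames and column names ("Designator", "Ref") are placeholders, not a fixed KiCad or ALLPCB format, so adjust them to match your own exports.

```python
# Minimal sketch of a pre-upload sanity check: verify that the BOM and
# the pick-and-place file reference the same set of designators.
import csv

def bom_designators(path):
    # Assumes one BOM row per part type, with designators listed
    # comma-separated in a "Designator" column (e.g. "C1, C2, C5").
    refs = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            refs.update(ref.strip() for ref in row["Designator"].split(","))
    return refs

def pos_designators(path):
    # Assumes one row per placed component, with a "Ref" column.
    with open(path, newline="") as f:
        return {row["Ref"].strip() for row in csv.DictReader(f)}

bom = bom_designators("bom.csv")
pos = pos_designators("pick_and_place.csv")
print("In BOM but not placed:", sorted(bom - pos))
print("Placed but not in BOM:", sorted(pos - bom))
```

Two empty lists mean the files agree; anything else is a part that would silently go missing at the factory.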
Online Quote, Read The Fine Print 🔍

ALLPCB, like many of its competitors, has an online quote system, where you can input information about your design and the number of components to assemble and get an instant quote. When you go to their PCB assembly page, you are greeted with a hard-to-miss offer of only $29.99 for PCB assembly, with just a 24-hour turnaround time.

Note, however, that this is just the price for the assembly service; you will have to pay for the PCB separately, as well as for the components. Furthermore, you may be surprised when you get a different number after you input your order parameters. If you read the fine print, you will find that the special price only applies to orders which meet certain criteria.

In my case, the board quantity was fine, but the sum of solder joints was way too big: my design came to 2,029 joints, far above the allowed 400. I changed my order parameters to stay within the limits: I changed the board quantity to 2 and cut down some of the DIP components (I could solder these myself), so that I stayed below the limit. The assembly price went down to $29.99, as promised.

Placing The Order 💸

Once I was happy with the price, I added my order to the cart and was asked to upload the BOM and Pick & Place files. Since they didn't say anything about the format of the files, I just went with the format I had: the BOM was a spreadsheet, while the Pick & Place file was exported directly from KiCad.

But then, when I tried to upload this file to the ALLPCB site, I got an obscure error message in Chinese, and the file didn't go through. I couldn't translate the message, as it disappeared as soon as I clicked somewhere on the page, and so I sent an email to their support (never got a reply) and tried different things until I eventually figured it out: KiCad generates a CSV file, and their interface would only accept xlsx files. Not very user-friendly.
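If you hit the same wall, the CSV-to-xlsx conversion takes only a couple of lines. A minimal sketch, assuming pandas is installed (with openpyxl for xlsx output); the filenames are placeholders for your own exports:

```python
# Minimal sketch: convert a KiCad-exported CSV to the .xlsx format the
# upload form insists on. Requires pandas plus openpyxl for xlsx output.
import pandas as pd

df = pd.read_csv("pick_and_place.csv")
df.to_excel("pick_and_place.xlsx", index=False)
print(df.head())  # quick sanity check that the columns survived
```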
After figuring this out, my order went through. I also ordered the PCB (a white one with a black silk screen), added the Gerber files, and checked out. The total sum was $60.02, which also included the $0.99 PCB price, $27 shipping, and some PayPal fees. Not too bad...

OMG, This Is Too Expensive! 💰

I waited, and about 12 hours later got an email from an ALLPCB sales representative saying:

"I noticed that you placed an assembly order and I am sorry that I forget to quote you the price of Components. I would quote for you ASAP!"

Six hours later I also got the quotation for the components (remember, the PCBA price is only for the labor; components are quoted separately):

"Thank you for your waiting. I am always working on quote for you a favorable price. Please kindly check the components quotation sheet in the attachment. Pls notice: In yellow color, some remarks need your confirmation if it's OK for you."

Their price, however, didn't seem favorable at all. A closer inspection revealed that their salesperson had taken the number of pads as the quantity of each component, so instead of ordering just 1 ATmega16U2 chip per board, she thought we'd need 32 (I don't see how this makes any sense), and quoted accordingly. I added a new Quantity column, also confirmed the alternative parts she suggested, and sent her the new BOM. She quickly confirmed her mistake:

"Sorry for my mistake and I am working on quote you the updated price."

It wasn't until a day later that she sent me a new price, $19.70, which now also made sense. I paid the difference (a total of $22.66, due to the added PayPal fees), and finally my order was confirmed.

Design Problems, We Can't Solder That 🤦♂️

About 2 hours after paying for the parts, I got another email outlining a problem with my design:

"Thanks for your order. There is one EQ need to be confirmed. As for the marked holes, pls confirm if they are DIP or vias. If DIP, the space of these holes are too narrow, pls enlarge the space. If vias, it shoudn't change and we will arrange to fab it soon."

They also sent a picture showing the holes in question. Apparently, the leads of the USB Type-C receptacle I used were too challenging for their manufacturing facilities. I sent them a photo of the connector, saying that they could skip soldering these holes if it was a problem, and I'd solder them myself once the board arrived. However, they responded that they couldn't fabricate a PCB with these holes at all:

"Thanks! We could solder these holes for you. But these DIP holes are too narrow to fabricate it. It will be appreciated if you could enlarge the space between these holes. So that we could make it for you as soon as possible."

Oh well... At least they told me about it before manufacturing the PCBs. I looked for a different connector, one that had no through-hole pins, and updated my board design accordingly.

Just as I emailed them the new Gerber files, I got another email saying that they had a problem with my silk screen (the text that is printed on the board):

"In the meanwhile, could you pls provide the complete silkscreen for us? Also, in your pick and place file, the part X1 ZU4 has been missed, could you send the updated file to us?"

They also attached a screenshot of the silk screen, which looked fine to me. It took another back-and-forth email to figure out what they really needed: they wanted me to include a Factory Fabrication Gerber file, which contains the Designator (part identifier) for each of the components on the board, so they could place it. These files are not exported from KiCad by default, so you have to explicitly ask KiCad to include them when generating Gerbers. You only need to do this if your Silk layer does not include the Designators; in my case, including them in the Silk layer would have made the board not as pretty.

And so, I sent them another iteration of my Gerber files, as well as an updated pick-and-place file with the positions of the through-hole components they requested, X1 and ZU4. KiCad doesn't include through-hole components in the Pick-and-Place file, so I had to define them as SMD components first (alternatively, you can patch the exported position file by hand; see the sketch below).
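For reference, here is what that manual patch can look like. This is only a sketch: the column layout, package names, and coordinates are invented for illustration, and you would copy the real positions off your own board before appending.

```python
# A sketch of the manual alternative: append the missing through-hole
# parts directly to the exported position file instead of redefining
# them as SMD footprints in KiCad. All values below are placeholders.
import csv

EXTRA_PARTS = [
    # Ref,  Val,           Package, PosX (mm), PosY (mm), Rot, Side
    ("X1",  "placeholder", "THT",   55.0,      20.0,      0,   "top"),
    ("ZU4", "placeholder", "THT",   30.0,      35.0,      90,  "top"),
]

with open("pick_and_place.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerows(EXTRA_PARTS)

print(f"Appended {len(EXTRA_PARTS)} through-hole parts")
```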
And finally, 3 days after initially placing my order, they told me they'd start manufacturing it:

"YES, finial worked. Thanks for your new gerber file. We would fab your PCBs with your new file as soon as possible."

I checked their website periodically and tracked the PCB fabrication progress.

Wrong Part Numbers 😲

The email went silent for about a week, and then they told me they had received all the parts, but there were some issues:

"1: ON1--the link you provide, our staff make a mistake and purchase the wrong one, and we would take responsibility to you and re-purchase them then assembled. But the delivery time will be longer. Can you accept it?
2: X2--U262-241N-4BV60 (USB) the compoents you provide has the THT joints but on your PCBs, there were no holes. Pls kindly advise, do we need to repurchase or leave it alone.
3: TX1. RX1. L3. ON1, pls mark the polarity, like draw "+,-" in picture. Or is the pic show right?"

So the "ON" LED was their mistake. What happened with the USB Type C connector? It turns out I forgot to send them the new part number when I switched to the SMT-only connector, so they ordered the original one, with the through-hole pins. Furthermore, the PCB didn't include any indication of the LED polarity (my bad, I should have added it to the Fab layer), so they took some wild guesses. They figured out TX1 and RX1 correctly, though!

Since they had made a mistake with the part number, they offered to repurchase the missing LED and the SMT-only USB C connector without any additional charge, but they advised it'd delay the assembly:

"We would start to repurchase the ON1 and X2. They would both need a long time to repurchase, hope you can understand and we would arrange the assembly once the rest of them arrived!"

Lesson learned: whenever making any last-minute changes to the design, make sure to update and resend both the BOM and the Pick-and-Place file, to ensure no missing parts.

Confirm Some Issues? 😵

After 5 more days, I got an email that said:

"Regarding our assembly item A_H56974S5_3, We need your help to confirm some issues."

Hmm... even more issues? However, when I opened the attached link, I saw a photo of the assembled board, and it looked pretty good. There were some unpopulated parts (the female pin headers, the ICSP connectors, and the Reset button), but I had intentionally left them out to get the cheaper assembly price. In addition, two resistors were missing (below and next to the USB connector). I checked, and it was my bad: I forgot to include them in the BOM (they did appear in the pick-and-place file, though).

The most disappointing part, for me, was that the Wokwi logo was missing. It turns out that I accidentally deleted it when I exported the Gerber files for the last time, when I added the missing Fab layers. Bummer!

Anyway, I was eager to receive and test the boards, so I confirmed the order and waited eagerly for them to arrive.

I Got The Whole Board In My Hands! 😍

When I got the package, I was surprised by the big box that arrived. It contained many small anti-static bags with tiny SMD components. When you order a PCB assembly service, the factory usually purchases more parts than are needed, for several reasons:

- Some parts have a minimum order quantity (MoQ). For instance, I used only two 22Ω resistors in my design, but the minimal amount you can buy is 100. This makes low-volume PCBA orders not very cost-effective.
- The Pick-and-Place machines sometimes miss a part or two, so it's better to have some extra parts rather than having to stop the machine, purchase a missing part, wait for it to arrive, and then restart the assembly process.

Most importantly, the package had the assembled PCBs (along with some extra, unpopulated PCBs). You can spot the IoT Makers Meetup logo on the back of the PCB, and you may have also noticed the two square bars, one at the top and one at the bottom, that are connected by small tabs to the edges of the board. PCB fab houses usually print the order number on the board. To avoid this, I usually design my boards in the form of a panel, so the order number gets printed on these bars. I then remove these panel edges using flat pliers, and file the edges of the board after breaking away the tabs.

After adding the missing resistors, plugging in an ATmega328P chip, and soldering the reset button, ICSP headers, and female headers (the ones where the shields go), I connected it to my computer and ran a quick test.
It worked like a charm!

One annoying thing, though: the assembled PCB had a sticky feeling. It probably has to do with the assembly process, as the unpopulated PCBs aren't sticky at all. Cleaning with alcohol helped, though.

Summarizing The Experience

The whole manufacturing process for the boards took 14 days (plus 3 days for shipping) and cost a total of $87.68 (including shipping, PayPal fees, etc.).

Overall, the process felt too manual and error-prone to me. Communication was a challenge, and I had to figure out some of the bits myself (like which layers to export out of KiCad, how to format the BOM and PnP files, and the issue with the USB Type C connector). I believe that without all the back-and-forth emails and the factory ordering the wrong part numbers, it would only take a week or so to produce these boards.

I learnt several lessons that I'm going to apply to my future orders:

- include a Quantity column in the BOM;
- export the Fab layers from KiCad;
- check the stock status for all parts in my BOM just before ordering (LCSC has a BOM tool that can do that);
- double-check that the BOM and Pick-and-Place files both contain the same components; and
- look at my exported Gerbers twice when I make any last-minute revisions.

Despite all the challenges, at the end of the day I got a working product in a reasonable time, and the team at ALLPCB did a pretty good job of spotting potential issues and working out viable solutions together with me, so I can conclude this experiment as a success.

I hope that this write-up will help others who are going on a similar journey. I'm now checking other PCB assembly fab houses, and plan to publish my findings over the next few weeks. Thank you for joining me here and reading about my adventures. If you like this kind of content, I invite you to join our mailing list and be the first to know when we publish new content:
tally (noun): a score in baseball made by a runner touching all four bases safely. "the Yankees scored 3 runs in the bottom of the 9th"; "their first tally came in the 3rd inning". Related words: count, running game, rivulet, political campaign, rill, numeration, reckoning, ladder, ravel, discharge, campaign, run, footrace, foot race, test, enumeration, running play, streak, trial, outpouring, runnel, counting, streamlet, running

tally (noun): a bill for an amount due.

count, counting, numeration, enumeration, reckoning, tally (noun): the act of counting; reciting numbers in ascending order. "the counting continued for several hours"

match, fit, correspond, check, jibe, gibe, tally, agree (verb): be compatible, similar or consistent; coincide in their characteristics. "The two stories don't agree in many details"; "The handwriting checks with the signature on the check"; "The suspect's fingerprints don't match those on the gun". Related words: equate, ensure, hold in, hit, couple, jibe, accord, consort, tick off, fit, gibe, concur, chequer, hold, moderate, learn, checker, total, mark off, add, add up, twin, go, cope with, condition, assure, scoff, tick, tot, harmonize, suss out, check off, check over, add together, accommodate, match, check up on, control, look into, ascertain, mate, mark, insure, change course, flout, tote up, go over, train, check, hold back, rival, harmonise, equalize, meet, pair, suit, tot up, jib, find out, see, discipline, equalise, sum up, equal, correspond, score, curb, concord, summate, gybe, arrest, stand for, check into, play off, contain, oppose, outfit, conform to, represent, determine, sum, stop, retard, crack, turn back, watch, chalk up, equip, see to it, break, touch, fit in, check out, barrack, rack up, jeer, fit out, agree, pit, delay

score, hit, tally, rack up (verb): gain points in a game. "The home team scored many times"; "He hit a home run"; "He hit .300 in the past season". Related words: tot, shoot, whip, dispatch, tote up, mark, sum up, hit, arrive at, add, attain, seduce, match, nock, fit, add up, strike, sum, score, make, polish off, grade, agree, collide with, gain, tot up, mop up, off, impinge on, chalk up, run into, stumble, worst, slay, gibe, murder, rack up, reach, correspond, come to, add together, jibe, total, summate, pip, bump off, remove, check

tally, chalk up (verb): keep score, as in games.

total, tot, tot up, sum, sum up, summate, tote up, add, add together, tally, add up (verb): determine the sum of. "Add all the people in this town to those of the neighboring town". Related words: supply, tot, resume, tote up, bring, impart, hit, rack up, add, match, fit, sum, add up, make sense, summarize, agree, tot up, sum up, chalk up, summarise, bestow, number, score, gibe, append, correspond, come, add together, jibe, contribute, total, summate, lend, amount, check

Complete Dictionary of Synonyms and Antonyms: to occupy the same place in space, the same point or period in time, or the same relative position. "the events coincided"

How to use tally in a sentence?

We have moved from a total of seven gold in London in 2012 to 13 gold this year, overall we've had a better year, the gold medal tally is a tremendous improvement and the signs are good for Rio.

This is a tally sheet exercise, with the Thai side trying to figure how many cases and how many officials the U.S.
government wants prosecuted in order to give an upgrade. Thailand's efforts this year have been a lot of talk, shuffling of assignments at government inter-agency committees, but little substantive action to effectively end trafficking.

We have sat together with bank officials to tally and estimate the total cash seized.

The incident took place on April 12, some four months after a terror attack left 14 dead in nearby San Bernardino, and just over three months before a French priest was killed by ISIS-linked jihadists in his church. The events, whether far or near, underscore a grim new reality for pastors such as Father Josiah Trenham: instead of offering sanctuary from evil, churches could in fact be attractive targets for terror. Many churches are now hiring self-defense instructors for classes or security guards that include off-duty police, said Ryan Mauro, a professor of Homeland Security at Liberty University and national security analyst for the Clarion Project. "If you are an Islamist terrorist seeking self-glory, executing a priest will bring you more attention than executing an average civilian." While no lethal terror attacks have occurred inside a U.S. church to date, experts like Jeff King, president of International Christian Concern, note that the threat tally is growing: "I'm pretty sure there will be attacks in the future; until [radical Islam is defeated], we can expect Christians, including in the West, to rationally tighten security measures and try to protect themselves from attack." In February, Khial Abu-Rayyan, 21, of Dearborn Heights, Mich., was arrested after he told an undercover FBI agent that he was preparing to shoot up a major church near his home on behalf of ISIS. A month earlier, the Rev. Roger Spradlin of Valley Baptist Church – one of the biggest congregations in Bakersfield, Calif. – told attendees that they had received a threat written in Arabic. Undercover officers were then placed during worship services.

We want to make sure that those remaining are actually people that need to be arrested, we find it prudent to pause for a while, tally the count and make sure those who surrendered are those on the list.
Translations for tally
From our Multilingual Translation Dictionary:
- German: übereinstimmen, Kerbholz, Anzahl, Zähler, zählen, einspielen, einstimmen, korrespondieren, Kerbstock, abhaken, markieren, aufeinander abstimmen, den Spielstand beihalten
- Spanish: cuenta, contar, marca
- Finnish: pirkka, pulkka, vastinkappale, merkata, sovittaa yhteen, tilastomerkintä, vastata, tukkimiehen kirjanpito, pari, laskea, pykälä, viiva, pitää lukua
- French: somme, compte, marque
- Swedish: stämma överens, räkna
- Telugu: ల సంఖ్య
- Turkish: sayım, bağdaşma, seri işâreti
Israel’s Post-Secularist Nation-State Law Upholds Democracy
By: Harel Kopelman

Lost in the conversation over Israel’s nation-state law, a bill that seeks to define Israel primarily as a Jewish state, ahead of its democratic designation, is an oft-overlooked question: what is democracy?

Democracy’s earliest, most notable iteration comes from Athens. It was the fruit of a class struggle against tight-fisted aristocratic rulers who imposed draconian laws (such as the death penalty for loitering) on the general populace. The premier archon at the time was the poet-statesman Solon, who expanded the definition of the ruling aristocracy to include eligible Athenian citizens, giving working-class citizens representation in the legislature and government. Indeed, the very term “democracy” derives from the original Greek demokratia, a compound of demos (the people) and kratia (power, or rule); it came to be used in contradistinction to the prior form of government, aristokratia, or “rule of the elite.”

The foundation of the State of Israel was, according to this definition, an unabashedly democratic endeavor. The horror of the Holocaust was still uncomfortably fresh for the Western world, which had stood by while Jews were systematically deprived of basic rights and their lives. Intensive lobbying by Zionist organizations and British sympathy towards the Zionist cause therefore led the United Nations to give its blessing to the self-determination of the Jewish people in the majority-Jewish portion of Mandatory Palestine in 1947. Democracy’s central appeal, a citizenry’s right to self-determination, was central to the Partition Plan that the United Nations approved: two states for two peoples, each of which would run its own affairs. Jews needed a state of their own that would represent their Jewish interests, and the Arabs living in Palestine at the time would gain their own such state as well.

Jumping forward nearly seventy years, Israel seems to be thriving as a democratic society like any other, with its citizens participating in national elections. Prime Minister Benjamin Netanyahu wants to enshrine in law Israel’s status as a Jewish state for its majority-Jewish citizenry, strengthening Jews’ right to self-determination and, by extension, the country’s democracy. So why are many people, ranging from the New York Times’ editorial board and the Obama administration to members of the opposition in the Knesset, fighting against the proposed legislation, calling it a narrowing of Israel’s democracy? The answer lies in two uncomfortable words: ‘secularism’ and ‘equality.’

Many seem to confuse secularism with atheism, a negation of religion entirely, but secularism in public policy discourse usually refers to the separation of church and state, so that religion is not used to form public policy. Only a society that is secular and ensures that religion does not inform its public policy and legislation has, historically, been able to ensure the religious freedom of all its constituents. True secularism, in this view, entails fighting for freedom from religion so that religious freedom can flourish. The political left in both Israel and the United States objects to the nation-state bill because it fears the bill would circumvent Israel’s tradition of governance granting religious and ethnic minorities equal rights before the law.
Israel’s 1992 Basic Law on Human Dignity and Freedom (the Basic Laws are Israel’s equivalent of a constitution) already provides the judiciary with grounds to establish equality as a basic tenet of Israeli law, stating that none may harm “the life, body or dignity of a person inasmuch as they are a person,” and Supreme Court justices have long interpreted “dignity” as guaranteeing equal civil rights for all citizens irrespective of religion, sex, or age. Right-wing politicians fear that adding “equality” to the new bill would give the judiciary new power over state-religious institutions which adhere to Halakha or Sharia, by providing it with stronger footing to strike down decisions of religious courts, which do not take into account modern, secular principles of equality.

The only government document that officially employs the term “equality” is Israel’s Declaration of Independence. It reads: “The State of Israel shall… realize absolute equality in social and political rights for all its citizens without regard to religion, race or sex… [and] will ensure freedom of religion, conscience, language, education and culture; will protect the holy places of all religions; and will be loyal to the principles of the United Nations Charter.” Netanyahu’s bill states that its purpose is to “define the identity of the State of Israel as the nation state of the Jewish people, and anchoring the values of the State of Israel as a Jewish and democratic state, in the spirit of the principles contained in the Declaration of the Establishment of the State of Israel.”

The kerfuffle over the nation-state bill should therefore be seen as a tug-of-war between those who want to expand the Supreme Court’s judicial powers to knock down what it perceives as unethical or illegal, and Israel’s elected legislature. It is a struggle that has long been underway. The Israeli Supreme Court has repeatedly placed the human rights of minorities, illegal immigrants, and even terrorists above the interests of the State, as expressed by the Knesset. Just this past September, the Supreme Court knocked down the Knesset’s “infiltrator law,” which allowed African migrants to be held for up to a year, ruling that it violated the migrants’ basic human rights. In 2005 it prohibited the “neighbor procedure”, the use of Palestinian civilians by the IDF to engage, even voluntarily, with terrorists holed up in a house or building in order to peacefully neutralize a dangerous situation. Drafting Israel’s Declaration of Independence into the country’s constitution, which the left would like to do, would grant the Supreme Court even more power to rule according to secular ideals of equality, a proposition anathema to right-wing lawmakers who fear it would leave religion with little to no viability in public discourse.

But Israel has largely been successful in its integration of religious elements into public life. The country’s three main religious constituents (Jews, Muslims, and Christians) all receive funding for religious institutions. Ethnic and religious minorities living in Israel proper are eligible for citizenship, public healthcare, pensions, voting rights, and civil rights protections. Arab Knesset members regularly advocate for Arab interests, and the government recognizes diverse religious national holidays such as Ethiopians’ Sigd celebration. Religion is actively supported by the government, but it is largely non-coercive.
Secular positions on some civil rights issues, detailed below, are unpopular in what is a largely socially conservative Israeli society that looks to religion to define the parameters of daily life. Democratic solutions to these dilemmas lie not in the direction of the secular moral compass as determined by the Supreme Court, but in the majority opinion and its legislative manifestations.

Such democratic solutions do not deny that countries need a detached, secular judiciary that limits the legislature. In the United States, it was the Supreme Court which took the first decisive steps in confronting and dismantling institutionalized racism against blacks in educational settings with its decision to desegregate schools in Brown v. Board of Education, a decision that a majority of Americans vehemently opposed at the time but which has since garnered sacrosanct legal and social support. Such far-reaching, top-down civil rights decisions have not been necessary in Israel, but calls against state-favored religion and laws benefitting Jews specifically have come from Israelis who wish to live in a post-religion Israel. These Israelis wish to live in a society where their marriages, divorces, and personal statuses are not determined by the Chief Rabbinate, where the government does not impose its definition of Shabbat onto other people and businesses, and where the state does not offer Jews special incentives over Christians and Muslims to come live in Israel, as it currently does.

The nation-state bill should thus be seen as a clash between democracy and secularism. Some Israelis want a post-religion, secular society. These secularist Israelis want to live in an Israel where Judaism plays no role in the public sphere, an entirely valid political interest they will continue to pursue. However, to the best of my knowledge, Jews have never imposed Halakha to oppress other peoples, and most Israelis support the state’s Jewish character. Halakha will most likely continue to influence and “inspire,” to borrow from Netanyahu’s current version of the bill, Israeli legislation and culture, unless the left’s version of the bill is passed and the Supreme Court’s right to curtail such efforts is strengthened.

Israel is right now possibly the world’s first post-secular society, one where religion has been placed at a democratic forefront that largely has not trampled on the civil rights of minorities. The freedoms Israel’s minority Christian and Muslim citizens enjoy are unparalleled in the Middle East, and stand on par with the benefits minorities enjoy in the United States and European countries, despite the central role Judaism plays in crafting Israeli legislation and creating government institutions. This is in stark contrast to Israel’s Arab neighbors, where Islamism or perversions of Sharia allow for religiously driven human rights abuses and minority persecutions.

Israel is a post-secular democracy, however, and that means it is run by groups of people with varying interests. The battles between the religious and secular sectors of Israeli society will continue to dominate it for some time. The nation-state bill controversy reflects that battle. Israel has granted an oppressed people the religious and cultural self-determination they so desperately desired throughout the millennia of their dispersal, in addition to the political rights the Holocaust showed they deserved. The passage of the nation-state bill will prove decisive in determining what that self-determination will look like.
Sometimes there will be no answer for years upon years.

On This Day: May 20, 1932 — Amelia Earhart leaves Newfoundland, Canada to attempt a solo flight across the Atlantic Ocean on a Lockheed Vega.

On This Day: May 21, 1937 — Amelia Earhart sets out to become the first woman to fly around the world.

Facts & Information:
Born July 24, 1897 (11:30 p.m.): Amelia Mary Earhart — named after two of her grandmothers (Amelia Otis and Mary Earhart)
Born in Atchison, Kansas
Parents: Amy Otis and Edwin Stanton Earhart
Father: lawyer and judge / also became an alcoholic, which led to a divorce
Sister: Muriel — 2 1/2 years younger
Family Moved: Amelia attended Central High School in St. Paul, Minnesota, and played on the basketball team (after another move, she attended Hyde Park High School in Chicago, graduating in June 1915).
1916 — Attended Ogontz School in Philadelphia
1919 — Moved to live with her mother and sister in Northampton, Massachusetts.
Spring of 1919 — Amelia enrolled in an all-girls auto repair class
Fall of 1919 — She enrolled as a pre-med student at Columbia University in New York
1920 — Dropped out of Columbia at the urging of her parents, whose life and marriage were going out of control.
1931 — February 7, 1931 — married George Palmer Putnam
1937 — Died? (1897–1937)

Airplane Timeline & Records:
1909 — Saw her first airplane at the Iowa State Fair
1918 — While a Red Cross nurse's aide in Canada, she attended and watched her first flying exhibition
1920 — Took her first airplane flight. Frank Hawks was the pilot. It was at Daugherty Field in Long Beach, California — "As soon as we left the ground, I knew I myself had to fly."
1921 — On January 3, 1921, she began taking her first flying lessons with Neta Snook.
1921 — Engaged to Sam Chapman
1921 — Bought her first airplane — a used yellow "Kinner Airster," which she called "The Canary."
1921 — December 15, 1921 — Amelia took and passed her trials for a National Aeronautic Association license.
1921 — December 17, 1921 — Amelia participated in the Pacific Coast Ladies Derby exhibition at the Sierra Airdrome in Pasadena.
1922 — October 22, 1922 — First woman to fly solo above 14,000 feet — at Rogers Field in Southern California
1929 — Placed 3rd in the first transcontinental All-Women's Air Derby race (the "Powder-Puff Derby")
1932 — First woman to fly across the Atlantic Ocean
1932 — First woman to fly across the United States
1935 — First woman to fly from Hawaii to California
1937 — Second attempt to fly around the world — on a Lockheed Electra
• Pilots: Amelia Earhart and Fred Noonan
• Airplane: a Lockheed Electra
• An equatorial circumnavigation (the Lae–Howland leg alone was about 2,550 miles)
• It included three long over-water legs (Lae, New Guinea–Howland / Howland–Hawaii / Hawaii–San Francisco)
• Disappeared on the longest leg — Lae, New Guinea to Howland Island

On June 1, 1937, she flew a twin-engine Lockheed 10E Electra, accompanied on the flight by navigator Fred Noonan. They flew to Miami, then down to South America, across the Atlantic to Africa, then east to India and Southeast Asia. The pair reached Lae, New Guinea, on June 29. By the time they reached Lae, they had already flown 22,000 miles. They had 7,000 more miles to go before reaching Oakland. Earhart and Noonan departed Lae for tiny Howland Island—their next refueling stop—on July 2. It was the last time Earhart was seen alive. She and Noonan lost radio contact with the U.S. Coast Guard cutter Itasca, anchored off the coast of Howland Island, and disappeared en route.
Theories On What Had Happened:
√ The airplane ran out of gas, crashed, and sank
√ Landed and was stranded on Nikumaroro island in the western Pacific Ocean
√ Veered off course and ended up 350 miles to the southwest on Gardner Island
√ Earhart took an alternate flight plan
√ Earhart and Noonan were captured and executed by the Japanese

"Worry retards reaction and makes clear-cut decisions impossible."

"The most difficult thing is the decision to act. The rest is merely tenacity. The fears are paper tigers. You can do anything you decide to do. You can act to change and control your life and the procedure. The process is its own reward."

"One of my favorite phobias is that girls, especially those whose tastes aren't routine, often don't get a fair break… It has come down through the generations, an inheritance of age-old customs, which produced the corollary that women are bred to timidity."

"Preparation, I have often said, is rightly two-thirds of any venture."

"Anticipation, I suppose, sometimes exceeds realization."

"The more one does and sees and feels, the more one is able to do, and the more genuine may be one's appreciation of fundamental things like home, and love, and understanding companionship."

"Women must try to do things as men have tried. When they fail, their failure must be but a challenge to others."

"No kind action ever stops with itself. One kind action leads to another. Good example is followed. A single act of kindness throws out roots in all directions, and the roots spring up and make new trees. The greatest work that kindness does to others is that it makes them kind themselves."

"I have tried to play for a large stake and if I succeed all will be well. If I don't I shall be happy to pop off in the midst of such an adventure."

Key Illustrative Thoughts:
• missing, but not forgotten
• still no explanation
• action and inaction
• unknown, but the last flight
• still no good answers as to what and why
• can't do / can't be done
• just do it
• the price of courage
• what went wrong?
• looking for answers
• still searching
• a mystery / unsolved mysteries
• women in history / Lindbergh's historical counterpart
• difficulty & danger
• Amelia lost
• ordinary people change the world
• alternate flight plan
• final flight
• in search of missing persons
• pioneers / one life
• "worth the price"
• "a good deed near home"
• "I knew I had to"

Other Information & Links:

"She left a legacy that challenges and inspires. She was not the 'best' pilot, but she had the courage and drive to make another flight or reinvent herself when required, and, with the help of George Putnam, she excelled at public relations. Defying gender roles, she built an unorthodox career in a man's world; earned the Distinguished Flying Cross; was a compelling force for aviation and for women's rights; diversified her career with lectures, writing, and business ventures; and consistently made the Most Admired and Best Dressed women lists – a complex combination that allowed her to have a real and lasting impact. All told, her flying career, feminism, life, and death are subjects of countless books, articles, plays, movies, student essays, ad campaigns, public inquiries, and features like this." — 80years

"Amelia entered college in October 1916, attending the Ogontz School near Philadelphia, while her sister Muriel went to St. Margaret's College in Toronto, Canada. Amelia had originally intended to go to Bryn Mawr, then Vassar, but she filed too late to attend Vassar that year.
While at the Ogontz School, Amelia played hockey, studied French and German, and continued to excel in her classes, though she alienated some of her fellow students when she spoke out strongly against the secret sororities there. She was voted Vice President of her class, Secretary to a local Red Cross Chapter, and Secretary and Treasurer of Christian Endeavor while at Ogontz. Amelia spent the summer of 1917 with friends at Camp Gray near Lake Michigan, then returned to Ogontz for the fall semester. Entering her senior term she began planning for graduation, was elected vice-president of her class, and composed the class motto: “Honor is the foundation of Courage.” In December, while visiting her sister Muriel in Toronto over Christmas, Amelia was very affected by the sight of four wounded soldiers walking on crutches together down the street. . . . . After a brief return to the Ogontz School, Amelia decided not to stay and graduate, but to move to Toronto and join in the war effort. She became a Voluntary Aid Detachment nurse at the Spadina Military Convalescent Hospital in Toronto, caring for wounded World War I soldiers. Many of the patients at the hospital where Amelia worked were British and French pilots, and Amelia and Muriel began spending time at a local airfield watching the pilots in the Royal Flying Corps train. The war ended with the Armistice in November 1918.” — timeline
As the new frontier in medicine, genomics brings with it the hope of allowing researchers to find cures for a number of largely incurable diseases, from cancer to Alzheimer's to infectious diseases and beyond. The challenge now is to map the DNA of as many ethnicities and nationalities as possible. Currently, 81 per cent of the existing genetic data is from Caucasians. One company is trying to bridge the gap by analysing the genomes of different ethnicities in India, with hopes of expanding to the rest of Asia, Latin America and Africa. "You look at India, with 1.3 billion people, 20 per cent of the world's population. A lot of people of Indian ethnicities reside globally, and yet they comprise less than 1 per cent of genomic insights and understanding," says Sumit Jamuar, chairman and CEO of Global Gene Corp. He spoke with LSE Business Review's managing editor, Helena Vieira, on 9 November 2017, during Web Summit in Lisbon.

What exactly does Global Gene Corp do?

One of the biggest problems you have in genomics is that a large percentage of the world's population is not well understood genetically. Currently, 81 per cent of the existing genetic data is from Caucasians. From studies, we know that different populations exhibit different traits. Consequently, you need the underlying insights and data – the data foundation – to be able to use this technology. Science is all about data. The right data give you the insights. That's what we're creating. To give you a sense of the numbers, I estimate that 60 per cent of the world's population is represented by less than 5 per cent of genomic data.

You look at India, with 1.3 billion people, 20 per cent of the world's population. A lot of people of Indian ethnicities reside globally, and yet they comprise less than 1 per cent of genomic insights and understanding. We're very excited by genomics as a technology, because it's one of the truly disruptive technologies with the potential to affect each one of us as an individual. We're fascinated by the possibility that not only can you treat disease, but you can also keep people healthier for longer. Given that we're at Web Summit, think about DNA as a computer code. The code gives us a sense of who human beings are, with the operating instructions included. It allows us to see the risks over our lifetime and understand what we need to do in order to mitigate and manage those risks. If something happens, we know what the right course of treatment is. That is truly the possibility of genomics and precision medicine.

So you're going to be focussing on India?

Our focus is Asia, Latin America and Africa, which is a very large scope. Where did we start? We started in India, because in the beginning you need to set up and operate new systems. Given the large contribution that India has from a population perspective, with 5,000 ethnicities and special characteristics in terms of family structures and other elements, it gives us a tremendous opportunity to create greater understanding. India in some ways was the last frontier of genomics. Building from that, we'll expand into the rest of Asia, Latin America and Africa. Our fundamental aim is: how do we solve the data problem? That goes back to our objective of democratising healthcare through genomics.

What is the product that you're selling?
At the current moment we're looking at collaborations and partnerships with other esteemed parties, which could be governments, research institutions, pharmas or biotechs, to create the genomic understanding about individuals and enhance the insights. We're working towards that. Let's create the data foundation and start generating insights. Of course, you can apply a lot of artificial intelligence and machine learning to create algorithms, as well as products that are relevant to helping the individual stay healthy, looking at propensity, cause and effect, and what can be done to manage certain conditions.

We have to look at the different phases. As an industry, we're still at an early stage. The first human genome project was completed in 2003. It took 13 years and $2.7 billion. Now, 14 years later, sequencing a genome costs about $1,000 and takes just a few hours. That's the transformation of technology. We're 2.7 million times cheaper, and much faster. Over the past 30 years, artificial intelligence has improved its capability 1 million times. We expect that in the next 30 years you will see it improve a million times again. These are logarithmic movements, not arithmetic movements. We're looking at significant improvement in performance. As the technology improves, it just keeps building up. But you need to have the first phase of the foundation in order to be able to build the next phase of things. I always liken it to the same stage where the internet was in 1995, when Amazon was just starting out, Google wasn't yet created, and the iPhone didn't exist. Now, 20-plus years since, smartphones are indispensable. Our behaviour has changed significantly. And that's the sort of generational shift that we expect.

So you're not profitable yet…

No. At this point, at such an early stage, it's like building a dam. Once you build it, electricity costs you nothing to generate, but building a dam takes a substantial amount of investment to make it happen.

Who do you foresee would be your clients, using what you're building?

One use case is pharmaceuticals. They're facing a huge challenge in terms of their pipeline, and huge pressures in terms of the patent cliffs. Over 90 per cent of pharmaceutical drugs that go into trials fail. That's their reality. When you look at the application of genomics and biomarkers, you find that the success rate triples. That's the data. Think about a drug development process in which $1.5 to $2 billion are invested. You want to generate insights that allow you to accelerate drug development. You fail faster, but also succeed faster. Let's try to link that to what happens in genomics. If you look at the global perspective, you find certain isolated populations that give you understanding, because they have special characteristics that allow you to find your next blockbuster. I'll give you an example. There's a gene called PCSK9 that was found in a particular population with low cholesterol levels. That understanding and insight about the mechanism that led to lower cholesterol allowed the creation of blockbuster cholesterol drugs with global applications. While this was found in one small population, it was applied globally. There are similar things with promising results. In Canada, in a homogeneous population, there's a gene called SCN9A. People with a particular mutation in that gene do not feel pain. That is being used to develop analgesics and other things, which is really exciting.
If you look at pharma now, there are enough studies being done by the FDA and others; in most cases, drugs aren't effective. If you look at cancer, unfortunately, 3 out of 4 oncology drugs developed are ineffective. So if you move on from the drug development side, there could be drugs on the market right now where they figured out that Helena has a higher propensity to respond to that drug than Sumit, or vice versa. There's a classic case in non-small cell lung cancer. You look at something like the EGFR mutation. If you have this mutation, the drug is 15 times more effective than if you don't: you have a 76 per cent chance of a positive response. If you don't have that mutation, you have a single-digit chance of a positive response (about 5 per cent). The mutation is more prevalent in East Asian populations. So suddenly you have this opportunity to collect data that could lead to the development of new drugs. Regulators are rightly asking, "show me real-world evidence, show me that this works". If you're able to work collaboratively with partners to demonstrate real-world evidence, that is absolutely phenomenal. What we're passionate about is producing positive consequences for any individual. That's what the promise of precision medicine is.

Sometimes with start-ups, once they grow, their mission becomes secondary to the profit motive. Are you building anything into your project that will make sure that this will really benefit the majority of the population?

The most important thing in my mind is why we set this up. I am exceptionally privileged and fortunate to have achieved what I have. The kinds of people that I have connections with, whether it's Jonathan, Kushagra or Saumya, who are the co-founders, or the folks that we have in our team, whether it's Yaron, or Shalendra, or others: we're blessed with that network. As a company, the most important thing is to ask, "What is the philosophy? Why are we doing what we're doing? What's our mission?" And it's very important to have that mission front and centre, at the heart of it. You start out with the mission, then you create the right business model. We actually tell people, "If you don't believe in this mission, if you're coming for other reasons, don't join us. It's going to be fun, but you may not enjoy it, because it's a journey. And you have to believe in those values." Then you put the right business model in place. Everything else is a consequence, because if you're addressing a real customer need, everything else takes care of itself. My family, my co-founders and the company will continue that dialogue, because it's really important. As we grow, we have a mission for what we use the resources we create for. Our view is to create a science company that does exceptional science. The scientists and the clinicians have to get on with doing this. My role is to make sure that that core happens and remains, and that's why we do what we do.

One of the most profitable business models today is to collect private consumer data, package it and sell it, oftentimes without the consumer knowing that the information is being collected. Is that going to be part of your model?

It's too early to tell what our evolution will look like. Maybe when we have a conversation in the future, I will be in a much better position to reply. What we're very clear on is that it's important to be very consistent, and not only follow ethical standards and guidelines, but make sure that you're a part of it. That's a core principle of the business.
When we're closer to that bridge, those values will allow us to respond to the situation, and we'll have a much more real conversation in the future.

One last question, regarding the subject of your talk here: do you expect humans to live to be 125 any time soon?

I don't know about me, but I would definitely say that about future generations. Some very promising insights are being created around the science. There are also clusters, such as Okinawa and Sardinia, where people live a long and healthy life. The key objective for me would be not longevity in itself, necessarily, but the quality of life. What I would definitely like, personally, is to reduce the length of time between morbidity and mortality. So I would say in the next couple of generations we'll definitely see that happening more and more. Already, if you look at Japan, there are about 65,000 people above the age of 100. That's a phenomenal number. So we have societies that we can learn from. Genetic insights will keep coming and will be translated, and in the future you will likely have things like gene editing. So I would expect that in a couple of generations people will have healthier, longer lives.

- This Q&A is the last in a series of 11 interviews conducted during the Web Summit conference in Lisbon, 6-9 November 2017. The conversation was edited for clarity and brevity.
- The post gives the views of the interviewee, not the position of LSE Business Review or of the London School of Economics and Political Science.
- Featured image credit: Courtesy of GA4GH. Not under a Creative Commons licence. All rights reserved.
- When you leave a comment, you're agreeing to our Comment Policy.
The stereoptic viewer is a toy with a relatively simple plastic body but sophisticated lenses for looking at a pair of photographic transparencies mounted, along with six other pairs, in a flat paper reel. Each so-called stereo pair has a photo viewed through the left eyepiece and another viewed through the right. The photos are slightly different. The brain merges the images seen by the two eyes to give them depth (also called a three-dimensional or stereo effect).

The human urge to see three-dimensional (3-D) pictures of the world began with the ancient Greeks. Euclid, the mathematician who established the principles of geometry, proved that the right and left eyes see slightly different views. In the sixteenth century, Jacopo Chimenti, a painter from Florence, Italy, made pairs of drawings—called stereo pairs—that, when viewed together, produced 3-D images. In 1838, Sir Charles Wheatstone patented a stereo viewer that used a complex series of mirrors to look at pairs of drawings.

The invention, improvement, and growing popularity of photography during the period from 1790 to 1840 revived interest in 3-D views, because photos can be more easily reproduced than drawings. In 1844, a camera for taking pairs of stereo photographs was created in Germany. Sir David Brewster, the Scottish physicist who also invented the kaleidoscope, used prismatic lenses to make a compact stereo viewer that became known as the stereoscope. Sets of stereoscopic slides of the area that was to become Yellowstone National Park were given to members of Congress in 1871, helping convince them to approve the first national park. News events were featured on the slide sets, so scenes of the building of the Panama Canal, the World's Fairs in Chicago and St. Louis (1893 and 1904, respectively), and the Great San Francisco Earthquake (1906) could be seen. From 1870 forward, local commercial photographers made slides of stores, farms, and even family gatherings.

The immediate predecessor of the 3-D reel viewer was the filmstrip viewer, developed in the 1920s. The Tru-Vue Company began manufacturing these viewers in 1931, using filmstrips with 14 stereo frames each. Meanwhile, in 1939, William Gruber and Harold Graves invented the View-Master viewer and a system that used reels to hold the stereo photos. Sawyer's, a photofinisher and card manufacturer in Oregon, financed the Gruber-Graves viewer, which was introduced in 1940. During World War II, department stores sold the increasingly popular products, and Sawyer's began packaging the reels in three-packs. Tru-Vue began producing "stereochrome" filmstrips in color in 1951 and acquired the exclusive license to use 3-D images of Walt Disney cartoon characters. Sawyer's bought out Tru-Vue and expanded the reels to include Tru-Vue's Disney characters. In 1966, Sawyer's was purchased by General Aniline & Film Corporation (GAF). Called the View-Master International Group by 1981, the firm bought the Ideal Toy Company and became View-Master Ideal, Inc. (V-M Ideal). In 1989, Tyco Toys bought V-M Ideal. The next merger did not occur until 1997, when Tyco joined Mattel, Inc.; View-Master became part of Fisher-Price, a Mattel subsidiary.

The viewer has two basic parts: the viewer itself and the reel with the photographs. The reel also has two primary components: the outside supporting structure and the photos. The outside is paper laminated (layered) with polyethylene film; this patented product is called Lamilux.
The paper is delivered to the factory in huge rolls; thousands of reels are stamped from a single roll. Four-color printed paper labels are also made outside. The labels are backed with adhesive and mounted on rolls; these "crack-and-peel" labels are like self-adhesive postage stamps, in that the adhesive remains movable temporarily and bonds later. The pictures mounted in the reels are transparencies. A film-processing house mass-produces the transparencies on 16-mm (0.63-in) film.

The viewer is made of three different kinds of plastic. The body is polystyrene, a high-quality plastic that withstands impact, shattering, and other stresses. The advance lever is acetal plastic, which is also strong, with good dimensional stability and stiffness. The viewer holds four lenses of optical-grade, clear acrylic plastic. Acrylics are also strong and resist change, so the lenses remain clear and focused. The three types of plastics are received at the factory as pre-colored pellets. The viewer contains a metal extension spring that returns the advance lever after each advance of the reel. The extension spring is made of music wire and is delivered to the factory as a finished part.

Packaging materials are furnished by outside suppliers and include card and cardboard sheet stock and thin sheets of polyvinyl chloride (PVC) plastic that will be vacuum-formed into "blisters" in the shapes of the products to make display packages. The paper supplier applies heat-sensitive adhesives to the card stock, but printing for packages containing the reel sets is done in the factory.

A representative, basic viewer resembles a small pair of binoculars enclosed in a colorful plastic housing. A slot at the top of the viewer, where the focus adjustment on binoculars would be, is the opening for the photo reel. A lever extends from the right or the top; it slides down a narrow channel to advance the photo reel and pops back up when the lever is released. The outsides of the lenses on the front of the viewer look like recessed binocular lenses. The lens eye openings at the back of the viewer are approximately 0.5 in (1.3 cm) in diameter and are set into eyepieces. The eyepieces are about 1.5-2 in (3-5 cm) in width. The models of "standard" viewers are typically about 3.5-4 in (9-10 cm) high, 5 in (13 cm) wide including the advance lever, and 3-3.5 in (8-9 cm) deep from the front of the viewer to the user's eyes. The viewers have been made in a variety of colors over the years; blue and red are the most popular with consumers and have been used most frequently.

Each reel is circular, with a ring of photos that are open so they can be seen from both sides. The reels are about 3.5 in (9 cm) in diameter. The coating on the reel is the Lamilux® film. The viewer reel complete with photos is called the reel assembly.

Production of the photos and the laminated paper portions of the reel begins separately; the two processes meet later. The photos are reproduced in mass quantities from originals. The original is a negative, and the reproduction, also on film rather than paper, is a positive transparency.

For the viewer, the injection-molding tool contains four cavities that look exactly like the front and rear halves of the viewer housing. Two surfaces shape the inside and outside of the rear housing, and the other two are exact images of the inside and outside of the front housing. The outside halves of both front and rear housings are called cavity relief molds, and the inside surfaces are core relief molds.
Similar tools for the lenses, reel retainer, and advance lever are designed for manufacture of the viewer.

Quality control steps begin during conceptualization and design of a new product or part, redesign, and trials of new materials. During the first run of a new product such as a viewer, tests are done in the manufacturer's laboratory, including operation of the viewer and drop tests. The viewer must work 10,000 times for the product to be accepted. Each drop test includes 14 different drops: one onto each of the viewer's six faces and eight corners. If the lever breaks off, for example, the design and materials are modified to correct the faulty part. Quality control throughout manufacturing is part of a product integrity process mandated by the manufacturer.

During assembly of the reels, the positions of the film chips in the reels are critical to producing the 3-D effect. A machine checks the images, and, if the alignment is incorrect, the reel is rejected. The machine operators are responsible for confirming quality and rejecting products throughout the reel assembly process. During production of the viewer parts, some machines are instrumented to provide continuous feedback on operating temperatures, pressures, and other parameters. During viewer assembly, quality checks range from simply looking through the lenses to confirm that they are clear, to measuring dimensions with precision instruments and comparing the measurements with those in design drawings and specifications.

Viewer manufacture is largely free of waste. Plastic parts like the mold runners are reground and recycled back into the injection-molding machine to form other parts. Plastics of different colors can be blended; the red and blue waste from the viewers is mixed with other colors to make black plastic for other products. Acrylic for the lenses is an exception: it cannot be reground for use in future lenses, but it can be recycled for other acrylic parts. Other wastes are minor considerations. Dust, for example, is routinely vacuumed or drawn away from specific operations by exhaust systems.

The future of the stereoptic viewer is secure despite apparent competition from computers and other high-tech, fast-paced toys. Public interest, as well as company commitment, is a strong motivator for improving products and developing dynamic new product lines. View-Master's sales have tripled since the last change in ownership in 1997. Because the designs of viewers and reels are well established, the major channels of change will be new processes and materials, along with the availability of film, cartoon, and other entertainment properties that can be licensed.

Appeal to collectors is also a key to a stable future. Stereopticon viewers sold for about $2,500 in the late 1980s. View-Master viewers and reel sets are highly collectible as well: early viewers sold for $100, with sets of reels priced from $5 to $100, also in the late 1980s.

Gillian S. Holmes
The Style Where Design Meets Function . . . Or Does It?

The Bauhaus style of building fueled the ideas and changes that developed into the modernist movement – and eventually Modern style architecture as we know it today. The style is famous for its use of straight, rectilinear lines, concrete and steel, and spartan interiors with minimal embellishment. If you prefer to keep things simple, then this style of home could be the perfect one for you. To help you decide if this is the case or not, we've put together a brief history of the Bauhaus style, as well as a closer look at its most recognizable features, which have been kept alive in Modern architecture.

What Is Bauhaus Style Architecture?

Essentially, Bauhaus is an architectural style in which the focus is not on aesthetics but rather on the functionality of a building (and the spaces within it) as a whole. The movement was born in Germany at the Staatliches Bauhaus, an art school operational from 1919 to 1933. At its onset, the Bauhaus (literally "building house") school didn't even have an architecture department. Instead, the aim was to unify art, craft, and technology. The school hoped to solve unique problems of design not with direct action but rather by understanding timeless principles to create long-term solutions. To do this, students learned basic elements and principles of design and color theory during the first year of their studies. Next, they would experiment with a range of materials and processes.

Although the school closed its doors in 1933 due to pressure from the growing Nazi regime, its ideas were not lost: staff members took them along as they fled Germany and emigrated all over the world.

This 3-bedroom, 4-bath, 4580-sq.-ft. home is in the Modern style of architecture, which takes many of its design cues from the Bauhaus art movement of 1920s Germany (Plan #195-1249).

Bauhaus around the World

While the Bauhaus style's origin is in Germany, due to the emigration of the school's staff (and the style's overall functionality) you can now see examples of buildings in the style throughout Europe, North America, and even the Middle East. In fact, in 2003 Tel Aviv landed on the list of United Nations World Heritage Sites because of its many Bauhaus buildings. Since 1933, over 4,000 Bauhaus buildings have been built in and around the city. Mainly built by Jewish architects fleeing Nazi-occupied Europe, the area even has a special name, the White City, owing to its unique aesthetics: white concrete or stucco-covered concrete or masonry is the prevailing exterior finish for Bauhaus-inspired buildings.

The Bauhaus Museum in Tel Aviv above, reflecting typical Bauhaus Modern architecture, commemorates the Bauhaus style and the White City section of the region (photo credit: Bauhaus Tel Aviv Museum by Talmoryair under license CC BY 3.0).

Features of a Bauhaus Building

Modern Germany has a stereotype of being a plain, punctual, and efficient country. If you take these notions and translate them into a building style, you have the Bauhaus design in a nutshell. Here are the most common features of buildings in the Bauhaus style. These are handy to know for prospective buyers, builders, or those who simply wish to have a name for the types of buildings they see before them.

This 6-bedroom, 5-bath Modern style home looks as if it could be from a German Bauhaus campus (Plan #116-1067).

All Bauhaus buildings are cubic in shape.
This doesn't mean that they are perfectly square; rather, they are rectangular, or a mix of the two. There is nothing ornate about these buildings at all. This is because they have only one purpose: to provide housing – and that's it. If you prefer functionality over ornate decoration, then you'll love these types of homes.

As discussed, this school of architectural thought doesn't have any space for "fluff." Here, fluff could mean curved lines, differing wall heights, or extreme color variation. Due to their simplistic nature and cubic forms, Bauhaus-style buildings are also famous for their clear lines. Each wall, ceiling, window frame, and floor space has sharp, clearly defined lines. The distinctness of these lines leaves no question as to the purpose and boundaries of each space.

The flat roof might be one of the most identifiable – and best – features of Bauhaus buildings. This is because it instantly adds extra square footage to a home with the least amount of wasted space, which is especially useful in urban settings. You can use this space as an outdoor dining room, a lounge, or even a rooftop garden. To a Bauhaus architect, a sloped roof is a waste because it simply eats up available space that could be used for something else. While many pitched roofs offer attic space, the attic is typically small, cramped, and offers little useful room for storage. And that's not very functional, is it?

Displaying all of the signs of classic Bauhaus style architecture – cubic form, clear lines, simple design, flat roof – this Modern style beachfront home, seen during the day (top) and at night with dramatic lighting (bottom), has 3 bedrooms, 2 baths, and a half bath in 1923 sq. ft. (Plan #116-1084).

There are also several features that might be included in a Bauhaus building, but each new construction doesn't necessarily include all of them. Here are the most common secondary features in this style:

While single-level dwellings are possible, most Bauhaus single-family homes are at least two stories. Think about it: is it really the best use of space to build only one level on a lot you already own? You might as well expand upward to maximize all available space. Because of its innate ability to maximize space, the Bauhaus style is also popular with apartment developers and others who build more "communal" housing. It's very common, especially in Europe, to see Bauhaus buildings with upwards of ten floors.

At 2 stories tall, this Modern style home with obvious Bauhaus influences (other than the curved wall) is typical of the multi-story design of the German-origin architectural style (Plan #149-1455).

Large Window Facades

Natural light is a cornerstone of the Bauhaus movement. Besides the classic clean lines of the exteriors, this style of home is usually easy to spot because it includes at least one large window facade. One can assume that these larger windows were originally included because natural light costs less to supply than artificial light, thus saving money for the occupants in the long run. However, in a modern twist (on a modern style!), the large windows and the warm sunshine they let in lend these buildings a touch of coziness in what some might call an otherwise "cold" space.

A 5165-sq.-ft. luxury home with a 3-car garage, this Modern style home displays large, dramatic glass facades on the front, sides, and rear of the house, letting in abundant natural light and, if placed correctly, providing solar gain.
Today's homes use double- or triple-pane glass (often using plastic film as a "pane") that is insulating when installed in a window. The glass may also be coated with a high-tech clear barrier to UV light rays (Plan #116-1106).

The design gives you freedom in how you choose to use your space. This is because most of the rooms inside the classic cubic house are typically very open in concept, and many feature "uninterrupted walls." These are walls that do not have a window or door. The doors of adjacent walls also do not open onto them. Because of the ample space and openness of the inside floor plan, these homes often seem "bigger on the inside" and offer homeowners and renters alike more space than might seem initially available.

The open room in the photo at top is shown from the inside, while the same room at bottom is shown from the outside, demonstrating the "wall of glass" that slides open. Large open rooms are often seen in homes designed in the Bauhaus style of architecture (Plan #195-1249).

Common Misconceptions about Bauhaus Buildings

Because of the clean, smooth lines commonly associated with the Bauhaus movement, some people may confuse other modern types of architecture with it. For example, the famous American architect Frank Lloyd Wright was a champion of modernist buildings. And although his constructions (even his most famous, Fallingwater near Pittsburgh, Pennsylvania) have sharp, angular surfaces at first glance, upon further examination one can easily see that there's something just a bit different about them. In fact, Frank Lloyd Wright's style actually grew out of the Arts and Crafts movement, which occurred at the end of the 19th century. These buildings, while modern, were often used for smaller farming structures – and thus were very close to nature.

Though similar in appearance to classic Bauhaus architecture, this Modern style home (front at top and rear at bottom) departs in a significant way: by incorporating the surroundings – and materials other than concrete, steel, glass, and plastic – into the design and "becoming one" with nature and its surroundings (Plan #202-1027).

If you look closely, you'll see that homes like this might seem more "classically modern," i.e., closer to the Bauhaus style, but they are really pulling their form from things like plants, flowers, rocks, and streams. While these buildings can have sharp angles or rustic materials (like concrete), each element is also always tied back to nature. You can see this best in Frank Lloyd Wright's own studio in Oak Park, Illinois. In this structure, you can see how the style invites nature to "become one" with the building.

In stark contrast are the completely manufactured, obviously man-made aspects of Bauhaus architecture. These structures are all concrete, steel, glass, and plastic. There is no room for nature at all, only strict functionality – which is really what sets them apart.

Strong in form and bold in design, the Bauhaus aesthetic is the father of Modern residential architecture. The style's clean lines and open spaces might be polarizing in terms of aesthetics, but no one can deny that the homes are highly functional for daily life.
- The opening verses of chapter 3 identify the people against whom Amos is prophesying: "against the whole family that [God] brought up out of the land of Egypt." (Discussed here.)
- Verses 3-8 speak of Amos's own role as a prophet. (Discussed here.)
- Verses 9-15 speak of the destruction that will fall upon the nation.

The prophet calls the surrounding nations to witness against YHWH's Chosen People.

הַשְׁמִיעוּ עַל־אַרְמְנוֹת בְּאַשְׁדּוֹד וְעַל־אַרְמְנוֹת בְּאֶרֶץ מִצְרָיִם וְאִמְרוּ הֵאָסְפוּ עַל־הָרֵי שֹׁמְרוֹן וּרְאוּ מְהוּמֹת רַבּוֹת בְּתוֹכָהּ וַעֲשׁוּקִים בְּקִרְבָּהּ

"Proclaim to the strongholds in Ashdod, and to the strongholds in the land of Egypt, and say, 'Assemble yourselves on Mount Samaria, and see what great tumults are within it, and what oppressions are in its midst.'"

The people of God are on display. Their behavior is being judged by the nations around them. The tumults (מְהוּמֹת) and oppressions (וַעֲשׁוּקִים) in its midst shock even the nations. The witnesses are to assemble on Mount Samaria — indicating, as usual, that the Northern Kingdom is the main focus of this prophecy (though, compare verse 13). Amos chooses two nations as his witnesses — he mentions Ashdod, a major city of the Philistines, and then Egypt. It is, of course, significant that the invited witnesses are also traditional enemies of the nation. Before them, Israel is put to shame — before these nations who are not the chosen of YHWH. Yet, the prophet implies, they have a better sense of right and wrong. They become the judges.

The opening verses of chapter 3 identify the people against whom Amos is prophesying: "the whole family that [God] brought up out of the land of Egypt." They are the people that God has especially known. But, their special relationship with God implied a responsibility to live a life that reflected the character of the God who redeemed them. Now, in verses 3-8, Amos talks about his own role as a prophet. It begins with a series of cause-and-effect questions: when you see a certain effect, you can infer its cause. Or, as we might say: "Where there is smoke, there is fire." They go like this:

- Two people are walking together —> they must have made an appointment
- A lion roars in the forest —> the lion must have caught something
- A bird falls into a snare —> there must have been a trap
- A snare springs up —> it must have taken something
- A trumpet is blown in the city —> the people must be afraid

This is another one of Amos's rhetorical devices; he is leading up to something — the last cause and effect is a little different.

The Old Testament is a wonderful gift from God to us. It is wonderful that we have this record — so ancient, so fascinating. These were the Scriptures of the earliest Christians — who turned to them to understand what God had done in their midst in Christ. It was in the context of these Scriptures that Jesus himself had taught — to a community shaped by its stories and laws and prophecies and poetry. And if anything is central to the Old Testament itself, it is the first five books. No doubt the material we currently know as the books of Moses (or the Pentateuch, or the Torah — that is, Genesis through Deuteronomy) was assembled and edited in the period of Israel's exile in Babylon — these books became especially valuable to the people in the times of the exile and then the re-establishment of the nation — they served to teach the people who they were in the light of their history as the people of YHWH.
But, the stories themselves go back much further. The people of Israel knew themselves to be a nation that had been delivered by God from Egypt — and the exile no doubt served as a time to gather those stories together.

- Step one: Glory belongs to God and not to the nation (v. 1). (See: No Glory to Us and Glory to God's Name.)
- Step two: Why should the nations say 'Where is their God?' (v. 2).
- Step three: What Israel's God is like (v. 3). (See: The God Who Can't Be Manipulated.)
- Step four: What the nations' gods are like (vv. 4-8).
- Step five: A call for Israel to renew its trust in Yahweh (vv. 9-11).

So, now the Psalm turns from reflections on whatever misfortune has come upon them to an affirmation of renewed hope in their God.

We are told in the Gospel of Luke 4:16-19 that when Jesus had opportunity to speak to the synagogue in Nazareth, he read from the scroll the words of Isaiah 61:1-2 and announced: "Today this scripture has been fulfilled in your hearing." (Luke 4:21). These verses in Isaiah described Jesus' mission in life. These ancient words speak to us today of the vocation of the preacher — then and now. When we first come to these verses in the prophecy of Isaiah (or Third Isaiah or whatever his name was) we immediately wonder: who is the prophet talking about? Is this the writer's mission or is he speaking of someone else? Questions like this might not arise if it weren't for the fact that the prophecies of the book of Isaiah can be quite mysterious that way. Who is the suffering servant of Isaiah 53? Who is the "servant" of Isaiah 42:1? Who is the figure spoken of in Isaiah 11:2? Who is speaking in Isaiah 48:16? You see what I'm saying.

Amos continues his prophecies against the nations (which I discussed last week) in this chapter. Review: You don't see what the prophet is doing here until you see that Amos 1-2 is a unit. And, it is carefully structured. Verse 2 pictures the LORD (YHWH) roaring like a lion. Then a series of oracles of judgement follows. Each is for a different nation. They are introduced with this repeated formula:

"For three transgressions of _____________, וְעַל־אַרְבָּעָה לֹא אֲשִׁיבֶנּוּ and for four, I will not turn back…."

There is a certain rhetorical power in this repeated formula. But, this whole poetic prophecy is going somewhere. It's building. It is going to culminate in an extended prophecy of judgement (in our chapter 2). And, the weight of this prophecy of judgement is going to fall on Israel.

The time in which the prophet Amos lived was a time of peace and prosperity. But, the prophet could hear God roaring like a lion — in anger. Amos the prophet was certain that there was a God to whom the nations must give account. There was a moral judge of the world. No doubt this was a growing realization among the people of Israel. The God they worshiped was not a localized god — not simply their God, but the God of all the nations. YHWH was the God to whom all the nations were accountable. So, in these verses, the prophet begins with this notion: the God of Abraham, Isaac and Jacob will call the nations to accountability.

This is essentially a Psalm of praise. We are called into praise from the very opening "Hallelujah" (praise Yah). So, it is a song of worship and it calls us into an attitude of worship. As Adam Clarke says: "It is an exhortation addressed to the priests and Levites, and to all Israel, to publish the praises of the Lord." The opening verses are an exhortation to worship.
Verses 8-12 remind the people of Israel of God's saving acts in their history: their deliverance from Egypt and the defeat of legendary kings. Then, they are called again to praise. Remembrance has a significance for our faith. It is good to recount for ourselves the answered prayers we have experienced — and the unexpected blessing of God on our lives. The Bible is a book of remembrance: recounting the deeds of the Lord God in times past, as a way of illuminating our lives in the present. We know God through what God has done. For Christians, it is the story of Jesus — before any other — that calls forth our praise. And, so it is that in this psalm, the remembrance of God's deliverance in the past calls forth praise.

I said that the opening editorial note in the book of Amos (1:1) already raises an issue for me. The issue is: Who speaks for God? It may not be the person we thought was authorized to do so. Which also brings to mind another question: "To whom (if anyone) does God speak?" The prophet is the one who sees what others do not. There is an interesting detail in the way Amos 1:1 tells us about this prophecy: Amos spoke what he saw. "The words of Amos… which he saw…." Amos conveyed the sense of what he saw. But, in Amos 1:2 it is more a matter of what he heard.

This post is primarily just a list — for me to archive — and for those who might be interested. I have also included (at the end) a video presentation by Dr. Andrew Lincoln on the significance of the "I am" passages in the Gospel of John. In one of the churches I pastored, I led a series of brief Lenten studies on the "I am" sayings in the Gospel of John. In preparation for this, I did a search to find out how many sayings like this there really were. I was a bit surprised at how many I found. Occasionally I get asked about this, so it occurs to me that there may be other people who would also find this list interesting.

If we are to follow God, if we are to trust God, we must have some assurance about God's character. It is only natural that the Bible often spends time with this issue. If we are to trust in God, we need some assurance also of God's power. Is God able to uphold us through the difficulties of life? To me, these are the issues addressed in Psalm 135:7-8. Which brings me back to the circumstances that made Psalm 135 so vivid to me in the first place. I started reading and meditating on this psalm on a stormy morning. There was a thunderstorm raging outside. And, it is clear that the Psalmist saw the power of God in the thunderstorm. It was not an unruly, threatening natural event — somehow the thunderstorm was also under the sovereign power of God. So there is no need to ultimately fear what would otherwise seem powerful, unruly or chaotic — all the powers of this world are under God's overruling power. They reflect the power of God — for God is the Creator of all that is.

Yet, it is also such a difficult issue. When there is a deep wound, the pain is still there, and the anger still arises. In times like this, we wonder: do the words mean anything? When time and time again you have to pray "Lord, give me the grace to forgive my enemy," you have to wonder if there is ever hope for you. There have been many times when I have wondered this about myself. And, I know I'm not alone in having this problem. Those people who have done things that have caused wounds — especially those who have done it quite deliberately and knowingly — are hard to forgive.
There are people I know who have been treated unfairly and unjustly. There are people I know who have been abused. And, the problem with forgiveness is that it seems to say that all that was okay. To let go of the anger and the outrage seems to give in to injustice — to give permission for their abuser to do it again to someone else.

I don't know where such ideas come from — but a moment of thought will dispel them. The great Bible characters did not have lives that were devoid of difficulties or setbacks or griefs or disappointments. If this did not happen with them, how can I reasonably expect it for myself? Jesus grieved over Jerusalem. The apostle Paul knew setbacks and discouragements in his ministry. How can I suppose my life can be free from such things? The path of the Lord is not easy, but it is worthwhile. Those who choose to live as Christ has taught make a positive contribution to life — to their own life and to the lives of others. We move along a difficult path characterized by faith and love and hope. And, by doing so, we bring more faith and hope and love into the world.

"I will lay waste mountains and hills, and dry up all their herbage; I will turn the rivers into islands, and dry up the pools.

"I will lead the blind by a road they do not know, by paths they have not known I will guide them. I will turn the darkness before them into light, the rough places into level ground. These are the things I will do, and I will not forsake them." (Isaiah 42:15, 16 NRSV)

This is a powerful, irresistible, transformative resolve, to be undertaken with a high level of emotional intensity. It is a burst of generativity that is going to change everything and create a newness. This is a God who will not forsake: "I will not forsake them" (42:16); "You shall no more be termed Forsaken" (62:4). In this resolve to new creation, YHWH promises to overcome all forsakenness and abandonment known in Israel and in the world. When creation is abandoned by YHWH, it readily reverts to chaos. Here it is in YHWH's resolve, and in YHWH's very character, not to abandon, but to embrace. The very future of the world, so Israel attests, depends on this resolve of YHWH. It is a resolve that is powerful. More than that, it is a resolve that wells up precisely in tohu wabohu and permits the reality of the world to begin again, in blessedness.

— Walter Brueggemann, An Unsettling God: The Heart of the Hebrew Bible.

Note: The phrase "tohu wabohu" is a reference to the Hebrew phrase used in Genesis 1:2, where, before God's creative action, the world is spoken of as being "formless and empty" (NIV). The phrase appears in the Hebrew text below:

וְהָאָרֶץ הָיְתָה תֹהוּ וָבֹהוּ וְחֹשֶׁךְ עַל־פְּנֵי תְהוֹם וְרוּחַ אֱלֹהִים מְרַחֶפֶת עַל־פְּנֵי הַמָּיִם
Geared DC motors can be defined as an extension of the DC motor, whose inner workings have already been demystified in its own Insight. DC motors convert electrical energy (a voltage or power source) into mechanical energy (rotational motion). They run on direct current. The DC motor works on the principle of the Lorentz force, which states that when a current-carrying wire is placed in a region with a magnetic field, the wire experiences a force. This Lorentz force provides the torque that rotates the coil.

A geared DC motor has a gear assembly attached to the motor. The speed of a motor is measured in rotations of the shaft per minute, termed RPM. The gear assembly helps increase the torque while reducing the speed. Using the correct combination of gears in a gear motor, its speed can be reduced to any desirable figure. This concept, where gears reduce the speed but increase the torque, is known as gear reduction. This Insight will explore all the minor and major details that make up the gear head and hence the working of the geared DC motor.

A gear reduction motor is a motor-and-gearbox unit that connects to and drives machines. The main transmission structure is integrated and assembled from the drive motor, shaft, and gearbox, and serves to connect, transmit, decelerate, increase torque, and so on. The principle of a gear reduction motor is this: high-speed driving power from the motor passes through the input shaft of the reducer to the output shaft. Because the driven gear has more teeth than the driving gear, the rotational speed is reduced and the torque is increased. The main function of a gear reduction motor is to reduce the speed of the motor and increase the torque according to the gear ratio.

If the gear reducer (gearbox) leaks lubricating oil, you can refer to the checklist below:

- Sources of lubricating oil leakage: 1) the seal of the output shaft; 2) the input shaft; 3) the motor housing; 4) the lubricating oil outlet
- Reasons for lubricating oil leakage: 1) the oil seals wear out and harden; 2) sand holes (casting defects) in the motor housing; 3) lubricating oil quality and running speed
- Solutions: 1) replace the oil seal; 2) check the shaft around the oil seal; 3) replace the motor housing; 4) increase or decrease the amount of lubricating oil

Everyone meeting a gear motor for the first time has the same questions: "What is a gear motor?" and "How does a gear motor work?" Now, let VEER Motor introduce it below:

"Gear motor" refers to a combination of a motor plus a reduction gear train. These are often conveniently packaged together in one unit. The gear reduction (gear train) reduces the speed of the motor, with a corresponding increase in torque. Gear ratios range from just a few (e.g. 3) to huge (e.g. 500). A small ratio can be accomplished with a single gear pair, while a large ratio requires a series of gear reduction steps and thus more gears.

There are many different kinds of gear reduction. In the case of a small transmission ratio N, the unit may be back-drivable, meaning you can turn the output shaft, perhaps by hand, at angular velocity w and cause the motor to rotate at angular velocity Nw. A larger transmission ratio N may make the unit non-back-drivable. Each has advantages for different circumstances. Back-drivability depends not just on N, but on many other factors. For large N, the maximum output torque is often limited by the strength of the final gears, rather than by N times the motor's torque.
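To make the speed-torque trade-off concrete, here is a small worked example (the numbers are illustrative only, and gearbox friction is ignored). Take a DC motor running at 3,000 RPM with 0.5 N·m of torque and attach a 50:1 reduction gear train. The output speed becomes 3000 / 50 = 60 RPM, while the ideal output torque becomes 0.5 × 50 = 25 N·m. In practice, gearbox efficiency (often somewhere between 60% and 90%, depending on the gear type and the number of reduction stages) lowers the usable torque below that ideal figure, which is one more reason the real-world limit of a large-N unit tends to be set by the strength of the final gears.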
Veer Motor is a professional gear motor manufacturer with 13 years' experience. If you want to buy gear motors at factory prices, just contact us (email: firstname.lastname@example.org) or visit our official website: https://www.veermotor.com

Discover Fast and Reliable Vending Using Motors and Gear Motors From VEER Motor

Vending machines provide dispensable products through easy access. The motors and gear motors in your vending machines must be reliable, operate in various weather conditions and meet the demands of heavy use. We offer high-quality motors and gear motors for all of your vending applications through motor sourcing and manufacturing. If you are manufacturing an existing vending application or you are looking for custom-made motors or gear motors for a new OEM application, VEER Motor helps you find the right solution for your unique requirements.

Speed Meets Reliability in Vending Applications

The vending industry includes many unique applications that require reliable and fast electric motors and gear motors. Some of these applications include:

- Food & drink vending: These machines are used in various settings including businesses, public parks, educational institutions and healthcare organizations. Due to heavy use, the motors and gear motors must withstand wear.
- Postal stamp & ticket machines: Used in post offices as well as train stations, stamp and ticketing machines require intricate motors made to handle paper and plastic materials.
- Merchandise vending machines: Often used in common areas and restrooms, merchandise vending machines vary in size and require motors and gear motors of various sizes built to fit.
- Reverse vending machines: Reverse vending machines are used for container recycling and are common in areas with mandatory recycling laws. They require motors and gear motors that are manufactured specifically for this application.
- Prescription medicine machines: Used to dispense prescription medication, these machines must use gear motors that are reliable and resistant to wear.

Electric Motor Applications Within The Vending Industry Include:

|Food & Drink Vending||Postal Stamp & Ticket Machines||Merchandise Vending Machines|
|Prescription Medicine Machines||Reverse Vending Machines|

Secure, Safe, Dependable Motors and Gear Motors

Security applications and solutions need motors and gear motors that are dependable. There is no room for error or failure within your security system. They must operate in any environment, under any circumstance, to keep businesses safe and secure. VEER Motor's team of experts works with you to facilitate the design, development, and manufacturing of motors and gear motors created specifically for these applications. Our portfolio of manufacturers offers only the highest quality materials for your OEM motor and gear motor needs.

Ensure Your Safety and the Safety of Others With Security Applications

A security system includes various pieces of equipment that require a motor or gear motor. Some of these applications might include:

- Electric door locks: Often found in banks, hospitals or other secure locations, these door locks feature a keypad. The gear motor inside locks and unlocks the door with the correct PIN.
- Electric turnstiles: A turnstile is often found in subway entrances, theme parks or other locations that see large crowds. They lock after hours, adding an extra layer of security.
- Remote controlled cameras: These cameras often turn or adjust based on movement; the gear motor inside facilitates that movement.
- Security lights and scanning systems: Lights that are activated by movement are often used outside of commercial buildings and homes.
- Electric arm gates: These gates use a gear motor to move the electric arm up and down and are often found in parking garages.

No matter the equipment, the motor inside must work efficiently day in and day out. With VEER Motor, you can rest assured that your security equipment will work.

Features of VEER Motor's Custom Motors & Gear Motors for Security Applications

|Quiet Operation||Ability to Operate in Varying Environmental Conditions||Extended Operating Lifespan|
|Reliable||Durable Assemblies||Highly Efficient|

Custom Electric Motors and Gear Motors That Power Robotic Applications

No matter what they're used for, robots need to be precise while operating reliably and efficiently. The motors that power the various robotic components are the heart of these machines and must operate with high precision and accuracy. Choose exceptional custom motor and gear motor solutions for the components that power robotics. Work with VEER Motor to facilitate the design, development, and manufacturing of motors and gear motors specifically for robotics applications. We understand that they must operate with a high level of precision without busting your budget. Work with us to build a cost-effective motor for your robotics application that you can rely on for years to come. We'll work with you to produce the most accurate and reliable OEM motors and gear motors for your specific robotics application.

Examples of Electric Motor Applications in Robotics Include:

|Servo Mechanisms||Power Drives||Remote Control Devices|

VEER Motor's Robotics Capabilities Meet Requirements for Motors & Gear Motors That Call For:

|Servos||Remote Controls||Stepper Motors||Robotic Kits|

Custom Motors Built to Precisely Power Your Pump Applications

Motor performance is a key factor in pump applications. Your machines must provide reliable and durable power for pumps dispensing cola, water and syrups, chemicals, soap and water, and sprayers. When you can't compromise on quality, durability and reliability, look to VEER Motor. We provide cost-effective, precise motor solutions based on your pump specifications.

Building High-Quality Custom Pump Motors for All Applications

We've been building OEM motors and gear motors for pump projects for over 30 years. No matter your application, know we have the experience for a variety of applications such as:

- Industrial food service dispensing for soda, juice and other beverages
- Chemical dispensing
- Soap and water dispensing
- Commercial and industrial spraying equipment

Why Choose VEER Motor to Design Your OEM Pump Motor?

Because we value our customers for the long term. We build the highest-quality motors in the OEM market today, which keeps our customers coming back to us for years. Our engineers design for durability, longevity and reliability. We want to work with you on your next custom motor project and know that our quality and service will exceed your expectations. Since 1986, we've been helping our customers design and source customized OEM solutions, making us a global industry leader.
Features of our OEM motors for pump applications:

|Waterproof||Built with Stainless Steel Materials||Long-Lasting|
|Durable to Meet Application Requirements||IP Rated Based on Application Specifications||Diameter range can be customized|

Dental professionals require tools that are powerful, efficient, and easy to maneuver in order to successfully treat their patients. At VEER Motor, we understand the dental industry's needs and provide motor solutions that fit the unique demands of dental surgeons and other dental pros. VEER Motor's DC and brushless DC motors are excellent for dental equipment applications such as dental restoration tools, surgical tools, and dental lab equipment.

VEER Motor's dental motors are ideal for use in a wide variety of dental tools:

- Root canal obturators
- Prosthodontics screwdrivers
- Other surgical instruments

Experience True Power for Your Off-Road Vehicles

Off-road vehicles are subject to severe physical and environmental elements throughout the year. Your design needs durability and robust materials to ensure protection from water damage, dirt, heat and blunt force during operation. VEER Motor offers only the best motors and gear motors through sourcing services to fit your unique requirements. Increasing your production or developing something new requires the highest quality materials. We help you choose the perfect solution to fit your needs.

Durability and Quality for All Off-Road Vehicle Applications

The application you are producing will determine the type of electric motor you will need. Many off-road applications will use a gear motor, including:

- Shifters: Found within various types of off-road vehicles from heavy equipment to ATVs, shifters must work seamlessly for safety and efficiency.
- Winches: Winches are attached to many heavy equipment vehicles and ATVs for pulling and transporting heavy objects. These tools require heavy-duty gears to reduce fraying and cable wear.
- Starters: It is difficult to think of a vehicle that doesn't have a starter. The motor within the starter must run efficiently in order for the vehicle to run at all.
- Seating systems: Equipment and vehicles that have adjustable seats use motors to activate the movement simply.
- Wipers: Wipers are moved using motors. Often found on vehicles of different types, wipers are important for safety and vision.

VEER Motor's motors and gear motors are specifically designed to stand up to the demanding punishment of off-road applications by:

|Resisting Abrasive and Corrosive Elements||Withstanding Aggressive Shocks||Providing High and Consistent Torque||Operating in a Wide Range of Temperatures|
An analytical essay is one of the most difficult pieces of writing because it requires an analytical mind. However, that doesn't mean you will not manage to write it successfully. The only difference is that you will have to spend a bit more time and effort working on this type of academic assignment, where the main purpose is to provide a well-grounded observation of a certain point of the chosen topic.

The name of the analytical essay implies that the writer should analyze some specific idea. You can watch a movie, read a book, or just consider a certain situation from your life experience. Your task is to concentrate on one idea and provide all the information necessary for its examination. In order to write a great analytical essay, you should narrow the topic by choosing one aspect which you can analyze deeply. To develop an impressive piece of writing you will have to use critical thinking, learn more about the structure of an essay, and discover whether there are any secret tricks which will help you make the process of writing faster and more effective. Let's start from the very beginning and have a closer look at the analytical essay structure.

What Should You Know about the Structure of an Analytical Essay?

The analytical essay is similar to other academic papers in terms of its structure. Like any other type of essay, it should have three main parts: an introduction, the main body, and the summary or conclusion. However, besides the general rules of structuring an essay, you shouldn't forget about the distinctive features each essay type has. Depending on the topic you are going to analyze, you may need to adjust the structure of the content. For example, the most frequently assigned topic for an analytical essay is the analysis of some literary work. In this case, you will have to write according to the following plan:

- The facts about the creation of the book.
- The problem (topic) and idea.
- Composition. A brief plot.
- Characters and their significance in the literary work.
- Genre. Features of the language used by the author (at the level of vocabulary, syntax, phonetics and morphology).
- Means of expressiveness and their meaning.
- Rhythm, tropes, and the meter of the verse (if you have chosen a poem for an analytical essay).
- The influence of the work on the world literary community.
- Personal evaluation or subjective opinion.

Have a Closer Look at the Introduction

If you have already written essays, you must know that it is very important to think hard about the first sentences of your work to make them eye-catching and original. An analytical essay isn't an exception, and here you also need to write a hook thesis statement. There is one thing that makes this type of essay a bit different from others: your introduction should include not only sentences able to grasp the reader's attention but also the background information. This information should be connected with the thesis and be relevant to it. You can start with the broader topic and narrow it gradually. Make sure you have used logical transitions.

Things to Keep in Mind Working on the Main Body

The main body of an essay is the major focus of your work, which contains the proof of the thesis written in the introductory part. Usually, the main body consists of either two or three paragraphs, each of which should be focused on one point.
As you are going to write an analysis, you should remember that your main aim is to support the thesis statement by providing evidence in the main part of the essay. You should write so that the reader has no doubts that you know what you are talking about. Select the words carefully, making the content both attractive for the reader and informative at the same time. Make sure that each paragraph finishes with a persuasive idea which helps the reader understand your point of view.

What Should You Write in the Conclusion?

The conclusion is a short paragraph which plays a crucial role in the overall impression made by your work. Don't hurry writing the essay summary, as your goal is not just to paraphrase the introduction but to find the most appropriate way to link the results achieved in the final part to the goals set in the very beginning. You should think of one sentence able to summarize the ideas presented in your paper, so that the reader isn't left feeling the essay is unfinished. Look through the text once again and highlight the key points that have been analyzed.

2 Hook Examples of the Analytical Essay

Here you are offered an analysis of the lyrics of Emily Dickinson, which consists of an introduction, two paragraphs of the main body, and the conclusion.

Sample 1:

"The lyrics of Emily Dickinson can rightly be considered a phenomenon, as they contain elements of contradiction while remaining a unified whole. It is significant that, despite the breadth of her spiritual interests, the nature of the problems that worried the poetess practically does not change. In her case, there is no need to talk about the evolution of creativity: there is instead an ever-increasing deepening of the motives that appeared in her very first texts, a testimony of the ever-deepening life of the spirit. The innovative and original verse of Emily Dickinson seemed to her contemporaries something "too elusive," even "shapeless." Higginson, the publisher of the eight poems printed during the lifetime of the poetess, wrote that they "resemble vegetables, this minute dug from the garden, the rain drops and dew can be seen on them, as well as the sticking pieces of soil." This definition seems quite correct, especially if the word "soil" means not dirt, but soil as the foundation of everything that exists and is significant.

The lyrics of E. Dickinson are really devoid of the euphony and smoothness so appreciated by the readers of her time. This is poetry of dissonance, whose author never experienced the grinding and standardizing influence of any "circle" or "school" and therefore preserved her uniqueness of style, clarity, and sharpness of thought. Her poetic technique is just that: the technique of Emily Dickinson. What is its specificity? First of all, a laconism which dictates the omission of conjunctions, truncated rhymes, and truncated sentences. The peculiarity is also reflected in the punctuation system invented by the poet: the wide use of a dash emphasizing the rhythm, and capital letters that highlight key words and emphasize the meaning. This form is generated not by an inability to write smoothly (Dickinson also has quite traditional verses) and not by a desire to stand out (she wrote exclusively for herself and for God), but by her desire to isolate the very grain of thought, without a husk, without a shiny shell. It is also a kind of rebellion against the fashionable "curlicues." The form of Dickinson's poems is natural to her and is determined by thought.
Moreover, her incomplete rhymes, irregularities in style, convulsive changes in rhythm, the very unevenness of the poetry, are now perceived as a metaphor for the surrounding life and are becoming more and more relevant. Actually, the time of Emily Dickinson came only in the 1950s-70s, when one of the most important directions in American poetry became the philosophical lyric filled with complex spiritual and moral collisions, and when the author's innovative and free style ceased to shock compatriots whose ears had grown accustomed to dissonance."

Sample 2: "The Little Prince" by Antoine de Saint-Exupery

"The Little Prince is a deeply philosophical fairy tale, read by children from 3 years old but understood to the full extent only by adults. The story of the Little Prince reveals to the reader simple but eternal truths. This fairy tale is about how a little boy discovers such a big world. He meets various interesting characters who teach him a lot and make him understand the world order better. As in all other works of the author, the tale "The Little Prince" is filled with imagery and deep symbolism. The work was written during World War II, when all of Europe was suffering from the sight of destruction and loss. Therefore, in "The Little Prince" images of war, loss and destruction often appear in veiled form. Thus, the desert becomes a symbol of spiritual thirst for a person living in a world devastated by war.

One of the main images of the tale is the Rose. She is capricious and defenseless. The Little Prince did not immediately understand her true nature, her inner harmony and beauty. The conversation with the Fox opened the boy's eyes. And one of the main philosophical ideas of the work is that beauty is only true when it is full of meaning and unique content. The Little Prince teaches the reader to love. It is love, according to Exupery, which is able to heal all the wounds inflicted by a merciless war. One of the key phrases of the fairy tale is what the Fox said: "To love is not to look at each other, it means to look in one direction." These piercing words describe the feeling of love best. These words are the author's personal experience. Through the Fox's words, Exupery shared his own inmost knowledge with the reader.

In his seemingly childish tale, the author, with an irony characteristic of the genre, touches such deep matters as good and evil, love and beauty, friendship and loyalty, loneliness and death. Simple and at the same time complex, the truths of the fairy tale "The Little Prince" make us kinder, more humane and more beautiful spiritually. The fairy tale teaches us to listen to the voice of our own heart, to see true beauty and cherish love."

If you read both examples of the analytical essay carefully, you could notice that they contain an in-depth analysis of the topic, where the author concentrates on the central idea.

Top 35 Best Analytical Essay Ideas for You to Pick Up

There are so many things you can analyze that it is not easy to choose the best topic for your essay. If you have been brainstorming ideas but still don't know what the subject of your analysis is going to be, don't pass by the longest list of the best analytical essay topics you have ever found.

- Analyze the reasons for corruption.
- Why do many women leave their kids?
- Is there any negative influence on a kid who lives with one parent?
- Why is smoking still considered fashionable among teenagers?
- Are there any differences between a woman boss and a man boss?
- Analyze your favorite movie character.
- Provide an in-depth analysis of the book you have liked most in your life.
- Who has influenced your personal development, and how did this happen?
- Why do many couples divorce?
- Is it possible to be a perfect mom and a career-oriented woman at the same time?
- Analyze how Romeo and Juliet could have lived together if their families had not been against their relationship.
- Is it harmful to play computer games for several hours every day?
- Does friendship between a man and a woman exist? Do you have any proof?
- Is love nothing more than simple biology?
- Why do young people never listen to their parents' advice?
- Why do many young girls dream about the profession of a model?
- Is it possible to avoid natural disasters?
- How can pets influence a child's psyche?
- Why is it possible to teach a parrot how to talk, while it is impossible to do this with other animals and birds?
- Is there a possibility that trees and flowers have a certain level of consciousness?
- Why do the females of some spiders eat the male?
- Why do some people need to listen to music while doing chores?
- Analyze the key problem of a famous literary work.
- Make an analysis of an English poem.
- Is the author the main character of all his works?
- Why is there only one step from love to hatred?
- Is it possible to avoid conflicts with close people?
- Analyze the behavior of your parents towards your personal life and studies.
- Provide an analysis of the policy of a famous brand.
- Choose a song you like most and analyze its lyrics.
- Which 5 things do you need to live normally?
- Analyze why people have such a great dependency on their gadgets.
- Analyze any dialogue from "The Little Prince".
- Choose the greatest invention and analyze its significance nowadays.
- Analyze the current political situation in the world.

Hopefully you will pick the topic that seems most fascinating to you. Follow the helpful tips on how to structure an analytical essay, and you will definitely develop an incredible piece of writing. Use the whole potential of your analytical mind and brainstorm original ideas, as this type of essay allows you to demonstrate your personal attitude to many things. However, you should remember that any point of view should be supported by persuasive proof. Reread your paper before submitting it and get the guaranteed A-grade.
You text others all the time. Sometimes you even call them (amazing!). But every now and then you have to sit down and write an email. Emails are different from letters, texts and phone calls. Even more, emails to your friends use different language from emails to your colleagues at work. To get your message across clearly without offending anyone, you’ve got to know how emails work in English. And that’s exactly why we’ve created this handy guide for you. By the end of this post, you’ll know the essential English vocab for sending emails, how to write an email to business coworkers, how to write an email to a friend and how to write an email to an acquaintance (someone you’ve only met once or twice). The Ultimate Guide to Writing Emails in English Must-know Vocabulary for Sending Emails in English If your email account is currently set to your native language, change it to English to learn some new words. You already know where everything is, so you will know what the words mean. For example, here are four words you can learn just by switching your email account’s language to English: - Subject: This is the topic of the email, or what the email will be about. - Recipient: This is the person receiving (getting) the email. - Compose: This means to create or write the email. The word “compose” is usually used with music. A composer is someone who writes, or composes music. - Attachment: This is any file you’re attaching (adding) to the email. See how easy that was? You now know four new words, and it only took a minute! There’s another part of an email that some people (yes, even native speakers) don’t know the meaning of: the CC and BCC fields. When you add an email address to the CC field, that address will get the email too. When you add an email address to BCC, that person will also get the email, but no one else will know that person received a copy. But what do CC and BCC mean? These are acronyms, or abbreviations made from the first letters of the words in a phrase. In this case the words are “carbon copy” and “blind carbon copy.” CC: Carbon copy Before email, carbon copy only meant a copy of a written (or typed) document using carbon paper. You might have seen carbon paper at work or even in your checkbook. It’s a thin grey paper with a layer of loose ink on one side, which you place over a blank sheet of paper. Then, put your original document on top of the carbon paper. Now, when you write on the document, it will push the ink from the carbon paper on to the blank sheet of paper, making a copy. You can see carbon paper in action in this video. So the name makes sense, because using CC when emailing is like sending the recipient a copy of the original email. BCC: Blind carbon copy And a blind carbon copy? Back when people used typewriters, secretaries would make carbon copies of documents, but only add the recipients’ names in after the copies were made. That way no one knew who else got a copy of the document. If someone is blind, it means they can’t see, so again the name makes a lot of sense. You would use the BCC field if you are sending out an email to a large number of people who might not want their email address shared with everyone else. Another reason to use BCC is when you want someone to see that you sent the email or the information in the email—but you don’t want that person to be a part of the conversation. Now we know the main parts of an email, how do you actually write one in English? 
The Basics for Writing an Email in English

Here are a few quick basics about writing emails:

- Emails are usually shorter than letters but longer than texts.
- Emails are not as urgent (important, requiring immediate attention) as speaking to someone in person or calling them on the phone.
- An email will look different depending on who you're writing to.

Just like when you speak, emails use different language for different recipients. So before you write your email, ask yourself why you're using an email instead of just calling or mailing a letter. You might decide that a text or a phone call makes more sense. Read on to find out how and why to write emails to people you work with, people you know and close friends.

Writing an Email in English to Your Work Colleague/Boss

Emails at work are often used to set up meetings, since it's easy to see all the information written down in one place. It's also easier to get everyone's attention and responses through email than in person. Work emails are also useful when you want to ask a question that doesn't need to be answered right away, or to send a quick note to someone who is busy, so they can see it later. Always be clear and keep it concise (short).

Possible parts to include

A work email looks a lot like a business letter, with a few changes. Your email should have:

- A greeting: Say hello, and address the person you're writing to by name.
- An introduction: If the person you're emailing doesn't know who you are, include a quick introduction.
- The purpose of your email: Get to the point quickly and explain why you're writing the email.
- The details: Include only the details the recipient needs to know about the reason you're emailing. If the recipient needs to take any action after reading the email, include that here too.
- A signature: Sign your name at the end of the email.

- "I hope you're doing well." — You can include this optional phrase at the beginning of an email, after your greeting.
- "I hope this email finds you well." — This sentence is similar to the one above, but it's much more formal.
- "I just wanted to update you on…" or "I just wanted to let you know that…" — These are both great ways to start an email if you're sending a quick note about something that the recipient already knows about.
- "Thank you for your time." — It's a good idea to thank people for their time and help at the end of an email, right before your signature.
- "Sincerely," — This word is often used before your name in a signature, usually only in formal letters (like one to your boss). Being sincere means that you really mean what you're saying.

Sample work email

Here's what an email to a coworker might look like:

Subject: Friday Lunch Meeting Time Changed to 11:30 a.m.

I hope you're doing well today. This is [Your Name], from the marketing department. I wanted to update you on the lunch meeting we are having on Friday. The Friday lunch meeting has been moved from 11:00 a.m. to 11:30 a.m. Please let me know if you will be able to attend the meeting at this new time. Thank you for your time and I hope to see you there.

Writing an Email in English to an Acquaintance

An acquaintance is somebody you know, but not well. It's somebody who isn't quite a friend, but isn't a stranger either. Email is a perfect way to get in touch with an acquaintance because it's not as personal as calling or sending a text. Sending an email is a good way to reach out to somebody you haven't spoken to in a long time, or to keep in touch with someone you met at an event.
Possible parts to include An email to an acquaintance is less formal than writing to someone from work, but it’s a bit more personal. You can—and in some cases should—include more details about who you are and why you’re emailing. When you’re writing an email to someone you don’t know well, be sure to include: - A greeting: As always, say hello first! You can decide if you should use the person’s first or last name, based on how well you know them. - A reminder of where they know you from: Mention where you met the recipient or where you last saw them, so that they know who you are. - A positive detail about your recipient: You can mention how great of a conversation you had the last time you saw this acquaintance, or congratulate them on a recent promotion or new job. Including any little detail that shows you care about them is nice. - Your reason for writing: Why are you writing this email? It might be just to see how the recipient is doing, or to ask them for help with something. Make your reasons clear. - Your signature: Politely let the recipient know that you’re waiting for their reply, then sign with your name. - “Long time no see.” — If you haven’t seen the recipient in a while, you can use this very informal sentence at the beginning of your email. - “I’d love to catch up.” — To “catch up” means to talk about some of the things that have happened in your lives since you last spoke to a person. It’s a good phrase to use if you’re writing to someone you haven’t seen in a while. - “Keep in touch.” — This phrase means you’d like to keep talking with the recipient every once in a while. It’s a good sentence to use with someone you met recently. - “I look forward to hearing from you.” — Before you sign your name, you can use this phrase to show that you’d like to get a response. You could also use the slightly more casual “Looking forward to hearing from you.” - “Best wishes,” — In an email to an acquaintance, saying “sincerely” might be too formal. Instead you can use this phrase as a closing, or alternatively, just “Best,” followed by your name on the line below. Sample email to an acquaintance Here’s what an email to an acquaintance might look like: This is [Your Name]—we met at the New Year’s party at Sally’s last year. Long time no see! Congratulations on your recent promotion, you deserved it for all the hard work you do. I’m emailing to see if you’d like to meet up sometime to catch up. I’m in your city for a few weeks and I would love to chat with you. I look forward to hearing from you. Writing an Email in English to Your Friend These days we usually speak to our friends using texts, on a chat program or just in person. Sometimes, though, an email is still the best choice. You would send an email to your friend if the content is too long to fit into a text, if you want to include more than one link or attachment, or if you and your friend are far away from each other. Possible parts to include Emails to friends are very casual, and don’t always follow a specific structure. Still, there are some things you can include in your email to make sure your friend understands you: - A greeting: Say hello before you get to your email’s content! - Your reason for emailing: You can explain why you decided to email instead of text, or just go right into writing about what you wanted to share. - A signature: Writing your name is not always even necessary when you’re emailing a friend. Instead you can say “talk to you later” and leave it there. 
- "How's it going?" — This is a casual way to say hello and ask how your friend is doing.
- "Just wanted to tell you…" — This is a good way to start your email. Notice that the sentence is missing the word "I," which should come at the beginning of the sentence. That's because you can write the way you would speak to your friend.
- "Talk to you later." — You can also write this as the acronym TTYL.
- Common Internet acronyms: As we saw with CC and BCC, acronyms are abbreviations made from the first letter of each word in a phrase. A few of these acronyms are very popular when speaking online, and you might already use some of them. You can write "lol," which stands for "laughing out loud," if you're saying something meant as a joke. Or you might write "omg" for "oh my god," if you're amazed by something.

Write however feels natural to you! You can think of the email like it's a longer text.

Sample email to a friend

Here's what an email to your friend might look like:

How's it going? I was going to text you, but then I realized I had too much to say! Sorry I didn't answer your text right away earlier, I was at a lunch meeting. It was soooo boring lol. After the meeting we had pizza and soda though, so everyone was happy.

You know that I'm visiting New York atm*, right? Well I'm meeting with an old friend tomorrow and I wanted to get your thoughts on it. He's the guy I met last year at that awesome New Year's party. The one with the really nice shoes, remember?

And guess what. I have no idea how I should dress. Help!

*Note: atm is an abbreviation for "at the moment," meaning "now."

With all of these phrases and email parts, now you're ready to write your own email—whether it's to a friend, acquaintance or coworker!

If you liked this post, something tells me that you'll love FluentU, the best way to learn English with real-world videos.
Google Cloud for Data Science: Beginner's Guide

While AWS EC2 is the leader in cloud computing, Google Cloud has developed a very compelling and competitive cloud computing platform. In this tutorial, you will learn how to:

- Create an instance on Google Compute Engine (GCE),
- Install a data science environment based on the Anaconda Python distribution, and
- Run Jupyter notebooks accessible online.

Working in the cloud instead of on your own machine has two main advantages for data science projects:

- Scalability: you can tailor the power (RAM, CPU, GPU) of your instance to your immediate needs, starting with a small and cheap instance and adding memory, storage, CPUs or GPUs as your project evolves.
- Reproducibility: a key condition of any data science project. Allowing other data scientists to review your models and reproduce your research is a necessary condition of a successful implementation. By setting up a working environment on a virtual instance, you make sure your work can easily be shared, reproduced, and vetted by other team members.

Google Compute Engine

Although built around the same concepts and elements (instances, images and snapshots), EC2 and GCE differ in both access and resource organization. A key aspect of the Google Cloud Platform (GCP) is its project-centered organization. All billing, permissions, resources and settings are grouped within a user-defined project, which basically acts as a global namespace. This not only simplifies the interconnected mapping of the resources you use (storage, databases, instances, ...) but also simplifies access management, from role-based permissions to actual ssh keys and security. When it comes to user friendliness and access and role management, I find that working on the GCP is easier than working with AWS, especially when using multiple services.

The GCP also offers certain services which are particularly relevant for data science, including but not limited to:

- Dataprep, to build data processing pipelines,
- Datalab, for data exploration,
- the Google Machine Learning Engine, built on TensorFlow, and
- BigQuery, a data warehouse solution that holds many fascinating Big Data datasets.

A low learning curve and data-friendly services make the GCP a must-have in your data scientist toolbox. Before you start launching instances and installing Python packages, let's spend a few moments reviewing some of the common vocabulary used in cloud computing.

VMs, Disks, Images and Snapshots

A Virtual Machine (VM), also called "an instance," is an on-demand server that you activate as needed. The underlying hardware is shared with other users in a transparent way and as such becomes entirely virtual to you. You only choose the global geographic location of the instance, hosted in one of Google's data centers. A VM is defined by the type of persistent disk and the operating system (OS), such as Windows or Linux, it is built upon. The persistent disk is your virtual slice of hardware.

An image is the combination of a persistent disk and the operating system. VM images are often used to share and implement a particular configuration on multiple other VMs. Public images are the ones provided by Google with a choice of specific OS, while private images are customized by users.

A snapshot is a reflection of the content of a VM (disk, software, libraries, files) at a given time and is mostly used for instant backups. The main difference between snapshots and images is that snapshots are stored as diffs, relative to previous snapshots, while images are not.
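To make these terms concrete, here is roughly how a snapshot and an image are created from an existing disk with the gcloud command-line tool. This is only a sketch: the disk, snapshot, image and zone names below are hypothetical placeholders.

$ gcloud compute disks snapshot my-disk --snapshot-names my-backup --zone us-east1-d
$ gcloud compute images create my-image --source-disk my-disk --source-disk-zone us-east1-d

The first command backs up the disk as a snapshot; the second turns the disk into a reusable image from which new VMs can be booted.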
An image and a snapshot can both be used to define and activate a new VM. To recap: when you launch a new instance, GCE starts by attaching a persistent disk to your VM. This provides the disk space and gives the instance the root filesystem it requires to boot up. The disk installs the OS associated with the image you have chosen. By taking snapshots of an image, you create instant backups and can copy data from existing VMs to launch new VMs.

Let's put all of that into practice and get started with your first VM!

Getting Started with Your First VM on GCP

Create an Account and Project

To open an account on the GCP, you need a standard Google (Gmail) account. At the time of writing, Google offers a 12-month / $300 free trial of the platform. Although this offer comes with certain restrictions (no bitcoin mining, for instance), it should be sufficient to get you started working in this environment. To create your GCP account with the free trial, go to cloud.google.com and click on the "Try it free" button. You will be asked to log in with your Google account and add your billing information.

Once your account is created, you can access the web console at http://console.cloud.google.com/. You will first use the web console to define and launch a Debian-based instance and then switch to the web-based shell terminal to install all the necessary packages for your data science stack. But first you need to create a new project:
- Go to the Resource Management page,
- Click on "Create a new project",
- Specify your project's title and notice how Google generates a project ID on the fly,
- Edit the project ID as needed and click on "Create".

The project ID has to be unique across the GCP naming space, while the project title can be anything you want. I name my project datacamp-gcp.

By default, when you create a new project, your Google account is set as the owner of the project, with full permissions and access across all the project's resources and billing. In the roles section of the IAM page, you can add people with specific roles to your project. For the purpose of this tutorial, you will skip that part and remain the sole user and admin of the project.

Create an Instance

To create your first VM, go through the following steps:
- Go to your dashboard at https://console.cloud.google.com/home/dashboard and select the project that you just created.
- In the top left menu, select "Compute Engine" and click on "VM instances".
- In the dialog, click on the "Create" button.

You are now on the "Create an Instance" page.
1. Name the instance: you can choose any name you want. I name my instance starling.
2. Select the region: the rule of thumb is to select the cheapest region closest to you to minimize latency. I choose us-east1-d. Note that prices vary significantly by region.
3. Select the memory, storage and CPU you need. You can use one of several presets or customize your own instance. Here, I choose the default setup n1-standard-1 with 3.75 GB RAM and 1 vCPU, for an estimated price of $24.67 per month.
4. Select the boot disk and go with the default Debian GNU/Linux 9 (stretch) OS with 10 GB. If you prefer using Ubuntu or any other Linux distribution, click on the "Change" button and make the appropriate selection. Since Ubuntu is a close derivative of Debian, either distribution will work for this tutorial.
5. Make sure you can access the VM from the internet by allowing HTTP and HTTPS traffic.
6. (Optional) Enable a persistent disk for backup purposes: click on the "Management, Disks, Networking, SSH keys" link. This displays a set of tabs; select the "Disks" tab and uncheck the "Deletion Rule". This way, when you delete your instance, the disk will not be deleted and can be used later on to spin up a new instance.
7. Finally, click on "Create". Your instance will be ready in a few minutes.

Notice the "Equivalent command line" link below the Create button. This link shows the equivalent command line needed to create the same instance from scratch. This is a truly smart feature that facilitates learning the syntax of the gcloud SDK.

At this point, you have a running instance which is pretty much empty. There are two ways you can access the instance: either by installing the gcloud SDK on your local machine or by using Google's Cloud Shell.

Google's Cloud Shell

Google's Cloud Shell is a standalone terminal in your browser from which you can access and manage your resources. You activate the Cloud Shell by clicking the >_ icon in the upper right part of the console page. The lower part of your browser then becomes a shell terminal. This terminal runs on an f1-micro Google Compute Engine virtual machine with a Debian operating system and 5 GB of storage. It is created on a per-user, per-session basis. It persists while your Cloud Shell session is active and is deleted after 20 minutes of inactivity. Since the associated disk is persistent across sessions, your content (files, configurations, ...) will be available from session to session. The Cloud Shell instance comes pre-installed with the gcloud SDK and vim.

It is important to make the distinction between the Cloud Shell instance, which is user-based, and the instance you just created. The instance underlying the Cloud Shell is just a convenient way to have a resource management environment and store your configurations on an ephemeral instance. The VM instance that you just created, named starling in the above example, is the instance where you want to install your data science environment. Instead of using the Google Cloud Shell, you can also install the gcloud SDK on your local machine and manage everything from your local environment.

Here are a few useful commands to manage your instances. List your instances:

$ gcloud compute instances list

Stop the instance (takes a few seconds):

$ gcloud compute instances stop <instance name>

Start the instance (also takes a few seconds):

$ gcloud compute instances start <instance name>

And ssh into the instance:

$ gcloud compute ssh <instance name>
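For reference, the command generated by the "Equivalent command line" link for an instance like the one above looks roughly as follows. This is a sketch, not the console's exact output: the zone and image flags are assumptions based on the choices made earlier:

$ gcloud compute instances create starling --zone=us-east1-d --machine-type=n1-standard-1 --image-family=debian-9 --image-project=debian-cloud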
Setting up the VM

Run the ssh command above in your Cloud Shell window to log in to your instance. The next steps consist of:
- Installing a few Debian packages with apt-get,
- Installing the Anaconda or Miniconda distribution, and
- Setting up the instance to make Jupyter notebooks securely accessible online.

Let's start by installing the Debian packages:
- bzip2, which is required to install Mini/Anaconda,
- git, which is always useful to have, and
- libxml2-dev, which is not required at this point but which you will often need when installing further Python libraries.

Run the following commands, which work for both Ubuntu and Debian, in the terminal:

$ sudo apt-get update
$ sudo apt-get install bzip2 git libxml2-dev

Anaconda / Miniconda

Once the above packages are installed, turn your attention to installing the Anaconda distribution for Python 3. You have the choice between installing the full Anaconda version, which includes many scientific Python libraries, some of which you may not actually need, or installing the lighter Miniconda version, which requires you to manually install the Jupyter libraries. The process is very similar in both cases.

To install the lighter Miniconda distribution, run:

$ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
$ bash Miniconda3-latest-Linux-x86_64.sh
$ rm Miniconda3-latest-Linux-x86_64.sh
$ source .bashrc
$ conda install scikit-learn pandas jupyter ipython

The install shell script is downloaded and run with the first two lines. You should accept the license and the default location. On line 3, the no-longer-needed shell script is removed. Sourcing .bashrc on line 4 adds the conda command to your $PATH without your having to open a new terminal. And finally, the last line installs the required Python libraries: scikit-learn, pandas, jupyter, and ipython.

The commands to install the full Anaconda distribution are very similar. Make sure to check the download page to get the latest version of the shell script file:

$ wget https://repo.continuum.io/archive/Anaconda3-5.0.1-Linux-x86_64.sh
$ bash Anaconda3-5.0.1-Linux-x86_64.sh
$ rm Anaconda3-5.0.1-Linux-x86_64.sh
$ source .bashrc

To verify that everything is installed properly, check your Python version with python --version and verify that the right Python is called by default with the command which python. The reported path should point to the Miniconda (or Anaconda) installation directory, not to the system Python. You now have a working Python environment with the standard data science libraries (scikit-learn, pandas, jupyter, ipython) installed.
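As a quick sanity check (an optional step, not part of the original setup), you can confirm that the core libraries import cleanly and print their versions:

$ python -c "import pandas, sklearn; print(pandas.__version__, sklearn.__version__)"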
Allowing Web Access

The third and final step is to configure your VM to allow web access to your Jupyter notebooks. You first need to make the VM accessible from the web. To do that, you will create a firewall rule via the Google Cloud console. Go back to your instances dashboard and, in the top left menu, select "VPC Network > Firewall rules". Click on the "CREATE FIREWALL RULE" link and fill out the following values:
- Name: jupyter-rule (you can choose any name),
- Source IP ranges: 0.0.0.0/0,
- Specified protocols and ports: tcp:8888,
- and leave all the other fields at their default values.

This firewall rule allows all incoming traffic (from all IPs) to hit port 8888. Using the "Equivalent command line" link, you can see that the firewall rule can also be created from the terminal with the following command:

$ gcloud compute --project=datacamp-gcp firewall-rules create jupyter-rule --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:8888 --source-ranges=0.0.0.0/0

Now go back to the VM page (top left menu > Compute Engine > VM instances) and click on your VM name. Make a note of your VM's external IP address; this is the IP address that you will use in your browser to access your Jupyter environment. In my example, the IP address is 220.127.116.11; yours will be different. Also check on that page that the firewall rules you need are enabled.

Jupyter notebooks come with a configuration file that needs to be generated and edited in order to set up online access to your notebooks. In the terminal, run jupyter notebook --generate-config to generate the configuration file, then run jupyter notebook password to set a password. Tip: make sure that this is a strong password!

Now edit the configuration file you just created with vim .jupyter/jupyter_notebook_config.py and add the following line at the top of the file (to switch to edit mode in vim, just type the i character):

c.NotebookApp.ip = '*'

Save and quit vim by pressing Esc, then typing :wq and hitting Enter. This setting makes the notebook server listen on all of your VM's network interfaces, and not just the http://localhost:8888 URL you may be familiar with when working on your local machine. You are now ready to launch your Jupyter notebook with the command:

$ jupyter-notebook --no-browser --port=8888

The terminal output shows the address the notebook server is listening on. In your browser, go to the URL http://<your_VM_IP>:8888/ to access your newly operational Jupyter notebook. To check that everything is working as expected, create a new Python 3 notebook and try importing the libraries you just installed.

A few notes on your current setup:

The IP address you used in your browser is ephemeral, which means that every time you restart your VM, your notebooks will have a different URL. You can make that IP static by going to: top left menu > VPC Network > External IP addresses, and selecting "static" in the drop-down menu.
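If you prefer the terminal, the ephemeral address can also be promoted to a static one with the gcloud SDK. A sketch, in which the address name jupyter-ip is arbitrary and the region is assumed to match the zone chosen earlier:

$ gcloud compute addresses create jupyter-ip --addresses=<your_VM_IP> --region=us-east1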
The security of your current setup relies on the strength of the Jupyter notebook password that you defined previously. Anyone on the internet can access your Jupyter environment at the same URL you use (and bots will absolutely try). One powerful but very insecure core feature of Jupyter notebooks is that you can launch a terminal with sudo access directly from the notebook. This means that anyone accessing your notebook could take control of your VM after cracking your notebook password, and potentially run anything that would send your bills through the roof. The first-level measures to prevent that from happening include:
- Making sure your Jupyter password is a strong one, and
- Remembering to stop your VM when you're not working on it.

You are also currently running the server over http and not https, which is not secure enough. Let's Encrypt provides free SSL/TLS certificates and is the encryption solution recommended in the Jupyter documentation. For more information on security issues related to running a public Jupyter notebook, see the Jupyter documentation on running a public notebook server.

Jupyter notebooks are very convenient for online collaborative work. But you can also run an IPython session from your terminal simply with the ipython command. That will open an IPython session which has all the bells and whistles of a Jupyter notebook, such as magic commands (%paste, %run, ...), but without the web interface.

It's also easy to set up your VM to enable R notebooks in your Jupyter console. The instructions for enabling R Markdown are available from this great DataCamp tutorial: Jupyter And R Markdown: Notebooks With R by Karlijn Willems.

Google Cloud offers many interesting services for data science and powerful yet easy-to-set-up VM instances, alongside a very attractive free trial offer. The web console is easy to navigate and often displays the command-line equivalent of the current configuration page, thus lowering the barrier to using the gcloud SDK. In this article, you've learned how to select and launch a VM instance; install the necessary Debian packages, the Anaconda distribution and the data science stack; and finally how to set up the access rules to launch a Jupyter notebook accessible from your browser. Although the whole process may seem a bit complex the first time, it will quickly become familiar as you create and launch more and more VMs.

And in case things become a bit too muddled, you can always delete the VM you're working on and restart from scratch; that's one of the perks of working in the cloud. After a few times, you will be able to spin up complete data science environments on Google Cloud in a few minutes. Feel free to reach out and share your comments and questions with me on Twitter: @alexip.
The ministry of the Word is a fundamental element of evangelization through all its stages, because it involves the proclamation of Jesus Christ, the eternal Word of God. "The word of God nourishes both evangelizers and those who are being evangelized so that each one may continue to grow in his or her Christian life" (National Directory for Catechesis [NDC] [Washington, DC: United States Conference of Catholic Bishops, 2005], no. 17).

by Stephen J. Binz

St. Gregory the Great, in the sixth century, wrote: "The biblical Scriptures are letters from Almighty God to his creatures. The Lord of all has sent you his letters for your life's advantage—and yet you neglect to read them eagerly. Study them, I beg you, and meditate daily on the words of your Creator. Learn the heart of God in the words of God" (Letters, 5, 46). Reading the Bible in this way, as the Word of God expressing the heart of God, is what the ancient church called lectio divina. This is what Origen meant when he wrote about lectio divina in the third century, and what the patristic writers in general recommended as a way of prayer. "Lectio divina" is a Latin phrase that means "divine reading" or, as it is more often translated, "spiritual reading."

The most important foundation of this way of praying with Scripture is an understanding of the text's inspiration. When St. Paul declared that "all Scripture is inspired by God" (2 Tm 3:16), he used the word theopneustos (God-breathed). The sacred text is written by the human hand, but "breathed" by God. God is the primary source of Scripture, and human writers are the instrumental source. Thus, inspiration is not only a charism given by God to the biblical writers, but a continuing characteristic of the biblical text. The Bible is always inspired, so whenever we take it in our hands to read, we know that God's Spirit has been infused into the text. So the Holy Spirit within us leads us to listen, reflect, and understand deeply the inspired words given to us in Sacred Scripture.

Because the Bible is the Word of God—"letters" from God to us—our first response must be listening (lectio). We must attend carefully to the text, listening to it "with the ear of the heart," as recommended by St. Benedict (Rule of St. Benedict, Prologue). If God is indeed speaking to us through the sacred text, then we must attend to the words with a sense of expectation and let go of our own agendas. No matter how many times we may have read the passage in the past, we can expect God to offer us some new wisdom every time we read. So we must listen to the text as if for the first time, paying attention to whatever God desires for us.

Listening to the inspired Word leads us to reflection (meditatio). We want to understand the meaning of the text in the context of our lives. Because the Scriptures are Divine Revelation, they are far more than mere information. By reflecting and pondering the text, we allow the text to be an encounter with God, and we open ourselves to the deeper significance and the grace God desires for us. Entering into this kind of meditation, we might try to place ourselves in the scene. We want to encounter God through the text with our whole selves: our minds, hearts, emotions, imaginations, and desires. Through this kind of reflection, we try to discern what God wants us to understand or experience through the sacred text.
Then, after listening to and reflecting on God's Word, we naturally want to respond in prayer (oratio). Our prayer arises in our hearts as a result of having encountered God in the biblical text. As in any true communication, we listen and respond, so that a dialogue is established between God and ourselves. As St. Ambrose said, "In lectio we listen to God, in oratio we speak to God." Depending on what we have heard God say to us in our reflective reading, our prayers may be of praise, thanksgiving, lament, or repentance. And our prayers are increasingly enriched because they are continually nourished by the vocabulary, images, and sentiments of the sacred texts.

Because our responses to God's Word draw us into an ever more personal relationship with God, our prayers then lead us to contemplation (contemplatio), which is resting in the presence of God. As with any relationship, words and dialogue can be sustained only for so long. In the presence of God, our prayers lead to silence. In this silent contemplation, we open our hearts to whatever God wants to do within us. Having been fed by God's Word, we are now transformed by God's grace in the ways God knows best. A humble receptivity on our part allows God to work his transforming will within us.

Before ending our prayerful time with God's Word, we take time to move back into our active lives with awareness. We move from contemplation to action (operatio). We should consider what God wants us to do as a result of having encountered the Divine Presence in Scripture. By allowing our lives to be gradually transformed by Scripture, we become witnesses of the Good News. The experience of lectio divina deepens the presence of God within us as we seek to become more like Jesus Christ. So our daily lives become more attentive, more merciful, and more purposeful.

Lectio divina is the Church's most ancient way of reading the Bible. Of course, this prayerful reading of Scripture was not called lectio divina until the time of the Latin Fathers, but this must have been the way that Jesus read the Scriptures of Israel: a way that he learned from the Jewish tradition. The early Christians read the Gospels in this way too, not just as a way of learning about Jesus, but as a means of forming their lives as his disciples.

The Church Fathers spoke of lectio divina as a way of pondering the Word of God. Origen urged his readers to study and pray God's Word, asking to be illumined by God. Jerome encouraged his audience to be fed each day with lectio divina. As the monastic movement developed, lectio divina was practiced as the daily way to communicate with God. St. Benedict established lectio divina, along with the liturgy, at the core of his Rule. The monastic tradition encouraged this slow and thoughtful reading of Scripture and the ensuing pondering of its meaning.

Other spiritual traditions practiced lectio divina in a variety of ways. St. Albert stipulated that the Carmelites should ponder the Word of God day and night. St. John of the Cross urged the practice of lectio divina in this way: "Seek in reading and you will find in meditation; knock in prayer and it will be opened to you in contemplation." In Dominican spirituality, listening to the Word becomes a preparation for witnessing to the Word. St. Dominic's eighth way of prayer, sitting with Scripture, leads to his ninth way of prayer, walking with Scripture.
St. Ignatius of Loyola added dimensions of imagination, consolation, and discernment to lectio divina as he developed the Spiritual Exercises. The Society of Jesus, most commonly known as the Jesuits, teaches that lectio divina forms people into contemplatives in action.

In recent years, lectio divina has been liberated from monasteries and religious houses to become the heart of lay spirituality. In his apostolic exhortation Evangelii Gaudium, Pope Francis recommended lectio divina as a "way of listening to what the Lord wishes to tell us in his word and of letting ourselves be transformed by the Spirit" (Evangelii Gaudium, 152). Lectio divina, he said, "consists of reading God's word in a moment of prayer and allowing it to enlighten and renew us." Rather than keeping Scripture at a safe analytical distance, this formational reading leads us to involve ourselves intimately, openly, and receptively in what we read. Our goal is not to use the text to acquire more knowledge, or to get advice, or to form an opinion about the passage. Rather, the inspired text becomes the subject of our reading relationship, and we become the object that is acted upon and shaped by Scripture. Reading with expectation, we open ourselves so that the divine Word can address us, probe us, and form us into the image of Christ.

Although some today try to create a clear distinction between studying the Bible and prayerful reflection on Scripture, the Christian patristic writers show us that we cannot create this kind of division with the Word of God. Whether we are studying or praying, we must always be clearing a path toward our hearts for Jesus to come. Bible study today must teach people how to listen personally to the voice of God in the inspired texts and how to seek a prayerful, contemplative, formative understanding and love for Scripture. There is no clear distinction here between study and prayer.

Lectio divina is similar to Eucharistic communion in that, through it, Christ in a certain sense enters under our roofs, infuses our bodies and souls with his divine presence, and forms us into his own body. Pope Benedict says that "the diligent reading of Sacred Scripture accompanied by prayer brings about that intimate dialogue in which the person reading hears God who is speaking, and in praying, responds to him with trusting openness of heart. If it is effectively promoted, this practice will bring to the Church—I am convinced of it—a new spiritual springtime" (Address at the 40th anniversary of Dei Verbum, September 16, 2005).

In a vision of Ezekiel, God invites the prophet to open his mouth and eat the scroll so that he may then speak God's Word to the people (Ez 3:1-4). Medieval writers often compared lectio divina with this process of eating: taking a bite (lectio), chewing on it (meditatio), delighting in its flavor (oratio), and then digesting it to become part of the body (contemplatio). I would add, finally, metabolizing the Word (operatio), so that it may be put to use in forms of witness and service.
Flowering rosette of the extremely rare Mauna Kea silversword (Argyroxiphium sandwicense subsp. sandwicense) growing with a shrubby or tree-like member of the Hawaiian silversword alliance (Dubautia arborea) at about 2,950 m elevation. In spite of great morphological differences (including unbranched monocarpic rosette shrubs, highly branched polycarpic shrubs, trees, and vines), virtually all 28 species in the three genera constituting the silversword alliance retain the ability to hybridize, and many striking hybrid combinations are produced in nature.

(Left) Functional male flower of Echinocereus coccineus (Cactaceae) showing pollen-filled anthers surrounding the base of the stigma lobes. (Right) Functional female flower of E. coccineus from a different plant showing reduced filaments and empty anther sacs held below the stigma lobes.

Wild rose mallow (Hibiscus moscheutos): flowers at left; close-up view of stigma at right, where excess pollen grains typically deposited by bees germinate and the resulting pollen tubes must compete for ovules.

At left, female (top) and hermaphrodite (bottom) flower of bladder campion (Silene vulgaris); their nectaries differ in production of sugar content. At right, a European skipper (Thymelicus lineola) removes nectar from a female bladder campion flower. Differences in nectar production between the genders of plants may affect pollinator activity and, ultimately, inbreeding.

Vigorous spring growth of introduced Spartina alterniflora on a mud flat in South San Francisco (California) Bay. This invasive grass threatens to reduce shorebird feeding areas and clog flood control channels. Individual genetic clones, seen as distinct circular patches, have highly variable seed set, which will probably influence the future genetic composition of this population.

A lesser long-nosed bat, Leptonycteris curasoae, approaching a flower of the columnar cactus Pachycereus pringlei. The abundance of this pollinating bat is a key factor in the maintenance of trioecy in this cactus.

The Bitterroot Valley of western Montana, showing a typical habitat for the rare species Arabis fecunda: dry and rocky, with erodible, sparsely vegetated slopes. This species has a mixed mating system and exhibits inbreeding depression, attributes of reproductive biology that may complicate efforts to preserve its populations.

Pollination droplet secreted from the tip of the slender micropylar tube of an Ephedra ovule. Pollen adheres to the sticky, sugar-rich droplet, which later retracts to allow pollen to germinate in the pollen chamber adjacent to the female gametophyte.

The central yellow ring of a young Androsace lanuginosa (Primulaceae) flower turns red over several days, while the rest of the flower remains fresh and unchanged. The color presumably signals the plant's fly pollinators, which visit only rewarding, yellow-phase flowers. Such floral color changes occur in at least 77 families in 33 orders.

A unisexual (early male) umbel of Bomarea acutifolia, a hummingbird-pollinated vine in the mountains of Costa Rica. Following the male phase, each flower continues to produce nectar during a week-long neuter phase and then becomes female. Thus, each umbel is temporally unisexual and opportunities for self-pollination are limited.

Group of Calypso bulbosa (Orchidaceae) from the Rocky Mountain foothills west of Calgary, Alberta, Canada. After receiving pollen, these flowers undergo rapid changes in color and shape, but the rate of change is unaffected by the amount of pollen deposited. Removal of a flower's own pollen does not cause color or shape changes.
A myrmicine ant, Aphaenogaster araneoides, carrying a seed of a neotropical understory herb, Calathea micans (Marantaceae). Ant-planted chasmogamous and cleistogamous seeds differed in establishment success in understory and gap sites.

Cross section of Lilium at the tetrad stage stained with the PAS polysaccharide-specific reaction. Soluble carbohydrates are detected within the locular fluid and the tapetum, whereas starch grains are accumulated in the outer anther wall layers (epidermis, endothecium, and middle layers).

Pollination drops inside an ovulate cone of Sequoiadendron. These drops persist undisturbed during wet periods, since a water sheet forms on the wettable cone surface. Pollen capture resumes immediately after the cone dries.

Phyllocladus glaucus Carr. (Phyllocladaceae), Toatoa or Blue Celery Pine, endemic to New Zealand, showing part of a pseudowhorl of fertile phylloclades. Cones are borne marginally towards the base of these modified branch complexes. The individual ovules with pollination drops are at the stage of pollen receptivity. Magnification x5, from a color transparency by J. E. Braggins.

A capitulum of the South African "beetle daisy" (Gorteria diffusa: Asteraceae). The dark raised spots on the ray florets are strikingly similar to the bee-fly (Megapalpus nitidus: Bombyliidae) that pollinates this plant. Experiments show that bee-flies are more strongly attracted to capitula with spots than capitula in which spots have been removed.

In the dark forest understory, white flowers of the ginger Zingiber longipedunculatum are pollinated by pollen-collecting female Amegilla bees (Anthophoridae). Ginger species in a Bornean forest show high diversity, but they were grouped into only three pollination guilds.

View of "Tres Picos" (Three Peaks) from Villagra on Robinson Crusoe Island, which is in the Juan Fernandez archipelago off the coast of Chile. The closer vegetation represents the habitat for Lactoris fernandeziana (Lactoridaceae).

Inflorescence with flowers of purple loosestrife, Lythrum salicaria (left panel). Depending on the relative length of styles with respect to stamens within flowers, individuals are categorized into three floral morphs. The three floral morphs also differ in size and shape of stigmas (right panel). Stigmas (top: long morph; middle: mid morph; bottom: short morph, in the right panel; bar = 200 µm) are digitally false-colored, computer-enhanced images from scanning electron micrographs. Photo credit: M. Biernacki, T. K. Mal, R. J. Williams, and The Camera Shop, Broomall, Pennsylvania.

Color-enhanced scanning electron photomicrograph of a seed of Lobelia inflata (Campanulaceae). The seed's actual width is ~0.30 mm. An individual of this monocarpic and self-fertilizing species typically produces 50-100 fruits, each containing up to 500 seeds. This species has a strict light requirement for germination.

A nocturnal rodent, Gerbillurus paeba, feeds on the copious amounts of jelly-like nectar produced by flowers of the African lily Massonia depressa (Hyacinthaceae). This lily, which has flowers situated at ground level, is the first monocotyledon discovered to be pollinated by rodents. The striking similarities between the flowers of M. depressa and those of unrelated rodent-pollinated Protea spp. (Proteaceae) provide strong support for the concept of convergent floral syndromes.
Pollen germination and tube growth in the snow buttercup, Ranunculus adoneus, photographed under fluorescence microscopy. Snow buttercup flowers exhibit heliotropism, the capacity to track the sun's rays over the course of the day. The adaptive significance of solar tracking in snow buttercups is mediated through the impact of flower heliotropism on paternal and maternal floral environments. In controlled crosses, pollen from solar-tracking flowers has higher germination success than pollen from experimentally restrained flowers. Solar tracking in recipient flowers also enhances pollen germination and increases pollen tube to ovule ratios.

A severe infestation of Lygodium microphyllum (Cav.) R. Br. located at Jonathan Dickinson State Park, Martin County, Florida, USA. A native of the Old World tropics, L. microphyllum has become a serious pest in the forested wetlands of South Florida since naturalizing in the 1960s. Within severe infestations, this vine-like fern can smother both the understory and canopy, disrupting the recruitment of native vegetation and altering local fire ecology. The spread of this species appears to be facilitated by its ability to reproduce via intragametophytic selfing.

Computer-generated three-dimensional reconstruction of the male germ unit of rye (Secale cereale) based upon serial ultrathin sections. The two elongated sperm cells are connected at one end and each contains some plastids (green), as well as numerous mitochondria (red) and a nucleus (blue). The vegetative nucleus (blue) is closely associated with the sperm cells, but not connected; it contains a single nucleolus (white).

Tacca chantrieri in the shady understory of a tropical forest in Yunnan Province, China. The striking floral display involves dark-purple pigmented flowers and bracts, and extended whisker-like bracteoles. In Tacca, these traits have been assumed to function as a deceit syndrome in which reproductive structures resemble decaying organic material, attracting flies that facilitate cross-pollination (sapromyiophily). However, experimental studies of the pollination and mating biology of T. chantrieri in China cast doubt on this hypothesis by demonstrating that most seed produced in populations is the result of autonomous self-pollination.
Community Archaeology: Handing Back the Power of the Past

The community of Quinhagak, Alaska, where the Nunalleq Project is inspiring Yup'ik youth about their history and culture. Photo by Sean O'Rourke.

By Sean O'Rourke

It's July 2015 and, after leaving Canada some 40 hours prior, I find myself on a tiny plane descending rapidly into Quinhagak, Alaska, an out-of-the-way Yup'ik village roughly 745 people call home. As the 10-seater Cessna cuts through haze blown in from forest fires hundreds of kilometres away, the tundra's intricate beauty comes into focus: This treeless-but-lush landscape is endlessly dissected by an ever-changing network of meandering streams lazily draining into the Yukon and Kuskokwim rivers before rushing toward the Bering Sea.

Tundra outside Quinhagak, Alaska. Photo by Sean O'Rourke.

I soon learn, however, that the river-scape is not the only thing shifting in the Yukon-Kuskokwim Delta, a region twice the size of Scotland (and just as rainy). As a fledgling archaeologist, I'm flying to Quinhagak to partake in a field school at the Nunalleq Project—one of a few recent excavations turning the discipline of archaeology's typical power structures on their head.

Power and Archaeology in the "New World"

Power relations shape archaeology as they do nearly every other human context. In North American archaeology, the balance of power has historically favoured archaeologists, giving the Indigenous cultures they study little say regarding the fate of their material heritage or whether it is dug up in the first place. Early archaeology in the so-called "New World" often amounted to little more than treasure hunting. In 1838, a pair of brothers gutted the Grave Creek Mound in West Virginia, the largest conical burial mound on the continent. The siblings transformed it into a museum (and later a saloon and dance floor), selling artifacts and human remains they encountered—many of which, like the valuable scientific data the mound once held, were never recovered.

Though during the 20th century archaeology evolved into the systematic and regulated discipline we know today, Indigenous perspectives mostly lingered out of sight. Communities felt powerless as they watched their cultural heritage excavated and shipped around the world. At the dawn of the 21st century, nearly 20,000 Native American artifacts were in British museums. Over the past few decades, however, the discipline has slowly been shifting toward an equilibrium, and a reality of genuine, equitable partnership has begun to emerge. Today, Indigenous groups hold more decision-making clout in archaeology than ever before—a tool some, like Quinhagak, now wield to empower their communities.

The Nunalleq Project: New Ideas from the Old Village

My plane touches down on Quinhagak's bumpy gravel airstrip and I shyly hitch a ride with a gruff postal worker to meet my team at the community centre (or "the big red building," as the packages I sit beside call it). I shuffle in the door and Charlotta Hillerdal, one of the head archaeologists, begins briefing our crew on what we'll be doing over the next month. She explains how the Nunalleq ("old village" in Yup'ik) Project is the excavation of a 600-year-old Yup'ik village perched precariously on the rapidly eroding Bering Sea coastline and that our job is to excavate, document and preserve as much of it as possible before it is lost to the region's notoriously vicious winter storms.
Due to climate change, the ground does not freeze here like it used to, and more and more of the coast washes away each year. Hillerdal emphasizes that the excavation is jointly directed by the University of Aberdeen and Qanirtuuq Inc., the village's corporation, so we should all expect to meet and work with locals. Since this is my first archaeological experience, that does not strike me as noteworthy—but I could not have been more wrong.

The author working at Nunalleq in July 2015. Photo by Lindsay Paskulin.

On our first day at Nunalleq, the rain never stops. Covered head to toe in mud, I wonder what I signed myself up for. But on the second day, the sun comes out—and so does the community. All day long, whole families—elders, parents, children, even family dogs—cram onto their ATVs and drive down the sandy beach from Quinhagak to visit. Though elders and parents mostly chat with the archaeologists as they walk around the excavation's edges, peering down over our shoulders, the kids and teens almost always get down in the mud and dig with us. My colleagues and I teach them to identify common artifacts (like stone flakes or wooden dolls and darts), trowel with care and screen dirt. Judging by the smiles and laughter that fill our camp, they love it. Whenever we host open houses, Yupiit of all ages clamour to catch a glimpse of our most recent discoveries. In addition to the hundreds of immaculately preserved museum-quality artifacts we regularly unearth—a human-walrus transformation mask, tattoo needles, an amber necklace and intricate ivory earrings, to name a few—we excavate hundreds upon hundreds of razor-sharp stone arrowheads. "It was like a drive-by shooting," co-head archaeologist Rick Knecht says.

In fact, it is because of interested locals that Nunalleq got started in the first place. In the mid-2000s, beachcombing villagers began noticing artifacts washing up with the tide. Qanirtuuq Inc. contacted Knecht, who was previously at the University of Alaska Fairbanks before moving to the University of Aberdeen, and in 2009 Nunalleq was discovered. It is unusual for a Yup'ik community to request the assistance of archaeologists—Yupiit tend to believe the past should stay in the ground—but they made an exception and chose to study Nunalleq rather than lose it forever. According to Hillerdal, "It sparked an engagement with Yup'ik traditional culture, especially among the younger generation."

I spend much of July on hands and knees clearing tundra moss with my trowel and picking salmon vertebrae out of my screen. The villagers tell me that Nunalleq is transforming how youths see themselves and their culture, and I cannot get their words out of my head. Parents praise the education programs that blossom from Nunalleq, such as carving workshops facilitated by elders and a cultural art program that urges schoolchildren to explore what it means to be Yup'ik in the modern era. One local man I befriend, Mike Smith, got involved with Nunalleq as a teen distraught over a break-up. Working with the archaeologists encouraged him to take up Yup'ik carving. "This project saved me," he would later tell the Anchorage Daily News.
Over a cup of tea, a teacher at the community school describes how Nunalleq both inspired youths to create dances about their ancestors—the first time Yup'ik dancing has occurred here since it was banished by missionaries over a century ago—and galvanized one of her students, Angela, to give a prize-winning speech on the importance of learning about the past. In her speech, Angela says, "The dig site has taught me the importance of staying in school so that I can be involved in this when I get older. I want to be able to share the artifacts with my family one day. I want to encourage everyone in my community to use the resources around them to learn about their culture. Through the dig site I have become more proud of who I am and where I have come from."

My Research: How Does Nunalleq Affect Yup'ik Youth?

On a blustery, sunny day in September 2016, I again find myself on a plane traversing the tundra en route to Quinhagak. With Mike Smith as my research partner, I'm here to speak with elders, parents and educators about the effects Nunalleq is having on youth. Qanirtuuq CEO Warren Jones has said, "One of the big reasons for this project was to help our future generation get back to our history and culture, so all this work is basically for our children and the future generations." Based on what Smith and I hear from community members, these efforts are paying off.

Mike Smith plucks a salmon out of the Bering Sea on the mud flats next to Nunalleq. Photo by Sean O'Rourke.

The school's Yup'ik language teacher, Keri, explains how Nunalleq makes youth "appreciate their culture because they know what the weather can get like here. If our ancestors didn't make it, then we wouldn't be here." This appreciation, according to another teacher, Alicia, "makes them value who they are." She links the Nunalleq-inspired resurgence of Yup'ik dancing to further psychological changes: "We have one kid I can think of in particular. He is the best native dancer. It has helped a lot of our kids feel confident. Those kinds of things help the kids realize they can be whatever they want. The Yup'ik dancing for sure has helped a lot of our kids. You see them up there dancing, and it's like they're shining."

The Nunalleq Culture and Archaeology Centre: A Milestone in Community Archaeology

A decade of close collaboration between Qanirtuuq Inc. and the University of Aberdeen recently culminated in the Nunalleq Culture and Archaeology Centre, which opened last August. The museum ensures artifacts can be preserved right in the village. At over 70,000 pieces, its collection is now the largest collection of pre-contact Yup'ik artifacts in the world. Archaeological materials are typically whisked thousands of kilometres away to university laboratories for preservation, analysis and storage, but Quinhagak now has the power to decide what happens to their material heritage and share their past with generations to come. "I almost broke down. It's our culture. It's priceless to us," Qanirtuuq Inc. CEO Warren Jones said at the museum's opening ceremony. "They will be here with us forever. When I'm gone, when we're gone, they'll be in the village. And future generations can come at any time and look at them, and never forget where they came from."

Archaeology by the Community, for the Community

Since the 1970s, a movement toward Indigenous collaboration and community-based research has been picking up momentum in both professional and academic spheres.
Community-based archaeology projects like Nunalleq prioritize community involvement and capacity building by training and incorporating locals, as well as sharing decision-making power with community leaders. Prominent archaeological organizations like the Canadian Archaeological Association urge collaboration because it promotes socially responsible research and adds depth to the archaeological process. According to Hillerdal, community involvement is what makes archaeology relevant. "Community engagement affects the way we do archaeology and also the questions that are asked to the material," she says. "Involvement from the community and inclusion of local knowledge contributes to more complex interpretations and also a more 'grounded' archaeological practice." Her words underscore one of community-based archaeology's central tenets—and greatest strengths: an equal reliance on Western scientific and Indigenous knowledge systems when interpreting the past. When community members participate in a dig, they bring life experiences, cultural knowledge and fresh perspectives that help researchers address puzzling questions and generate new hypotheses. When locals are involved, "archaeologists and community members learn from each other, and the result is a shared story," Hillerdal says.

Sunset on Quinhagak's Moravian church. Photo by Sean O'Rourke.

The metamorphosis of archaeology from a colonial discipline into one in which the studier and the studied stand on equal footing is not yet complete. Significant work remains to be done. Leadership roles in archaeology's academic and professional sectors are sorely lacking Indigenous voices. Governmental regulations regarding Indigenous archaeology are far from comprehensive, and many artifacts remain tucked away in repositories, gridlocked by outdated policies. Some in the field of cultural resource management seem as reluctant as their cousins in natural resource industries to understand Indigenous issues. Nevertheless, a handful of community-oriented projects and the growing number of Indigenous students in archaeology indicate the discipline is headed in the right direction. As time treads on, more archaeologists realize the power of material culture to profoundly reshape the realities of people alive today. The sooner society acknowledges that the past does not have to be cut off from the world by museum glass, the sooner we will be able to embrace and celebrate this aspect of our shared humanity. As a father with whom I spoke in Quinhagak put it, community-based research can provide the key to the past: "It's up to us to unlock the door."

Learn more about the Nunalleq Project and keep up to date on its recent developments by following the Nunalleq blog. Read more about my research in an article I recently published in The Journal of Archaeology and Education.

Sean O'Rourke is an interdisciplinary studies (psychology and geography) graduate student at the University of Northern British Columbia researching the intersection of meaning in life and land-use with Eveny reindeer herders in Siberia. Although he grew up in Calgary, Sean has lived in northern BC for the past two years. He has spent the past four years undertaking anthropological and psychological research in collaboration with Indigenous communities throughout the circumpolar north.
SEO: An Introduction

What is SEO?

Search Engine Optimization (SEO) refers to the process of adjusting a website's structure and content with the intention of improving the site's visibility in search engine results.

How Search Engines Operate

Search engines operate by utilizing specially designed programs called "crawlers" which move rapidly through webpages and extract details about each site's contents. The information extracted by the crawlers is added to a storage area called an index, which the search engine accesses each time a user submits a search. Each search engine has its own specialized algorithm which searches the index for the submitted search terms and determines which sites will be most likely to contain the information the user is looking for. By optimizing a website's structure to match parameters in the search engine algorithm, a developer can increase the probability of having the site selected as a high-ranking result when a user submits a relevant search query. By achieving a higher search ranking, more potential users will be drawn to the site and will be more likely to trust it.

Optimizing Website Structure

The ultimate goal with SEO is to design a website which will be fast and easy for a crawler to navigate, and which contains enough information to give the crawler a good idea of what the site contains. There are many factors to consider in search engine optimization, with varying degrees of importance.

The most important factor to account for when designing a website is the quality of the content. Naturally, having real, helpful information on a site is much more beneficial to a user than having large quantities of useless or repeated information. Low-quality websites will draw fewer return visitors and will keep users engaged for shorter amounts of time. Both of these negative effects are quantifiable and are in fact tracked by search engines to determine which sites tend to give users the best experience.

Keywords are important to include in your content to give the crawlers a good idea of what the site is about. Keywords should include specific words which tell what is being talked about, as well as common industry terms or related phrases. More success can be gained by including terms which users outside of the industry will be likely to use. Terms like "magnesium sulfate heptahydrate" are excellent and will tell the crawler exactly what your content is about, but a user who is researching the topic or unfamiliar with the terminology may be more likely to search for "epsom salts." It is also important to note that "keyword stuffing," the practice of jamming large amounts of keywords onto a page (visibly or invisibly), is not appreciated by search engines and will typically result in action being taken against the site in the form of lower search rankings or, in serious cases, removal from the search results altogether.

As far as the actual architecture of the site is concerned, the single most important factor is the HTML title tag. Crawlers look for this specifically, and it will determine how a site gets indexed. Title tags should accurately describe the contents of the webpage, using specific keywords. An effective title tag will ensure that a user finds the site when they search for those keywords. Meta tags don't provide an actual boost in search rankings, but the meta description is typically the content that appears as a snippet below the link to the site in the search results, so it should look good. Like the title, it should contain several keywords and provide a clear overview of what will be found when the user clicks the link.
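As a hypothetical illustration (the tag contents below are invented, building on the epsom salt example above), a descriptive title tag and meta description might look like this:

<head>
  <title>Epsom Salt (Magnesium Sulfate Heptahydrate): Uses and Benefits</title>
  <meta name="description" content="What epsom salt is, how magnesium sulfate works, and practical ways to use it for baths, gardening, and home remedies.">
</head>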
Frequent updates increase the quality of a website's content, which has already been discussed as an important factor. Staying recent will also provide a direct boost in search rankings if a related search term or subject suddenly becomes more popular than average. Many search engines will give a boost in the rankings to websites with the most recent information about a trending topic. Here, care must be taken again to avoid meaningless updates or updates with little real value. This will ultimately succeed only in decreasing the usefulness of the site, and therefore its quality.

There are also ways of increasing crawler efficiency by restricting access to pages that are unnecessary for the crawler to read, or which have information unrelated to the content of the website. Utilizing robots.txt files and 301 redirects when appropriate will make the crawl process more efficient and will better reflect the content of the website.

Navigation in general is important to the crawl process, as well as the user experience. A website that is easy to navigate with few clicks will make a user feel more comfortable, and will allow a crawler to more easily reach each page that it needs to. Including a sitemap is a good way to tell the crawler exactly where it needs to go, as well as helping users who may struggle with navigation. Mobile device friendliness makes it more convenient for all users to access information on a site, and both Google and Bing will provide a slight boost to search rankings for sites that accommodate mobile users. Adding alternate text to images is a small yet effective way of telling a crawler what is included on a page, since it cannot see what an image actually contains.

Descriptive URLs of the form www.domain.com/vacations/2015/minnesota are much easier to understand than those like www.domain.com/2015/2344=?45h3lkjl&&-0100-1. They make navigation easier for users and crawlers, and ensure that users are not discouraged by strange-looking URLs which could give the impression of untrustworthiness.

Sites that load faster can gain slightly improved rankings, and faster loading also makes users more comfortable and happier with the results. Waiting a long time for a page to load can discourage users, and they may look for answers elsewhere, which can actually hurt the site's rank.

Many factors are listed above, and each should be considered in some way when designing a website. As a general guideline, the site should simply be appealing, full of high-quality content, and easy to navigate.

How Links Affect Search Ranking

Search engines will also take into account many external factors which help them gain a better understanding of what the site contains, as well as how useful other people think it is. Among these factors are the amount of time users spend on your site, whether or not they return to the search engine after a visit (presumably because they did not find the information they were looking for), the location of the user, and the number of places across the web which link to the site. The most important of these is the number of links to a website. If a site is linked to many times, it shows that users value the information that the site contains. But beyond simply amassing large quantities of links to a website, it is important where those links come from. Links from blog posts, comments, and untrustworthy websites will not score as highly as links from industry leaders or sites with good reputations. Crawlers can also look at the anchor text that is used inside a link to get a better clue about what the site actually contains, and what users think about it.
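For example (an invented snippet, not taken from any real site), descriptive anchor text gives a crawler far more context than a generic label:

<!-- Weak: the anchor text says nothing about the destination -->
<a href="https://example.com/epsom-salt-guide">click here</a>

<!-- Better: the anchor text describes the linked content -->
<a href="https://example.com/epsom-salt-guide">guide to using epsom salts in the garden</a>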
Websites that provide users with a good experience and new, valuable information will be linked to more often than sites which simply answer a question. Creating video tutorials, doing research which others might benefit from, providing open-source software, or offering a useful product or service for free are all methods for drawing new users, as well as increasing the odds that they will link to the site and spread the word. Social media presence can give a site a boost as well. Google will give a higher ranking to sites with more Google+ activity, and if a website has a Facebook presence, it makes it easier for users to share the site with their friends.

Search engines are aware that people will try to manipulate their algorithms to gain higher search rankings. To combat this, they have included measures to identify these attempts and will take action against them by reducing the offending sites' search rankings or removing them from the index completely. It is unlikely that an accidental violation will result in penalties, but care should still be taken to avoid any risk.

Low-quality content is a large determining factor in whether a site will be valuable to a user. If a site contains low-quality, duplicated, or unoriginal content, it will be less likely to be included in search results. The best way to avoid penalties for low-quality content is to make the website unique and offer information that other sites do not.

Cloaking refers to the act of creating fake sites or hiding content from the view of users to trick the search engine into thinking a site is more relevant than it is. This is strongly frowned upon and will result in major penalties. Keyword stuffing has been mentioned already as a bad idea. It is easy to detect and will result in lower search rankings. A history of DMCA takedowns or piracy will hurt a site's reputation and make it less likely to appear high in search results. Overuse of advertising on a website is distracting, inconvenient for users, and difficult for crawlers to navigate. Too much advertising will result in a lower search ranking. Paid link schemes and link spamming are both serious offenses and will result in severe penalties. Both of these violations hinge on creating large numbers of links to a website to make it look more credible, and are taken very seriously by most search engines. In most cases, evidence of these violations will result in removal from the search results.

There are literally hundreds of ways to optimize a website for search engine results. In fact, Google uses over 200 indicators to determine which sites will appear for a given search. Search engine optimization is not a one-time project that can be completed in a weekend or scheduled for a day of work. Building a website's reputation and establishing a strong base of links takes time, and results may not appear for months. Rather, SEO is an ongoing process, and one that should be considered at all stages of website design. Generally, though, if a site is full of high-quality, unique content, is easy to navigate, and provides something of value to a user, it will likely perform very well in search results and encourage users to return and share their discoveries with others.
October 1969 Electronics World Table of Contents

Wax nostalgic about and learn from the history of early electronics. See articles from Electronics World, published May 1959 - December 1971. All copyrights hereby acknowledged.

This is the last of a series of articles on printed circuit boards (PCBs) that appeared in the October 1969 issue of Electronics World, reporting on the latest and greatest advances in printed circuit board technology. Author Gaetano Viglione, of Sanders Associates (bought by Lockheed Martin in the 1980s and now owned by BAE Systems), reported on the state of the art of flexible printed circuit wiring. Sanders did a lot of aerospace and military electronics systems and was a leader in the field. In those days, the larger electronics manufacturers had their own in-house PCB design and fabrication capability.

Flexible Printed Wiring

By Gaetano T. Viglione / Manager, Product R & D, Flexiprint Division, Sanders Associates, Inc.

The author is a graduate of Columbia University (Chemical Engineering). He holds several patents in metal processing and chemical engineering. He has been a consultant to industry and the federal government on process engineering problems.

This wiring can be preformed, folded and rolled, twisted and turned to fit any conceivable irregular configuration. It is particularly useful in very dense electronic packages.

Flexible printed circuitry consists of flat etched copper conductors bonded between layers of pliable insulation. While this sounds simple enough, flexible printed circuitry has had many demands placed upon it, some of which are due to the numerous design features which appear to be attainable. The use of flexible circuitry as an interconnection medium has begun to inspire the design engineer because of its many advantages. For example, the significantly lower weight per unit of area, coupled with the reduction of volume over conventional cabling techniques, results in reductions in mass of 2:1 and, in some cases, 8:1. The feature of flexibility allows circuits to be preformed, folded and rolled, or twisted and turned to fit any conceivable irregular configuration in the more dense electronic packages, especially those typical of aerospace applications. Further, flexible printed circuitry is readily shielded and inherently more reliable due to its design and the adaptation and use of materials having exceptional physical properties.

Table 1 - Characteristics of the most commonly used insulation materials for flexible printed wiring.

The real dollar savings come at the time of assembly of the electronic package because of a reduction in the man-hours required for assembly. Another benefit is the reduced possibility of error when connecting with flexible printed circuitry as compared to the greater possibility of error when using point-to-point wiring.

Listing of Advantages

The benefits resulting from the use of flexible printed wiring and cabling include the following:

1. Each circuit is a finished unit, ready for component assembly.
2. Handling of individual wires is eliminated because there is no need to measure, cut, strip, tin, route, solder, and lace.
3. Circuits are custom-designed for each job; therefore, wiring errors are eliminated.
4. Each circuit of a particular design is mechanically and electrically identical and completely interchangeable.
5. Solder pads are in one plane, rendering them ideal for automatic processing.
6. Wiring requirements no longer limit package geometry. Circuits can be run flat, bent around sharp corners, folded, and twisted.
7. Conductor breakage is nil.
High-reliability hinge, spring return, and extensible interconnections can be readily designed.
8. Flexible printed circuitry can be bonded to rigid circuit boards to create a complete, one-piece interconnection assembly, eliminating unnecessary solder joints.
9. Single and multilayer circuits are closely spaced and held to close tolerances; therefore, high wiring and internal package densities are possible.
10. Flexible printed circuitry has a high volumetric efficiency. Close-tolerance conductor location is possible because each circuit is a precision etched unit.
11. Material normally needed in the form of a relief loop for the bend radius using standard wire cable can be eliminated, resulting in shorter wiring runs.
12. Thin, flat, two-dimensional geometry permits cable routing through narrow slots and along smooth surfaces, eliminating the excessive bulk of round wire.
13. Depending on the specific application, flexible printed circuitry can save approximately 75% of volume and weight over conventional round-wire cable.

Fig. 1 - Conductor widths and spacings for various thicknesses.

14. Foreign material, such as moisture, flux, and gases which could "wick" inside the insulation of wire, cannot degrade flexible printed-circuit performance because all conductors are completely sealed.
15. Tension loads are carried by the entire cable, not by individual wires; therefore, each circuit part is a solid mechanical structure.
16. High reliability in demanding environments is inherent because the entire circuit flexes as a unit under the stress of vibration and shock.
17. Distributed capacitance and cross coupling do not vary from unit to unit of a single design, resulting in constant electrical characteristics.
18. Circuits are easier to solder and inspect than a tangle of conventional wires; therefore, quality-control operations are more accurate.

Design Criteria & Costs

When considering the use of flexible circuitry for interconnecting either printed-circuit boards or black boxes, certain basic design criteria should be remembered. General guidelines for keeping the number of layers in flexible circuitry to a minimum are: set up the pin address to reduce or eliminate crossovers; use the freedom of pin address as a method of reducing layers; consider the use of narrower conductors and spacing; and use fold-outs to increase density at the terminal area.

There are many factors that affect the cost of a flexible circuit. Major cost advantages result if standard flexible-circuit materials and sizes are used and if non-critical tolerances are loose enough to allow economical, automated production. The following items will also serve to keep costs to a minimum:

1. Specify 0.0027" thick (2-ounce) copper conductors if possible. These are the most economical because raw materials are purchased, handled, and stocked in large quantities. Other sizes, such as 0.00135" (1-ounce) and 0.0040" (3-ounce), are available should the need arise. (A short sketch of the ounce-to-thickness conversion appears after the materials discussion below.)
2. Specify enlarged punched-out areas in the covercoat at solder-pad areas rather than tight-fitting, pad-sized individually punched areas. This eases registration in manufacturing and thereby reduces cost.
3. Try to keep punched-out bare copper areas on the same side of the circuit. If it is necessary to present bare copper on the reverse side of a circuit, try to eliminate the extra cost involved in punching the base insulation by folding the circuit.
4. Keep large punched-out areas in simple shapes to lower the cost of intricate dies and/or to avoid excessive hand-cutting and punching.
5. Design terminal pads somewhat oversized to allow for slight drift.
6. Specify insulation as shown in Table 1. These materials are usually stocked in quantity.
7. It is recommended that conductor widths be specified larger than the minimum shown in Fig. 1.
8. It is recommended that conductor spacing be specified larger than the minimum shown in Fig. 1.
9. Hold over-all flexible-circuit length and width to a minimum.
10. Use the fewest number of layers possible.

This complex flexible printed circuit, shown here along with an 18-in rule for size comparison, demonstrates how a complicated interconnection problem has been solved. A multilayer flexible circuit with layer-to-layer interconnections, along with connectors attached, is also illustrated.

Another important design consideration is the selection of the termination method. Recently, connectors for flat cable that accommodate circuits on 50-mil centers have become available. The type and size of terminating pins and connectors, and the method of attaching the circuits to them, are important in keeping down size and weight and in permitting greater accessibility. Terminating with a pad and hole, or with a "lap" soldering technique utilizing pins from a connector, are two of the most commonly used fabricating techniques. For reasons of greater economy and increased reliability, a new generation of terminations welded to flexible circuitry has evolved. Materials such as tinned copper, nickel, and gold-plated Kovar are being satisfactorily welded to flexible circuits and potted as a substitute for specially designed connectors. All of these terminal lead materials are easily solderable and, in some cases, they are also welded to weldable printed-circuit boards. The advantages here are lower attrition, because leads do not become unsoldered upon secondary soldering, and faster processing, which reduces the possibility of process effects such as delamination because less time is spent at high temperatures than when a solder joint is made.

The materials used in the fabrication of flexible printed circuitry depend upon the application and environment in which the equipment will be expected to operate. Normally, Kapton-F film, to which copper has been laminated, is used because of its desirable high-temperature properties. (Kapton-F is the trademark of the duPont Company for its plastic film consisting of a layer of Teflon FEP resin bonded to one or both sides of a polyimide film.) Covercoat materials can be varied depending on the humidity characteristics and high-temperature properties expected in system operation. To some extent, the choice of materials is dictated by the amount of flexing anticipated prior to and during assembly of the units. Generally, the polyimide materials are most popular since the physical influences mentioned above are most easily overcome. However, copper subjected to excessive flexing and vibration can eventually suffer fatigue and electrical failure unless proper support of the connector areas and the crucial bend radii is carefully observed and practiced.
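As a side note to item 1 of the cost list above: copper "weight" is specified in ounces per square foot of foil, and the article's own figures (1 ounce = 0.00135", 2 ounces = 0.0027") imply a simple linear conversion of roughly 0.00135 inch per ounce. The sketch below merely restates that arithmetic; the constant and function name are our own, not from the article:

```python
# Copper foil "weight" is ounces per square foot; the article's figures
# (1 oz = 0.00135", 2 oz = 0.0027", 3 oz ~ 0.0040") imply roughly
# 0.00135 inch of thickness per ounce. This just restates that arithmetic.
INCHES_PER_OUNCE = 0.00135  # derived from the article's 1-ounce figure

def copper_thickness_inches(weight_oz: float) -> float:
    """Approximate foil thickness for a given copper weight."""
    return weight_oz * INCHES_PER_OUNCE

for oz in (1, 2, 3):
    print(f"{oz}-ounce copper is about {copper_thickness_inches(oz):.5f} inch thick")
```

At 3 ounces the linear estimate gives 0.00405", slightly above the article's rounded 0.0040" figure.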
Testing Flexible Printed Circuitry

While much information is available on the applications aspects of flexible circuits, very little data has been presented about the environmental characteristics of the finished product. Sanders, as well as other manufacturers of flexible printed wiring, has set up test programs to run environmental checks on its products to make sure they meet customer specifications.

The following government and industry standards have been used for such tests: MIL-T-5422E, MIL-STD-354, MIL-STD-202, MIL-P-55110A, MIL-STD-810, MIL-E-5272, MIL-P-13949, the IPC-CF-150 copper foil specification, ASTM-D-635, and ASTM-D-150. The parameters evaluated include: operating temperature; moisture absorption; flexing and tensile strength; resistance to chemicals, abrasion, and fungus; aging and weathering effects; distributed capacitance and required shielding; flammability; dielectric constant and strength; conductor and insulation resistance; and current-carrying capability.

The results of the test data will be supplied to customers and potential customers by the manufacturers of flexible printed wiring. Such test data, while considered to be evidence of the present and potential quality of flexible circuitry, is really only a start. Much testing of new materials and advanced processes is continuing, ensuring that advanced technology is made available to the user of flexible printed circuitry as new materials are developed and placed on the market.

Posted November 7, 2017
The Norfolk Historic Environment Record holds archives relating to many different aspects of Norfolk's archaeology, including evidence for the production of clothing dating from the prehistoric period onwards. Clothes worn by people in the past rarely survive on archaeological sites. Archaeologists therefore have to look at the tools used to make clothes, and the items used to hold them together, to work out what people wore and how they made their clothes.

This article shows how woollen cloth was made in the past. It examines the evidence from Norfolk for each part of the process and shows how raw wool from sheep and other animals was processed into fabric. The article is based on a display put together for the Woolly Wonders event held at Gressenhall Farm and Workhouse in 2006 and repeated at the Norfolk Show the same year.

Shearing - Removing the Wool from the Sheep

First of all the wool has to be removed from the sheep. In the past this was done by hand. The sheep was held firmly between the shearer's legs and a pair of metal shears was used to carefully snip the wool away from the skin. This resulted in a very close shave and, if the sheep was unlucky or the shearer unskilled, a few nicks and cuts. Metal shears, possibly used to shear sheep, have been found at several sites in Norfolk, including a Roman pair from Snettisham (NHER 1555), a Late Saxon pair of iron shears from Sedgeford (NHER 31814) and a medieval pair from King's Lynn (NHER 1219). The technology for these objects remained unchanged until the post medieval period, when man-powered pump-action shears and later electric shears replaced hand shears. Although the shearer could not cut the wool so close to the skin, the sheep was less likely to be hurt during the process, and shearing was much quicker and safer.

Holkham Hall, one of the most important and influential Palladian houses in England. (© NCC)

In the past sheep were important commodities. The Coke Column (NHER 39800) at Holkham Hall (NHER 1801) was erected in the mid 19th century to commemorate Thomas Coke. The capital has turnips and mangel-wurzels instead of acanthus leaves, and the base is decorated with reliefs, including one depicting the Holkham sheep shearings. On one of the corners of the plinth is a sheep. The monument illustrates the importance of agriculture, including sheep, to the estate.

To protect his sheep from harm and to enable him to keep an eye on them, a farmer would tie crotal (animal) bells around their necks. These are relatively common finds (NHER 22540, 42603 and 41686). Cows, goats and other domesticated animals also wore them. If the sheep went missing, the farmer would be able to find them by following the noise of the bell. If a neighbouring farmer found an unusual sheep in his flock, he could tell where it belonged by the type of crotal bell it wore.

A post medieval lead alnage cloth seal from Fincham. (© NCC)

Once the fleece was removed from the sheep it could be packaged up for transportation and processing elsewhere. Lead cloth seals were widely used in Europe from the 13th century onwards to identify where wool and cloth had come from, to regulate trade and to act as a quality control check. Many of these lead cloth seals have been found in Norfolk (NHER 22755). Cloth seals were folded around the cords securing a bundle of cloth and stamped closed. One side of the seal depicted a city's coat of arms and the other would record the length or width of fabric or the weight of the parcel.
The cloth seal system was necessary because the fleece of different types of sheep, and even different parts of a fleece from a single sheep, had different qualities and colours. By selecting fleece from a certain part of the animal or a certain type of sheep you could radically alter the quality and texture of the resulting cloth. After selecting the quality and colour of the fleece to be used, the wool then had to be cleaned and prepared for spinning.

Cleaning and Preparing the Wool

When wool is removed from a sheep it is greasy and dirty. It needs to be cleaned and prepared before it can be spun into thread. This can be done by hand, picking out lumps of dirt and twigs and pulling the fibres apart gently. It is much quicker and easier, however, to achieve the same effect by using a wool comb. This pulls all the tangles out of the wool, and anything sticking in the wool can be easily removed. The earliest wool comb found in Norfolk dates to the Roman period (NHER 9786), but it is likely that combs were also used before this date. Natural combs, like teasels and thistles, may also have been used.

A Roman wool comb from Venta Icenorum.

In the medieval period cards began to be used. These flat paddles had handles and a series of metal pins stuck into the face of the paddle. They were used in pairs, with the wool being pulled between the teeth of the pins to separate and clean it. This was even quicker than using a single comb. A possible post medieval card has been discovered at Caistor St Edmund (NHER 37008).

After combing, wool can be spun. Unwashed wool contains a lot of lanolin (grease). Lanolin helps the fabric to repel water but is also rather smelly. Wool can be carefully washed at this stage to remove some of the lanolin; a small amount of grease makes the fibre easier to spin, so it would not have been completely removed.

Wool can be dyed at a number of different stages in the process of producing cloth. If it is dyed before spinning the process is referred to as "dyeing in the hank". If it is dyed after weaving it is "dyed in the piece". In the past various natural dyes were used to brighten the colours of the fleece. People did not wear just browns, greys and whites but also produced red, blue, yellow, green, black and purple cloth. The most common dye was a plant-based red produced from the root of the madder plant, which has been used since the prehistoric period. Later, kermes (an insect-based dye) and, from the 12th century, brazilwood were also used to produce red fabrics. Blues were made from the yellow-flowered woad plant in a lengthy and complicated process. Greens and yellows could be made from the weld plant, blacks, browns and greys from tannins, and purples from certain types of lichen. The Romans also created a purple dye known as Tyrian Purple from shellfish, but it was expensive and its use was restricted. To create even more variation, the dyes were often mixed together.

Spinning and Plying

Before the invention of the spinning wheel in Europe in the 14th century, all wool was spun using a drop spindle. A drop spindle is a wooden stick or rod with a weight (the whorl) on one end. Very few wooden spindles survive archaeologically; the whorl is a more common find. It can be made of pot (NHER 17652, 4626 and 5159), bone (NHER 2629 and 5439), lead, stone (NHER 9777, 3594 and 1449) or wood. Some spindle whorls are made from broken bits of pot with a hole drilled through the centre (NHER 5272).

Saxon spindle whorl found in Dickleburgh. (© NCC)
Different weight whorls would be used to produce threads of different thicknesses. To spin a thin thread a light weight would be used; to make thicker, chunkier threads a heavier weight would be employed. The spindle whorl was twisted in the working hand whilst the wool was gradually threaded onto the spindle from a stick called a distaff. Distaffs rarely survive archaeologically as, like the spindle, they were made of wood, but there are illustrations from the Roman period showing how they were used.

Megan Dennis (NLA) and Debby Craine, a Norfolk Heritage Explorer volunteer, during the Woolly Wonders event in Spring 2006. (© NCC)

The introduction of the spinning wheel in the 14th century was a revolution in cloth manufacture. It meant that wool could be spun from a seated position (it is most economical to spin with a spindle whorl from a standing position). The spin was placed on the fibre by the wheel rather than the hand-spun spindle. As long as the spinner could coordinate feet and hands, the wheel made the job much quicker and more efficient.

Once wool is spun into a twisted thread there is nothing to stop it un-spinning itself. By plying (twisting) two threads against each other the spinner stops any untwisting. Therefore, to prepare a fibre for weaving or knitting, two threads need to be spun separately and then plied together. Exactly the same process is used for plying as has been described for spinning. Although first carried out on a spindle whorl, it was also quicker and easier to ply using a wheel.

The simplest type of loom for weaving cloth is a "warp-weighted" loom. This is basically a rectangular frame of wood. The warp, or vertical, threads are hung from the top of the frame and weighted with loomweights to keep them taut. Loomweights are a fairly common find (NHER 1423, 5216 and 8565). The rest of the loom often rots away because it is made of wood. Loomweights can be made of a variety of materials but are most commonly stone or pottery. If the loom is set up for a fine, thin material made using fine yarn, then light loomweights are used so the threads do not snap. If coarser fabric is being woven, heavier weights can be used. Archaeologists can date loomweights by their shape: Bronze Age loomweights were cylindrical, Iron Age loomweights are triangular, Roman loomweights are more pyramidal in form, Early Saxon weights are ring-shaped and Late Saxon weights bun-shaped.

Once the loom has been set up, with the warp threads hanging down vertically, the weft (horizontal) thread can be woven in and out of the threads. After each horizontal row has been completed the weft threads are pushed together with a pin beater to form the fabric. A pin beater (or weft beater) is a cigar-shaped bone tool (NHER 41335). It is usually pointed at both ends. It was used to sort out knots and tangles during weaving and to beat the weft, or horizontal, threads into place on the loom. Through repeated use a pin beater becomes shiny and polished because it absorbs the lanolin from the wool. Weaving swords and combs (NHER 11445 and 37008) were also used to push the threads into place correctly.

The warp-weighted loom was used commonly from the prehistoric period onwards. In the Roman period there is some evidence to suggest that cloth manufacture was overseen by the state in Norfolk. The Notitia Dignitatum (a late 4th century record of dignitaries and their areas of responsibility) includes a description of the controller of the state weaving-works at Venta. This might be Venta Icenorum (NHER 9786).
Before the introduction of the warp-weighted loom, a simpler two-beam loom was used in the prehistoric period. This stretched the warp threads between two horizontal beams rather than using weights. In the medieval period the treadle loom, the most common horizontal loom, was developed. With later advances in technology the treadle loom was gradually mechanised and replaced with large automatic looms that required a minimum of supervision. These made the process of cloth manufacture much quicker, and mass production from the post medieval period onwards has revolutionised not only fashions but also our attitudes to clothing.

Knitting and Crochet

An alternative method of production to weaving cloth was to knit, crochet or knot threads together to form a fabric. The earliest forms of knitting that survive appear to be a type of netting (also called sprang) used to create hairnets, bags and fish traps. It is very rare for these fabrics to survive, although in waterlogged conditions they are occasionally found. Knitting and crochet were generally used for items of clothing that were difficult to make using seamed construction, such as stockings, socks and hats. None of these knitted fabrics survive from Norfolk, although it is fairly certain that they were used. Knitting needles and crochet hooks have been identified at other sites, although none have been found in this county.

The Finished Product

Once a fabric has been woven it needs to be cut and stitched together into the finished clothes. This was carried out by hand with sewing needles. Several of these have been found in Norfolk. Needles were made from a variety of materials including bone (NHER 30165), copper alloy (NHER 22776 and 24324) and iron.

An Iron Age ring-necked pin from West Rudham. (© NCC and S. White.)

Bone needles have been used since prehistoric times. Copper alloy needles were used for finer work, but bone needles continued to be made into the medieval period. Needles came in all different shapes and sizes – just like today. The smaller copper alloy needles used for finer work tended to get blunt. To sharpen them, a bone with small grooves cut into it was used (a pinner's bone). The needle would be run through the groove to sharpen it. Some pinner's bones are stained green from the copper in the needles.

An Iron Age copper alloy toggle from North Repps. (© NCC)

Once clothes had been sewn together they could be embroidered or embellished in a variety of different ways. Feathers, gold wire, silk threads, cords, pearls and other precious stones could be attached to very luxurious clothing. Clothes could not be fastened with zips or Velcro in the past, but there are plenty of toggles (NHER 25719), buttons (NHER 6030) and buckles (NHER 1314) in the archaeological record that indicate how clothing was held together. Brooches (NHER 1626), belts (NHER 13901), pins (NHER 7850) and clasps (NHER 21112) would also have been used to secure items. Brooches, pins and buckles found in graves can be very useful: they give us clues as to how clothes were worn and where they were fastened. Although the clothes themselves have rotted away, from the location of a brooch archaeologists can work out what the garment might have looked like.

A complete Early Saxon small long brooch from the cremation and inhumation cemetery at Saxlingham in the parish of Field Dalling. (© NCC)

Although we don't have any complete clothes from the past, we do have very small fragments of textile. Often these are only preserved by the corrosion of metal near them.
For example, a hoard of Iron Age silver coins was deposited in a pot that had been sealed with a linen or hemp cloth (NHER 25758). In some situations organic remains, like cloth, can survive: at Fishergate in Norwich (NHER 41303) the waterlogged conditions led to the survival of several pieces of medieval cloth and a complete leather shoe. The corrosion of metal objects can also help textile to survive, as seen on a gilt medieval brooch from Gissing (NHER 24892) and an undated strap end from Feltwell (NHER 4927). A piece of Roman tile with a textile impression was found in Hockwold-cum-Wilton (NHER 5587). From these small remains, and the types of objects described above, archaeologists can begin to piece together how clothes were made in the past.

M. Dennis (NLA), 5 January 2006.

Wild, J.P., 1988. Textiles in Archaeology (Princes Risborough, Shire Publications).
The patient sits in a worn, upholstered armchair. She's been having a tough go lately—doubting herself at work, wondering whether her friends truly like her, spending increasing amounts of time in bed, taking fewer showers, and eating less. The thoughts and feelings are familiar to her: she was diagnosed with a mental illness in her teens and has been hospitalized twice since. She knows it's time to start talking to someone again, so here she is, in the armchair.

"How are you doing today?"

She inhales, sighs, and types: "OK, I guess," into her phone.

Though this scene is imagined, ones like it, in which a patient engages with artificial intelligence trained to simulate a therapist, are established forms of treatment. While not yet widespread in North America, smartphone apps that use AI to treat mental health problems appear viable and offer distinct advantages for dealing with one of the world's leading health issues. The nascent technology for this is already available in several forms, such as chatbots that replace therapists and give patients tools for developing healthy coping mechanisms.

Woebot, for example, is a Facebook-integrated bot whose AI is versed in cognitive behavioral therapy—a widely researched approach that is used in lieu of, or in conjunction with, talk therapy to treat depression, anxiety, and a host of other mental illnesses. Clinical research psychologist Dr. Alison Darcy developed the AI-powered chatbot with a team of psychologists and AI experts. As she explained in a 2017 interview, the project was developed, at least in part, as an effort to increase access to treatment for those suffering mental health issues.

With Woebot, the user and chatbot exchange messages. This allows the AI to learn about the human and to tailor conversations accordingly. Based on the bot's cognitive behavioral therapy training, it then provides therapeutic tools deemed appropriate for the human user. Because this technology is integrated with Facebook Messenger—a platform with 1.3 billion monthly users and not bound by medical privacy rules—Darcy's bot opens the door to mental health treatment for hundreds of millions of people who might not otherwise gain access due to lack of income, insurance, or time, or out of fear of stigma. Because there's no real-life human interaction, Darcy says that her innovation is not meant to replace traditional therapy but rather to supplement it.

Chatbots like Woebot actively engage users, but there are more passive forms of AI mental health therapy as well. These include Companion and mind.me, apps that can be installed on a phone or smartwatch. Left to work in the background, their AI collects data from its user 24 hours a day, without direct input.

Companion was developed in conjunction with the U.S. Department of Veterans Affairs. Its design "listens" to the user's speech, noting the number of words spoken and the energy and affect in the voice. The app also "watches" for behavioral indicators, including the time, rate, and duration of a person's engagement with their device. Based on the understanding that early intervention can be life-saving for those with mental health issues, Companion was originally designed to flag known signs of mental illness in veterans and to share that data with the individual and his or her health care managers.
Other apps, such as Ginger, take a two-pronged approach to treatment by supplementing AI with human clinicians in the form of licensed therapists and certified psychologists, available to chat when necessary. Ginger uses AI, analyzing data gleaned through surveys and from app use, to help its clinical staff fine-tune therapies according to the needs of each individual client. Some examples of treatments include emotional health coaching, mindfulness, cognitive behavioral therapy, and talk therapy with a trained therapist.

But, like many emerging technologies, these innovations are imperfect. Privacy and mental health experts worry about the potentially deadly consequences of divulging deeply sensitive information online and wonder about the overall effectiveness of treatment.

"It's a recipe for disaster," said Ann Cavoukian, who spent three terms as Ontario's privacy commissioner and is now the distinguished expert-in-residence leading the Privacy by Design Centre of Excellence at Ryerson University in Toronto. "I say that as a psychologist," she explained in an interview. "The feeling of constantly being watched or monitored is the last thing you want."

In an article in The New Yorker, Nick Romeo argues that there is little "good data" on the efficacy of AI therapy, because it is such a recent development. This view is echoed by NPR Massachusetts in its report on a 300-person study of app-based therapy conducted at Brigham and Women's Hospital. The director of the hospital's Behavioral Informatics and eHealth Program, psychologist David Ahern, points out that, "There are tens of thousands of apps, but very few have an evidence base that supports their claims of effectiveness."

Nevertheless, these applications and others like them offer unprecedented—and sorely needed—solutions to the overall lack of access to mental health care. According to national statistics in both Canada and the United States, each year one in five people experiences a mental health problem or illness. Canada's Centre for Addiction and Mental Health puts the economic burden of mental illness—including the cost of health care, lost productivity, and reductions in quality of life—at an estimated $51 billion annually. In the United States, the National Alliance on Mental Illness estimates the country loses $193.2 billion each year in earnings as a result of inadequate treatment.

Those staggering statistics are even more alarming when stacked up against the number of people who don't receive treatment at all. In both countries, at least half of all adults experiencing mental illness go untreated. That is to say, they don't receive or take medication or have any form of counseling. For some people, this might be a choice born out of a fear of what others—family, employers, colleagues, friends, and even doctors—might think. But in many places around the globe, the choice to seek medical help for mental illness is simply a pipe dream.

The Potential User Population is Enormous

In 2014, 45 percent of the world's population lived in a country with less than one psychiatrist available per 100,000 people, according to the World Health Organization. The same report found that, worldwide, there were 7.7 nurses working in mental health for every 100,000 people. From a global perspective, access to treatment is so scarce that it could easily be considered a luxury. Which is why AI and the apps it supports seem so promising: this technology can help overcome the problem of access while simultaneously mitigating stigma.
Machines are not thought to be judgmental in the same way a human might be. Charlotte Stix, the AI policy officer and research associate at the Leverhulme Centre for the Future of Intelligence in England, points out that offering people a way to find help without fear of judgment proves meaningful when trying to break down that particular barrier.

However, as is often the case with budding technologies, shimmering hopes can be tinged with possible drawbacks; in this case, it's the possibility that society never overcomes the stigma of mental health. "A potential downside," says Stix, "could be that instead of eventually receiving expert human support, patients stay with purely algorithmic solutions, and society pretends the problem is solved without actually dealing with the core issue at hand."

Similarly, any benefits gleaned from the increase in access could be undermined by overreliance, questionable technology, and doubtful effectiveness of diagnosis and treatment. "As with any app, there will be those on the market that do not adhere to a certain standard and ought not to be used under any circumstance, particularly by someone in crisis," Stix says. "There is a plethora of health care apps, fertility-monitoring apps, and so on, on the market with starkly varying quality." In other words, she argues, there is already a divide between useful and potentially harmful health care apps, and there's no reason to believe that this won't apply to mental health care apps as well.

Clearly, mental health care workers, like all humans, might also range in capability and potential; however, these practitioners receive training and are bound to certain codes, laws, and standards that can be enforced. By contrast, machines and apps that are meant to help people suffering mental health issues are not yet regulated. Some of these technologies are peer reviewed, notes Glen Coppersmith, the founder and CEO of Qntfy, a company that analyzes personal data in the hope of improving the scientific understanding of human behavior and psychological well-being. "That's a low bar, but that's already been done," he says. "There's a balance to be struck between innovation and finding better solutions to these problems. ... But right now, I don't think there's enough information for [the government] to adequately write regulations for this."

Researchers Confront Positives and Negatives

The benefit-drawback dichotomy is evident all over this emerging technology. Analyzing personal data through AI can bring objectivity to a historically subjective field—much as the thermometer did for body temperature, the x-ray for bone health, and MRIs for tissue damage. As IBM researcher Guillermo Cecchi notes, "Psychiatry lacks the objective clinical tests routinely used in other specializations." This is why he and his colleagues used AI to develop a program that analyzes natural speech to predict the onset of psychosis in young people at risk. "Novel computerized methods to characterize complex behaviors such as speech could be used to identify and predict psychiatric illness in individuals," Cecchi's team notes in an article published in Nature. IBM's technology was, in fact, so successful that it outperformed traditional clinical assessments.

The efficacy of AI for diagnosis was also highlighted in Romeo's New Yorker article.
In it, Stanford University psychiatry professor David Spiegel said that, in time, an AI machine could, unlike a human, have perfect recall of every past interaction with a given patient, combining any number of otherwise disconnected criteria to form a diagnosis, "potentially [coming] up with a much more specific delineation of a problem."

On the flip side, having all of a patient's highly sensitive and personal data online threatens privacy. This problem concerns Stix. "If you choose to discuss your personal and mental health issues through an app, it can be unclear to what degree this information, and your sensitive data, is stored and used at a later point—and for what," Stix says. "You may have signed a terms-and-conditions agreement before using the app, but these can be unclear and, particularly for people in a vulnerable position, might not suffice."

Privacy questions loom large over people's online activity in general. For this specific technology, the critical questions relate to where each user's data goes, how it's used, and who owns it, said Cavoukian. "We have to be so careful with AI, because AI has amazing potential—there's no question," she says, "but people often talk about the potential for discrimination, for tyranny."

Artificially intelligent technologies are built on trained data sets that create algorithms. But if the data sets are biased in certain ways—for instance, says Cavoukian, if they only take into consideration certain parts of the population—there can be dramatic and devastating implications. "I always tell people, be aware of the unintended consequences. You don't know where [your data] is going to end up," she says. "If it's going to end up in the hands of your employer or your insurer, it can come back to bite you. And you have no idea how that can play out."

The Health Benefit Offsets the Privacy Risk

Years ago, Coppersmith and colleagues at Qntfy scraped data from publicly available posts on social media. But today, so many people are "donating" their data that scraping is no longer necessary. At Qntfy, each person who volunteers their data decides which accounts can be accessed—Facebook, Twitter, Reddit, Fitbit, or Runkeeper, for instance, and sometimes all of them. While this particular company takes steps to protect the sensitive and personal information of individuals, there's no immunity from data breaches.

"It's a legitimate concern," Coppersmith says, when asked about the possibility of his company's data sets being compromised. "But it's no different from breaches at Facebook, Amazon, or anything else. If you're still posting to social media, if you're still doing online banking, you've made a choice—a risk-reward trade-off."

In this case, as Coppersmith says, the trade-off is between the risk of a data breach that could lead to your personal data falling into the hands of unknown players, and the benefit of having a better picture of your mental health. "If we're able to give your clinician superpowers to better understand you," asks Coppersmith, "is that worth the risk of perhaps your data being compromised at a certain point in time?"

But looking at the issue in an either-or, win-or-lose manner—assigning a conflict to it—is a false choice, says Cavoukian. There are ways to reduce the risk of personal data being reidentified to less than 0.05 percent. "Damn good odds," she adds, explaining that with the right safeguards, a person would be more likely to be hit by lightning than experience a data breach.
Having that protocol and, most importantly, designing privacy "as a default" into technological developments—ensuring the information an individual provides is strictly used for the intended purposes—is imperative, according to Cavoukian. Until companies make it abundantly clear they will only use data for the agreed purposes, individuals will have to take responsibility for their own privacy. "We can do both. We have to do both," Cavoukian asserts. "I want lives saved and privacy protected."

While AI continues to make strides into almost all aspects of a human's daily life, there remain many questions about its validity, biases, and effectiveness. There is perhaps no area more private or vulnerable than an individual's mental health, and it remains to be seen whether letting an intelligent machine into that space will help or hinder. But, says Coppersmith, expect that involvement to intensify. "I would bet the amount of influence that AI is going to have over mental health is going to increase," he says. "But I would be shocked if it ever totally replaced that human connection." Because in its most basic sense, as Coppersmith observes, mental health is shaped and affected through interactions with the world—with humans.

Amy Minsky has worked as a journalist in Canada for more than a decade. Until recently, Minsky was based in Ottawa, covering politics and policy on Parliament Hill. This article is the third in our series on the accelerating impact of AI, and was first developed for the pilot issue of Mai magazine (see maimedia.org).
As the world deals with the COVID-19 pandemic, ensuring safeguards to curb its spread, it has become of utmost importance to question how the government is dealing with personal, sensitive data. Given the seriousness of the disease, the question of data may seem like a futile one. However, with news about the racism the pandemic has brought about, and the conditions of quarantines, it is perhaps worth asking what the government actually does with our data.

For the past few months, India has been in a state of unrest; the discriminatory Citizenship Amendment Act has made thousands of people take to the streets to voice their dissent. Even as the movement continues, news about protesters being recorded is making the rounds. Since the police work under the government, it is important to ask whether they have been asked to record videos on government orders, or whether they are doing it of their own volition. And who has access to this data? Time and again, voters' lists have allegedly been used to perpetrate violence against particular groups, as in the case of the anti-Sikh riots of 1984 and the Gujarat riots of 2002. Such information, which is only available to state institutions, makes one wonder how complicit a government can be in misusing personal data.

With the digital age, more and more data is available just a click away. With the lack of proper, well-developed cyber laws, questions about privacy violations arise. Who has access to our information? And how is our information really being used? The right to privacy was declared a Fundamental Right under Article 21 (Right to Life) in 2017. In the digital age, then, how are our rights being violated?

The Story Of Aadhar

Aadhar contains the biometric data, iris scans and personal information of a billion Indian citizens. In 2018, the Supreme Court ruled that Aadhar linking was only mandatory if you wanted a PAN card or needed to file income tax returns. It barred Aadhar linking by private companies, and Aadhar was not needed to link bank accounts. However, Aadhar was deemed necessary for availing welfare schemes. The SC ruled that collecting basic biometric data did not lead to a violation of privacy.

Two years on, documents revealed by an RTI show that the government is in the final steps of making a Social Registry. This means that the government will create a searchable database carrying all information about a citizen: from when a person changes jobs, to when they buy new property, to whom they marry, to when their family members are born or die, all such information would be available to the government.

While this effort was envisioned as a way to understand the dynamics of poverty, things now tell a different story. By creating a database, or a network of databases, of all citizens, their caste, religion, employment, marital status, income, property and so on will all be gathered in one place. This means that each and every person's socio-economic status can be tracked easily by the government. And unlike the Census, which only enumerates rather than gathers personal data, the Social Registry has no such safeguards.

Aadhar information has already been misused in the past.
A substantial number of voters were de-registered in Telangana and Andhra Pradesh; there have been deaths due to starvation because people's ration cards were not linked to their Aadhar; in Jammu and Kashmir, Aadhar is not recognized as valid proof; not to mention that Aadhar data has been leaked on various occasions. What then can such a consolidated database do?

Firstly, to whom will this Social Registry be available? Can individuals access it? Can private companies access it? Secondly, if Aadhar data has been used to target pro-poor schemes at specific audiences, can it not also be used to target violence at certain groups? Thirdly, what happens when knowledge of someone's address is available? With stigma against live-in couples, Muslims, Dalits and same-sex couples already prevalent, can the availability of such data create a system of state discrimination? And in the wake of the National Register of Citizens, coupled with the Citizenship Amendment Act that decides who will be a citizen, what role will the Social Registry play? Finally, the most important question we need to ask is: are we moving towards a surveillance state?

Tracking Sanitation Workers

Under the guise of the Swachh Bharat initiative, Municipal Corporations have started using GPS tracking systems to track sanitation workers. With a microphone and a camera attached to a smart watch, supervisors can track all movements of a sanitation worker while on duty. Since most sanitation workers are employed on a contractual basis, even a small break during working hours means a pay cut. Since human-efficiency tracking systems earn more points for municipalities in the Swachh Survekshan rankings, more of the money allocated for the Swachh Bharat Abhiyan is being spent on these GPS trackers. This essentially means that sanitation workers are not being provided with other basic amenities like masks, boots and gloves. In some cases, sanitation workers have been required to wear the trackers even after duty hours. Such systems have been implemented in cities like Navi Mumbai, Thane, Chandigarh, Lucknow, and Indore.

This is not to mention that most of these workers are Dalits, while their supervisors inevitably are from upper castes. What does it mean for an upper caste person to literally track, hear, see and record the movements of a Dalit worker at all times of the day? Since these workers are unaware of their rights during working hours, tracking devices are being forced on them. What then does consent mean? Women workers, in fact, have stopped using washrooms for fear of being recorded. Consent, then, is not only limited to working conditions, but also extends to privacy.

Sanitation work continues to be dangerous and stigmatized in India. In fact, while manual scavenging has been made illegal, the number of manual scavengers has only increased under the Swachh Bharat initiative. As per the Union Ministry of Social Justice and Empowerment, 282 sanitation workers died between 2016 and November 2019. As per a report by WaterAid, one sanitation worker dies every five days. What then has really been done for sanitation workers, except making their lives more difficult because they are being constantly tracked?
Are You Being Recorded?

CCTVs seem important for public safety, especially for women. The installation of CCTVs is meant to deter harassers, stalkers, robbers and others who commit crimes: since their actions are recorded, there will be proof of their involvement, which can lead to their indictment. There have often been claims that CCTVs have in fact been of great assistance in addressing crime rates. However, is it not possible that criminals simply became more innovative and more aware of how to avoid these cameras? And how does this dependence on technology actually address the socio-economic, political, cultural and psychological reasons why crimes are committed?

This raises the question of what good CCTV surveillance is really doing. While the cameras may not be bad in and of themselves, it is worth asking who has access to CCTV data, especially from the cameras installed by the government. Secondly, how secure are these cameras, and what happens to the data in case they are hacked? Who is accountable for the protection of this data? Is facial recognition software being used in these cameras? If so, can it lead to greater gender and racial profiling and, in turn, to both cyberstalking and real-life stalking?

As anti-CAA protests have taken the country by storm, there have been increased reports of police recording videos of protesters. There have also been reports of drones flying around recording protesters. In Lucknow, hoardings with the names, addresses and phone numbers of anti-CAA protesters were put up. If such data is so readily available, are recording devices really creating a more unsafe state?

Are We Really Safe?

It is worth questioning, then, what the government plans to do with this Social Registry. The government has constantly attempted to fudge or withhold data. With its rhetoric of creating a Hindu Rashtra, the BJP has, in a way, led to an increase in surveillance by citizens as well, where people feel compelled to take action against certain groups. This is evident in the case of cow lynchings or, more recently, in the case of the Jamia and Shaheen Bagh shooters. The existence of technology only makes matters worse. The government has been accused of involvement in WhatsApp snooping, and its crackdown on protesters could mean that CCTVs are an outlet it plans to use to arrest or detain protesters. Moreover, it is still unclear who has control over our data. With the existence of facial recognition software, and the possible availability of our whole socio-economic data, how safe can we feel? And what about measures like the Data Protection Bill, under which government agencies are exempt from disclosing how personal data is being used?

There is no conclusive correlation between reduced crime and the presence of CCTV. In that scenario, how are our governments actually addressing women's safety when there is so much stigma against reporting any form of harassment? And how do we navigate a space where raising our voice against the government translates to landing in jail? Where, then, do we draw the line between safety, the right to privacy, and being snooped on by a surveillance state?

Featured Image Source: The Logical Indian
Flavie Halais is a French freelance journalist, blogger and filmmaker living in Montreal, Canada.

An urban farm in Montreal is scaling the industry "with more software than farmers."

MONTREAL, Canada — In 1999, Dickson Despommier, a professor of environmental health sciences and microbiology at Columbia University, popularized the idea of large-scale urban agriculture by releasing a conceptual model for vertical farms. Crops would grow inside tall city buildings, using very little land to produce bounties of food that would not need to be shipped far to be eaten. With nine billion people worldwide to feed by 2050, and close to 70 percent of them residing in cities, bringing food production into dense urban areas had long been seen as a logical step toward sustainable living, and Despommier's work seemed to take us in the right direction.

Fifteen years later, despite many experiments with farming inside city buildings, the first large-scale vertical farm, as envisioned by Despommier, has yet to be built. The urban farming industry, still in its infancy, is struggling to address the engineering challenges that make growing food in cities a costly business. Sales and distribution have also proven harder than almost anybody imagined. "What's been lacking," says Mohamed Hage of Montreal, "are players who will do it at a true commercial scale, with the right business model."

Hage is trying to fill exactly that gap with his company, Lufa Farms. His path to scaling up urban agriculture is not vertical but horizontal. In 2011, Lufa set up the world's first commercial rooftop greenhouse on top of a wide industrial building in northern Montreal. The area, a nondescript industrial zone bordered by two highways and an outdoor mall full of big box stores, doesn't turn up in design magazines. But this is where Hage, after spending hours painstakingly scouring Google Earth, found the perfect rooftop for his next-gen greenhouse.

The 31,000 square-foot facility (2,900 square meters) uses hydroponics, a technique that uses water to deliver nutrients and therefore requires no soil. Lufa's methods exclude pesticides, herbicides and fungicides, relying instead on biological pest control to get rid of harmful bugs. The greenhouse is computer-monitored, recirculates 100 percent of its irrigation water and composts all organic waste. And in a cold-weather region with a growing season of four to six months, Lufa works year-round, growing enough tomatoes, eggplants, zucchinis, and lettuce to help feed 10,000 people in the Montreal area.

Lufa's biggest innovation, however, has little to do with farming techniques or architecture: it's marketing and e-commerce. Lufa sells its produce through a complex distribution system that puts to shame the usual get-what-you-get offerings of farm co-ops found in many North American cities. For a minimum of $30 a week, Lufa customers select what goes into their basket through a fancy online marketplace. (Sicilian eggplants have never looked sexier.) To fill out its product offerings, Lufa partnered with a slew of other local food makers to provide customers with all kinds of products, from fresh bread to dairy, herbs, honey, and dry beans. Orders close at midnight on delivery day—and then the magic happens. Each producer receives an order indicating exactly how many baguettes, liters of milk, or pounds of potatoes are needed.
Everything gets cooked, picked, prepared, bottled, packed, then shipped overnight to Lufa's warehouse to be assembled in crates, and finally sent in the morning to more than 150 pickup points throughout the city.

This order-based system drastically cuts down waste, both on the producer's side and in Lufa's warehouse, since there aren't any unsold products to be discarded from store shelves. It also considerably reduces packaging. By this remarkable logistical feat, Lufa may very well have solved one of the biggest problems plaguing our global food system. Some 40 percent of food in the United States goes straight to landfills uneaten. And about 1.3 billion tons of food destined for human consumption get lost or wasted each year globally, discarded anywhere along the supply chain, from farmland to supermarkets and restaurants.

From farm to table

Eating food that's grown locally and sustainably is a fantastic and increasingly popular idea, but it's also expensive. Producers tend to drown under marketing and distribution costs, and struggle to find retail channels for their products. It would be a mistake to assume that urban farms can escape that trap because of their extreme proximity to consumers; getting food to consumers has proven a logistical nightmare for them as well.

A pillar of the new "farm-to-table" economy is to facilitate marketing and distribution for local producers and limit the number of intermediaries along the supply chain. That challenge has proven perfect for a number of new tech companies hungry for new retail sectors to "disrupt." One of them is Good Eggs, a San Francisco-based company operating an online marketplace similar to Lufa's in the San Francisco Bay Area, New Orleans and Brooklyn. Good Eggs considers itself a "local-food aggregator," pooling together the marketing and distribution needs of many farmers who are often invisible to supermarket chains and find themselves limited to alternative channels such as farmers' markets. "A lot of the urban farms that we work with are not able to sustain themselves through traditional food retail channels," says co-founder and CEO Rob Spiro. "The margins are too slim, and the volume requirements are too high. So they end up selling to restaurants."

Good Eggs is determined to beat the supermarket, which it thinks will enable the model to go to scale. So the company has made it possible for customers to order throughout the week, rather than weekly, and is offering free home delivery. "If you're less convenient than the supermarket, even if the food is better and better for the world, it's going to be really hard to reach a mainstream audience," Spiro says.

Whether flexibility and convenience can ultimately bring costs down enough to reach the mainstream remains to be seen. For now, the food sold by Lufa, Good Eggs and similar providers like Farmigo might be cheaper than what you'll find at the organic food store or the farmers' market, but it's still out of reach for households working with a tight budget. Good Eggs is planning to begin accepting food stamps, a form of government assistance for low-income families in the United States. But can local food aggregators work for the typical middle-class supermarket shopper?

Solving the engineering puzzle

For Lufa, logistics at the distribution level are just one of many sources of headaches.
To build its first greenhouse, Lufa not only had to work within the building's structural limits (growing root vegetables in soil wasn't possible, for example, because the weight would put too much strain on the roof); it also had to respect the myriad rules and regulations imposed by local building and fire codes, from the number of bathrooms or parking spaces to the amount of glass used. "We had questions along the lines of, 'Will birds hit the greenhouse?'" recalls Hage. "Because it's all glass, and there's lots of trees inside, so you have to go and do the right kinds of studies to show that no, birds do not hit the greenhouse."

Other roadblocks reveal just how unprepared cities currently are for urban farming on the scale Lufa has in mind. Lufa can't be classified as an agricultural business, for instance, and therefore can't claim the same tax rebates as rural farms elsewhere in Canada, yet it still has to pay rent every month. Policy changes of this kind, although welcomed by cities, could stay entangled in bureaucratic procedures for years.

Last year, Lufa opened its second farm, a 43,000-square-foot (4,000-square-meter) rooftop greenhouse—the world's largest—in the northern suburbs of Montreal. This time, the structure was integrated into a brand-new building. Hage thinks building new will be important to scaling up urban farming, because it allowed Lufa's architects and engineers to handle structural challenges more easily. (A company spokesman says the second greenhouse was "significantly cheaper" on a per-square-foot basis than the first one, which cost about $3 million to build.)

The new farm is packed with new features: a system that raises air pressure inside the greenhouse to keep undesirable bugs from entering, and a chamber that regulates airflow to maintain optimal growing conditions. Lufa is also developing its own in-house technology. The company has just received a patent for a system that allows it to grow 30 percent more food in the same area. Meanwhile, the IT team is developing a suite of iPad apps for greenhouse management. One of them, which helps manage insect populations, will soon be made available to all organic growers. "We've decided it's too valuable for us not to be going out to the world and saying, 'Use it for free,'" says Hage.

Lufa's greenhouses become profitable as standalone units within about 18 months of construction. The company as a whole doesn't expect to become profitable until later this year or next, as much of the revenue is being channeled back into growth activities.

Lufa's long-term goal is to become a provider of technology for property developers, real estate owners, or businesspeople who wish to set up a rooftop farm on top of a building—any building. The company's third greenhouse, which will be in Boston, will reflect some of the latest innovations cooked up by the team. "What we're doing in Montreal and Boston is only to be able to have a test bed to be able to demonstrate our technology," says Hage. "Every farm we build is a new R&D facility."

No one on Lufa's founding team had ever worked on a farm or in the food industry. Hage himself was running a software company; his business partner Lauren Rathmell was a biochemist fresh from McGill University. Yet they thought this was precisely the type of mindset needed for the urban farming sector to take off.
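That software-first mindset is easiest to picture with the insect-management app mentioned above. Its internals aren't public, so the sketch below is purely hypothetical (the zones, pest names and action thresholds are invented), but it shows the kind of threshold-based scouting logic such a tool might implement:

```python
# Hypothetical scouting records: (zone, pest, count on sticky card)
scout_records = [
    ("row_1", "whitefly", 12),
    ("row_1", "aphid", 14),
    ("row_2", "whitefly", 41),
]

# Assumed action thresholds for releasing biological controls
# (beneficial insects); the numbers here are illustrative only.
ACTION_THRESHOLDS = {"whitefly": 25, "aphid": 10}

def pest_alerts(records, thresholds):
    """Return every (zone, pest, count) that exceeds its threshold."""
    return [
        (zone, pest, count)
        for zone, pest, count in records
        if count > thresholds.get(pest, float("inf"))
    ]

for zone, pest, count in pest_alerts(scout_records, ACTION_THRESHOLDS):
    print(f"{zone}: release biocontrols for {pest} (count={count})")
# row_1: release biocontrols for aphid (count=14)
# row_2: release biocontrols for whitefly (count=41)
```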
To this day, Hage sees Lufa primarily as a tech and systems venture; most of the company's budget goes to engineering and IT, with the e-commerce platform and logistics operations taking up the largest share. "Really," Hage says with a laugh, "it's farming with more software than farmers."

This story originally appeared on Citiscope, an Atlantic partner site.
Signs and Symptoms

Demodicosis refers to an infestation by mites of the genus Demodex. In humans, these mites selectively inhabit the skin of the face and head and have been associated with rosacea, steroid-induced dermatitis and seborrheic dermatitis, among other conditions.1-4 When Demodex infest the eyelids and lashes, the condition is referred to as ocular demodicosis or Demodex blepharitis.

The typical patient with ocular demodicosis is over 50 years of age, with increasing prevalence in the elderly population.5-7 There is no known racial or gender predilection.6 Clinical symptoms of blepharitis—itching, burning, a sandy or gritty feeling, heaviness of the lids or complaints of chronic redness—are often present in these patients, although a recent study indicates that nearly half of those individuals who harbor Demodex remain asymptomatic.6

The classic sign associated with ocular demodicosis is the presence of collarettes, or scales that form clear casts around the lash root, a finding first recognized by Coston in 1967.8 In 2005, Gao and associates coined the phrase cylindrical dandruff (CD), which is more descriptive of the eyelash sheathing encountered with Demodex infestation.7 The study showed that lashes demonstrating diffuse or sporadic CD had a significantly higher incidence of Demodex organisms than those without CD.7 Additional, nonspecific signs of ocular demodicosis include red and swollen lid margins, trichiasis, eyelash disorganization, madarosis, meibomian gland dysfunction (MGD), blepharoconjunctivitis and blepharokeratitis.9,10 Recent studies also suggest a potential association between Demodex and pterygia and chalazia.11,12

Much controversy surrounds the role of Demodex in ocular inflammation. The organism is considered by many to be nothing more than a commensal saprophyte, inhabiting the skin of the host and feeding on accumulated oil secretions and dead epithelial cells.13,14 Others, however, view the mites as parasitic—by definition, thriving in or on the host organism, offering no benefit and potentially causing harm. Judging by the recent literature, the latter view is currently more popular.

Two species of mites are known to inhabit the eyelids and eyelashes of the human host: Demodex folliculorum and the smaller, less prevalent Demodex brevis.5-7,9,10,15 D. folliculorum tends to cluster superficially around the lash root, while D. brevis burrows into the deeper pilosebaceous glands and meibomian glands.11,16,17 As D. folliculorum feed along the base of the lashes, follicular distention occurs, contributing to the formation of loose or misdirected lashes.10 Cylindrical dandruff appears to result from epithelial hyperplasia and reactive hyperkeratinization around the base of the lashes, possibly due to microabrasions from the mite's sharp claws and cutting mouthparts (gnathosoma).7,10 D. brevis, in contradistinction, is believed to impact the meibomian glands either by mechanical blockage of the duct, a granulomatous reaction to the mites as a foreign body or as a vector for other microbes that incite the host's innate immune response.10,11,18 The end result is MGD with associated lipid tear deficiency.19 Of course, not all individuals manifesting Demodex display these pathological changes.
Studies have shown that infestation by Demodex induces an upregulation of tear cytokines, particularly interleukin-17, a potent mediator of inflammation.20,21 Whether the symptomology and clinical manifestations associated with demodicosis are related to a critical number of organisms (a pathological tipping point), concurrent pathogenic bacteria, age, environment or some other factor is yet to be determined.

Because the eye is set back into the orbit, it does not lend itself to routine washing as readily as the rest of the structures of the face; this may in part explain why Demodex seem to flourish in this environment. Simple cleansing of the eyelids with baby shampoo or other surfactant cleansers has been advocated by some as a form of therapy, but studies have shown this to be ineffective as a standalone treatment modality.7,19,22 Salagen (pilocarpine gel 4%, Eisai Pharmaceuticals) applied to the eyelids once or twice daily has also been recommended as a deterrent to mite infestation. This agent is theorized to interfere with the mites' respiration and motility via toxic muscarinic action.23 However, studies have shown this intervention to be only partially effective, and the parasympathomimetic effects of pilocarpine on pupil size and accommodation must be weighed heavily against the clinical benefit.22-24

Tea tree oil (TTO), naturally distilled from the leaves of the Melaleuca alternifolia plant, appears to be the most widely accepted and best-substantiated treatment for ocular demodicosis. Numerous derivatives of this essential oil have been advocated for application to the lid margins and lashes, including a 50% TTO in-office therapy, a 10% TTO home therapy, a 5% TTO ointment, a commercially available TTO shampoo and Cliradex (terpinen-4-ol, Bio-Tissue).19,24-27 Cliradex is typically prescribed once or twice daily for three to six weeks. Sensitivity to these solutions tends to be dose- and duration-dependent, and while complete eradication of Demodex mites may be unattainable for all patients, subjective improvement is the rule rather than the exception. TTO can cause intense discomfort when applied to the delicate skin of the eyelids at full strength and can result in significant ocular toxicity if appropriate care is not taken. Diluting the solution with other natural oils (e.g., coconut oil, walnut oil or macadamia nut oil) is an intermediate step that can improve tolerability. In clinical studies, successful in vivo eradication of mites was seen in 73% to 78% of patients, while symptoms diminished dramatically in 82% of subjects after four weeks of therapy.19,24

While there are currently no studies to support the practice in terms of Demodex management, we have achieved great success with microblepharoexfoliation (MBE) using the BlephEx device (BlephEx). MBE provides ideal induction therapy for demodicosis by rapidly stripping away accumulated sebum, devitalized epithelial tissue, bacterial biofilm, cylindrical dandruff and even the more superficial mites themselves. In our experience, combining MBE with ongoing hygiene efforts and specific miticidal treatments allows patients to achieve symptomatic relief much more quickly.

For more recalcitrant cases of demodicosis, or in those patients for whom compliance with topical therapy is unattainable, Stromectol (oral ivermectin, Merck) may provide some clinical benefit.
Stromectol is an anthelmintic agent typically prescribed for the treatment of parasitic disorders such as strongyloidiasis or onchocerciasis. For Demodex therapy, two 200 mcg/kg doses given seven days apart represent the current standard.28,29 As an example, an adult weighing 165 pounds would be prescribed five 3 mg tablets to be taken in bolus form at the time of diagnosis, and an identical dose to be taken one week later. The most common side effects include nausea, diarrhea, dizziness and pruritus.30

Because Demodex inhabit various regions of the face and scalp, patients must remain vigilant even after a course of treatment for ocular demodicosis has concluded. The patient should be advised to wash the face and hair regularly, ideally on a daily basis, in order to reduce excess oils. Specialized facial scrubs or shampoos containing miticidal agents such as tea tree oil or permethrin may offer added benefit. Permethrin 5% cream, most commonly used to treat scabies, may help to diminish stubborn Demodex reservoirs in patients with persistent or recurrent issues. The cream is typically applied to the face in the evenings, several times per week.31 Due to its toxicity, it should not be used on or near the eyelids.

• Clinical recognition of demodicosis can be challenging, as lid and lash debris are typically attributed to staphylococcal or seborrheic blepharitis.
• Demodex mites are virtually impossible to view at the slit lamp due to their transparent nature, small size, aversion to bright light and tendency to remain buried within the lash follicle. Pulling two or three lashes and viewing them under a high-magnification microscope can offer confirming evidence of these organisms in many cases. If a microscope is not available, lash rotation under the slit lamp can often help with the diagnosis: rotating a lash in a circular fashion in the follicle irritates the Demodex organisms and can cause them, along with their debris, to evacuate the follicle.
• The hallmark finding of demodicosis is the presence of cylindrical dandruff at the base of the eyelashes.
• MGD may also be associated with demodicosis. Demodex mites have been identified as a risk factor for rosacea, and there may be a causative link.4,32,33
• Improved lid hygiene is the primary goal in managing any form of blepharitis, including ocular demodicosis.
• 50% TTO is generally used for in-office treatment only, while 10% solutions are recommended for home use. For those patients who cannot or prefer not to formulate their own concoctions, single-use commercial products such as Cliradex or Blephadex eyelid wipes are available.
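The weight-based arithmetic in the dosing example above can be made explicit. The snippet below simply reproduces the article's numbers (200 mcg/kg doses, 3 mg tablets); the function name and rounding choice are our own, and it is an illustration of the math only, not clinical guidance:

```python
def tablets_per_dose(weight_lb, dose_mcg_per_kg=200, tablet_mg=3):
    """Reproduce the article's arithmetic: convert pounds to kilograms,
    compute the weight-based dose, and round to whole tablets.
    Illustration only; not clinical guidance."""
    weight_kg = weight_lb * 0.4536                # pounds -> kilograms
    dose_mg = weight_kg * dose_mcg_per_kg / 1000  # micrograms -> milligrams
    return round(dose_mg / tablet_mg)

# The article's example: a 165 lb adult works out to roughly 15 mg,
# i.e., five 3 mg tablets, taken once and repeated one week later.
print(tablets_per_dose(165))  # -> 5
```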
References
1. Zhao YE, Peng Y, Wang XL, et al. Facial dermatosis associated with Demodex: a case-control study. J Zhejiang Univ Sci B. 2011;12(12):1008-15.
2. Hsu CK, Hsu MM, Lee JY. Demodicosis: a clinicopathological study. J Am Acad Dermatol. 2009;60(3):453-62.
3. Ríos-Yuil JM, Mercadillo-Perez P. Evaluation of Demodex folliculorum as a risk factor for the diagnosis of rosacea in skin biopsies. Mexico's General Hospital (1975-2010). Indian J Dermatol. 2013;58(2):157.
4. Forton FM. Papulopustular rosacea, skin immunity and Demodex: pityriasis folliculorum as a missing link. J Eur Acad Dermatol Venereol. 2012;26(1):19-28.
5. Chen W, Plewig G. Human demodicosis: revisit and a proposed classification. Br J Dermatol. 2014 Jan 28. [Epub ahead of print].
6. Wesolowska M, Knysz B, Reich A, et al. Prevalence of Demodex spp. in eyelash follicles in different populations. Arch Med Sci. 2014;10(2):319-24.
7. Gao YY, Di Pascuale MA, Li W, et al. High prevalence of Demodex in eyelashes with cylindrical dandruff. Invest Ophthalmol Vis Sci. 2005;46(9):3089-94.
8. Coston TO. Demodex folliculorum blepharitis. Trans Am Ophthalmol Soc. 1967;65:361-92.
9. Mastrota KM. Method to identify Demodex in the eyelash follicle without epilation. Optom Vis Sci. 2013;90(6):e172-4.
10. Liu J, Sheha H, Tseng SC. Pathogenic role of Demodex mites in blepharitis. Curr Opin Allergy Clin Immunol. 2010;10(5):505-10.
11. Liang L, Ding X, Tseng SC. High prevalence of Demodex brevis infestation in chalazia. Am J Ophthalmol. 2014;157(2):342-348.e1.
12. Huang Y, He H, Sheha H, Tseng SC. Ocular demodicosis as a risk factor of pterygium recurrence. Ophthalmology. 2013;120(7):1341-7.
13. Kamoun B, Fourati M, Feki J, et al. Blepharitis due to Demodex: myth or reality? J Fr Ophtalmol. 1999;22(5):525-7.
14. Türk M, Oztürk I, Sener AG, et al. Comparison of incidence of Demodex folliculorum on the eyelash follicule in normal people and blepharitis patients. Turkiye Parazitol Derg. 2007;31(4):296-7.
15. Patel KG, Raju VK. Ocular demodicosis. W V Med J. 2013;109(3):16-8.
16. Hom MM, Mastrota KM, Schachter SE. Demodex. Optom Vis Sci. 2013;90(7):e198-205.
17. De Venecia AB, Siong RL. Demodex sp. infestation in anterior blepharitis, meibomian gland dysfunction, and mixed blepharitis. Philipp J Ophthalmol. 2011;36(1):15-22.
18. Lacey N, Kavanagh K, Tseng SC. Under the lash: Demodex mites in human diseases. Biochem (Lond). 2009 Aug 1;31(4):2-6.
19. Gao YY, Di Pascuale MA, Elizondo A, Tseng SC. Clinical treatment of ocular demodecosis by lid scrub with tea tree oil. Cornea. 2007;26(2):136-43.
20. Kim JH, Chun YS, Kim JC. Clinical and immunological responses in ocular demodecosis. J Korean Med Sci. 2011;26(9):1231-7.
21. Kim JT, Lee SH, Chun YS, Kim JC. Tear cytokines and chemokines in patients with Demodex blepharitis. Cytokine. 2011;53(1):94-9.
22. Inceboz T, Yaman A, Over L, et al. Diagnosis and treatment of demodectic blepharitis. Turkiye Parazitol Derg. 2009;33(1):32-6.
23. Fulk GW, Murphy B, Robins MD. Pilocarpine gel for the treatment of demodicosis—a case series. Optom Vis Sci. 1996;73(12):742-5.
24. Gao YY, Di Pascuale MA, Li W, et al. In vitro and in vivo killing of ocular Demodex by tea tree oil. Br J Ophthalmol. 2005;89(11):1468-73.
25. Gao YY, Xu DL, Huang LJ, et al. Treatment of ocular itching associated with ocular demodicosis by 5% tea tree oil ointment. Cornea. 2012;31(1):14-7.
26. Koo H, Kim TH, Kim KW, et al. Ocular surface discomfort and Demodex: effect of tea tree oil eyelid scrub in Demodex blepharitis. J Korean Med Sci. 2012;27(12):1574-9.
27. Tighe S, Gao YY, Tseng SC. Terpinen-4-ol is the most active ingredient of tea tree oil to kill Demodex mites. Transl Vis Sci Technol. 2013;2(7):2. Epub 2013 Nov 13.
28. Holzchuh FG, Hida RY, Moscovici BK, et al. Clinical treatment of ocular Demodex folliculorum by systemic ivermectin. Am J Ophthalmol. 2011;151(6):1030-1034.e1.
29. Salem DA, El-Shazly A, Nabih N, et al. Evaluation of the efficacy of oral ivermectin in comparison with ivermectin-metronidazole combined therapy in the treatment of ocular and skin lesions of Demodex folliculorum. Int J Infect Dis. 2013 May;17(5):e343-7.
30. STROMECTOL [package insert]. Whitehouse Station, NJ: Merck & Co; 2009.
31. Stephenson M. Blepharitis diagnosis: Don't forget Demodex. Review of Ophthalmology. 2012;19(9):46,48,50,75.
32. Moravvej H, Dehghan-Mangabadi M, Abbasian MR, Meshkat-Razavi G. Association of rosacea with demodicosis. Arch Iran Med. 2007;10(2):199-203.
33. Zhao YE, Wu LP, Peng Y, Cheng H. Retrospective analysis of the association between Demodex infestation and rosacea. Arch Dermatol. 2010;146(8):896-902.
Seychelles is home to some truly distinctive species found nowhere else on Earth. It also boasts the most controversial nut of all time: the Coco de Mer. A French term, Coco de Mer translates to 'coconut of the sea'. The nut has garnered attention for its naughty, peculiar shape, which resembles a female bottom. It grows on tall trees and is exclusive to a tiny area of Seychelles. Various legends and myths have been associated with it, and this naughty nut has been a centre of attraction in royal courts.

The real Coco de Mer trees were not discovered until 1743, in Seychelles. General Gordon even named it the 'forbidden fruit'. Other popular names include sea coconut, love nut, double coconut and coco fesse.

Decoding Coco de Mer - The National Fruit of Seychelles

Biological name: Lodoicea maldivica

The Coco de Mer produces the largest and heaviest seed in the world, weighing up to 25 kilogrammes with a diameter of 40 to 50 centimetres. It is a monotypic genus in the palm family and is endemic to Seychelles. This spectacular species holds five botanical records: the largest fruit, the heaviest seed, the longest cotyledon, the largest female flower of any palm and the greatest efficiency of any plant in drawing up nutrients.

Coco de Mer Trees and Leaves

The Coco de Mer grows as a tall tree with an average height of 25-34 metres. The stem is erect and spineless, with rings of leaf scars, and the trunk sits in a large bulbous structure at its base that narrows towards the bottom. A wide variety of animals live around these trees, suggesting that their evolution has taken many years. The leaves are fan-shaped and can be as long as 10 metres and as wide as 4.5 metres. The petiole is about 4 metres in length, and the leaves are plicate at the base. The funnel formed by the leaves traps pollen, which is carried to the base during rainfall, thereby helping ensure that the tree has enough nutrients.

Flowers and Inflorescence

The Coco de Mer is dioecious, with separate male and female plants, and its female flowers are the largest of any palm. Each flower has a small bracteole, three sepals forming a cylindrical tube, and a three-lobed corolla. The male flowers are arrayed in a catkin-like inflorescence which is pollinated by lizards, rain and wind. The trees begin to produce flowers only after 11 years.

Coco de Mer Fruit

The fruit of the Coco de Mer is the largest in the world, with a diameter of 40-50 centimetres and a weight of 15-30 kilogrammes. It is bilobed and flattened and usually contains one seed, though it can sometimes hold two to four. The nut of the Coco de Mer is the largest in the plant kingdom. It takes six to seven years to mature and another two to three years to germinate. It was once believed that the sea dispersed the seed; later studies showed that Coco de Mer nuts are too heavy to float, and that only the hollow, germinated ones are carried away by the sea currents.

Uses of Coco de Mer

The Coco de Mer has been put to several uses since its discovery. It is the de facto symbol of Seychelles and lies at its heart and soul. Its applications have therefore been restricted to a certain extent by the Seychelles government.

Traditional Uses of the Coco de Mer Seed

The Coco de Mer has long been a prized and cherished nut, which has only increased its allure. In the Maldives, the nuts were invariably the property of the king.
These nuts often served as regal gifts, and European nobles embellished them with jewellery to add charm to their galleries of private collectables. The tough exterior was used as a small vessel (bowl) to carry water and was even made into other wooden articles. The Coco de Mer was also believed to possess miraculous healing properties. João de Barros, the great Portuguese historian, believed that its healing powers even surpassed those of the precious bezoar stone. Some even thought of it as the antidote to all poisonous substances.

Currently, the Coco de Mer is grown as an ornamental tree. The fruit is edible but is not commercially valuable. It is used in Siddha medicine, Ayurvedic medicine and traditional Chinese medicine, and in southern Chinese cuisine it serves as a flavouring. As a tourist, this is the best souvenir you can take from Seychelles. However, strict rules apply to taking it out of the country, so you need to be careful.

Buying Coco de Mer

The Coco de Mer arrests the attention of everyone paying a visit to this spectacular archipelago. The distinct shape of the nut, often compared to that of a nicely shaped female buttock, is the main reason for its unwavering popularity. You definitely cannot leave Seychelles without packing one in your bag. All you require is some considerable space in your backpack to show this wonder of mother nature to your friends and family back home.

Know the rules before you buy

The nut is an invaluable object in Seychelles, and to maintain its sanctity, the government has imposed several rules on its sale and transport to other parts of the world. Tourists are obliged to follow these, since breaking the rules results in heavy fines and, in some cases, even jail. All the Coco de Mer trees in Seychelles are under the supervision and authority of the Seychelles government, including the privately owned ones. All shops selling these nuts must be properly registered, and each nut is given an ID and a green label. The main idea is to stop illegal poaching and to conserve the mere 7,000 or so trees that remain. In short, if the seller is unable to produce the right certificate and papers, DO NOT BUY it.

Where to find Coco de Mer?

If you simply want to stand in awe of the majestic glory of the 'queen of curios', you can spend some time in the Vallee de Mai Reserve on Praslin. The reserve is one of the two UNESCO World Heritage Sites in Seychelles and has been named the 'Garden of Eden'. The tour guides here provide plenty of information about the 'love nut'. The soil here is quite dry, which has aroused the curiosity of scientists regarding how the Coco de Mer trees obtain their nutrition. Hanging from the stately, erect, tall trees, the nuts speak volumes of the legends and myths revolving around the trees and the reserve. You can also spot the Seychelles black parrot and geckos in this reserve. A considerable number of the trees can also be found on Curieuse Island, another natural habitat of the Coco de Mer. You will have an exciting time exploring the trees as you climb the uphill trail on the island. A few years ago, a small population of Coco de Mer trees was introduced on Silhouette Island.

Where to buy Coco de Mer?

The Coco de Mer nut is the most iconic souvenir to bring home from Seychelles. These can be bought from all licensed gift stores in uptown Mahe, Praslin and La Digue.
A few good places to name are St. Anne on Praslin and the La Ciota building near the hospital in Victoria. However, be sure that the nuts carry the holographic stickers and that the gift shop has a proper certificate. Moreover, note that the Coco de Mer nuts sold as souvenirs have their kernels scooped out and are hollow inside (so that they cannot be planted elsewhere). They are sawn in half and then glued back together. Nevertheless, their charming, peculiar shape remains intact.

Coco de Mer Price - The Size of a Coco de Mer

The price of a Coco de Mer nut depends primarily on its quality and size. You can buy the tinier, forlorn ones at SCR 600, while the bigger and better ones come at around SCR 3,000. Also note that not all shops sell at the same price; a nut that costs SCR 3,000 in one shop may command a larger amount in another. So, expect to find a nut that fits your budget.

Legends of Coco de Mer - The Seychelles Coconut

The Coco de Mer nut has an erotic shape: it looks like a woman's buttocks on one side and a woman's belly and thighs on the other. This unique shape has served as fertile soil for some juicy stories since time immemorial, stories that ultimately transformed into legends. Some of these are still believed. Nevertheless, these legends are quite exciting and lend an unusual dimension to the Coco de Mer.

The Malay Legend

The Coco de Mer nut was considered to possess magical and mystical properties. The origin of the nut and its tree was unknown, which added to its mysterious nature. The Malay legend was based on an unknown property of the nut. The Coco de Mer, due to its hefty size, is unable to float on water, and so it sinks to the bottom of the sea. Over time, through biological processes, the outer husk of the nut withers and the inner germinating parts of the plant decay and rot. The gases released as a consequence push the bare seed to the top of the ocean, where it is washed up on distant shores by ocean currents. The seamen of Malay therefore saw these nuts 'falling upwards' from the ocean bed, which led them to believe that there was a forest at the bottom of the Indian Ocean. They even believed that these trees were the abode of the giant bird Garuda, which hunted down elephants and tigers. The African priests, on the other hand, had another interesting belief. They said that the Coco de Mer trees sometimes rose above the ocean waters, and the turbulent waves created as a result obstructed the path of any ship sailing in the vicinity. The Garuda then mauled the people sailing in these ships.

The hollow, germinated nuts of the Coco de Mer were usually carried away by strong ocean currents to the shores of the Maldives, where they were completely unknown. These nuts were no longer fertile when they reached the shore, and therefore no trees emerged from them. The atypical shape of the nut perplexed the inhabitants of the islands and made additions to its legend and lore. Any Coco de Mer nut found on the Maldives' shores had to be offered to the royal court; keeping it to oneself meant a death penalty. The Dutch admiral Wolfert Hermansson was gifted a Coco de Mer nut in 1602 by the Sultan of Bantam in return for his services; the upper part of the nut, however, was chopped off, for the full nut would have been an insult to the admiral's modesty. The nut was even believed to possess medicinal properties.
Legend in Seychelles

The discovery of the Coco de Mer in Seychelles in 1743 led to another legend. Since the fruits grow only on female trees, it was widely believed that the male trees walked up to the female ones at night to make love to them. The erotic shape of the nut is largely attributed to this clandestine lovemaking. However, legend also has it that those who saw the trees making love either died or went blind. The belief is strengthened by the fact that the pollination of the Coco de Mer has still not been fully explained by scientists and biologists.

The Forbidden Fruit Legend

Seychelles was visited by General Charles George Gordon in 1881. He hailed the Vallee de Mai Reserve on Praslin (the home of the Coco de Mer trees) as the original Garden of Eden, as mentioned in the Bible. The Coco de Mer, therefore, was the 'forbidden fruit' eaten by Eve. Sceptics countered that Eve would have had a really tough time handing the fruit to Adam, given its heavy weight.

When the 'love nut' was gifted to the Duke and Duchess of Cambridge

One thing that made global headlines back in 2011 was the gift presented to the Duke and Duchess of Cambridge, Prince William and Kate Middleton, by Seychelles, which considered it a privilege to host them for their royal honeymoon. The giant aphrodisiac nut was given to the "deeply in love" couple by foreign minister Jean-Paul Adam on behalf of Seychelles President James Michel at a ceremony on Mahe, and the pair were even granted a special licence to bring the erotic nut home. The Coco de Mer was presented to Wills and Kate towards the end of their lavish honeymoon, and the BBC reported the event as "Royal honeymooners' erotic Seychelles souvenir". Apart from sweet memories of torch-lit dinners, picturesque beaches and dips in the Indian Ocean, the other thing the Duke and Duchess of Cambridge took from Seychelles, in the most literal sense, was this exotic nut.

The Coco de Mer has raised eyebrows since time immemorial. Its shape is what everyone wants to talk about, be it scientists, poets, locals or the media. There is something so exciting and appealing about this forbidden fruit that you cannot resist the desire to "taste" it. It is truly the pride of Seychelles.
Last Ice: Jill Heinerth Dives Under Icebergs to Illustrate Issues of Climate Change

Under the pale light of the Arctic summer night, I return to the edge of the ice floe to watch yesterday's dive site disappear on the horizon. The iceberg that was lodged in sea ice has broken free and begun a journey to its demise as it heads out of the mouth of Eclipse Sound. Yesterday's exploratory dive will never be seen by anyone else.

# # #

In 2000, I led a National Geographic diving team to make the first cave dives inside the largest piece of ice ever seen on our planet. The B-15 iceberg had calved from the Ross Ice Shelf in Antarctica, and we were drawn to explore this gargantuan portent of global climate change. When I wrote the script for my first documentary film, Ice Island, I was warned that politically charged terms like "climate change" and "sea level rise" might limit the acceptance of our film. I was told that without scientific credentials, any claims regarding "unproven science" were a bad idea.

Thanks to support from the Royal Canadian Geographical Society, I'm making that untenable leap again as an artist documenting our transient cryosphere. It's approaching twenty years since I floated through the cavernous blue voids inside B-15, and now many scientists believe that the Arctic Ocean will be ice-free in another twenty. Will my photographs of the sea ice hang in a gallery of great extinctions beside the dodo bird and perhaps even our cherished polar bears? With every dive I conduct in the Arctic, I realize I'm swimming in an environment that has never been seen and will never be seen again. Groundless? Unproven? I know with certainty that I am in a race to record the last ice.

A Place of Great Change

The Arctic is transforming more rapidly than anywhere else on our planet. Temperatures in the Canadian north are rising at twice the rate seen elsewhere. With the Arctic food web shifting from shrinking sea ice, traditional Inuit hunts are disrupted, and the tenuous balance of food security is lost. Permafrost melt, sea level rise, erosion and an increase in stormy weather pose risks for a society that has always lived in balance with nature. With the Arctic becoming more navigable and accessible, resource speculation is on the rise. Oil, gas and shipping industries are jockeying into position to snag new routes and drilling rights in the open water. These activities will indelibly alter the complexion of the Arctic and bring new threats to an otherwise pristine sanctuary.

"The Arctic is unraveling," says Rafe Pomerance, who chairs a network of conservation groups called Arctic 21. A recent report from his organization found that, from 2011 to 2015, the Arctic was warmer than at any time since records began around 1900. Sea ice has diminished, and the snow cover in Europe and North America is half of what it was in the days when I swam through the B-15 iceberg.

Traveling 700 km north of the Arctic Circle takes a bit of work. From my part-time home in Florida, it's the equivalent of a journey to the center of the earth: some 6,424 km. But the airline trip out of the relative abundance of Ottawa to the simplicity of Pond Inlet is delightful. The flight from my nation's capital included refueling stops in Iqaluit, Hall Beach, and Igloolik. Each stop provides an opportunity for families and friends to reconnect. As our plane dropped into the tiny outpost of Hall Beach, cheering and applause erupted in the small cabin.
One woman yelled, "There is my house! There is my house! Can you see my house?" She was genuinely excited when two quads rolled into the airport parking lot, piled high with aunties, young men, and women wearing traditional amautis. Tiny eyes peered out from the dark recesses of a quilted hood, and a small baby crawled around a young woman's neck and emerged onto her shoulder to reach toward the passenger from the plane. They nuzzled a familial greeting filled with joy, then hugged and giggled with gratitude for their brief connection. Thirty minutes later, after the cargo was swapped and fuel loaded, we were called back to the tarmac. Some people might call these stops an inconvenience, but I can think of nothing better than being a part of these brief, loving reunions.

Our arrival in Pond Inlet was just the beginning of our journey. It would take another half day or more to reach our camp on the sea ice. With the sea ice melting and rain in the forecast, a direct course to Arctic Kingdom's encampment was unlikely. Though the ice was still safe for travel, large melt ponds and long cracks meant a circuitous path to reach the temporary outpost, set far enough from the floe edge to remain viable for a few more weeks.

Beneath Fleeting Ice

For diving, we loaded sleds again and traveled on traditional qamutiks from our comfortable canvas yurts. I was awestruck by the majesty of the snow-covered peaks. Glaciers descended to the sea ice that my Inuit guides call "the land." Misty clouds poured down the valleys in swirling masses of white that blended into the tableau before me. I felt the connection of people, snow, and mountains: the Arctic is one harmonious organism.

The line from the qamutik to the Skidoo swung sideways and sprayed a rooster tail of slush the way a boat creates wake. Our driver, Kevin Enook, looked back to signal "okay" and make sure we were still grinning from ear to ear. We turned abruptly to follow alongside an open lead in the ice. The crack was remarkably straight and too wide to cross. A mile closer to shore we found a spot where our Skidoo could bridge the gap. Kevin unhooked the qamutik, revved the engine to full speed, and flew across the lead. He threw us a rope and pulled the longer sled across. Although this ice was fastened to the shore, I began to appreciate the transient nature of the melting pack.

We resumed our race toward a pinnacle on the horizon: an iceberg that had made the journey from the glaciers of Greenland to be locked in the fast ice for a winter. Upon arrival at the looming berg, Enook and Billy Mergosak took careful steps to test the ice. A small strip of open water testified that this frozen monument was struggling to free itself from the grip of the floe. Under the bright sun, fresh water cascaded down the face of the ice in streaming rivulets that furrowed the surface into vertical channels. We'd found the perfect site for an exploration dive.

Nathalie Lasselin fired up the compressor, which seemed deafening in this pristine place. Enook threaded a titanium ice screw into the surface a few meters back from the cobalt blue hole where we would descend to undetermined depths. He prepared lines to connect us to the surface and knocked away the unstable margin of the hole we'd enter. I placed my camera by the water and wondered whether I should tie it off too; one small crack would send it plummeting into the unknown. I settled into the water first and pushed away the slush that obscured my view. Enook passed me the camera, and I dropped through a fuzzy halocline of mixing fresh and salt water.
Long runners of algae flapped horizontally in the current, held fast to the undersurface of the ice. These algae and other nutrients held within the ice feed the zooplankton that serve as the base of the Arctic food chain. Bottom dwellers such as anemones, sponges, and halibut in turn feed other fish and marine mammals like belugas, narwhal and bowhead whales. The surface of the ice was dimpled and fluted, carved by the undersea currents that now pulled my rope taut. I was connected to Enook like a fish on a line; I could only hope he would not lose this catch.

Falling down the facade, I observed layers of time that could date this ice back 10,000 or more years. Some seams were distinctly transparent, while others were packed white with small air bubbles that fizzed as they dissolved. Deeper, I reached a colorful carpet of orange kelp that hid a miniature garden of crustaceans and Cnidaria. I looked up into the glaciated cathedral to see Nathalie descend on a silver wire of bubbles. Her silhouette glided through the cerulean depths as she worked hard to pull her line toward me. We met in this temporary palace and realized how privileged we were to document this fragile kingdom of ice.

Nobody can be certain when the Arctic sea ice will be gone, but scientists agree that we're on a precarious downward spiral. Professor Jason Box, a glaciologist with the Geological Survey of Denmark and Greenland, declared that "the loss of nearly all Arctic sea ice in late summer is inevitable." Others assert that an ice-free Arctic Ocean will arrive within decades. I'm grateful for the opportunity to preserve this memory, but what will happen to the people and animals of the North? How will they adapt to the last ice?

Jill Heinerth is a Canadian cave diver, underwater explorer, writer, photographer and filmmaker. Many consider her the best female underwater explorer in the world, and she has dived in some of the most extreme locations on the planet, from underwater caves to icebergs in the frigid waters off Antarctica. She is currently chasing icebergs in the calving grounds off the coast of Greenland.
Crash Course in Taxonomy, or, What Those Odd Latin Terms for Bugs Mean

Perhaps you've noticed that our Bug Week writings often contain Latin terms for bugs. For example, if we mention the common bed bug we'll probably reference its scientific name, Cimex lectularius. Or when we talk about grasshoppers we might note that they're part of the insect order known as Orthoptera. Maybe you've wondered what all this terminology means. We'll try to explain…

The first thing to know is that there's an entire scientific discipline devoted to identifying and classifying organisms. It's called taxonomy. When someone discovers a previously unknown species of insect, taxonomists are the people who determine what insects are closely related to the newly discovered species, and taxonomists are the people who come up with a formal, scientific name for the new critter.

Let's talk about formal, scientific names for a moment. One of the main reasons for using scientific names is that common names aren't standardized. For example, what one person calls a "giant water bug" someone else may call an "electric light bug," someone else calls an "alligator tick" and yet another person calls a "toe biter," and perhaps none of these four people realizes that they're all talking about the same type of insect. But if you say "a member of the Belostomatidae family," there's no room for misunderstanding – assuming that you know what Belostomatidae means in the first place.

Generally, organisms are given a scientific name that contains two words. This name represents the smallest taxonomic division that's possible for an organism. For example, the luna moth goes by the scientific name Actias luna, and so if you gathered up 1,000 male and 1,000 female specimens of Actias luna, all the specimens in each sex would be virtually identical, even if you carefully examined them under a microscope. You might notice some minor variations in their color and size, but all these specimens would be so similar that any randomly chosen male and female could mate and produce viable offspring. In fact, that's one of the standards used in determining whether two groups of organisms are members of the same species – can a male from one group and a female from the other group reproduce?

Scientific names use Latin words (or what passes for Latin – we'll get to that in a moment). For example, the invasive mosquito species commonly known as the Asian tiger mosquito has the scientific name Aedes albopictus. Each word has separate meaning – "Aedes" comes from an ancient Greek word that means "unpleasant" and "albopictus" means "white-painted." So, Aedes albopictus means "unpleasant, white-painted mosquito," more or less. In a scientific name, the first word is always capitalized. Also, the scientific name is generally written in italics, to emphasize that this is a Latin term.

Now, the first word in the scientific name is the name of the organism's genus. The genus is the next-largest taxonomic classification after the species. There may be many organisms in a genus, or just one. Regardless of the exact number, all the organisms in a single genus are closely related. In the mosquito genus Aedes there are dozens and dozens of species – they all have scientific names that start with "Aedes" and then have a unique second word that denotes the species, to distinguish them from other members of the Aedes genus.
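These conventions (capitalized genus, lowercase species epithet, the whole name set in italics) are mechanical enough to capture in a few lines of code. This is just a sketch; the helper name and markup options are our own invention:

```python
def format_binomial(genus, species, markup="plain"):
    """Normalize a two-word scientific name: capitalized genus,
    lowercase species epithet, optionally wrapped in italics."""
    name = f"{genus.strip().capitalize()} {species.strip().lower()}"
    if markup == "html":
        return f"<i>{name}</i>"
    if markup == "latex":
        return rf"\textit{{{name}}}"
    return name

print(format_binomial("aedes", "Albopictus"))           # Aedes albopictus
print(format_binomial("cimex", "lectularius", "html"))  # <i>Cimex lectularius</i>
```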
So besides Aedes albopictus we also have its close relative, Aedes aegypti, the yellow fever mosquito – same genus name, different species name.

When someone finds a previously undiscovered organism, a taxonomist studies it and writes a very detailed description of its anatomy. This description will eventually "go on record" as the formal set of characteristics that this species possesses. The taxonomist who handles this task usually gets to assign the species name to the organism – the second word in the scientific name. It used to be that a taxonomist would usually select a species name that was a Latin term for a noteworthy characteristic of the organism – as we mentioned earlier, "albopictus" means "white-painted," which refers to the white markings on the legs and body of the Aedes albopictus mosquito.

However, in recent decades it's become common for taxonomists to assign species names that are meant to honor people. To "Latinize" the names, there's often a letter "i" added to the end of the honoree's name, rather than a real attempt at translating the name into a Latin equivalent – this shortcut is probably taken to ensure that it's obvious who is being honored. For example, in the world of tropical fish, there are dozens of species with scientific names that end in "axelrodi," to honor ichthyologist Herbert R. Axelrod, a major figure in the tropical fish hobby.

That might seem reasonable, but lately things have taken a non-scientific turn and it's now fairly common for celebrities to have organisms named in their honor. In the entomological world alone, there's a spider named for Harrison Ford, Calponia harrisonfordi; a wasp named for Lady Gaga, Aleiodes gaga; and a beetle named for Sigmund Freud, Cyclocephala freudi. If you're interested, there's a long list of examples on this Wikipedia page — http://en.wikipedia.org/wiki/List_of_organisms_named_after_famous_people.

Confidentially, we here on the BugWeek webteam find this trend silly and a bit disconcerting, because it eliminates one clue – species names based on notable traits – that can help scientists and amateur bug enthusiasts identify puzzling specimens. Here's what we mean – it's sometimes difficult to determine an arthropod's genus and species by casual inspection, especially if there are several species that look alike. When the species name references the critter's appearance, habitat or behavior, the name can help us determine what we're examining. We can only hope that Calponia harrisonfordi seldom gives interviews (probably true) and that Aleiodes gaga is always wearing a fancy hat (probably not true).

Okay, one more thing to discuss (and thank you for reading this far, by the way) – the taxonomic hierarchy. This will help explain why we throw around terms like "Coleoptera." In taxonomy, each organism is typically classified at eight levels, starting with a very large group and gradually working down to the very small group we've discussed already, the species. This hierarchy is similar to the way a Florida resident's home address can be classified at eight levels – planet, continent, country, state, county, city, street and exact house number.

From largest to smallest, the taxonomic classifications for living things go like this: domain, kingdom, phylum, class, order, family, genus and species. One easy way to remember the correct progression for these terms is to memorize the phrase "do kangaroos prefer cake or frosting, generally speaking?" The first letter of each word in the "kangaroo" phrase corresponds to the first letter in one of the classification terms.
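The eight ranks and the mnemonic map onto each other cleanly, which a short sketch can verify. Here the hierarchy is filled in for the Asian tiger mosquito discussed earlier; the rank and taxon values are standard, but the code itself is only illustrative:

```python
# The eight standard ranks, from the largest group to the smallest.
RANKS = ["Domain", "Kingdom", "Phylum", "Class",
         "Order", "Family", "Genus", "Species"]

# First letters line up with the "kangaroo" phrase.
MNEMONIC = "do kangaroos prefer cake or frosting generally speaking".split()
assert [w[0] for w in MNEMONIC] == [r[0].lower() for r in RANKS]

# A classification is then just an ordered mapping of rank -> taxon,
# shown here for Aedes albopictus, the Asian tiger mosquito.
tiger_mosquito = dict(zip(RANKS, [
    "Eukaryota", "Animalia", "Arthropoda", "Insecta",
    "Diptera", "Culicidae", "Aedes", "Aedes albopictus",
]))
for rank, taxon in tiger_mosquito.items():
    print(f"{rank}: {taxon}")
```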
Let’s look at the terms used for one bug, the European honey bee, Apis mellifera… Domain – Eukaryota (organisms with cells that have nuclear membranes) Kingdom – Animalia (animals) Phylum – Arthropoda (arthropods, ie. animals with exoskeletons and jointed legs) Class – Insecta (arthropods with six legs, compound eyes and one pair of antennae) Order – Hymenoptera (often flying insects with some specific anatomical traits) Family – Apidae (a family containing most bees) Genus – Apis (honey bees that sting) Species – Apis mellifera (the European honey bee) You’ll notice that only the genus and species names are italicized. There are a few other classification terms that occasionally show up, such as subphylum, superfamily, subfamily, tribe and subspecies. We’re not going to worry about those right now. Aside from the genus and species names, the two taxonomic terms you’re most likely to encounter in BugWeek material are phylum and order. Let’s take a quick look at those before we wrap up – Phylum – Among animals, there are about 35 phyla (“phyla” is the plural form of “phylum.”) Virtually all organisms that are commonly called “bugs” are part of the phylum Arthropoda. This phylum includes all invertebrates that have exoskeletons, jointed bodies and jointed legs. Members of this phylum are called arthropods. Insects are arthropods. So are spiders, scorpions, centipedes, crabs, lobsters and shrimp. In case you’re wondering, there are some organisms people might call “bugs” that are not arthropods. They include snails and slugs (part of the phylum Mollusca), worms (part of the phylum Annelida) and nematodes (part of the phylum Nematoda.) (Thus far, we’ve limited BugWeek coverage to members of the phylum Arthropoda, though we’re thinking about expanding our coverage next year.) Order – Among arthropods, there are about 33 orders. An order is a classification that is narrow enough that the average person might recognize that its members have general similarities. For example, among insects, the Odonata order contains dragonflies and damselflies; the Dermaptera order contains earwigs; and the Coleoptera order contains beetles. The one arthropod order that the average person might recognize by name is Lepidoptera (the insect order containing butterflies and moths). In case you’re interested, the insect order Coleoptera, the beetles, contains more species than any other animal order, and includes about one-fourth of all the formally described animal species on Earth – about 400,000 species. What’s more, it’s believed that there are actually at least 1 million beetle species alive today – most of them undescribed – and possibly as many as 100 million. Wow. Looks like the taxonomists are not going to run out of work anytime soon. Don’t forget – if you are talking about BugWeek on Facebook, Twitter or Instagram, use the official #UFBugs hashtag! — Tom Nordlie
Nonverbal communication is more important than many people think, and teachers often use body language activities to help their students learn how body language works. Below are some facts about body language and its importance in everyday communication, along with some activities specifically designed to teach it.

What Is Body Language?

Simply put, body language is a form of nonverbal communication that uses physical behaviors instead of verbal communication. Intentions, feelings, and even thoughts can be communicated using body language. This includes your posture, eye movement, gestures, and facial expressions, among others. Also known as kinesics, body language is easy to interpret if you know what to look for.

Teachers often use activities that help students understand the significance of body language. Many of these activities emphasize how body language is often stronger than verbal communication. Here is one of those activities:

- Stand in front of the students and have them stand also
- While announcing out loud what you're doing, initiate certain activities
- These activities can include clapping your hands, touching your nose, stomping your feet, and much more, but perform the activities while describing them aloud
- Next, change up the activity and the out-loud description and see who follows; for instance, while saying "touch your nose," you touch your ears instead
- Observe the number of students doing what you are saying aloud and those who are doing what you're actually doing

This can teach students a lot about the differences between verbal and nonverbal communication, and the importance of the latter. Many other activities can do the same.

Body language is practiced by both humans and animals and is an important way to communicate with others. In addition to the actions mentioned above, it also uses mannerisms and even certain attitudes. It can be conscious or subconscious, and it can be used to communicate both physical and emotional states.

Some Very Simple Activities to Learn About Body Language

Dozens, perhaps even hundreds of body language activities are available to teach students and even employees the significance of body language. Below are a few of them.

#1: Writing a story. Divide participants into groups of two or three and have them write a 600-word story about body language. The story tells of two people who are communicating via body language, so no dialogue can be included. Afterward, each group will read their story aloud by describing the various body language movements. Feedback should be provided by the other groups.

#2: You-tubing it. Get several videos from online sources and present them to the group one at a time. Have everyone write up a short description of what is going on in each video. They have to pay attention to gestures, body movements, and much more. Get each participant to read aloud his or her description. It might be surprising to learn that many participants will disagree about the video they just saw.

#3: Emotion cards. Get some postcards and write down an emotion on each one. These can include emotions such as sadness, nervousness, stress, frustration, cynicism, anger, or any others. Ask for a volunteer and give him or her one of the cards. Have that person leave the room, then walk back in displaying that emotion. See if the other participants can guess what the emotion is.
Have the participants stand in a circle. Hand a beach ball to someone and have that person throw it to another person, and he or she has to carefully observe the person the ball is thrown to. The ball is thrown to each person only once, and the last person has to observe the person who started the activity. Then, each person has to imitate the actions of the person he or she threw the ball to. In most cases, the imitations get livelier as they go along. #5: Acting up. Divide the participants into two groups. Assign a leader from each group and have these two individuals go outside. While outside, they have to think up an object to describe in front of the participants. The participants have to guess what the object is while the two leaders use only nonverbal cues in their description. The group that guesses first wins, and both leaders join that group. Choose two more leaders and repeat until one group contains all of the participants. #6: Guess the leader. Have participants stand in a circle. Choose a “guesser” and have that person leave the room. The rest of the participants then choose a leader. When the guesser comes back in, the participants all start making hand gestures and movements, while trying to imitate the leader. The guesser has to guess who the leader is based on everyone’s nonverbal movements. #7: Silent designs. Divide participants into groups of three or four people. Give each of them a large sheet of paper and a lot of scrap paper, as well as some colored markers. Each team has to come up with an item to draw – a shoe, a tote bag, a lamp, and much more – and they have to draw it together without speaking. Only nonverbal communication can be used to draw the perfect item. #8: Miscommunications. Choose two-person teams and give them two exercises. In the first, Person A has to describe his or her hobby for one minute without smiling; Person B listens and asks questions if desired. Then, the participants swap places. In the second exercise, Person B talks about his or her hobby in a natural tone, while Person A listens with no eye contact and without asking any questions. Hold a discussion on both exercises afterward. Activities Just for Children Of course, along with basic nonverbal communication games, there are also those specifically designed for certain groups. Below are some great body language activities developed just for children. #9: Examples in the media. Turn on the television set, but make sure the sound is turned off. With your child, try to figure out what the actors are trying to communicate by the way they are acting. Point out eye contact, hand movements, and much more, to teach them about the importance of body language and communication. #10: Point out the different possibilities. If a person folds his or her arms, that person could be frustrated about something – or he or she could just be cold. Highlight different body language actions and teach your child about the different emotions that may be represented there. #11: Try the game of charades. Take index cards and on each one, write down a different emotion; for example, sad, happy, angry, and much more. Pull out one of the cards and act out the emotion to the group of children. Let them guess which emotion you’re trying to display. You can even use animals and activities, such as brushing your teeth, instead of emotions. #12: Snacking differently. Take 5 to 10 bowls and place a different snack in each of them. 
Have each child take a bite of one or more snacks and indicate whether they like or dislike the snack using only facial movements. Have the other children try to guess what the child is trying to “say.”

#13: Emotions can be displayed nonverbally. Take pieces of paper and on each of them, write down a disposition or mood; for example, suspiciousness, security, guilt, and much more. Place them in a bowl. When it’s their turn, all of the children will read the same sentence. The difference is that each child must read it in a way that demonstrates the mood on the paper he or she drew from the bowl. It doesn’t matter what the sentence is; it just has to be the same one for each child.

#14: Card games can be fun. For this game, give each child a card and make sure he/she does not let anyone else know what the card is. Instruct the children that they have to get into four groups - diamonds, clubs, spades, and hearts - but they have to do so without talking. After they get into these groups, they have to get in order from ace to king, again using nonverbal communication only.

How About College Students?

College students will soon be out in the real world starting their lives, so it’s a great time for them to learn about the importance of nonverbal communication skills. Below are some activities designed especially for college students to learn more about body language.

#15: All pictures tell a story. Find four to five photographs that show people demonstrating different types of body language; for example, someone flashing a peace sign, someone crossing his arms, and much more. Divide the participants into groups of 7 to 10 people each and have them look at the photos one at a time. For each photo, ask questions such as, “How would you react if you saw someone striking this pose?”, “What message is the person in the picture sending?”, and “How does the race/gender/age of the person in the photograph affect the message you get from it?” You can write as many questions as you like.

#16: Following the leader. Choose a leader, and that person must give signals that the others follow without speaking. If someone messes up, that person is disqualified. Change the leader after each turn, with the current leader choosing the next one. The game ends when only one person is left standing.

#17: Explaining your drawing. This is one of the many body language activities that use drawings as part of the game. Divide participants into various groups and have each group draw a picture. Each of the other groups has to decide what the picture means. Every group will draw a picture, and every group gets the chance to interpret the other groups’ drawings.

#18: Introductions. Divide the participants into groups of two. They tell each other their names without letting the other groups know this information. When it’s a pair’s turn, each member has to introduce the other one without using any words. Other pairs have to guess the name they’re trying to communicate nonverbally.

#19: Posture is important. Choose five students to stand in front of the classroom and give them a scenario. For instance, have them pretend they are waiting to be interviewed for a great job. Then watch how their posture changes, and get the other students to write down their impressions of the postures. Change the scenarios if you like, and read what the others say about the different postures they have seen.

#20: Good versus bad listening. Divide participants into pairs.
Person A tells Person B a story while Person B listens intently without saying anything. Next, Person B tells Person A a story, but Person A has to be a bad listener. Have the other students explain how both the “good” listener and the “bad” listener reacted nonverbally.

#21: Subconscious decisions. Have participants stand up and demonstrate an activity while using nonverbal cues that say the opposite. For instance, a student can tell the class he/she is excited about winning a blue medal in cooking or swimming, but his/her body language shows just the opposite. The class can then talk about the differences.

#22: Sharing familiarities. Get two people to talk to one another about their favorite movie, book, or television show. The one that isn’t talking at the moment has to show a variety of nonverbal cues; for example, standing with feet wide apart, acting like they are not listening, standing too close to the other person, and much more. The talker has to describe the feelings he or she got while the other party was doing these things.

#23: Acting out scenes. Divide participants into groups of two or three people. Write down “scenes” on slips of paper and have the first group choose one of those pieces of paper. The group acts out the scene without words. Next, another group comes forward and acts out the scene with words. It is always interesting to see how the two versions usually differ.

As you can tell, body language is important, and so are the various body language activities available in many different places, especially online. Finding the best ones for your group, therefore, should never be difficult.
Between 1981 and 1990, the National Museum of Natural History carried out its second major overhaul of the east wing paleontology exhibits. Entitled “Fossils: The History of Life”, the new exhibit complex represented a significant departure from earlier iterations of this space. While the previous renovation arranged specimens according to taxonomy and curatorial specialties, “The History of Life” followed the evolutionary progression of fossil plants and animals through time. The new exhibits also differed from prior efforts in that they were not put together exclusively by curators. Instead, the design process was led by educators and exhibits specialists, who sought curatorial input at all stages. The result was a (comparably) more relatable and approachable paleontology exhibit, created with the museum’s core audience of laypeople in mind.

By 1987, four sections were completed: The Earliest Traces of Life, Conquest of the Land, Reptiles: Masters of the Land, and Mammals in the Limelight. Occupying halls 2, 3, and 4, these exhibits (along with the older Hall of Ice Age Mammals and the Rise of Man in Hall 6) told the complete story of the terrestrial fossil record. However, Hall 5 (the narrow space running parallel to the central dinosaur exhibit on its north side) was still vacant. Going back to the 1977 theme statement that kicked off the History of Life renovations, the intent was always for Hall 5 to feature two exhibits: one on prehistoric sea life and another on the geological context for the fossil record. These ideas were fleshed out in a 1987 briefing packet that was distributed to potential donors. As the document explained, “it is in the undersea realm that the history of life is most abundantly documented,” and coverage of fossil marine life is therefore “critical” to visitors’ understanding of evolution through deep time. From the beginning, the “Life in the Ancient Seas” exhibit promised to feature a life-sized diorama of a Permian reef community, mounted skeletons suspended in life-like swimming poses, and an immersive underwater ambiance. Meanwhile, the proposed “Changing Earth” exhibit would “illuminate the entire story [told in the fossil halls] by looking at the ways geological processes have affected the course of evolution over millions of years.” A key feature was a “video disc time machine”, which was essentially a computer terminal where artwork reconstructing different time periods could be viewed.

Changing Earth was ultimately never built. Instead, the allocated space became a windowed fossil preparation lab, which would prove to be one of the most popular exhibits in the History of Life complex. Nevertheless, many of the ideas planned for Changing Earth would be revisited in the Geology, Gems, and Minerals hall, which opened in 1997. Life in the Ancient Seas did get funding, however, and with a budget of approximately $4 million, production of the exhibit was underway by early 1988.

As with any large exhibit, Life in the Ancient Seas was made possible through the combined efforts of dozens of talented scientists, artists, and technicians. As with the rest of the History of Life complex, the Department of Exhibits generally initiated and produced the content, which the Department of Paleobiology then revised or approved. Linda Deck was the content specialist, steering the ship throughout the planning and production process. She selected specimens, chose the major storylines, and acted as a bridge between the curators and exhibits staff.
Li Bailey and Steve Makovenyi were the designers, overseeing the exhibit’s aesthetics and making sure it functioned as a cohesive whole. Sue Voss was the lead writer of label copy.

The hall’s design revolved around two main ideas, one aesthetic and one pedagogical. Visually, the exhibit needed to “simulate the perspective of a scuba diver” (Deck 1992). Makovenyi and Bailey gave the hall a blue-green color palette, with a low, black-tiled ceiling. Shimmering lights projected on the floor contributed to the illusion of traveling through the underwater world. Meanwhile, the layout of the hall adhered strictly to the chronology of geologic time. As visitors traversed the space, archways and glass barriers emphasized the conceptual divisions between the Paleozoic, Mesozoic, and Cenozoic eras.

Life in the Ancient Seas featured over 1,000 specimens, most of which were invertebrates like trilobites, brachiopods, ammonites, and bivalves. Early lists of vertebrates earmarked for display were (as is typical) much longer than the final selection of twelve mounted skeletons – a walrus and a baleen whale were among the casualties. A few of the mounts, like the ancestral whale Basilosaurus (USNM V 4675) and the sea lizard Tylosaurus (USNM V 8898), had already been on display for decades and needed only modest touch-ups for the new exhibit. Most of the vertebrate skeletons, however, were brand new. The Dolichorhynchops (USNM PAL 419645) was collected in Montana in 1977, and acquired in a trade with the Denver Museum of Nature and Science. Arnie Lewis prepared and assembled the mount in 1987. A Eurhinodelphis dolphin (USNM PAL 24477) from Maryland was mounted by contractor Constance Barut Rankin; her work was so impressive that she earned a full-time position for her trouble. The sea cow Metaxytherium (USNM PAL 244477) was a very late addition, having been excavated in Florida during the 1988 field season.

A variety of created objects joined the real specimens in telling the story of marine life through time. Model Hybodus sharks swam near the ceiling, and a realistic papier-mâché seabed extended the length of the exhibit beneath the mounted skeletons (little did visitors know this “seabed” was fragile enough to be punched through if it was ever stepped on). The exhibit team decided early on that Life in the Ancient Seas would include an 11-foot high, life-sized diorama of a Permian reef, based on the Glass Mountains deposits in Texas. Smithsonian paleontologist G. Arthur Cooper spent years collecting and publishing on the immaculate fossils found in this region, so a reconstruction of the Permian near-shore ecosystem was an obvious choice. What’s more, there was already a man lined up for the job. Terry Chase of Missouri-based Chase Studios (who would later go on to create Phoenix the whale) had already built a Permian reef for the Petroleum Museum in Midland, Texas, and most of the same molds and designs could be re-used. Still, the NMNH diorama was a massive undertaking, featuring 100,000 unique models – some hand-sculpted and some cast in translucent resin or wax.

Phillip Anderson experimented with a variety of materials to recreate the shimmer of sunlight shining through water that appeared in the diorama and at the exhibit’s two main entrances. As it turns out, nothing looks as good as actual light penetrating actual water. To accomplish the effect, Anderson rigged a piston cylinder to continuously produce waves in a shoebox-sized plexiglass container of water.
A quartz light shone through the container and projected the pattern onto the floors and walls.

Life in the Ancient Seas opened in May 1990. In a Washington Post review, Hank Burchard raved about the ocean-themed design and especially Voss’s text, stating that “every museum text writer in town should study her style.” For the next 23 years, Life in the Ancient Seas stood out as the gem among the east wing fossil exhibits. It was more colorful, easier to navigate, and generally more inviting than the other History of Life galleries. The theatrical label copy was arguably over the top (“Act One had been a bottom-dweller’s ballet, Act Two would be a swimmer’s spectacle”), but the exhibit as a whole plainly succeeded in presenting the story of evolution, adaptation, and extinction in an appealing and attractive way.

Over the years, there were a few changes: the shimmering lights were shut off, a charming clay-mation video about the end-Cretaceous food chain collapse was removed, and the Dunkleosteus skull and Basilosaurus skeleton were relocated to the Ocean Hall (the latter was replaced with a cast of the related whale Zygorhiza). Indeed, the opening of the similarly-themed but far larger Ocean Hall in 2008 overshadowed Life in the Ancient Seas and made many of its displays redundant. Although it was the best part of the History of Life complex, Life in the Ancient Seas was also the shortest-lived: it was the last section to open, and in 2013, it was the first section to close.

Those familiar with the exhibit will have surely noticed that I have yet to discuss the beautiful 122-foot mural painted by Ely Kish. Running the entire length of the exhibit, this amazing artwork outclasses even the famous “Age of Reptiles” at the Yale Peabody Museum in terms of scale and number of subjects depicted. This monumental accomplishment will be the subject of the next post – stay tuned!

Burchard, H. 1990. Fossils Fuel Sea Journey. The Washington Post. https://www.washingtonpost.com/archive/lifestyle/1990/05/25/fossils-fuel-sea-journey/d582f067-0745-44a0-90c8-248c1328962a/

Deck, L. 1992. The Art in Creating Life in the Ancient Seas. Journal of Natural Science Illustration 1(4): 1-12.

Telfer, A. 2013. Goodbye to Life in the Ancient Seas Exhibit. Digging the Fossil Record: Paleobiology at the Smithsonian. http://nmnh.typepad.com/smithsonian_fossils/2013/11/ancient-seas.html
This lesson focuses on ways to classify traffic, as well as several traffic shaping strategies, including leaky bucket, (r, T), token bucket, and composite shaping. The motivation behind traffic shaping is to control traffic resources and ensure that no traffic flow exceeds a particular pre-specified rate.

Traffic sources can be classified in many different ways. Data traffic may be bursty, and may be periodic or regular. Audio traffic is usually continuous and strongly periodic. Video traffic is continuous, but often bursty due to the nature of how video is compressed.

We usually classify traffic sources as one of two kinds. One is a constant bit rate (CBR) source: traffic arrives at regular intervals, and packets are typically the same size, resulting in a constant bit rate of arrival. Audio is an example of a constant bit rate source. Many other sources of traffic are variable bit rate (VBR); video and data traffic are often variable bit rate. When we shape CBR traffic, we tend to shape according to the peak rate. VBR traffic is often shaped according to an average rate and a peak rate.

In a leaky bucket traffic shaper, traffic arrives in a bucket of size β and drains from the bucket at rate ρ. Each traffic flow has its own bucket. While traffic can flow into the bucket at any rate, it cannot drain from the bucket at a rate faster than ρ, so the maximum average rate that data can be sent through the bucket is ρ. The size β of the bucket controls the maximum burst size that a sender can send for a particular flow. Even though the average rate cannot exceed ρ, the sender may, at times, send at a faster rate, as long as the total size of the burst does not overflow the bucket. Setting a larger bucket size can accommodate a larger burst; setting a larger value of ρ enables a faster sustained packet rate. In short, the leaky bucket allows flows to periodically burst while maintaining a constant drain rate.

For an audio application, one might set the size of the bucket to 16kB; with 1kB packets, a burst of up to 16 packets could then accumulate in the bucket. A regulator rate of 8 packets per second would ensure that the audio is smoothed to an average rate of no more than 8kB per second, or 64kbps.

In (r, T) traffic shaping, time is divided into frames of length T, and a flow can inject up to r bits into any frame. If the sender wants to send more than r bits, it simply has to wait until the next frame. A flow that obeys this rule has an (r, T) smooth traffic shape. In (r, T) traffic shaping, a sender can’t send a packet that is larger than r bits long. Unless T is very large, the maximum packet size may be very small, so this type of traffic shaping is typically limited to fixed-rate flows. Variable-rate flows would have to request data rates equal to their peak rate, and it would be incredibly wasteful to configure the shaper so that the average rate must support whatever peak rate the variable-rate flow might send. The (r, T) traffic shaper is slightly relaxed compared to a simple leaky bucket: rather than sending one packet per time unit, the flow can send a certain number of bits per time unit.

If a flow exceeds its rate, the excess packets in the flow are typically given a lower priority, and if the network is heavily congested, the packets from a flow that exceeds its rate may be preferentially dropped.
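To make the leaky bucket mechanics concrete, here is a minimal simulation sketch. It is illustrative only - the class name, the tick-based timing, and the parameter values are choices made for this example, not anything specified in the lesson.

```python
from collections import deque

class LeakyBucket:
    """Leaky bucket shaper: arrivals queue in a bucket of size beta (bytes)
    and drain at a constant rate rho (bytes per tick)."""

    def __init__(self, beta, rho):
        self.beta = beta          # bucket capacity in bytes
        self.rho = rho            # drain rate in bytes per tick
        self.queue = deque()      # packets currently held in the bucket
        self.level = 0            # bytes currently in the bucket

    def arrive(self, size):
        """Add a packet to the bucket; drop it if the bucket would overflow."""
        if self.level + size > self.beta:
            return False          # burst too large: packet dropped
        self.queue.append(size)
        self.level += size
        return True

    def tick(self):
        """Drain up to rho bytes this tick and return the bytes sent."""
        budget, sent = self.rho, 0
        while self.queue and self.queue[0] <= budget:
            size = self.queue.popleft()
            budget -= size
            sent += size
            self.level -= size
        return sent

# A bursty source: 5 packets of 1000 bytes arrive at once, then nothing.
bucket = LeakyBucket(beta=4000, rho=1000)
drops = sum(not bucket.arrive(1000) for _ in range(5))
output = [bucket.tick() for _ in range(6)]
print(drops, output)   # 1 [1000, 1000, 1000, 1000, 0, 0]
```

Even though all five packets arrive in the same instant, the regulator releases at most ρ bytes per tick - the smoothing behavior described above - and the part of the burst that does not fit in β is dropped.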
Priorities might be assigned at the sender or at the network. At the sender, the application might mark its own packets, since the application knows best which packets may be more important. In the network, the routers may mark packets with a lower priority, a feature known as policing.

Sometimes we want to shape bursty traffic, allowing bursts to be sent on the network while still ensuring that the flow doesn’t exceed some average rate. For this scenario, we might use a token bucket. In a token bucket, tokens arrive in a bucket at a rate ρ, and β is the capacity of the bucket. Traffic may arrive at an average rate and a peak rate, and can be sent by the regulator as long as there are sufficient tokens in the bucket.

To see the difference between a token bucket and a leaky bucket, consider sending a packet of size b < β. If the token bucket is full, the packet is sent and b tokens are removed. If the token bucket is empty, the packet must wait until b tokens drip into the bucket. If the bucket is partially full, the packet may or may not be sent: if the number of tokens in the bucket is at least b, the packet is sent immediately and b tokens are removed; otherwise, the packet must wait until b tokens are present in the bucket.

A token bucket permits traffic to be bursty but bounds it by the rate ρ; a leaky bucket forces bursty traffic to be smooth. If the bucket size in a token bucket is β, we know that for any interval T, the amount of traffic sent must be less than β plus the rate at which tokens accumulate (ρ) times T. Intuitively, this makes sense: we can completely drain the bucket and also consume the tokens that are added to the bucket over the interval T, which is ρT. We also know that the long-term rate will always be less than ρ.

Token buckets have no discard or priority policies, while leaky buckets typically implement priority policies for flows that exceed the smoothing rate. Both token buckets and leaky buckets are relatively easy to implement, but the token bucket is a little more flexible, since it has an additional parameter for configuring burst size.

One of the limitations of token buckets is that in any interval of length T, the flow can send β + ρT worth of data. If a network tries to police flows by simply measuring them over intervals of length T, a flow can cheat by sending β + ρT traffic in each interval. Consider an interval of 2T: if the flow can send β + ρT in each interval of length T, it can send 2(β + ρT) over an interval of 2T, which is greater than what it should be allowed to send, β + 2ρT. Policing traffic sent by token buckets can therefore be rather difficult.

Token buckets allow for long bursts, and if the bursts are of high-priority traffic, they are difficult to police and may interfere with other high-priority traffic. There is a need to limit how long a token bucket sender can monopolize the network. To apply policing to token buckets, one strategy is to use a composite shaper, which combines a token bucket and a leaky bucket. The combination of the two ensures that a flow’s data rate doesn’t exceed the average data rate enforced by the smooth leaky bucket. The implementation is more complex, though, since each flow now requires two timers and two counters, one for each bucket.
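Below is a minimal sketch of the token bucket logic just described, again with made-up parameters. The key behavioral difference from the leaky bucket shows up in the output: a burst of up to β bytes can leave the shaper immediately, while the long-term rate is still bounded by ρ.

```python
class TokenBucket:
    """Token bucket shaper: tokens accrue at rate rho up to capacity beta;
    a packet of size b is sent only when at least b tokens are available."""

    def __init__(self, beta, rho):
        self.beta = beta       # bucket capacity (maximum burst size)
        self.rho = rho         # token fill rate per tick
        self.tokens = beta     # start with a full bucket

    def tick(self):
        """Accrue one tick's worth of tokens, never exceeding beta."""
        self.tokens = min(self.beta, self.tokens + self.rho)

    def try_send(self, b):
        """Send a packet of size b if enough tokens exist; else it must wait."""
        if b <= self.tokens:
            self.tokens -= b
            return True
        return False

shaper = TokenBucket(beta=4000, rho=1000)

# A burst of 5 x 1000-byte packets arriving at once: the first four go out
# immediately (draining the bucket), the fifth waits one tick for new tokens.
sent_now = [shaper.try_send(1000) for _ in range(5)]
shaper.tick()
sent_later = shaper.try_send(1000)
print(sent_now, sent_later)   # [True, True, True, True, False] True
```

Contrast this with the leaky bucket sketch earlier: here the four-packet burst leaves immediately rather than being smoothed over four ticks, yet over any interval T the shaper can never emit more than β + ρT. A composite shaper would simply chain a bucket like this one in front of the earlier leaky bucket.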
Power boost allows a subscriber to send at a higher rate for a brief period of time. For example, if you subscribed at a rate of 10Mbps, power boost might allow you to send at a higher rate for some period of time before being shaped back to the rate you subscribed to. Power boost targets the spare capacity in the network for use by subscribers who don’t put sustained load on the network.

There are two types of power boost. If the rate the user can achieve during the burst window is limited to a particular maximum, the power boost is capped; otherwise, it is uncapped. In the uncapped setting, the change to the traffic shaper is simple: the size β of the token bucket is increased. Since the burst that a token bucket can sustain depends on β, a larger bucket size will be able to sustain a bigger burst, while the maximum sustained traffic rate remains ρ. If we want to cap the rate, all we need to do is apply a second token bucket with a different fill rate. That token bucket limits the peak sending rate for power-boost-eligible packets to P, where P is larger than ρ. Since the fill rate determines how quickly tokens refill the bucket, it also determines the maximum rate that can be sustained in a power boost window.

Suppose that a sender is sending at some rate R that is greater than their subscribed rate r, and suppose that the power boost bucket size is β. How long can the sender send at rate R? The bucket drains at the rate R − r, so β = d(R − r); solving for d, the power boost duration is d = β / (R − r).

A graph from the BISmark project measuring the power boost experienced by four different home networks - each with a different cable modem - connecting through Comcast shows that different homes exhibit different shaping profiles: some have a very steady pattern, whereas others have a very erratic pattern. In addition, it appears that there are two different tiers of higher throughput rates.

Even though power boost allows users to send at a higher traffic rate, users may experience high latency and loss over the duration that they are sending at this higher rate, because the access link may not be able to support the higher rate. If the access link can only support some sustained rate r, but the sender is allowed to send at a boosted rate R for a short period of time, buffers may fill up. TCP senders can continue to send at the higher rate R without seeing packet loss even though the access link cannot keep up. As a result, packets buffer up and users see higher latency over the course of the power boost interval.

To solve this problem, a sender might shape its rate so that it never exceeds the sustained rate r; if it did this, it could avoid these latency effects. Senders who are more interested in keeping latency under control than in sending at bursty volumes may choose to run such a traffic shaper in front of a power-boost-enabled link.
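As a quick check on the d = β / (R − r) result, here is a small sketch with made-up numbers; the rates and bucket size are illustrative, not values from the lesson.

```python
def powerboost_duration(beta_bits, boosted_rate_bps, sustained_rate_bps):
    """Duration of a power boost: the bucket of size beta drains at the
    difference between the boosted rate R and the sustained rate r."""
    return beta_bits / (boosted_rate_bps - sustained_rate_bps)

# Hypothetical numbers: a 10 Mbit bucket, an 18 Mbps boosted rate, and a
# 10 Mbps subscribed rate.
d = powerboost_duration(10e6, 18e6, 10e6)
print(f"boost lasts {d:.2f} s")   # boost lasts 1.25 s
```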
The increase in latency as a result of power boost is an example of buffer bloat. Buffer bloat occurs when a sender is allowed to send at some rate R that is greater than the sustained rate r without seeing packet loss. A buffer in the network that can support the higher rate will start filling up with packets, but since the buffer can only drain at r, all of the packets being sent at R simply queue up. As a result, each packet sees higher delay than it would if it arrived at the front of the queue and could be sent immediately: the delay that a packet arriving in a buffer will see is the amount of data ahead of it in the buffer divided by the sustained rate r.

These large buffers can introduce delays that ruin the performance of time-critical applications such as voice and video. Oversized buffers of this kind appear in home routers, cable and DSL modems, and other network equipment.

Consider the round trip times for 3 different DSL routers when an upload was started at 30 seconds. The modems experience a huge increase in latency coinciding with the start of the upload, with RTTs rising to 1 second and even 10 seconds, up from the typical RTT of 10ms.

A home modem has a packet buffer. Your ISP is upstream of that buffer, and the access link drains the buffer at a certain rate. TCP senders in the home will send until they see lost packets, but if the buffer is large, the senders won’t see those lost packets until the buffer has already filled up. The senders continue to send at increasingly faster rates until they see a loss, and packets arriving at the buffer see increasing delays.

There are several solutions to the buffer bloat problem. One obvious solution is to use smaller buffers; given how much infrastructure is already deployed, however, simply reducing buffer sizes across all of it is not a trivial task. Another solution is to use traffic shaping. The modem buffer drains at a particular rate - the rate of the uplink to the ISP. If we shape traffic so that the traffic coming into the access link never exceeds the uplink rate, the buffer will never fill. This type of shaping can be done on many OpenWRT routers.

There are two types of network measurement. In passive measurement, we collect packets and flow statistics from traffic that is already being sent on the network; this might include packet traces, flow records, and interface packet and byte counts. In active measurement, we inject additional traffic into the network to measure various characteristics of the network. Common active measurement tools include ping and traceroute: ping is often used to measure the delay to a particular server, while traceroute is used to measure the network-level path between two hosts on the network.

Why do we want to measure traffic on the network? We might want to charge a customer based on how much traffic they have sent on the network, which requires passively measuring how much traffic that customer is sending. A customer commonly pays for a committed information rate (CIR): their network throughput is measured every five minutes, and they are billed on the 95th percentile of these five-minute samples. This mode of billing is called 95th percentile billing, and it means that the customer might be able to occasionally burst at higher rates without incurring higher cost.

Network operators may also want to know the type of traffic being sent on the network so they can detect rogue behavior, such as compromised hosts or denial-of-service attacks.

One way to perform passive traffic measurement is to use the packet and byte counters provided by the Simple Network Management Protocol (SNMP). Many network devices provide a Management Information Base (MIB) that can be polled or queried for particular information. One common use for SNMP is to poll a particular interface on a network device for the number of bytes or packets that it has sent. By periodically polling, we can determine the rate at which traffic is being sent on a link: take the difference in the counts and divide by the interval between measurements.
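The rate computation and the 95th percentile billing scheme described above take only a few lines to sketch. This sketch assumes we already have a series of (timestamp, byte-counter) samples polled from an interface; counter wraparound handling is simplified, and all the numbers are made up.

```python
import math

def rates_from_counters(samples, counter_max=2**32):
    """Turn (timestamp_s, byte_counter) samples polled from an interface
    into bits-per-second rates: 8 * delta(bytes) / delta(seconds)."""
    rates = []
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        delta = (c1 - c0) % counter_max   # tolerates one 32-bit counter wrap
        rates.append(8 * delta / (t1 - t0))
    return rates

def billed_rate_95th(rates):
    """95th percentile billing (nearest-rank): sort the five-minute samples
    and bill at the 95th percentile, so the top 5% of bursts are free."""
    ordered = sorted(rates)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

# Counters polled every 300 s (five minutes), values made up:
samples = [(0, 0), (300, 375_000_000), (600, 750_000_000)]
print(rates_from_counters(samples))   # [10000000.0, 10000000.0] -> 10 Mbps

# Twenty five-minute samples: mostly 10 Mbps, with one 20 Mbps burst.
month = [10e6] * 19 + [20e6]
print(billed_rate_95th(month))        # 10000000.0 - the burst is not billed
```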
The advantage of SNMP is that it is fairly ubiquitous: it’s supported on basically all networking equipment, and there are many products available for polling and analyzing SNMP data. On the other hand, the data is fairly coarse: because SNMP only exposes byte and packet counts per interface, we can’t really analyze specific hosts or flows.

Two other ways to measure passively are to monitor at packet-level granularity or at flow-level granularity. At the packet level, monitors can see full packet contents (or at least headers); at the flow level, a monitor sees statistics about individual flows in the network.

In packet monitoring, the monitor may see the full packet contents - or at least the packet headers - for packets that traverse a particular link. Common packet monitoring tools include tcpdump and Wireshark. Sometimes, packet monitoring is performed using expensive hardware that can be mounted in servers alongside the routers that forward traffic through the network. In these cases, an optical link in the network is sometimes split so that traffic can be both sent along the network and sent to the monitor. Even though packet monitoring on high-speed links sometimes requires expensive hardware, software-based packet monitoring tools do essentially the same thing: they allow your machine to act as a monitor on the local area network, recording any packets sent towards your network interface. On a switched network, you wouldn’t see many packets that weren’t destined for your MAC address, but on a network where a lot of traffic is flooded, you might see quite a bit more traffic on the interface you are using to monitor.

The advantage of packet monitoring is that it provides lots of detail, like timing information and information gleaned from packet headers. The disadvantage is relatively high overhead: it is very hard to keep up with high-speed links, and a separate monitoring hardware device is often required.

A flow consists of packets that share a common set of header fields, such as the source and destination IP addresses and ports, the protocol type, the type-of-service byte, and the interface on which the packets arrived. A flow monitor records statistics for a flow defined by the group of packets that share these features. Flow records may also contain additional routing-related information, such as the next-hop IP address and the source and destination autonomous system numbers and prefixes. Flow monitoring has less overhead than packet monitoring, but it is coarser: because a flow monitor cannot see individual packets, it is impossible for the monitor to surface some types of information, such as packet timing.

In addition to grouping packets into flows based on common data elements, packets are typically grouped into a flow only if they occur close together in time. For example, if packets that share a common set of header fields do not appear during a particular time interval - say, 15 or 30 seconds - the router simply declares the flow to be over and sends its statistics to the monitor.

Sometimes, to reduce monitoring overhead, flow-level monitoring is accompanied by sampling, which builds flow statistics based only on a subset of the packets. For example, flows may be created based on one out of every ten or one hundred packets, or a packet might be sampled with a particular probability, with flow statistics based on the packets sampled randomly from the total set.
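To see how the flow abstraction and the timeout rule fit together, here is a minimal aggregation sketch. The five-tuple key, the record fields, and the 30-second idle timeout are illustrative choices; this is not a real NetFlow implementation.

```python
TIMEOUT_S = 30    # idle time after which a flow is declared over
flows = {}        # active flows: five-tuple key -> statistics record

def observe(ts, src_ip, dst_ip, src_port, dst_port, proto, size):
    """Fold one observed packet into its flow's record."""
    key = (src_ip, dst_ip, src_port, dst_port, proto)
    rec = flows.setdefault(key, {"packets": 0, "bytes": 0, "first": ts})
    rec["packets"] += 1
    rec["bytes"] += size
    rec["last"] = ts

def expire(now):
    """Export and remove flows that have been idle longer than TIMEOUT_S."""
    idle = [k for k, r in flows.items() if now - r["last"] > TIMEOUT_S]
    return [(k, flows.pop(k)) for k in idle]

observe(0.0, "10.0.0.1", "10.0.0.2", 5000, 80, "tcp", 1500)
observe(1.0, "10.0.0.1", "10.0.0.2", 5000, 80, "tcp", 1500)
print(expire(now=40.0))   # one exported record: 2 packets, 3000 bytes
```

Sampled flow monitoring would simply call observe on only a subset of packets (say, one in a hundred) and scale the resulting statistics accordingly.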
By Michael S. Luehlfing, Cynthia M. Daily, Thomas J. Phillips, Jr., and L. Murphy Smith

System security measures should be a primary focus in developing a new accounting information system (AIS). The viability of security measures depends upon informed and constant monitoring of the system; unfortunately, such monitoring is often neglected. Key components of systems security include passwords, firewalls, data encryption, employee participation, and protection from computer viruses. Generally speaking, systems security involves risk assessments and countermeasure implementations to ensure that such systems will operate, function correctly, and be safe from attack by internal and external adversaries. Central to proper systems security is that all stakeholders must understand the general need for security and the specific potential threats faced by the organization.

In January 2000, the AICPA published its annual list of top 10 technology issues. The systems security topics of information security and controls, disaster recovery, and high availability and resiliency of systems all made the top five.

Barriers to developing systems security are both financial and philosophical. Systems security is often viewed in a manner similar to physical security: Buy it once and use it forever. Unfortunately, as with physical security, obsolete policies, procedures, and technologies leave systems extremely vulnerable to external and internal attacks. Most stakeholders find it difficult to accept the need for constant spending on systems security when it is difficult to quantify the benefits. Even when benefits can be quantified, unenlightened stakeholders may still question the need for continuous spending in the systems security area. In many cases, education can overcome this philosophical barrier. Unfortunately, often only severe losses from a security breakdown will prompt appropriate, albeit late, action.

Calculation of security benefits. The benefits of systems security can be calculated from a loss exposure perspective. The following process is generally undertaken to quantify loss exposure: First, each system must be identified. Second, each system must be prioritized in terms of sustaining daily operations. Finally, a dollar amount must be calculated for the upper cost limits to the company if a particular system were compromised or even destroyed. Once this is done, the cost of the security system program or upgrade must be compared to the upper cost limits of system failure. (A small illustrative example of this comparison appears below, after the discussion of threats.) Before undertaking such a task, the system’s vulnerabilities in terms of passwords, firewalls, data encryption, and employees must be understood.

Threats to Computer Security

Based on movies and television shows, it might appear that the greatest threat to computer security is intentional sabotage or unauthorized access to data or equipment. For most organizations this is simply not true. There are five basic threats to security: natural disasters, dishonest employees, disgruntled employees, persons outside of the organization, and unintentional errors and omissions. The extent to which each of these threats is actually realized is shown in Exhibit 1. As shown in the exhibit, unintentional errors and omissions cause the great majority of computer security problems. Errors and omissions are particularly prevalent where there is sloppy design, implementation, and operation; if the systems development process is done properly, errors and omissions will be minimized.
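To make the loss-exposure comparison described earlier concrete, here is a small illustrative sketch. The systems, priorities, dollar figures, and budget are entirely hypothetical, and the comparison is deliberately crude; the article prescribes the comparison itself, not these numbers.

```python
# Hypothetical loss-exposure worksheet: identify each system, rank its
# importance to daily operations, and estimate the upper cost limit to the
# company if it were compromised; then compare exposure to security cost.
systems = [
    # (name, priority for daily operations, upper cost limit if lost)
    ("accounts receivable", 1, 750_000),
    ("payroll",             2, 400_000),
    ("marketing archive",   3,  60_000),
]

security_program_cost = 150_000   # proposed spend - also hypothetical

total_exposure = sum(limit for _, _, limit in systems)
print(f"Total loss exposure:   ${total_exposure:,}")
print(f"Security program cost: ${security_program_cost:,}")
print("Justified" if security_program_cost < total_exposure else "Reconsider")
```

A real assessment would weight each exposure by its likelihood, but even this crude comparison frames the decision around the upper cost of failure rather than the sticker price of the safeguard.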
An effective internal control structure is an integral part of any reliable information system. A primary motive for a well-designed set of internal controls is to support the fiscal management capabilities of the firm’s officers and employees. Inadequate internal controls can severely hinder fiscal management and unduly tempt officers and employees to become engaged in questionable activities and accounting practices. Chaotic accounting and fiscal management conditions resulting from inadequate controls create unnecessary conditions of stress, which can impair officers’ and employees’ mental well-being and task effectiveness. Strong controls guard honest officers and employees from suspicion and false accusations.

User authentication is especially important as applications are tied into operating systems and linked around the world. In this regard, inappropriate use of passwords, internally or externally, is a primary risk that systems face. Password control is essential. Employees should be admonished to safeguard passwords. For example, they should not tape passwords inside desk drawers or under keyboards, and they should not use obvious terms such as the name of a spouse or child, or their home address. Furthermore, simple things like randomly putting a number within a password will enhance security.

With respect to the overall organization, access to passwords should be strictly controlled. While organizations typically allow their information technology (IT) personnel access to all employee passwords, this creates an unnecessary opportunity for unauthorized access using another person’s name. Similarly, employees should only have access to areas that are essential to their particular functions. For example, staff accountants do not need access to high-level administrative files. Accordingly, security levels must be articulated for each user or group of users.

Password maintenance requires constant diligence on the part of both the IT and human resources departments. For example, the passwords of employees leaving the organization should be immediately canceled upon notice or before termination, as applicable. Given the sensitivity of certain termination situations, coordination between IT and human resources is critical to successful password maintenance.

Just like industrial spies, hackers can also pose as employees to obtain sensitive information such as passwords. Typically, they make an authentic-looking ID and walk into an office under the guise of a service contractor or government inspector. Hackers then install a “sniffer” (a device plugged into a network jack) that collects passwords as well as user names. Hackers can also acquire passwords by nontechnical means. They often begin by socializing with employees to obtain employee names, departments, or personal information, and then name drop to get user names to support front-end attacks. Armed with this information, a hacker can easily enter most systems.

While the appropriate use and safeguarding of passwords is often associated with the prevention of front-end attacks, firewalls are typically deployed to impede back-end attacks. Generally speaking, a firewall is a combination of hardware and software that controls access between systems. Firewalls are especially critical when linkages exist between the Internet and internal systems. Firewalls are typically designed to allow users connected to internal systems to download information from the Internet, but not vice versa. A firewall is only as good as its weakest link.
In order to exploit holes in a security system, hackers often employ repetitive lurking schemes such as war dialers, which automatically dial a block of phone extensions thought to be connecting an internal network to the Internet. Developing countermeasures for such techniques is a continual process, because hackers are constantly developing new techniques. For example, if hackers find a way to penetrate through a weakness in an application, the vendor will likely write a patch to fix the problem, forcing hackers to find a new way to attack the application. Because patching is a continuous process, monitoring a firewall is as important as implementing it. In this regard, current firewall systems should be grounded in real-time monitoring and response capabilities. Similarly, automated detection systems must be in place in order to alert IT personnel that a security breach is in progress.

Encryption techniques should be employed by any company involved in e-commerce or electronic payment and collection activities. Encryption protects data as it moves between systems by scrambling it in transit. The current minimum encryption level is 128-bit encryption, but it will soon be 256-bit; lower levels (e.g., 64-bit) of encryption can be easily deciphered by today’s computers. Accordingly, encryption complexity must evolve with the computer technology used to crack it. (A rough illustration of this arithmetic appears at the end of this section.)

For all of the fear pertaining to external attacks, internal users are a more likely threat. Although the need for external security measures goes without saying, internal policies and procedures (including monitoring) are at the heart of systems security. Identifying user profiles and understanding user tendencies will often identify unusual situations. Background checks of potential and current employees may also provide insight into unusual situations. Employee security breaches may be categorized as rather innocent, seemingly reckless, or outright criminal, depending upon the circumstances. Unfortunately, there are no foolproof means for preventing the inappropriate use of a system or its contents by an employee. The existence of good internal control policies and procedures is the best defense. Such policies and procedures are necessary in view of the typical defense of untrustworthy employees (i.e., nobody told me it was wrong).

Most programmers or web designers are too busy worrying about functionality and design to worry about system security. As a countermeasure, companies need to obtain assurances from external security specialists. A relatively new service available from CPAs, called WebTrust, addresses some aspects of system security. The AICPA and the Canadian Institute of Chartered Accountants (CICA) jointly introduced CPA WebTrust in September 1997. Websites bearing the WebTrust seal have been deemed trustworthy and reliable by a CPA. Clicking on the seal on a particular website provides access to information about the firm’s business practices, management assurances, and the independent auditors’ report. Exhibit 2 shows the WebTrust seal. Organizations that bear the seal are listed online at www.cpawebtrust.org. Some WebTrust-certified firms prominently display the seal on their homepage (e.g., www.alpinebank.com), while others display it under the security section or on some other page. Some firms use more than one type of assurance service (such as the Better Business Bureau Online and TrustE certifications). For a further discussion of WebTrust and its perception by consumers, see the feature article on page 46.
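Returning to the encryption discussion above, the following sketch illustrates why key length matters so much. The assumed attack rate of one trillion keys per second is an arbitrary figure chosen for scale, not a characterization of any real adversary.

```python
# Brute-force time to exhaust a keyspace, assuming a hypothetical attacker
# testing one trillion keys per second.
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 3600 * 24 * 365

for bits in (64, 128, 256):
    years = (2 ** bits) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit key: ~{years:.2e} years to exhaust")
```

At that rate, a 64-bit keyspace falls in well under a year, while a 128-bit keyspace requires on the order of 10^19 years; the gulf between the two is astronomical, which is the point of the migration to longer keys.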
Another employee-related issue is inappropriate use of the Internet and e-mail systems. For example, downloading or e-mailing pornography or other offensive materials should be expressly forbidden. Pornography is a major concern under the hostile environment definition of sexual harassment. Employers that care about their employees will want to protect them from pornographic materials. The use of e-mail to transmit such materials is also a problem: sooner or later the organization can and will be held responsible and accountable for such transmission.

Numerous news stories have left many computer users confused about the nature of viruses and the damage they can cause. A virus is a computer program that piggybacks on or attaches itself to application programs or other executable system software. There are many different ways that viruses can be transmitted and executed. When a virus is activated, the results also vary: sometimes it erases files, sometimes it just leaves harmless messages. Antiviral techniques include safe user procedures and antivirus software (see Exhibit 3).

The following five steps provide a starting point for developing systems security:

Don’t wait. Appropriate systems security should be developed as early as possible. All information systems user departments should develop their own security and response capabilities in preparation for possible disruptions.

Involve users. Ralph Waldo Emerson once said, “Nothing great was ever achieved without enthusiasm.” People are the most important component of any systems security process. The goal of preventing security breakdowns must be explained to all system users, and top management must support the process with clear policy statements and directives. After a security breakdown occurs, a thorough after-the-fact analysis must be conducted in order to facilitate the development of solutions to prevent such incidents from happening in the future. The focus must be on solving problems, not placing blame.

Friends and enemies. When a system breakdown occurs, it is essential to determine whether it was a software or hardware failure or an intentional disruption by an adversary. This distinction is critical in order to develop appropriate countermeasures. Often an organization can benefit from the assistance of consultants specializing in security measures.

Be prepared. To catch a thief, think like a thief. Organizations can stay alert by identifying potential weaknesses in the accounting system and implementing appropriate countermeasures.

Beware of canned software. Default settings in vendor-supplied software are generally set to “accept” all requests. Security-related settings should be changed to a “deny all” mode and later methodically reset to accept only what is essential. As with all system components, any changes should be carefully documented to assist with future investigations.

A Continuous Process

Systems security may be viewed as a necessary evil, but the key word is “necessary.” Identifying vulnerabilities and taking measures to eliminate them can save an organization from severe losses. As technology changes, so do potential weaknesses. Keeping up with protective measures and how they can be used to eliminate weaknesses is crucial. Websites such as www.securitywatch.com and www.securityfocus.com provide information regarding available products, explanations of various security-related terms, and trends in computer security.
1 out of 7 Americans will struggle with addiction. What does the word addiction mean to you? For most people, when they hear of someone being an “addict”, they think of that person as lacking willpower. If they really wanted to quit, they would just stop, right? Unfortunately, addiction is a deceptive disease that doesn’t have anything to do with willpower. Many people are aware they have a substance abuse problem, yet they don’t know what to do about it.

Fortunately, new studies are helping medical communities better understand addiction. Previous negative stereotypes are being erased and treatment methods are improving. People are learning that addiction is a chronic disorder that usually requires some kind of intervention. Without outside help, it can be more difficult and even impossible for individuals to quit using a substance.

Are you or a loved one suffering from an addiction to Xanax? Learning about Xanax withdrawal can help prepare you for what lies ahead. Read on to learn about how and when you should quit taking Xanax.

How Xanax Works

Xanax withdrawal is easier to understand when you know how Xanax works. The official medical name for Xanax is alprazolam, and it belongs to the benzodiazepine family. While benzodiazepines aren’t opioids, they are a type of tranquilizer and sedative. Because Xanax affects your mood, it’s considered a type of psychoactive drug. Its effects are similar to those of Valium. Doctors usually prescribe benzodiazepines with the intention of the patient using them temporarily. Minor wounds, surgeries, and dental procedures often result in a prescription for benzodiazepines. However, patients can also receive a prescription to take on an as-needed basis for panic disorders.

What Does Xanax Do to Your Brain Chemistry?

Xanax affects your brain and central nervous system with the purpose of helping your body calm down. Xanax is meant to treat anxiety and problems with panic disorders. It works by interacting with your brain’s GABA neurotransmitter. GABA is short for gamma-aminobutyric acid, a chemical your body naturally produces to create sedative effects. If you’re in distress, your brain receives a message: your body tells your brain that you are feeling a negative emotion and need help. Anger, sadness, anxiety, and fear all send signals to your brain that you should try to relax. In an effort to calm you down, your body will release GABA.

After you take Xanax, it enhances your GABA neurotransmitter’s activity. As a result, more of the calming chemical enters your bloodstream. When used for a long time, Xanax can cause mild to severe withdrawal symptoms. If you take high dosages for a long period of time, the withdrawal symptoms can become more intense. After a long period of use, Xanax will no longer be as effective on your body.

Xanax isn’t a bad medication – it’s actually very good at helping people overcome their anxieties. However, many people struggle with its addictive properties. Individuals who have had drug and alcohol problems before are more prone to becoming addicted.

The standard dose for Xanax is usually between .75 and 1.5 milligrams. Directions for dosages will be different for every individual. However, in most cases, doctors will tell patients to only take Xanax when they really need it.

How Long Does It Stay in Your System?

Xanax begins working the moment your body starts to digest it. Within minutes of taking it, you’ll feel the drug affecting your central nervous system.
When taken in high dosages, Xanax can give users a feeling of euphoric relaxation. It’s easy to abuse Xanax because it works quickly and is noticeably effective. Many people with an addiction to Xanax say they use the drug to escape from negative feelings.

You can become addicted to Xanax even if you take the recommended dosage. Your body begins to crave the mental escape and feeling of peace. After taking Xanax for a few weeks, it can start to become habit forming. You’ll notice that you feel the need to take more of the drug and you feel anxious when you’re running low.

The high you feel from high dosages of Xanax only lasts 2-4 hours. During that time the user won’t feel an ounce of pain or worry. However, after the high has worn off, you might feel sluggish and tired.

After prolonged use of Xanax, your body will build a tolerance. Your body will no longer respond to the drug at the same level it did before. Instead of releasing a lot of GABA, your brain will only release a little bit. As GABA production slows down, the user will have to take even higher dosages. You can become dependent on Xanax to regulate your brain’s activity. If you develop a dependence on Xanax, you’ll only be happy when you have it in your system. After developing a dependency, you can start to experience problems with withdrawal symptoms. Next, we’ll explain exactly what withdrawal means.

What Is Xanax Withdrawal?

It is possible to become addicted to a substance that you have a prescription for. Individuals are most likely prescribed Xanax because of a problem that’s affecting the quality of their life. The problem could be anxiety issues, panic disorders, or other feelings of fear and impending doom.

Anxiety and panic disorders can be symptoms of a chemical imbalance. Xanax is helpful but doesn’t fix the chemical imbalance causing the anxiety or panic. Instead, Xanax is more like a band-aid that helps cover up the symptoms with a temporary solution.

Withdrawing from a benzodiazepine like Xanax can be even more dangerous than withdrawing from cocaine. Fear of previous anxiety symptoms resurfacing can add extra stress for the user. The drug also affects your brain’s chemistry and causes your mind to need the drug to function properly.

Xanax withdrawal happens when you abruptly decrease the dosage or stop taking the drug. The previously suppressed chemical imbalances will now begin to resurface, except now your brain expects the presence of Xanax to help regulate its systems. Your brain adapted to using Xanax, and now you’re asking it to function on its own. The previous problems with anxiety are exacerbated by your brain’s inability to fix the problem.

Withdrawal can begin within hours after you stop taking Xanax. Immediately, individuals can start to feel both emotional and physical side effects. After a few days, the symptoms can worsen in intensity.

Physical Symptoms of Withdrawal

There are several physical symptoms of Xanax withdrawal. Here is a list of some of the symptoms you could experience during withdrawal:

- Trouble sleeping
- Blurry vision
- Numbness in fingers
- Lack of appetite
- Sudden weight gain or loss
- Women may have intense menstrual cramps
- Tingling in legs and/or arms
- Tightness in jaw
- Tooth pain
- Heart palpitations
- Muscle spasms

Benzodiazepine withdrawal syndrome is another way to describe these unpleasant symptoms. Everyone will have their own unique experience when quitting Xanax.

Psychological Symptoms of Withdrawal

Along with physical symptoms, there are psychological symptoms too.
Here's a list of the different withdrawal symptoms you could experience:
- Sensitivity to external stimuli (like lights and sounds)
- Suicidal tendencies
- Isolation from loved ones

Your personal brain chemistry can affect the withdrawal experience. A previous chemical imbalance can affect the intensity of the symptoms. If you had major anxiety problems before, the symptoms of Xanax withdrawal could be more intense for you.

Cold Turkey Detox

How you quit will affect your Xanax withdrawal symptoms. Quitting cold turkey means you immediately stop taking Xanax. Abruptly quitting any addictive substance can cause your withdrawal symptoms to intensify. Your body's central nervous system can go into shock when you don't wean off Xanax. Your brain will exhaust itself as it tries to make up for the lack of GABA. In many cases, the symptoms will come in waves. Users may think they are finally free of withdrawal symptoms. Yet after a short while, they find themselves facing another wave of debilitating symptoms. Experiencing withdrawal symptoms after you thought you were in the clear is a form of post-acute withdrawal syndrome (PAWS). The syndrome causes people to suffer through emotional and physical pain even though the drug is no longer in their system. If you experience PAWS, you may have the following issues:
- Problems remembering things
- Loss of appetite
- Problems paying attention
- Low energy levels

Benzodiazepine withdrawal has been the cause of a number of fatalities. In most of these cases, the individual didn't die from an overdose of Xanax but rather from quitting abruptly. One case involves a female user who was taking high dosages of Xanax. The woman took approximately 200 mg of Xanax over the course of 6 days. After she ran out, she quit taking the drug altogether. Four days later she went to the hospital with a high temperature, hypertension, and seizures. About 15 hours after entering the hospital, the woman passed away. Sadly, she probably could've survived if she hadn't tried to quit alone. A safer way to quit Xanax is to taper off and slowly lower your dosages. Quitting without medical supervision is dangerous and can be deadly.

Slowly Quitting Xanax

The safest method for detox is to taper off under medical supervision. You might be wondering how long Xanax withdrawal lasts. The answer will vary from person to person. Depending on how long you've been using Xanax, you might need extra time to detox. Having medical professionals guide the process will help protect you from life-threatening situations. You can avoid problems with psychosis and seizures when you slowly wean off the drug. While you go through medical detox, the medical staff will look out for your safety and well-being. To help you taper off Xanax, they will slowly lower your dosage over time. How much they lower your dosage will depend on how much you were taking previously. The recovery process could take up to 8 weeks, or in some cases even longer. Your physician will lower the dosage a little more every week. Your friends and family members can be a great support system while you're withdrawing from Xanax. However, they shouldn't be your only source of support. You should always have medical assistance to detox successfully and safely. While family members may mean well, they could accidentally make things worse for you. Certain family members might try to use a tough-love approach while other members may hover and over-focus on your needs.
The stress caused by relying on family for detox can actually end up prolonging your withdrawal symptoms.

Finding a Treatment Center

You or your loved ones don't have to go through Xanax withdrawal alone. Addiction treatment services can guide you to the help you need. Recovery becomes possible the moment you or a loved one acknowledges the need for treatment. Addiction Treatment Services helps provide families with the answers they need to get treatment. Our goal is to simplify things for you by providing guidance. Our team of experts works within the industry and can help you understand the different treatment processes and options. We can also help you understand how insurance plays into entering a treatment program. After you feel comfortable with how treatment works, we can begin to make referrals. Our team can recommend professional intervention services. We can also refer you to the best addiction treatment centers. Let us guide you and your family to the best help possible. Contact us today to schedule an intervention, ask questions, or request more information.
In Greek mythology, Cronus, Cronos, or Kronos (from Greek: Κρόνος, Krónos) was the leader and youngest of the first generation of Titans, the divine descendants of Uranus, the sky, and Gaia, the earth. He overthrew his father and ruled during the mythological Golden Age, until he was overthrown by his own son Zeus and imprisoned in Tartarus. According to Plato, the deities Phorcys, Cronus, and Rhea were the eldest children of Oceanus and Tethys. Cronus was usually depicted with a harpe, scythe, or sickle, which was the instrument he used to castrate and depose Uranus, his father. In Athens, on the twelfth day of the Attic month of Hekatombaion, a festival called Kronia was held in honour of Cronus to celebrate the harvest, suggesting that, as a result of his association with the virtuous Golden Age, Cronus continued to preside as a patron of the harvest. Cronus was also identified in classical antiquity with the Roman deity Saturn.

During antiquity, Cronus was occasionally interpreted as Chronos, the personification of time. The Roman philosopher Cicero (1st century BCE) elaborated on this by saying that the Greek name Cronus is synonymous with chronos (time), since he maintains the course and cycles of seasons and the periods of time, whereas the Latin name Saturn denotes that he is saturated with years, since he was devouring his sons, which implies that time devours the ages and gorges itself on them. The Greek historian and biographer Plutarch (1st century CE) asserted that the Greeks believed Cronus was an allegorical name for χρόνος (time). The philosopher Plato (4th century BCE) in his Cratylus gives two possible interpretations for the name of Cronus. The first is that his name denotes "κόρος" (koros), the pure (καθαρόν) and unblemished (ἀκήρατον) nature of his mind. The second is that Rhea and Cronus were given names of streams (Rhea – ῥοή (rhoē) and Cronus – Χρόνος (chronos)). Proclus (5th century CE), the Neoplatonist philosopher, makes in his Commentary on Plato's Cratylus an extensive analysis of Cronus; among other things he says that the "One cause" of all things is "Chronos" (time), which is also equivocal to Cronus. In addition to the name, the story of Cronus eating his children was also interpreted as an allegory for a specific aspect of time held within Cronus' sphere of influence. As the theory went, Cronus represented the destructive ravages of time which devoured all things, a concept that was illustrated when the Titan king ate the Olympian gods — the past consuming the future, the older generation suppressing the next generation.

In an ancient myth recorded by Hesiod's Theogony, Cronus envied the power of his father, the ruler of the universe, Uranus. Uranus drew the enmity of Cronus's mother, Gaia, when he hid the gigantic youngest children of Gaia, the hundred-handed Hecatonchires and one-eyed Cyclopes, in Tartarus, so that they would not see the light. Gaia created a great stone sickle and gathered together Cronus and his brothers to persuade them to castrate Uranus. During the Renaissance, the identification of Cronus and Chronos gave rise to "Father Time" wielding the harvesting scythe.

H. J. Rose in 1928 observed that attempts to give "Κρόνος" a Greek etymology had failed. More recently, Janda (2010) offers a genuinely Indo-European etymology of "the cutter", from the root *(s)ker- "to cut" (Greek κείρω (keirō), cf. English shear), motivated by Cronus's characteristic act of "cutting the sky" (or the genitals of anthropomorphic Uranus).
The Indo-Iranian reflex of the root is kar, generally meaning "to make, create" (whence karma), but Janda argues that the original meaning "to cut" in a cosmogonic sense is still preserved in some verses of the Rigveda pertaining to Indra's heroic "cutting", like that of Cronus, resulting in creation: RV 6.47.4, varṣmāṇaṃ divo akṛṇod, "he cut [> created] the loftiness of the sky." This may point to an older Indo-European mytheme reconstructed as *(s)kert wersmn diwos, "by means of a cut he created the loftiness of the sky". The myth of Cronus castrating Uranus parallels the Song of Kumarbi, where Anu (the heavens) is castrated by Kumarbi. In the Song of Ullikummi, Teshub uses the "sickle with which heaven and earth had once been separated" to defeat the monster Ullikummi, establishing that the "castration" of the heavens by means of a sickle was part of a creation myth, in origin a cut creating an opening or gap between heaven (imagined as a dome of stone) and earth, enabling the beginning of time (chronos) and human history.

A theory debated in the 19th century, and sometimes still offered somewhat apologetically, holds that Κρόνος is related to "horned", assuming a Semitic derivation from qrn. Andrew Lang's objection, that Cronus was never represented horned in Hellenic art, was addressed by Robert Brown, arguing that, in Semitic usage, as in the Hebrew Bible, qeren was a signifier of "power". When Greek writers encountered the Semitic deity El, they rendered his name as Cronus. Robert Graves remarks that "cronos probably means 'crow', like the Latin cornix and the Greek corōne", noting that Cronus was depicted with a crow, as were the deities Apollo, Asclepius, Saturn and Bran.

After dispatching Uranus, Cronus re-imprisoned the Hecatonchires and the Cyclopes and set the dragon Campe to guard them. He and his sister Rhea took the throne of the world as king and queen. The period in which Cronus ruled was called the Golden Age, as the people of the time had no need for laws or rules; everyone did the right thing, and immorality was absent. Cronus learned from Gaia and Uranus that he was destined to be overcome by his own sons, just as he had overthrown his father. As a result, although he sired the gods Demeter, Hestia, Hera, Hades and Poseidon by Rhea, he devoured them all as soon as they were born to prevent the prophecy. When the sixth child, Zeus, was born, Rhea sought Gaia to devise a plan to save them and to eventually get retribution on Cronus for his acts against his father and children. Rhea kept Zeus hidden in a cave on Mount Ida, Crete. According to some versions of the story, he was then raised by a goat named Amalthea, while a company of Kouretes, armored male dancers, shouted and clapped their hands to make enough noise to mask the baby's cries from Cronus. Other versions of the myth have Zeus raised by the nymph Adamanthea, who hid Zeus by dangling him by a rope from a tree so that he was suspended between the earth, the sea, and the sky, all of which were ruled by his father, Cronus. Still other versions of the tale say that Zeus was raised by his grandmother, Gaia. Once he had grown up, Zeus used an emetic given to him by Gaia to force Cronus to disgorge the contents of his stomach in reverse order: first the stone, which was set down at Pytho under the glens of Mount Parnassus to be a sign to mortal men, and then his two brothers and three sisters.
In other versions of the tale, Metis gave Cronus an emetic to force him to disgorge the children, or Zeus cut Cronus's stomach open. After freeing his siblings, Zeus released the Hecatonchires and the Cyclopes, who forged for him his thunderbolts, Poseidon's trident and Hades' helmet of darkness. In a vast war called the Titanomachy, Zeus and his brothers and sisters, with the help of the Hecatonchires and Cyclopes, overthrew Cronus and the other Titans. Afterwards, many of the Titans were confined in Tartarus. However, Oceanus, Helios, Atlas, Prometheus, Epimetheus and Menoetius were not imprisoned following the Titanomachy. Gaia bore the monster Typhon to claim revenge for the imprisoned Titans.

Accounts of the fate of Cronus after the Titanomachy differ. In Homeric and other texts he is imprisoned with the other Titans in Tartarus. In Orphic poems, he is imprisoned for eternity in the cave of Nyx. Pindar describes his release from Tartarus, where he is made King of Elysium by Zeus. In another version, the Titans released the Cyclopes from Tartarus, and Cronus was awarded the kingship among them, beginning a Golden Age. In Virgil's Aeneid, it is Latium to which Saturn (Cronus) escapes and ascends as king and lawgiver, following his defeat by his son Jupiter (Zeus). In one other account, referred to by Robert Graves, who claims to be following the Byzantine mythographer Tzetzes, Cronus was castrated by his son Zeus, just as he had done to his own father Uranus before. However, the subject of a son castrating his own father, or castration in general, was so repudiated by the Greek mythographers of that time that they suppressed it from their accounts until the Christian era (when Tzetzes wrote).

Libyan account by Diodorus Siculus

In a Libyan account related by Diodorus Siculus (Book 3), Uranus and Titaea were the parents of Cronus and Rhea and the other Titans. Ammon, a king of Libya, married Rhea (3.18.1). However, Rhea abandoned Ammon and married her brother Cronus. With Rhea's incitement, Cronus and the other Titans made war upon Ammon, who fled to Crete (3.71.1-2). Cronus ruled harshly, but was in turn defeated by Ammon's son Dionysus (3.71.3-3.73), who appointed Cronus' and Rhea's son, Zeus, as king of Egypt (3.73.4). Dionysus and Zeus then joined their forces to defeat the remaining Titans in Crete, and on the death of Dionysus, Zeus inherited all the kingdoms, becoming lord of the world (3.73.7-8).

Cronus is mentioned in the Sibylline Oracles, particularly in book three, which makes Cronus, 'Titan' and Iapetus, the three sons of Uranus and Gaia, each receive a third division of the Earth, with Cronus made king over all. After the death of Uranus, Titan's sons attempt to destroy Cronus's and Rhea's male offspring as soon as they are born, but at Dodona, Rhea secretly bears her sons Zeus, Poseidon and Hades and sends them to Phrygia to be raised in the care of three Cretans. Upon learning this, sixty of Titan's men imprison Cronus and Rhea, causing the sons of Cronus to declare and fight the first of all wars against them. This account mentions nothing about Cronus either killing his father or attempting to kill any of his children.

El, the Phoenician Cronus

When Hellenes encountered Phoenicians and, later, Hebrews, they identified the Semitic El, by interpretatio graeca, with Cronus. The association was recorded c. AD 100 by Philo of Byblos' Phoenician history, as reported in Eusebius' Præparatio Evangelica I.10.16.
Philo's account, ascribed by Eusebius to the semi-legendary pre-Trojan War Phoenician historian Sanchuniathon, indicates that Cronus was originally a Canaanite ruler who founded Byblos and was subsequently deified. This version gives his alternate name as Elus or Ilus, and states that in the 32nd year of his reign, he emasculated, slew and deified his father Epigeius or Autochthon, "whom they afterwards called Uranus". It further states that after ships were invented, Cronus, visiting the 'inhabitable world', bequeathed Attica to his own daughter Athena, and Egypt to Taautus, the son of Misor and inventor of writing.

Roman mythology and later culture

While the Greeks considered Cronus a cruel and tempestuous force of chaos and disorder, believing the Olympian gods had brought an era of peace and order by seizing power from the crude and malicious Titans, the Romans took a more positive and innocuous view of the deity, conflating their indigenous deity Saturn with Cronus. Consequently, while the Greeks considered Cronus merely an intermediary stage between Uranus and Zeus, he was a larger aspect of Roman religion. The Saturnalia was a festival dedicated in his honour, and at least one temple to Saturn already existed in the archaic Roman Kingdom. His association with the "Saturnian" Golden Age eventually caused him to become the god of "time", i.e., calendars, seasons, and harvests—though not yet confused with Chronos, the unrelated embodiment of time in general. Nevertheless, among Hellenistic scholars in Alexandria and during the Renaissance, Cronus was conflated with the name of Chronos, the personification of "Father Time", wielding the harvesting scythe. As a result of Cronus's importance to the Romans, his Roman variant, Saturn, has had a large influence on Western culture. The seventh day of the Judaeo-Christian week is called in Latin Dies Saturni ("Day of Saturn"), which in turn was adapted and became the source of the English word Saturday. In astronomy, the planet Saturn is named after the Roman deity. It is the outermost of the classical planets (those that are visible with the naked eye).

Tree of Life Attributions

Israel Regardie, in his book A Garden of Pomegranates, argues in favor of adding the Greek Kronos, the god of time, to the list of correspondences for Binah (see Israel Regardie, A Garden of Pomegranates, p. 43). The attribution of Kronos to Kether was made by Gareth Knight in his book A Practical Guide to Qabalistic Symbolism, where he explains that this is a partial correspondence: the only reason Cronos can be considered a Kether figure is that he devoured his children, which corresponds with Kether because Kether will finally indraw all that has been created through it. The other reason this is a partial attribution is that Cronos belongs to the second divine dynasty of the Greeks; although the above attribution is valid for anyone who cares to make it so, Cronos has reference to a much later stage of manifestation. He was one of the Titans, who can be considered human memories of a pre-human race. They took part in the Greek version of the War in Heaven which appears in so many mythologies, including the Bible. Cronos in turn was overthrown by Zeus, who, with the other Olympians, was the main manifestation of God to the Greeks.
In the Orphic cosmogony, Cronos is an entirely different concept, being called the First Principle, Time, from which came Chaos, the infinite, and Ether, the finite. Chaos was surrounded by Night, and in the darkness an egg was formed, of which Night formed the shell. The centre of the egg was Phanes, Light, creator, in conjunction with Night, of heaven and earth and also of Zeus. This creation fantasy can be considered a résumé of the concretion of Kether. The distinctions of Time Infinite and Finite, Light and Darkness, are philosophical abstractions which demonstrate this conception to be a metaphysical structure rather than genuine primitive myth. These writings were attributed to Orpheus, whose original teachings were probably of Eastern origin, though it was Dionysos who became the supreme god of Orphism. (Gareth Knight, A Practical Guide to Qabalistic Symbolism, pp. 74-75)
The Universe as Quantum Computer

Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge MA 02139 USA

December 17, 2013

This article reviews the history of digital computation, and investigates just how far the concept of computation can be taken. In particular, I address the question of whether the universe itself is in fact a giant computer, and if so, just what kind of computer it is. I will show that the universe can be regarded as a giant quantum computer. The quantum computational model of the universe explains a variety of observed phenomena not encompassed by the ordinary laws of physics. In particular, the model shows that the quantum computational universe automatically gives rise to a mix of randomness and order, and to both simple and complex systems.

The computing universe

The physical universe bears little resemblance to the collection of wires, transistors, and electrical circuitry that make up a conventional digital computer. How, then, can one claim that the universe is a computer? The answer lies in the definition of computation, of which Turing was the primary developer. According to Turing, a universal digital computer is a system that can be programmed to perform any desired sequence of logical operations. Turing's invention of the universal Turing machine makes this notion precise. The question of whether the universe is itself a universal digital computer can be broken down into two parts: (I) Does the universe compute? and (II) Does the universe do nothing more than compute? More precisely, (I) Is the universe capable of performing universal digital computation in the sense of Turing? That is, can the universe or some part of it be programmed to simulate a universal Turing machine? (II) Can a universal Turing machine efficiently simulate the dynamics of the universe itself?

At first the answers to these questions might appear, straightforwardly, to be Yes. When we construct electronic digital computers, we are effectively programming some piece of the universe to behave like a universal digital computer, capable of simulating a universal Turing machine. Similarly, the Church-Turing hypothesis implies that any effectively calculable physical dynamics – including the known laws of physics, and any laws that may be discovered in the future – can be computed using a digital computer. But the straightforward answers are not correct. First, to simulate a universal Turing machine requires a potentially infinite supply of memory space. In Turing's original formulation, when a Turing machine reaches the end of its tape, new blank squares can always be added: the tape is 'indefinitely extendable'. Whether the universe that we inhabit provides us with indefinitely extendable memory is an open question of quantum cosmology, and will be discussed further below. So a more accurate answer to the first question is 'Maybe.' The question of whether or not infinite memory space is available is not so serious, as one can formulate notions of universal computation with limited memory. After all, we treat our existing electronic computers as universal machines even though they have finite memory (until, of course, we run out of disc space!). The fact that we possess computers is strong empirical evidence that the laws of physics support universal digital computation. The straightforward answer to question (II) is more doubtful.
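To make Turing's definition concrete, here is a minimal sketch of a Turing machine in Python (my illustration, not anything from the article). Storing the tape in a dictionary means new blank squares appear on demand, mirroring the 'indefinitely extendable' tape described above; the binary-increment rule table is a toy example.

```python
# A minimal Turing machine with an indefinitely extendable tape (a sketch).
# The tape is a dict, so new blank squares appear on demand.

def run_turing_machine(rules, tape, state, head):
    """rules: (state, symbol) -> (new_symbol, move, new_state); move is -1, 0, or +1."""
    while state != 'halt':
        symbol = tape.get(head, '_')            # '_' is the blank symbol
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return tape

# Example: increment a binary number, least significant bit under the head.
# Carry 1s to 0s leftward until a 0 or a blank square absorbs the carry.
rules = {
    ('inc', '1'): ('0', -1, 'inc'),   # 1 + carry = 0, carry moves left
    ('inc', '0'): ('1',  0, 'halt'),  # 0 + carry = 1, done
    ('inc', '_'): ('1',  0, 'halt'),  # extend the tape with a new leading 1
}

tape = {0: '1', 1: '1', 2: '1'}        # binary 111 = seven
result = run_turing_machine(rules, tape, 'inc', head=2)
print(''.join(result[i] for i in sorted(result)))  # -> 1000 (binary) = eight
```

Started on the tape 111 (seven in binary), the machine halts with 1000 (eight), having extended the tape by one square to the left: exactly the kind of on-demand memory growth at issue in question (I).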
Although the outcomes of any calculable laws of physics can almost certainly be simulated on a universal Turing machine, it is an open question whether this simulation can be performed efficiently, in the sense that a relatively small amount of computational resources is devoted to simulating what happens in a small volume of space and time. The current theory of computational complexity suggests that the answer to the second question is 'Probably not.'

An even more ambitious programme for the computational theory of the universe is the question of architecture. The observed universe possesses the feature that the laws of physics are local – they involve only interactions between neighboring regions of space and time. Moreover, these laws are homogeneous and isotropic, in that they appear to take the same form in all observed regions of space and time. The computational version of a homogeneous system with local laws is a cellular automaton, a digital system consisting of cells in a regular array. Each cell possesses a finite number of possible states, and is updated as a function of its own state and those of its neighbors. Cellular automata were proposed by von Neumann and by the mathematician Stanislaw Ulam in the 1940s, and used by them to investigate mechanisms of self-reproduction. Von Neumann and Ulam showed that cellular automata were capable of universal computation in the sense of Turing. In the 1960s, Konrad Zuse and computer scientist Edward Fredkin proposed that cellular automata could be used as the basis for the laws of physics – i.e., the universe is nothing more or less than a giant cellular automaton. More recently, this idea was promulgated by Stephen Wolfram.

The idea that the universe is a giant cellular automaton is the strong version of the statement that the universe is a computer. That is, not only does the universe compute, and only compute, but also if one looks at the 'guts' of the universe – the structure of matter at its smallest scale – then those guts consist of nothing more than bits undergoing local, digital operations. The strong version of the statement that the universe is a computer can be phrased as the question, (III) 'Is the universe a cellular automaton?' As will now be seen, the answer to this question is No. In particular, basic facts about quantum mechanics prevent the local dynamics of the universe from being reproduced by a finite, local, classical, digital dynamics.

Quantum mechanics is the physical theory that describes how systems behave at their most fundamental scales. It was studying von Neumann's book The Mathematical Foundations of Quantum Mechanics that inspired Turing to work on mathematics. (In particular, Turing was interested in reconciling questions of determinism and free will with the apparently indeterministic nature of quantum mechanics.) Quantum mechanics is well known for exhibiting strange, counter-intuitive features. Chief amongst these features is the phenomenon known as entanglement, which Einstein termed 'spooky action at a distance' (spukhafte Fernwirkung). In fact, entanglement does not engender non-locality in the sense of non-local interactions or superluminal communication. However, a variety of theorems from von Neumann to Bell and beyond show that the types of correlations implicit in entanglement cannot be described by classical local models involving hidden variables. In particular, such quantum correlations cannot be reproduced by local classical digital models such as cellular automata.
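The classical cellular automaton model described above, the one ruled out as a complete picture by entanglement, is easy to exhibit in code. The following sketch (my example, not the article's) implements an elementary cellular automaton: a ring of cells, each holding 0 or 1, all updated in parallel by one homogeneous, local rule. Rule 110 is chosen because it is itself known to be capable of universal computation.

```python
# An elementary cellular automaton (a sketch): each cell is updated from its
# own state and its two neighbors, by the same rule everywhere on the ring.
RULE = 110  # 8-bit lookup table: one output bit per 3-cell neighborhood

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1]          # one live cell on a 41-cell ring
for _ in range(20):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```

Running it prints the familiar intricate triangle of rule 110: a local, homogeneous, classical digital dynamics of exactly the kind the question (III) asks about.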
Accordingly, the answer to question (III), is the universe a cellular automaton, is 'No.' The inability of classical digital systems to cope with entanglement also seems to prevent ordinary computers from simulating quantum systems efficiently. Merely to represent the state of a quantum system with N subsystems, e.g., N nuclear spins, requires O(2^N) bits on a classical computer. To represent how that state evolves requires the exponentiation of a 2^N by 2^N matrix. Although it is conceivable that exponential compression techniques could be found that would allow a classical computer to simulate a generic quantum system efficiently, none are known. So the currently accepted answer to question (II), can a Turing machine simulate a quantum system efficiently, is 'Probably not.'

The difficulty that classical computers have reproducing quantum effects makes it difficult to sustain the idea that the universe might at bottom be a classical computer. Quantum computers, by definition, are good at reproducing quantum effects, however. Let's investigate the question of whether the universe might be, at bottom, a quantum computer. A quantum computer is a computer that uses quantum effects such as superposition and entanglement to perform computations in ways that classical computers cannot. Quantum computers were proposed by Paul Benioff in 1980. The notion of a quantum Turing machine that used quantum superposition to perform computations in a novel way was proposed by David Deutsch in 1985. For a decade or so, quantum computation remained something of a curiosity. In 1994, Peter Shor showed that quantum computers could factor large numbers efficiently, posing a potential threat to modern cryptosystems. The previous year, Lloyd had showed how quantum computers could be constructed by applying electromagnetic pulses to arrays of coupled quantum systems. The resulting parallel quantum computer is in effect a quantum cellular automaton. In 1995, Ignacio Cirac and Peter Zoller showed how ion traps could be used to implement quantum computation. Since then, a wide variety of designs for quantum computers have been proposed. Further quantum algorithms have been developed, and prototype quantum computers have been constructed and used to demonstrate simple quantum algorithms. This allows us to begin addressing the question of whether the universe is a quantum computer. If we 'quantize' our three questions, the first one, 'Does the universe allow quantum computation?' has the provisional answer, 'Yes.' As before, the question of whether the universe affords a potentially unlimited supply of quantum bits remains open. Moreover, it is not clear that human beings currently possess the technical ability to build large-scale quantum computers capable of code breaking. However, from the perspective of determining whether the universe supports quantum computation, it is enough that the laws of physics allow it.

Now quantize the second question. (Q2) 'Can a quantum computer efficiently simulate the dynamics of the universe?' Because they operate using the same principles that apply to nature at fundamental scales, quantum computers – though difficult to construct – represent a way of processing information that is closer to the way that nature processes information at the microscale. In 1982, Richard Feynman suggested that quantum devices could function as quantum analog computers to simulate the dynamics of extended quantum systems. In 1996, Lloyd developed a quantum algorithm for implementing such universal quantum simulators.
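To make the O(2^N) overhead quoted above concrete, the short sketch below (mine, not the article's) tallies the classical memory needed merely to store the 2^N complex amplitudes of an N-qubit state:

```python
# Classical memory needed just to store an N-qubit state vector (a sketch):
# 2^N complex amplitudes, before we even multiply by a 2^N x 2^N matrix.
for n in range(5, 31, 5):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * 16          # complex128 = 16 bytes each
    print(f"{n:2d} qubits -> {amplitudes:>13,} amplitudes, "
          f"{bytes_needed / 2**30:10.3f} GiB")
# 30 qubits already need ~16 GiB; around 50 qubits the state vector alone
# outruns any existing classical machine.
```

This doubling per added subsystem is the practical content of the 'Probably not' above.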
The Feynman-Lloyd results show that, unlike classical computers, quantum computers can simulate efficiently any quantum system that evolves by local interactions, including for example the standard model of elementary particles. While no universally accepted theory of quantum gravity currently exists, as long as that theory involves local interactions between quantized variables, it can be efficiently simulated on a quantum computer. So the answer to the quantized question 2 is 'Yes.' There are of course subtleties to how a quantum computer can simulate the known laws of physics. Fermions supply special problems of simulation, which however can be overcome. A short-distance (or high-energy) cutoff in the dynamics is required to ensure that the amount of quantum information required to simulate local dynamics is finite. However, such cutoffs – for example, at the Planck scale – are widely expected to be a fundamental feature of nature. Finally, we can quantize question three: (Q3) 'Is the universe a quantum cellular automaton?' While we cannot unequivocally answer this question in the affirmative, we note that the proofs that show that a quantum computer can simulate any local quantum system efficiently immediately imply that any homogeneous, local quantum dynamics, such as that given by the standard model and (presumably) by quantum gravity, can be directly reproduced by a quantum cellular automaton. Indeed, lattice gauge theories, in Hamiltonian form, map directly onto quantum cellular automata. Accordingly, all current physical observations are consistent with the theory that the universe is indeed a quantum cellular automaton.
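The flavor of the Feynman-Lloyd simulation results can be conveyed in a few lines. A Hamiltonian made of local terms, H = H1 + H2, is simulated by alternating short evolutions under each term, with an error that shrinks as the steps get finer. The two-qubit example below is my own minimal illustration of this product-formula idea, not the construction in the literature; it assumes NumPy and SciPy are available.

```python
# Product-formula (Trotter) simulation of H = H1 + H2 (a sketch):
# exp(-iHt) is approximated by (exp(-iH1 t/n) exp(-iH2 t/n))^n.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

# Two non-commuting local terms on two qubits.
H1, H2 = np.kron(X, X), np.kron(Z, I)
t, n = 1.0, 100

exact = expm(-1j * (H1 + H2) * t)
step = expm(-1j * H1 * t / n) @ expm(-1j * H2 * t / n)
trotter = np.linalg.matrix_power(step, n)

print(np.linalg.norm(exact - trotter))   # small, shrinking roughly like 1/n
```

Because each short evolution acts only on a few neighboring variables, a quantum computer can perform it directly, which is why local quantum dynamics costs it only modest resources where a classical machine pays the exponential price sketched earlier.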
To understand why the quantum computational model necessarily gives rise to complexity, consider the old story of monkeys typing on typewriters. The original version of this story was proposed by the French probabilist E?mile Borel, at the beginning of the twentieth century (for a detailed account of the history of typing monkeys see). Borel imagined a million typing monkeys (singes dactylographes) and pointed out that over the course of single year, the monkeys had a finite chance of producing all the texts in all the libraries in the world. He then immediately noted that with very high probability, they would would produce nothing but gibberish. Consider, by contrast, the same monkeys typing into computers. Rather than regarding the monkeys random scripts as mere texts, the computers in- terpret them as programs, sets of instructions to perform logical operations. At first it might seem that the computers would also produce mere gibberish ‘garbage in, garbage out,’ as the programmer’s maxim goes. While it is true that many of the programs might result in garbage or error messages, it can be shown mathematically that the monkeys have a relatively high chance of producing complex, ordered structures. The reason is that many complex, ordered structures can be produced from short computer programs, albeit after lengthy calculations. Some short program will instruct the computer to calculate the digits of ?, for example, while another will cause it to produce intricate fractals. Another will instruct the computer to evaluate the consequences of the standard model of elementary particles, interacting with gravity, starting from the big bang. A particularly brief program instructs the computer to prove all possible theorems. Moreover, the shortest programs to produce these complex structures are necessarily random. If they were not, then there would be an even shorter program that could produce the same structure. So the monkeys, by generating random programs, are producing exactly the right conditions to generate structures of arbitrarily great complexity. For this argument to apply to the universe itself, two ingredients are necessary - first, a computer, and second, monkeys. But as shown above, the universe itself is indistinguishable from a quantum computer. In addition, quantum fluctuations – e.g., primordial fluctuations in energy density – automatically provide the random bits that are necessary to seed the quantum computer with a random program. That is, quantum fluctuations are the monkeys that program the quantum computer that is the universe. Such a quantum computing universe necessarily generates complex, ordered structures with high probability.
The Kingsnake Trail is an isolated trail that connects to the park's main trail system after 3.6 miles. The parking lot and trailhead also serve as the park's primary canoe/kayak access point to Cedar Creek, and almost all visitors to this part of the park are here for paddling. Though the trail does not loop, the initial 1.2-mile portion leading almost directly south into the floodplain makes a pleasant out-and-back walk.

The trail follows a logging road built in the 1970s. The initiation of logging after a hiatus dating from the early 1900s reignited a movement started by conservationist Harry Hampton to preserve the old-growth forest along the Congaree floodplain. The most intense logging in this part of the park took place south of where you will travel today. Though you will be walking through woods that were selectively cut, many mature trees remain, and you will actually be walking directly past one of the most impressive hardwoods in the park—a large cherrybark oak on the bank of Tear Pond. Other highlights include a champion-class persimmon tree and a water oak over 18 feet in circumference. The trail crosses Cedar Creek, the only Outstanding National Resource Water in South Carolina, and then returns to Cedar Creek's bank for the trail's final reach.

On the hike, you may see wood ducks startled from some of the sloughs and ponds along the trail, and hear barred owls calling. This can be a good area to find Rusty Blackbird, a winter visitor to the park whose numbers have declined greatly since the mid-1900s. Just off-trail, one of the larger bald cypress in the park can be found, as well as a cattle mound dating from the 19th century, when such structures were constructed by enslaved African-Americans as refuges for livestock during floods. One highlight you are unlikely to come across is an Eastern Kingsnake—though the trail was named for this species when a kingsnake was seen during the layout of the trail, they are primarily nocturnal hunters and infrequently seen.

To reach Kingsnake Trail from the park's Harry Hampton Visitor Center, follow National Park Road (the park's entrance road) back to Old Bluff Road. Take a right on Old Bluff Road until it ends at a T-intersection with South Cedar Creek Road. Turn right on South Cedar Creek Road and follow it south for 1.8 miles until you see the sign for South Cedar Creek Canoe Launch and Kingsnake Trail Access. Continue straight on the gravel road and park. You are ready to begin your hike!

The Kingsnake Trail is generally easy to follow, though a series of ice storms, floods, and tropical storms has downed several large trees and the trail's path can be obscured. These fallen trees can sometimes make it hard to follow the trail; when navigating obstacles, always try to keep trail signs in sight. The park marks the Kingsnake Trail with signs bearing the number "6". These signs are placed on trees approximately every 25-30 yards; some of these signs include GPS coordinates to assist with rescue efforts. The Kingsnake Trail was formerly marked by painted light blue blazes on the trees, and some of these historical marks can still be seen.

Though a number of the sites in this guide are numbered, there will not be any signs for these sites on the trail. Because the park is designated a national wilderness area, signs and other human impacts are limited. Most of the numbered sites correspond to natural or man-made features, so their location should be clear.
For some sites, the numbering on the trail guide map provides only an approximate site location.

Site 1: Parking Lot

The parking lot can be a good location to scan the sky for soaring birds, including Mississippi Kite in the summer. It can also be a good place to look for butterflies, who like to collect mineral salts from the gravel lot. If it's not too hot a day, take a look around before reviewing the kiosk at the trailhead and proceeding down the trail to Cedar Creek and the Iron Bridge. And take advantage of the pit toilet facilities if needed.

Site 2: Iron Bridge (Bridge L)

The first portion of the trail follows a logging road that was cut in the 1970s, sometimes referred to as the New Road. Historical records indicate that the first part of the New Road followed the footprint of a road dating from the late 18th century. Pause at the bridge for a look at Cedar Creek. Cedar Creek enters the park at Bannister Bridge and slowly winds across the floodplain for 15 miles before joining the Congaree River. Cedar Creek is the only Outstanding National Resource Water in the state, recognized for its exceptional natural and recreational value. Canoes and kayaks can be launched on the Cedar Creek Wilderness Trail at two different sites at the park: Bannister Bridge and South Cedar Creek, the landing you see here. The current is slow enough here, and the creek sufficiently wide, that paddlers can readily paddle downstream or upstream and then retrace their route. Contact the park for information about its popular free paddling tours.

Site 3: Second Bridge

This bridge crosses a small slough. Sloughs are often formed from abandoned river or creek channels. Even though no longer directly connected to a water source, sloughs usually hold water year-round except during times of drought. The park has several species of trees uniquely adapted to the wettest habitats in the floodplain, but two of them, water tupelo and bald cypress, dominate the canopy in these low spots. Most of the trees you see here are water tupelo, with large leaves and sinuous trunks. Bald cypress is one of the iconic trees of the Southern wetlands, with its characteristically straight trunk, feathery needles and numerous surrounding "knees".

Site 4: Third Bridge

This bridge crosses a gut connecting Whiskey Pond (so named for the remnants of a whiskey still along its bank) on your right with Long Pond on your left. A "gut" is a local term for a small, short floodplain stream that has clearly defined banks. Guts are distinguished from creeks by being shallower and shorter in length. They are often dry or stagnant during the summer and early fall. Guts seem to wind aimlessly over the floodplain, sometimes connecting different parts of the same creek together, or connecting one oxbow lake to another. Guts have an important role in floods. They transport water from the Congaree River throughout the floodplain in the initial stage of a flood, and then channel water so that it can flow more quickly back to the river and main creeks as the flooding subsides. There is a large bald cypress tree on your right. Bald cypress decays slowly, so much so that lumber from bald cypress has been called "the wood eternal". Historically, bald cypress was highly prized for building, including shingles used in roofing and siding.
The late 19th century and early 20th century logging industry claimed many of the bald cypress trees in the southeast United States, so trees in the Congaree National Park are some of the last remaining old-growth bald cypress trees on U.S. soil.

Site 5: Fifth Bridge

From the third bridge to the fifth bridge, you will be hiking along an area that was selectively logged in the 1970s. Nonetheless, some impressive canopy trees can be found along the trail, dominated by sweetgum, hickory and oak. Starting in the late summer, Longleaf Lobelia are numerous here, and you will sometimes see multiple butterflies, including Lace-winged Roadside Skipper and Zabulon Skipper, nectaring on the same bloom stalk.

Site 6 (optional): Moccasin Pond Bald Cypress

There is an opportunity for a short off-trail excursion that should only be undertaken if you have a working compass (electronic or otherwise) or GPS. You will be no more than a couple hundred yards off trail before retracing your steps, but it is very easy to lose your sense of direction in the park and you should not venture off trail without the help of navigational aids. Do not attempt to reach the tree if water levels are high; even at moderate water levels, you often can approach no closer than fifty yards—close enough for a good look. To find the tree, look for an exposed galvanized culvert pipe on the trail; an arm of Whiskey Pond that reaches all the way to the trail edge will be on your right. And if you look behind you, you will see you have just passed a trail marker with coordinates N 33.8176, W -80.6465. Enter the woods to your left, following a small channel in a westerly direction. Another channel will appear on your right—stay in between these channels and you will soon enter Moccasin Pond—you should notice a low bank 50 yards to your left. If you keep the bank edge 50 to 75 yards to your left, you will reach the cypress in a couple hundred yards—it is unmistakable, with a prominent "foot" and a palisade of cypress knees surrounding it. The purpose of cypress knees is still debated. Some feel they increase the stability of the root system, while a previously rejected theory that they promote gas exchange when the forest is flooded has been supported by recent peer-reviewed scientific research. When done, return west to the trail and resume your trip.

Site 7: Bridge K and Summer Duck Slough

Bridge K crosses a gut that connects Summer Duck Slough and Big Snake Slough. The trail turns sharply to the right (west) after the bridge, while the old logging road continues straight ahead. The logging road is unmaintained and increasingly overgrown in switchcane; it should only be explored accompanied by someone who is familiar with its route. Switchcane, a native bamboolike grass that grows in wet woods, provides critical nesting habitat for some characteristic bottomland bird species, including Swainson's Warbler. Native Americans managed canebrakes with controlled fires to encourage further growth, in part because canebrakes were excellent habitat for game. But canebrakes declined as early settlers used the cane for livestock forage and plowed it under for agriculture, in part since the presence of cane was considered a sign of rich soil. For the next mile, you will be walking along the south bank of Summer Duck Slough, so named for the Wood Duck, colloquially called "Summer Duck" because it is the only waterfowl that commonly breeds in portions of the southeast. The male Wood Duck is one of the most beautiful waterfowl in the US.
It is not unusual to flush Wood Duck, often in pairs, from waterbodies in the park. The Wood Duck will almost always see you before you see them, and you will typically hear the hen's alarm call and catch only a glimpse of the Wood Ducks flying away in the distance. As of 2017, this portion of the trail has multiple fallen trees that need to be negotiated—keep an eye out for trail markers to make sure you find your way safely along the trail. Remember that Summer Duck Slough and its well-defined bank edge is always on your right, and it is safer to find detours to the right rather than to the left.

Site 8: Eighth Bridge

You will begin to encounter pine trees along the trail, which marks the end of your walk along Summer Duck Slough. The eighth bridge (are you still keeping count?) crosses a small gut connecting Big Slough and the northwestern end of Summer Duck Slough. Cooner's Mound is nearby—it is one of seven cattle mounds in the park that are listed in the National Register of Historic Places. This well-shaped rectangular mound was constructed by slaves in the 19th century as a refuge during floods for free-range livestock that foraged in the park. Flooding from the Congaree River can inundate the entire floodplain, and typically occurs in winter and early spring. Though floods usually subside after a few days, livestock herded to mounds would still need to be fed hay delivered by boat. After you cross the eighth bridge, watch trail markers closely; there are a couple of shallow channels that cross the path and it is easy to inadvertently follow these channels rather than the trail.

Site 9: Bridge J

Bridge J crosses Circle Gut, after which the trail turns sharply right. As of 2017, a large log had floated onto the bridge and damaged it, though the bridge is still stable. You will soon see Tear Pond to your left, a large tear-shaped slough whose bank you will follow until you rejoin Cedar Creek. Tear Pond is dominated by Water Tupelo, though some Bald Cypress appear, especially along the pond edge near the trail.

Site 10: Cherrybark Oak

Travelling along Tear Pond, you reach an immense Cherrybark Oak tree, one of the larger hardwoods in the park. This tree is over 22 feet in circumference, though it has suffered some crown damage and does not stand as tall as some other Cherrybark Oak in the park. Cherrybark Oak is one of the more valuable hardwoods found in the park, and its wood was harvested for veneer in the furniture trade.

Site 11: Cedar Creek

Continue along Tear Pond and you will arrive back at Cedar Creek. The trail turns to the left (upstream) and follows the creek for a little more than a half mile before the trail ends at its junction with the Oakridge Trail. As you walk along this portion of the trail, there is an opportunity to see a couple more outstanding trees. Champion trees come in all sizes, including a former co-champion Persimmon near the trail. Persimmon can be recognized by their checkered black bark that looks as though it has been burned. If you carefully scan the woods to your left just as the trail leads away from the bank edge of Cedar Creek (the tree is much easier to see in the winter, when foliage does not obscure your view), you may see an 8-foot circumference Persimmon that was a national co-champion at one time. A tree this size is easily overlooked in the park, but it is truly remarkable to see a Persimmon so large and tall, reaching 110 feet in height.
The champion trees in the park are often unusually tall for their species, though their circumference and crown may not match those of trees growing in more open habitats. After the trail leaves Cedar Creek, you will encounter another area with several large downed trees. Keep an eye out for trail markers, and detour to the right (along Cedar Creek) when possible. After negotiating this section, look to your left for a large Water Oak almost 18 feet in circumference. Water Oak, despite its name, is a generalist, adapted to a wide variety of soils. Other oak species are much more common in the park; Laurel Oak, Swamp Chestnut Oak, Willow Oak and Cherrybark Oak are among the dominant canopy trees in many of the park's varied vegetative communities.

Site 12: Junction with Oakridge Trail

After you reach the junction with the Oakridge Trail, you can retrace your steps 3.6 miles, or continue to the Visitor Center in another 1.9 miles if you have arranged a car shuttle. If you choose to continue to the Visitor Center, follow the Oakridge Trail over Bridge I to the Weston Lake Loop Trail, turn right on the Weston Lake Loop Trail and follow it to the Boardwalk Loop; either direction on the loop trail will take you to the Visitor Center.
Located in the Kotayk Province, roughly 17 miles southeast of the Armenian capital of Yerevan, is the village of Garni. This historical settlement has an ancient history that likely dates back to the construction of its fortress complex in the 3rd century BC, and the area itself has been inhabited since the 3rd millennium BC. With a history so long, it should come as no surprise that its historical sites are some of the top things to do in Garni and Geghard, Armenia.

Garni and its surrounding area are chock-full of fascinating historical sites sure to thrill any history buff. Garni's ancient fortress complex, its churches and shrines, the world-famous Temple of Garni, and the nearby Geghard Monastery can all be easily visited within the same day. Nature-lovers will likely appreciate the beauty of the columnar basalt along the sides of the Garni Gorge. Foodies aren't left out here, as there are ample opportunities to try traditional Armenian cuisine as well. These are the top things to do in Garni and Geghard, Armenia.

Visit Geghard Monastery

Surrounded by the beautiful and imposing cliffs of the Azat River Gorge less than five miles northeast of Garni is Geghard Monastery, also known as Geghardavank. While this UNESCO World Heritage Site is technically located in its namesake town, Geghard, it's close enough to Garni to make it an easy addition to any Garni day trip itinerary. Geghard Monastery is made up of several structures, including caves, jhamatuns (lobby-like entrance halls attached to the west side of medieval Armenian churches), constructed buildings, and more, which are arranged in and adjacent to the cliffside of the gorge.

Though the main chapel at Geghard Monastery was built in 1215, the monastery itself dates back much further, to the 4th century. It was founded by Gregory the Illuminator, the religious leader who converted Armenia from paganism to Christianity. He founded the monastery over a holy spring that was considered sacred even in pre-Christian times. The word "Geghardavank" translates to "Monastery of the Spear," a direct reference to the Spear of Destiny, the artifact that pierced the side of Jesus Christ during the Crucifixion. That spear is thought to have been brought to Geghard Monastery by the Apostle Jude, along with other relics. The spear remained in the monastery from the 13th to 18th centuries but has since been moved to the Etchmiadzin Treasury. The monastery's significance makes it one of the top things to do in Garni and Geghard.

When you first arrive at Geghard Monastery, you'll find street vendors selling rosaries and other religious items. There are also food vendors selling sweets and dried fruits. My favorite was the gata, a fluffy Armenian sweet bread made with flour, honey, and sugar. It makes for a sweet and tasty snack before you explore the monastery.

The Upper Jhamatun

There are lots of different structures that make up the monastery. One of the most fascinating is a 13th-century noble family tomb located inside the mountain. This beautiful mausoleum, called the Upper Jhamatun, can be reached by following a cave on the monastery's second level whose walls are etched with carvings. The tomb was carved from the top down, so everything inside, including its stunning pillars, walls, and floor, was hewn from one massive piece of rock. On its walls are Armenian historical records that were documented during the 13th century so they wouldn't be lost during the frequent Mongol attacks of the time period.
Records show that the tomb was completed in the year 1288. Interred in the tomb are the Armenian princes Merik and Grigor, though others were also buried there once. In addition to serving as their final resting place, the tomb is also used as a music school. I could see why after witnessing a beautiful song performed by a group of singers there. The acoustics in the tomb are extraordinary, and their performance gave me chills! Watching them sing was a privilege and one of my favorite things to do in Garni and Geghard.

One of the monastery's more well-known locations is Avazan Church, which was hewn from an ancient cave in the 13th century. The cave in which this rock-cut church was carved is home to a spring that has been a place of worship since Armenia's pagan days. The church is cruciform, or cross-shaped, and was carved by the architect Galdzak, who also carved the other jhamatuns and rock-cut churches over the course of 40 years. It's a little dark inside, with light streaming down from a hole above. But after your eyes adjust, you can see the detailing in the wall carvings and columns. It was unlike anything I had ever seen. The design was breathtaking. In the back of the Avazan Church is the famous holy spring. It comes straight out of one of the rock walls and trickles down a groove that leads elsewhere. Dip your hands in near the source and take a sip. The water is cold and refreshing! Drinking the water from the holy spring should be high on the list of things to do in Garni and Geghard, Armenia!

Rock-Cut Church and Katoghike Chapel

To the left of the chamber with the holy water stream is another rock-carved church. This church was carved from a single piece of rock and features a cupola, as well as more intricate carvings on its walls. I could make out crosses, trees of life, and pomegranates, which I learned are a sign of fertility. Just outside the rock-cut church is Geghard Monastery's main church, the Katoghike Chapel. Built in 1215 right against the mountainside, this beautiful church was constructed with specially made corners to protect the building from seismic activity. Inside, you'll find vibrant paintings of angels and Jesus Christ on the walls. Outside, on the southern side of the chapel, are more intricate carvings of pomegranate trees, their leaves intertwining with grapes. Seeing it is one of the most beautiful things to do in Garni and Geghard, Armenia!

Check out the 5 Reasons Why You Must Visit Etchmiadzin, Armenia

See the Temple of Garni

Once you've finished up at the gorgeous Geghard Monastery, head on down to the village of Garni. There, you'll find the magnificent Temple of Garni, a pre-Christian pagan temple dedicated to the Armenian sun god, Mihr. It's the only free-standing colonnaded Greco-Roman structure in all of the former Soviet Union. The Temple of Garni is also considered a symbol of Armenia's classical past.

The History of the Temple of Garni

Though the Temple of Garni is a celebrated landmark, the structure that stands on the site today is not the original temple. The original was likely built by King Tiridates I in the 1st century AD. It was built without the use of cement; instead, iron was used to unite the stones of the foundation. The temple survived the purge of pagan temples after Armenia made the shift from polytheism to Christianity, but collapsed during an earthquake in 1679. It was rebuilt from 1969 to 1975. The temple reminded me of ones I had seen during my many trips to Italy.
It has 24 columns and steep steps leading up to a main hall and an altar. At the back of the temple, you can bask in stunning views of the surrounding mountains and valleys. While you're there, you may even get to listen to a musician play the duduk, a pipe made from apricot wood.

Even rebuilt, the ancient history of the temple and surrounding area is apparent. Excavations have shown that the area has been inhabited since the 4th millennium BC. On the site, you'll find an 8th-century BC stone inscribed in cuneiform by the Urartian king Argishti I, who wrote of uniting the land of Garni with his kingdom of Ararat. Also nearby is the royal summer residence, home to a gorgeous mosaic from the 3rd century, made from 14 types of natural stones. It's very colorful!

After my visit, I understood why the Temple of Garni is considered a highlight of any trip to Armenia. It was my favorite site I'd visited in the country so far. It should be the cornerstone of things to do in Garni and Geghard, Armenia!

Make Lavash at Garnitoun Restaurant

With just a few days in Armenia under my belt by the time I visited Garni, I had already fallen in love with the cuisine. One of my favorite elements of Armenian cuisine is its bread, specifically the lavash. Lavash is a long, thin Armenian flatbread that is often eaten with various cheeses and herbs wrapped inside it. After trying it so many times, I got the chance to learn how to make it at Garnitoun Restaurant in Garni!

First, I watched the baker at the restaurant make her lavash. She quickly rolled out the floured dough, spread it on a pillow, and sprayed it with water. Then, she slapped the dough onto the inside wall of an oven called a tonir and let it cook for a few minutes. She was a lavash-making machine!

Then, it was my turn. I love food, but I admit, I'm not a cook. I didn't quite get my lavash entirely inside the oven, but it still managed to cook. I had some trouble pulling it out of the oven once it was ready, though. All in all, let's just say I won't be a professional lavash maker anytime soon. But that's all right, because I prefer eating it to baking it anyway!

Even if you're not a cook, take some time to learn to make lavash when you visit Garnitoun Restaurant. It's actually a lot of fun and a great way to learn more about Armenian culture and cuisine. It's definitely one of the best things to do in Garni and Geghard, Armenia!

Dine at Garnitoun Restaurant

After you make your lavash (or, in my case, attempt to make lavash), head inside to the restaurant. It features an incredible dining hall with large windows that offer gorgeous views of the surrounding mountains and valleys. You also get an incredible look at the Temple of Garni. If you're brave enough, step onto the glass floor near the edge of the terrace and get a good look at the valley floor far below!

While you enjoy the views, be sure to indulge in some stunning, traditional Armenian food. Of course, I recommend starting with the lavash, which you can stuff with herbs, three different types of cheeses, cream, and even the cucumber and tomatoes from the Greek-style salad. Just trying the lavash is among the best things to do in Garni and Geghard, Armenia!

The cheeses here are all so tasty and different from one another. Among them are a string cheese as well as one with herbs in it. The cheeses ferment in clay jars, which are placed underground. They range from mild to tangy and pungent.
Combined with the fresh herbs in the lavash, they're incredibly refreshing! As an eggplant fanatic, I can't recommend the eggplant stuffed with cheese enough. It comes with a honey-like glaze on top and is unbelievable. I also suggest trying the greens with pomegranate seeds, which add a fruity pop to the earthy greens. You also must try the green bean omelet, which was perfect because the egg was still slightly runny while the beans were undeniably fresh. Don't miss the dried fruit salad, either. The plums, apple, pomegranate, and apricots are amazing together!

For your main course, my suggestion is to go with the barbecued trout. It's so tender, it falls apart the moment you spear it on your fork. The meat is buttery and flaky, while everything from the flesh to the crispy skin is full of smoky, mouthwatering flavor. Just be careful, as it contains lots of small bones!

Enjoy your meal with a 2018 VanArdi dry white wine. If you've ever had Spanish or Portuguese Albariño wine, the taste is very similar. It pairs so well with the food and was the perfect cap to a delicious meal with my new friends from Armenia Travel! Having amazing food with amazing people is, for sure, one of the best things to do in Garni and Geghard, Armenia.

That concludes my list of the top things to do on a day trip to Garni and Geghard, Armenia! This beautiful village and its surrounding area boast some of the most remarkable historical sites I visited in Armenia. Spending a day there is something you must do when you visit this Western Asian country. Between the history and the incredible food, I was one happy traveler, and I think you will be, too. Book a trip to Yerevan, Armenia today to experience everything Garni has to offer!

Special thanks to my friends at Armenia Travel for their kindness, hospitality, and for arranging my trip. I couldn't have done it without them! Also, if you would like to visit Garni, please contact Lusine.
Fruits and vegetables improve children's nutrition, help prevent obesity and may boost school performance.

Fruits and vegetables benefit kids in many ways, including improved nutrition, decreased obesity risk and better school performance, but most children don't get the recommended five or more servings of fruits and vegetables a day. Only 22 percent of toddlers and preschoolers and only 16 percent of kids ages 6 to 11 meet the government's recommendation, according to Ohio State research. One-half of children's mealtime plates should be filled with fruits and vegetables in order to reap the benefits.

Children's growing bodies require good nutrition, and fruits and vegetables contain a multitude of vitamins, minerals and other healthy compounds. Citrus fruits and strawberries are rich in immune system-boosting vitamin C, carrots are loaded with eye-healthy vitamin A and spinach is a good source of iron, a mineral that helps prevent anemia. According to DrGreene.com, apples contain 16 different polyphenols, which are antioxidants with health-promoting properties. Eating fruits and vegetables in a rainbow of colors will provide a wide range of nutrients that help keep kids healthy.

Fruits and vegetables are high in filling fiber, but low in fat and calories. Encouraging kids to eat fruits and vegetables instead of sugary snacks and fat-laden fast food can help children avoid obesity. According to the U.S. Department of Health and Human Services, 16 percent of kids ages 6 to 19 are overweight, increasing the risk of Type 2 diabetes, high cholesterol, hypertension, respiratory problems and depression. A USDA study of 3,064 kids ages 5 to 18 linked higher fruit consumption to healthier body weights.

High-fiber foods, such as fruits and vegetables, help the digestive system function properly. Constipation in kids can often be eased by eating more high-fiber prunes, apricots, plums, peas, beans and broccoli, according to the American Academy of Pediatrics. As fiber passes through the digestive system, it absorbs water and expands, which triggers regular bowel movements and relieves constipation.

Better School Performance

Children with healthy diets, including high consumption of fruits and vegetables, performed better on academic tests than children who consumed fewer fruits and vegetables in a study published in the April 2008 issue of the "Journal of School Health." The study of 5,200 Canadian fifth graders found that the kids with healthy diets were up to 41 percent less likely to fail literacy tests than the other children. A number of factors influence the academic performance of kids, but nutrition is an important contributor to better school performance, the report noted.

To increase consumption of fruits and vegetables, shop with your kids and let them prepare vegetable and fruit dishes. A child who makes the green beans himself may be more likely to eat them, notes an article by Elizabeth Cohen, CNN senior medical correspondent. Sneak pureed vegetables into your children's favorite foods and stock kid-level shelves in the fridge with baggies of cut-up veggies and fruits and fruit cups. Shop organic if you can. If cost is a factor, however, be selective in buying organic, recommends the American Academy of Pediatrics. The most important thing is for kids to eat fruits and vegetables – organic or not.

1. Set the right example. Children learn what they live, making it vital that parents set the right example with their own food choices.
If parents are routinely eating and snacking on unhealthy foods, how can children be expected to do any differently? Setting the right example to get children to eat right requires parental self-discipline. Parents need to provide loving and firm guidance in making healthy and wise choices regarding food and snacks.

2. Choose healthy snacks for children, such as fresh fruit and vegetables with tasty dips. Keep healthy snacks well-stocked at home, readily available and easily accessible for children to grab. Save cookies and other sugarcoated treats for an occasional sweet treat or special occasions. Never get into the habit of giving children cookies or other sugary treats when the family meal is being prepared or is almost ready to be served. Consider offering a couple of bites of the vegetables or salad already planned for the meal to tide them over.

3. Provide necessary discipline. Children typically do not like changes being made to their routines, so expect children to express their dislike of newly implemented changes in the family meal plan. Calmly explain that "this is what we're having for dinner", and if children adamantly refuse to eat the planned meal, simply cover it and save it for when they say they're hungry. Remember, your home is not a cafeteria-style restaurant where children dictate what they will or will not eat. When the child later says they are hungry, simply say, "Well, that's good, because I saved your dinner for you", and then reheat as needed.

4. Try a different vegetable every day and prepare it in different ways. Remember, vegetables can be served raw, baked, steamed, grilled, in salads, in juice form, stir-fried and broiled. Try a wide variety prepared in different ways until you find the vegetables your child likes and the styles they like to eat them in.

5. Mix them into your child's favorite meal. If your child likes macaroni and cheese, make it with steamed broccoli or peas mixed in. If your child likes spaghetti, mix real tomatoes, mushrooms, or peas and carrots into the sauce. Sometimes mixing vegetables right into their favorite foods makes them eat it without even noticing.

6. Try juicing vegetables and mixing them with fruit. Make your child part of the juicing experience and they may be more inclined to drink them. Combinations such as carrot, apple, and celery juice are usually sweet to the taste and a big hit.

7. Offer vegetables and fruit with dip. Most children love to dip items (e.g. French fries in ketchup), so provide them dipping choices such as a salad dressing they might like and let them dip away. Always have vegetables ready to eat and available with lunch, dinner, and snacks. By having them readily available, your child will eat them when they are ready.

8. Offer your toddler many different types of foods, and let them see you eat and enjoy various foods, especially fruit and vegetables. Although infants often get fruit and vegetable baby foods, once they start eating table food, what you eat is going to be a big influence on what your kids like to eat. If you rarely serve vegetables with meals or eat fruit, don't be surprised if your kids develop the same tastes.

9. Find foods that your kids already like to eat, like smoothies, muffins, or yogurt. Find recipes that allow you to add fruit or vegetables to them, like banana or zucchini muffins.

10. Offer visually appealing vegetables and fruit. Try edible faces with carrot circles for eyes, strips of pepper for eyebrows, baby sweet corn for the nose and broccoli pieces for the mouth.
Kids will enjoy helping with the composition, especially if you deliberately make a few anatomical mistakes. Add wild hairdos with shredded cabbage, watercress, or courgette ribbons.

11. Introduce colour into your children's diet with stir-frying. It is quick, so they get to see instant results. Try stir-frying peas, pepper strips, bean sprouts and Chinese cabbage, or a mixture of sweet corn, small chunks of carrot and peas.

12. Don't overcook vegetables. Steaming or microwaving retains more nutrients than boiling. Although babies need mushy textures, older children prefer a little 'bite' and may like to eat their vegetables as finger foods.

Did you know?

- 56% of primary and 80% of secondary school students do not eat the recommended daily amount of vegetables.
- Research shows that watching a lot of TV is associated with children and teenagers drinking more soft drink and not eating enough fruit and vegetables.
- Fruit and vegetables are a great source of vitamins, minerals and dietary fibre.
- Eating fruit and vegetables every day helps children and teenagers grow and develop, boosts their vitality and can reduce the risk of many chronic diseases – such as heart disease, high blood pressure, some forms of cancer and being overweight or obese.

How many serves do kids and teens need?

All of us need to eat a variety of different coloured fruit and vegies every day – both raw and cooked. The recommended daily amount for kids and teens depends on their age, appetite and activity levels.

Recommended serves of fruit and vegetables by age

Note: One serve of fruit is 150 grams (equal to 1 medium-sized apple; 2 smaller pieces (e.g. apricots); 1 cup of canned or chopped fruit; ½ cup (125 ml) of 99% unsweetened fruit juice; or 1½ tablespoons of dried fruit). One serve of vegetables is 75 grams (equal to ½ cup of cooked vegetables; ½ medium potato; 1 cup of salad vegetables; or ½ cup of cooked legumes (dried beans, peas or lentils)). A short code sketch converting grams to serves follows the list of ideas below.

Fresh fruit is a better choice than juice

While whole fruit contains some natural sugars that make it taste sweet, it also has lots of vitamins, minerals and fibre, which makes it more filling and nutritious than a glass of fruit juice. One small glass of juice provides a child's recommended daily amount of vitamin C. Unfortunately, many children regularly drink large amounts of juice, and this can contribute to them putting on excess weight.

How to help kids and teens eat more fruit and vegies

Eating more fruit and vegies every day can sometimes be a struggle. However, research shows that we're more likely to do so if they're available and ready to eat. Children may need to try new fruits and vegies up to 10 times before they accept them. So stay patient and keep offering them. It can also help to prepare and serve them in different and creative ways. Some ideas to try:

- Involve the whole family in choosing and preparing fruit and vegies.
- Select fruit and vegies that are in season – they taste better and are usually cheaper.
- Keep a bowl of fresh fruit in the home.
- Be creative in how you prepare and serve fruit and vegetables – such as raw, sliced, grated, microwaved, mashed or baked; serve different coloured fruit and vegies or use different serving plates or bowls.
- Include fruit and vegies in every meal. For example, add chopped, grated or pureed vegetables to pasta sauces, meat burgers, frittatas, stir-fries and soups, and add fruit to breakfast cereal.
- Snack on fruit and vegies.
Try corn on the cob; jacket potato topped with reduced fat cheese; plain popcorn (unbuttered and without sugar or salt coating); chopped vegies with salsa, hummus or yoghurt dips; stewed fruit; fruit crumble; frozen fruit; or muffins and cakes made with fruit or vegies. - Try different fruits or vegies on your toast – banana, mushrooms or tomatoes. - Add chopped or pureed fruit to plain yoghurts. - Make a fruit smoothie with fresh, frozen or canned (in natural or unsweetened juice) fruit; blend it with reduced fat milk and yoghurt. - Chop up some fruit or vegie sticks for the lunchbox. - In summer, freeze fruit on a skewer (or mix with yoghurt before freezing) for a refreshing snack. - Make fruit-based desserts (such as fruit crumble or baked, poached or stewed fruit) and serve with reduced fat custard. - Have fresh fruit available at all times as a convenient snack – keep the fruit bowl full and have diced fruit in a container in the fridge.
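To make the serving-size arithmetic above concrete, here is a minimal sketch in Python, assuming the serve sizes stated earlier (150 grams of fruit and 75 grams of vegetables per serve); the function and variable names are illustrative only.

```python
# Minimal sketch of the serve-size arithmetic described above.
# Assumes the stated serve sizes: 150 g of fruit and 75 g of
# vegetables per serve. Names are illustrative only.

FRUIT_SERVE_G = 150  # grams in one serve of fruit
VEG_SERVE_G = 75     # grams in one serve of vegetables

def serves_eaten(fruit_grams: float, veg_grams: float) -> tuple[float, float]:
    """Convert grams of fruit and vegetables eaten into serves."""
    return fruit_grams / FRUIT_SERVE_G, veg_grams / VEG_SERVE_G

# Example: a 150 g apple plus 1 cup of salad (~75 g) and half a cup of
# cooked vegetables (~75 g) comes to 1 fruit serve and 2 vegetable serves.
fruit, veg = serves_eaten(150, 150)
print(f"Fruit serves: {fruit:.1f}, vegetable serves: {veg:.1f}")
```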
ANZAC Day developed from the commemoration of the Australian and New Zealand soldiers who lost their lives on Gallipoli during the First World War. Although the campaign failed, the endurance and sacrifice of those soldiers led to the creation of the "ANZAC legend". Over the years, ANZAC Day has broadened to include the commemoration of all men and women who have served Australia in war and peace. The Australian War Memorial explains the history of ANZAC Day, its symbols and ceremonies.

Facts on ANZAC Day:

- ANZAC is the acronym formed from the initial letters of the Australian and New Zealand Army Corps.
- 25 April was officially named ANZAC Day in 1916.
- During the 1920s ANZAC Day became established as a national day of commemoration.
- For the first time in 1927, every state observed some form of public holiday on ANZAC Day.
- By the middle of the 1930s, all the rituals we now associate with the day – dawn vigils, marches, memorial services, reunions, two-up games – were firmly established as part of ANZAC Day culture.
- 25 April is also the anniversary of the recapture by Australians of Villers-Bretonneux, France, in 1918.
- It is also the anniversary of the final day of the battle of Kapyong, Korea, in 1951.

On that day, 25 April 1915:

- Australian and New Zealand soldiers set out, as part of a larger British force, to capture the Gallipoli peninsula in order to open the way to the Black Sea for the allied naval forces.
- Soldiers landed on Gallipoli on 25 April; they met fierce resistance from the Turkish defenders. The campaign dragged on for eight months.
- Almost 9,000 Australian soldiers were killed, with 26,000 casualties in total.
- Although the Gallipoli campaign failed, the soldiers of Australia and New Zealand created the "ANZAC legend", which has become an important part of the national identity of both nations.

Symbols of commemoration:

Rosemary became an emblem of both fidelity and remembrance in literature and folklore. Traditionally, sprigs of rosemary are worn on ANZAC Day and sometimes on Remembrance Day. Rosemary has particular significance for Australians, as it is found growing wild on the Gallipoli peninsula.

The Flanders poppy has long been a part of Remembrance Day, the ritual that marks the Armistice of 11 November 1918, and is also increasingly being used as part of ANZAC Day observances. During the First World War, red poppies were among the first plants to spring up in the devastated battlefields of northern France and Belgium. In soldiers' folklore, the vivid red of the poppy came from the blood of their comrades soaking the ground. The sight of poppies on the battlefield at Ypres in 1915 moved Lieutenant Colonel John McCrae to write the poem In Flanders fields (see "The recitation"). In English literature of the nineteenth century, poppies had symbolised sleep or a state of oblivion; in the literature of the First World War a new, more powerful symbolism was attached to the poppy – the sacrifice of shed blood.

The poppy soon became widely accepted throughout the allied nations as the flower of remembrance to be worn on Armistice Day. The Australian Returned Soldiers and Sailors Imperial League (the forerunner to the RSL) first sold poppies for Armistice Day in 1921. For this drive, the league imported 1 million silk poppies, made in French orphanages. Each poppy was sold for a shilling: five pence was donated to a charity for French children, six pence went to the League's own welfare work, and one penny went to the League's national fund.
Today the RSL continues to sell poppies for Remembrance Day to raise funds for its welfare work. The poppy has also become very popular in wreaths used on ANZAC Day. An early instance took place in Palestine, where poppies grow abundantly in the spring. At the Dawn Service in 1940, each soldier dropped a poppy as he filed past the Stone of Remembrance. A senior Australian officer also laid a wreath of poppies picked from the slopes of Mount Scopus.

Poppies adorn the panels of the Memorial's Roll of Honour, placed beside names as a small personal tribute to the memory of a particular person, or to any of the thousands of individuals commemorated there. This practice began at the funeral of the Unknown Australian Soldier on 11 November 1993. As people waited to lay a single flower by his tomb in the Hall of Memory, they had to queue along the Cloisters that house the Roll of Honour. By the end of the day, hundreds of RSL poppies had been pushed into the cracks between the panels bearing the names of the fallen.

Order of ceremony:

ANZAC Day marches and other memorial parades are often led by a lone, riderless horse, with a pair of boots set backwards in the stirrups and the saddle stripped. Ancient peoples, such as the Saxons and Scythians, used to bury a great warrior's horse with him so that it could serve him in the afterlife. This practice was continued in some European countries until the late eighteenth century. In modern times, custom has been kinder to the horse, which has been led in its master's funeral procession with his boots reversed as a sign that a warrior has fallen in battle. A riderless horse has been added to some ANZAC Day parades as an additional symbol of respect and mourning, often for the men of Light Horse units.

The Federation Guard:

Australia's Federation Guard is a tri-service ceremonial unit of the Australian Defence Force. The service involves the Federation Guard forming a catafalque party around the Tomb of the Unknown Australian Soldier. A catafalque party was originally appointed to guard a coffin from theft or desecration; the coffin has come to be represented by a remembrance stone or tomb. Now it performs a ceremonial role, honouring the dead. During ANZAC Day, the catafalque party is mounted at the Stone of Remembrance.

The tradition of reversing and resting on arms – that is, leaning on a weapon held upside down – has been a mark of respect or mourning for centuries, said to have originated with the ancient Greeks. Descriptions of sixteenth-century military funerals provide the earliest documented instances of carrying arms reversed in more recent times. Although Australian soldiers still rest on arms as a mark of respect for the dead, the short Steyr rifle, the present Australian service rifle, is difficult to carry reversed.

Flags at half mast:

The tradition of lowering flags to half mast as a sign of remembrance is believed to have its origins on the high seas. As a sign of respect or honour for important persons, sailing ships would lower their sails, thus slowing the vessel and allowing the VIP's own vessel to come alongside and for him to board if so desired. Lowering of sails was also used to honour VIPs who were reviewing a naval procession from the land. In time only the ship's flags were lowered, in a symbolic gesture. This practice was also adopted on land, and it is today a universal symbol of respect and remembrance. During the National Ceremony the flags begin at half mast and are raised to the mastheads during the Rouse.
Laying of wreaths:

Flowers have traditionally been laid on graves and memorials in memory of the dead.

- Rosemary, symbolising remembrance, is popular on ANZAC Day.
- Laurel is a commemorative symbol; woven into a wreath, it was used by the ancient Romans to crown victors and the brave as a mark of honour.
- In recent years, the poppy, strongly associated with Remembrance Day (11 November), has also become popular in wreaths on ANZAC Day and as a sign of commemoration when placed on the Roll of Honour or the Tomb of the Unknown Australian Soldier.
- During the National Ceremony, wreaths are laid on the Stone of Remembrance by visiting dignitaries and representatives of various countries, junior legatees, and service organisations.
- The public may lay a wreath at the conclusion of the official ceremony.

The recitation, including the Ode:

In most ceremonies of remembrance there is a reading of an appropriate poem. One traditional recitation on ANZAC Day is the Ode, the fourth stanza of the poem "For the fallen" by Laurence Binyon (1869–1943). This poem has been recited in ceremonies since 1919, including the Memorial's inauguration in 1929, and at every ANZAC Day and Remembrance Day ceremony held at the Memorial.

They shall grow not old, as we that are left grow old:
Age shall not weary them, nor the years condemn.
At the going down of the sun and in the morning
We will remember them.
We will remember them.

Sounding the Last Post:

The Last Post is the bugle call that signifies the end of the day's activities. It is also sounded at military funerals to indicate that the soldier has gone to his or her final resting place, and at commemorative services such as ANZAC Day, Remembrance Day, and the Last Post ceremony held each day at the Memorial.

A period of silence:

Silence for one or two minutes is included in the ANZAC Day ceremony as a sign of respect and a time for reflection. One minute's silence was first observed in Australia on the first anniversary of the Armistice and continues to be observed on Remembrance Day, 11 November. Over the years, the one minute's silence has also been incorporated into ANZAC Day and other commemorative ceremonies.

The Rouse and the Reveille:

After the Last Post and one minute's silence, flags are raised from half mast to the masthead as the Rouse is sounded. Today the Rouse is associated with the Last Post at all military funerals and at services of remembrance. From Roman times, bugles or horns had been used as signals to command soldiers on the battlefield and to regulate soldiers' days in barracks.

- The Reveille was a bright, cheerful call intended to rouse soldiers from sleep and get them ready for duty; it has also been used to conclude funeral services and remembrance services. It symbolises an awakening in a better world for the dead, and also calls the living back to duty once their respects have been paid to the memory of their comrades.
- The Rouse is a shorter bugle call that was also used to call soldiers to their duties; being short, the Rouse is most commonly used in conjunction with the Last Post at remembrance services. The exception is the Dawn Service, when the Reveille is played.

The lone piper:

The bagpipes are the traditional instrument of the people of the Scottish highlands and have been carried into battle with Scottish soldiers, from the days of William Wallace in the fourteenth century to the Falklands War of 1982.
Traditionally, in Scottish units a lone piper takes the place of a bugler to signal the day's end to troops (see Last Post) and also bids farewell to the dead at funerals and memorial services. The ceremonial presence of a piper became established in Australia during the 1920s.

Australia's involvement in external conflict:

- Colonial period, 1788–1901
- Sudan, 1885
- South African War (Boer War), 1899–1902
- China (Boxer Rebellion), 1900–1901
- First World War, 1914–1918
- Second World War, 1939–1945
- Occupation of Japan, 1946–1952
- Korean War, 1950–1953
- Malayan Emergency, 1950–1960
- Indonesian Confrontation, 1963–1966
- Vietnam War, 1962–1975
- Iraq: First Gulf War, 1990–1991
- Afghanistan, 2001 to present
- Iraq: Second Gulf War, 2003–2009
- Peacekeeping, 1947 to present
The 7 Elements of Art

Every time you create an artwork there are 7 elements, or components, that your artwork consists of. More often than not we just take these elements for granted, or don't even give them a second thought. They are, however, critical to the success of your artwork. By keeping them in mind as you plan and create your artwork, you will end up with a much better result – one that is easier to look at, because the viewer's eye will flow through your artwork more fluently. You will have control over how their eye moves through your painting – you will be able to lead them through it. This is just one of the benefits of knowing the 7 elements of art. Let's dive right in by looking at what these 7 elements are.

What are the 7 Elements of Art?

The seven elements of art are line, shape, form, space, value, color and texture. These elements are the essential components, or building blocks, of any artwork. Any good artwork should consist of these 7 ingredients.

Element 1 - Line

Line is the most basic element of art. Without line the other elements couldn't exist, so let's start here and then gradually go more advanced. A line can be thought of as a moving dot. If the dots overlap, it's a solid line; if they don't, it's a dotted line. A line has a beginning and an end and, by its existence, creates an edge. If a line joins up it forms an outline (also called a contour). An outline creates a shape.

Lines can be long or short, and thick or thin. A thick line gives emphasis and advances, while a thin line recedes. Straight lines are more mechanistic and dynamic, and are rarely found in nature. Curved lines change direction gently with no sharp angles and suggest comfort and ease to the viewer; they most often relate to the natural world. Zigzag lines alter direction fast and create feelings of unrest, turmoil and movement. Diagonal lines give movement and dynamism to a composition. Horizontal lines create the feeling of stability and calm. Vertical lines give the impression of height and strength and often have a spiritual connotation.

Lines can be imaginary or implied; for example, line of sight can be a very strong, albeit invisible, line along which the viewer's eye travels. A pointing finger can also send the viewer's eye on a journey through the painting. Lines alone can also be used to create a three-dimensional effect (depth) in a 2-dimensional artwork. Hatching lines (straight or curved) are used to turn shape into form using value, as seen in the works of masters like Rembrandt.

In summary, lines can:

- Describe 2-dimensional shapes and 3-dimensional forms
- Create feelings of movement and emotion
- Create value and thereby show the direction of light
- Change 2-dimensional shapes into 3-dimensional forms with value
- Depict texture

Element 2 - Shape

When a line meets up to enclose a space, a shape is formed. Shapes can be geometric or organic. Shapes are 2-dimensional, i.e. they have height and width but no depth, e.g. a square. The best way to remember the shape element is to think of an outline.

Positive or Negative Shapes

The object you draw on your page is a shape enclosed in a frame. This frame may be a box you drew to designate the edges of your drawing area, or the edge of the page if you didn't draw a box. The object you draw is the positive shape. The rest of the space in your box (or, if you didn't draw a box, the rest of the page) is called negative shape.
Element 3 - Form

Form is the next step up from shape, as we now add depth to create a three-dimensional form: a square (shape) becomes a cube (form), a triangle becomes a cone, and so on. Form encloses volume, i.e. height and width as well as depth. In drawing and painting, form can only be implied, because they are 2-dimensional (flat) media. Artists must use tricks to fool the viewer's eye so as to create the illusion of the third dimension, i.e. depth. This is known as trompe l'oeil and is achieved using tools like value (shading), colour and contour lines. In a trompe l'oeil mural, for example, shading is used to create the illusion of 3-dimensional objects on a flat wall.

Like shapes, forms can be geometric or organic. Organic forms are common in nature, while geometric forms are more characteristic of architecture and man-made items. Nature, however, also uses geometric forms on occasion. Examples are crystals and honeycombs.

Element 4 - Space

Space is what lies between, around or within an object. To show space in a 2-dimensional medium the artist must use techniques to create the illusion of space between items that are, in reality, on a flat surface. How do artists create this feeling of space between objects?

Overlapping: When an object is drawn or painted on top of another object, the viewer's eye interprets this as one object being in front of another, implying there must be a space between them.

Placement: Objects higher up in the picture plane will seem to the viewer's eye to be further away than objects placed low down in the picture frame.

Size: Smaller objects look as if they are further away than larger objects.

Detail: The further away an object, the less detail is visible to the viewer. By purposely reducing the amount of detail in an object it will appear further away than an object with greater detail.

Colour and value: Objects in the distance usually appear cooler (bluer) and lighter in colour. Close-up objects appear warmer and darker in value.

Perspective: Perspective can be used to create the feeling of depth on a 2-dimensional surface. The most commonly used types are one-point and two-point linear perspective.

Space can be either positive or negative, in the same way as shapes can. Negative space is all around the object, which is the positive space and the subject of the painting. Negative space is very important, and an artist must plan the negative space as carefully as the main subject. Is there enough negative space to give the subject room to "breathe", or does it appear boxed in? Negative space can be cut to a minimum or eliminated entirely for a very close-up and intimate focus on the subject. It can be greater on one side than the other, or greater at the top or bottom – all choices which will affect how the viewer sees the overall composition.

Element 5 - Value

Value is how light or dark something is. There is a scale of light and dark from pure white through to pitch black. The value of a colour depends on how light or dark it is compared to the value scale. Getting the values right is more important than getting the colours right in painting. Value is what makes it possible to show 3-dimensional forms on a 2-dimensional surface.

By increasing differences in value, contrast is increased as well. A highlight will look brighter when surrounded by a dark value. Decreasing contrast will make objects visually recede into the picture plane and draw less attention. The focal point of a painting is where you want to add the most contrast, as this high contrast automatically draws the viewer's eye.
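As a side note for digitally minded artists, the idea that every colour sits somewhere on the light-dark value scale can be made concrete with a few lines of code. The sketch below converts an RGB colour to a single value using the common Rec. 709 luma weights; the exact coefficients are an assumption borrowed from that standard, not something prescribed by this tutorial.

```python
# Minimal sketch: placing an RGB colour on the light-dark value scale.
# The 0.2126/0.7152/0.0722 weights are the Rec. 709 luma coefficients,
# used here only as a reasonable assumption for greyscale conversion.

def value_of(rgb: tuple[int, int, int]) -> float:
    """Return a value from 0.0 (pitch black) to 1.0 (pure white)."""
    r, g, b = (channel / 255 for channel in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Yellow sits near the top (light) end of the value scale,
# purple near the bottom, as the text notes.
print(value_of((255, 255, 0)))   # yellow -> ~0.93
print(value_of((128, 0, 128)))   # purple -> ~0.14
```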
If a painting is done on the lower (darker) end of the value scale it is called a "low key" painting. Low key paintings give rise to a heavy, mysterious, dramatic, sometimes brooding feeling in the viewer. By contrast, "high key" paintings take their range of values from the upper end of the value scale and create emotions of lightness, quickness, spirituality etc. Most paintings, however, use the full range of values from light to dark.

Value is what artists use to portray light and form. The further from the light, the darker the value. How value changes determines the form of an object. If there is a gradual transition in value, it conveys to the viewer that the surface is gently rounded; this is called a soft edge. If, however, there is a rapid transition between values, it means there is an edge; this is called a hard edge. Value is also used to create shadows, which show light direction and anchor the object, preventing it from appearing as though it is floating.

Element 6 - Colour

Colour is created when light is reflected into the viewer's eye. In art, colours are arranged on a colour wheel. The colour wheel was developed by Isaac Newton, who took the colour spectrum and bent it into a circle. The colour wheel shows primary colours (colours that can't be mixed), secondary colours (made by mixing two primaries) and tertiary colours (made by mixing a primary and a secondary colour).

Colour theory helps the artist to mix desired colours from primary colours. It's only a theory and can't be proven, but it is nevertheless useful to the artist. Colour theory is based on the colour wheel, colour value and on which colours work well together – also called colour schemes. There are various colour models which define the primaries. The most common is the Red, Yellow, Blue model. Another popular model uses Cyan, Magenta and Yellow as the primaries. There are several others, and each works well in different situations.

Colour is described by its hue – red, green etc. (Hue is the name we give a colour.) A colour has an intensity called chroma, also known as saturation or purity. The purer the colour is (the less of other colours mixed in), the more intense or saturated it is. In painting, only small amounts of saturated colours are usually used, as accents; too much saturated colour can give a garish result. The chroma of a colour is not the same as its value.

Colours also have value. Value is how light or dark the colour is, as discussed in Element 5 above. Each colour falls on the value scale from light to dark. Yellow would be near the top (light end) of the scale, while purple would be found near the bottom end. To change the value of a colour you follow the colour mixing rules.

Element 7 - Texture

Actual texture is the way an object feels to the touch. Drawing or painting texture on a 2-dimensional, flat surface is a challenge for artists. The artist must instead convey the illusion of the actual texture to the viewer on the flat surface. This is done through the careful use of value and specific marks or brush strokes which mimic the actual texture. Every textured surface reflects light in a very particular way. Think of the difference in texture between a chrome ball and a concrete ball. The artist, through careful observation and the use of light and dark values, recreates this actual texture visually on the picture plane. You can follow our tutorial on Drawing Weathered Textures to get a feel for how this is done.
It is possible to create actual 3D texture on a flat surface by the addition of texturing compounds which create a raised surface. Impasto paste is one way, or you could even add sand to the paint. Even thick paint will leave the texture of the brush marks for the viewer to see. You can follow our tutorial on Texture Painting Techniques to see how you can add texture to your canvas. It is also possible to create patterns by the repetition of shapes, which creates 2D texture. This is often used in Op Art (Optical Art).

I think you will agree that you have been using many of the seven elements throughout your artworks without even realising it. Now that you are aware of these elements, you can look out for them, as well as for ways to incorporate more of them into your artworks. This will add extra depth, dimension, texture and interest to your artworks, taking them to a whole new level.
The majority of naturally occurring drugs and biologically active compounds are asymmetrical in their chemical structure. This means that the molecule contains one or more carbon atoms bearing four different substituents; the two possible spatial arrangements of the groups around such an atom are designated R (rectus, right-handed) and S (sinister, left-handed), and the atom itself is called the chiral centre of the molecule. Thus a large proportion of psychotropic drugs in current use possess one or more chiral centres and therefore exist as pairs of enantiomers which differ in terms of their three-dimensional structures. However, it must be remembered that chirality can apply not only to molecules but also to anatomical structures. For example, the left and right hands are chiral structures, as is evident when one attempts to put a left-handed glove on the right hand and vice versa! At the cellular level, the various types of receptor, transporter, enzyme and ion channel are all chiral in form. Thus although the enantiomers of a drug may have identical physicochemical properties, the ways in which they interact with chiral targets at the level of the cell will give rise to different pharmacodynamic and pharmacokinetic properties.

A few simple examples will illustrate how taste and olfactory receptors can differentiate between enantiomers. Thus R-carvone tastes like spearmint, whereas the S-isomer tastes like caraway. Similarly, R-limonene smells of orange, whereas the S-enantiomer smells of lemon.

In psychopharmacology, interest in the properties of enantiomers has been driven by the need to improve the therapeutic efficacy and decrease the side effects and toxicity of drugs. For example, if the therapeutic activity resides entirely in one enantiomer (called a eutomer) then giving a racemic mixture which contains the active and the inactive enantiomer is clearly wasteful. Thus using the single enantiomer (isomer or eutomer) should enable the dose of the drug to be lowered, reduce the interpatient variability in the response and, hopefully, reduce the side effects and toxicity of the drug (see Table 3.4).

Table 3.4. Possible advantages of the single stereoisomer over the racemic mixture

1. Reduction in the therapeutic dose.
2. Reduction in the interpatient variability in metabolism and in response to treatment.
3. Simplification of the relationship between the dose and the response to treatment.
4. Reduction in the toxicity and side effects due to the greater specificity of action of the isomer with the relevant biological processes.

In addition to the possible advantages of the single enantiomer, the pharmacologically inactive enantiomer may reduce the efficacy of the active isomer by reducing its activity at its site of action or by interfering with its metabolism. Thus separating a racemic mixture into its enantiomers, and assessing the individual properties of the isomers, would seem to be a reasonable approach to improving the clinical profile of many well-established psychotropic drugs. The process whereby a racemic mixture is reintroduced as a single enantiomer is termed "chiral switching".

While there appears to be a compelling argument for using single enantiomers whenever possible in order to improve the efficacy and safety of a racemic drug, there is no certainty that chiral switching will always be beneficial. For example, in 1979 seven cases of inadvertent injection of the local anaesthetic racemic bupivacaine resulted in cardiovascular collapse.
The toxicity appeared to reside entirely in the R-isomer so that, by chiral switching, a safer and less toxic local anaesthetic was produced. Other examples have not been so successful, however. For example, the chiral switching of racemic fenfluramine to dexfenfluramine, its more active enantiomer, was at first heralded as a successful new appetite suppressant (note that the older d/l, or dextro/laevo, labels describe the direction of optical rotation and do not map directly onto the R/S configurational nomenclature). However, it was soon shown that, despite its improved efficacy, dexfenfluramine was more likely to cause pulmonary hypertension. This resulted in the withdrawal of the drug.

Some examples of the properties of single enantiomers in psychopharmacology

(1) Analgesics - methadone

This synthetic opiate was introduced in 1965 to manage opioid dependence and has been successfully used as an aid to abstinence since that time. Methadone is a racemate, the R-enantiomer being the pharmacologically active form of the drug. This isomer shows a 10-fold higher affinity for the mu and delta opioid receptors, and nearly 50 times the antinociceptive activity, of the S-enantiomer. In addition, the R-isomer is less plasma protein bound than the S-form, the latter isomer being more tightly bound to alpha-1 acid glycoprotein. The plasma clearance of the R-form is slower than that of the S-isomer. Patients treated with the separate isomers of methadone showed considerable interindividual variability in these pharmacokinetic parameters, in some cases reaching 70%; this would not have been detected if the racemate had been administered. These pharmacokinetic differences could be crucially important when patients are being treated with methadone as part of an opiate withdrawal programme, as relatively small decreases in the plasma concentration could produce marked changes in mood, thereby undermining the positive benefit of the programme.

(2) Sedative/hypnotics - zopiclone

Zopiclone is widely used as a sedative-hypnotic. It is metabolized to an inactive N-desmethylated derivative and an active N-oxide compound, both of which contain chiral centres. S-Zopiclone has a 50-fold higher affinity for the benzodiazepine receptor site than the R-enantiomer. This could be therapeutically important, particularly if the formation and the urinary excretion of the active metabolite favour the S-isomer, which appears to be the case. As the half-life of the R-enantiomer is longer than that of the S-form, it would seem advantageous to use the S-isomer in order to avoid the possibility of daytime sedation and the hangover effects which commonly occur with long-acting benzodiazepine receptor agonists.

(3) Neuroleptics - thioridazine

Thioridazine is a complex first-generation antipsychotic agent that is metabolized to two other pharmacologically active drugs (mesoridazine and sulphoridazine) which have been introduced as neuroleptics in their own right. All three neuroleptics have chiral centres. Interest in thioridazine has arisen in recent years because of the higher incidence of sudden death, due to cardiotoxicity, found in patients who had been prescribed the drug. Thioridazine-5-sulphoxide would appear to be the metabolite responsible for the cardiotoxicity. This metabolite alone has four chiral centres, and knowledge is lacking concerning the toxicity of its enantiomers, which serves to illustrate the complexity of the problem.
Regarding the pharmacological activity of thioridazine, the R-enantiomer has been shown to be at least three times more potent than the S-isomer in binding to D2 dopamine receptors, and nearly five times more potent as an alpha-1 receptor antagonist. Conversely, the S-isomer has a 10-fold greater affinity for the D1 receptor than the R-form. Thus the pharmacological consequences of using a single enantiomer of thioridazine are, unlike the other three examples given, very complex. If the R-enantiomer were selected, while the potency would undoubtedly increase (due to its D2 antagonism), the chances of postural hypotension (due to its alpha-1 receptor antagonism) would also be greater. Furthermore, the relative activity and toxicity of the individual enantiomers and their metabolites is unknown. With regard to the extrapyramidal side effects, for example, experimental studies have shown that the R-isomer is more likely to cause catalepsy and is, in addition, far more toxic than the S-form. Dose-response studies have also been undertaken on the individual enantiomers versus the racemate of thioridazine; they show that the racemate is 12 times more potent than the S-isomer and three times more potent than the R-isomer.

(4) Antidepressants - citalopram

It is widely agreed that there is little difference in therapeutic efficacy between any of the first- and second-generation antidepressants. However, in terms of their tolerability and safety, the second-generation drugs are superior. Of these, the SSRI antidepressants are the most widely used but, despite their clear advantages over the tricyclic antidepressants which they have largely replaced in industrialized countries, they have side effects, such as nausea and sexual dysfunction, which can affect compliance. While there are clearly differences in the frequency of side effects between the SSRIs, no clear overall advantage emerges for any one of the drugs.

Many currently used antidepressants are chiral drugs (for example, the tricyclic antidepressants, mianserin, mirtazapine, venlafaxine, reboxetine, fluoxetine, paroxetine, sertraline and citalopram), some of which are administered as racemates (such as the tricyclics, mianserin, mirtazapine, fluoxetine, reboxetine, venlafaxine and citalopram) while others are given as single isomers (paroxetine and sertraline). The relative benefits of the enantiomers of antidepressants vary greatly. For example, when the therapeutic properties of the enantiomers are complementary (as with mianserin), use of the racemate is an advantage. However, if there are qualitative, but not quantitative, similarities then it would be beneficial to develop the active isomer. This has recently occurred with the development of citalopram.

The S-enantiomer of citalopram (escitalopram) is over 100 times more potent in inhibiting the reuptake of 5-HT into brain slices than the R-form and is devoid of significant activity at other neurotransmitter receptors (racemic citalopram has an affinity for histamine receptors and causes sedation). In in vivo studies, escitalopram is more potent than the R-form or the racemate in releasing 5-HT in the cortex of conscious rats; it has been shown to have antidepressant and anti-anxiety properties in both animal models and patients. With regard to its side effects, the frequency of nausea and ejaculatory dysfunction after escitalopram is approximately the same as that of the racemate.
From the results of the published clinical studies, it would appear that the tolerability of escitalopram is slightly better than that of the racemate, and the time of onset of the clinical response may be slightly faster, but this needs confirmation. In general, the adverse effects were mild and transient, with a low patient withdrawal rate. Early clinical trials suggest that escitalopram is as effective as citalopram in the treatment of depression and anxiety disorders.

In CONCLUSION, current evidence suggests that for many psychotropic drugs there are functional differences between the enantiomers and the racemate which could have important clinical implications. However, it is apparent that the possible advantages of developing a single enantiomer must be considered on a drug-by-drug basis. For example, fluoxetine, like most SSRIs, exists in a chiral form, but the more active enantiomer found in experimental studies caused cardiotoxicity in some patients. In general, however, it would appear that knowledge of the stereochemistry of psychotropic drugs will help in the development of new, and hopefully more effective, molecules in the near future.

In addition to metabolic interactions, consideration should be given to drug-protein binding interactions, although there is little clinical evidence to suggest that such interactions are of any consequence with the SSRIs. It must be stressed that many liver enzymes are non-specific for their substrates and that most drugs are metabolized by multiple pathways. Good therapeutic practice demands that drug interactions be considered carefully, particularly in subpopulations of depressed patients such as the elderly or those with hepatic dysfunction or a history of alcoholism.

In SUMMARY, it would appear that a detailed knowledge of the pharmacokinetics of the main groups of psychotropic drugs is of only very limited clinical use. This is due to limitations in the methods for the detection of some drugs (e.g. the neuroleptics), the presence of active metabolites which make an important contribution to the therapeutic effect, particularly after chronic administration (e.g. many antidepressants, neuroleptics and anxiolytics), and the lack of a direct correlation between the plasma concentration of the drug and its therapeutic effect. Perhaps the only real advances in this area will come with the development of brain imaging techniques whereby the concentration of the active drug in the brain of the patient may be directly measured. Until such time as the kinetics of psychotropic drugs in the brain can be properly assessed, it can be concluded that the routine determination of plasma levels of psychotropic drugs is of very limited value. Despite the limited value of measuring plasma psychotropic drug concentrations to assess clinical response, a knowledge of the pharmacokinetics of such a drug can be of value in predicting drug interactions.
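As a practical footnote, the R/S assignments discussed throughout this section can be explored computationally. The sketch below uses the open-source RDKit cheminformatics toolkit to locate and label the chiral centre of carvone, one of the examples given at the start of the section; the SMILES string is an assumption on my part and should be verified against a chemical database.

```python
# Minimal sketch: locating and labelling chiral centres with the
# open-source RDKit toolkit. The SMILES string below is assumed to be
# (R)-carvone; verify it against a chemical database before relying on it.
from rdkit import Chem

mol = Chem.MolFromSmiles("CC1=CC[C@@H](CC1=O)C(=C)C")  # assumed (R)-carvone
centers = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
print(centers)  # e.g. [(4, 'R')] -> (atom index, R/S label)
```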
Posture is more than just how well you sit or stand – it is a window into the health of your body, and tells us how well your muscles, joints and nervous system are working. Dysfunction in the muscles, joints or nerves can be seen as subtle postural changes, such as tightness of a muscle or the position of a joint.

What is good posture?

There have been various opinions over the years on what ideal posture should look like. We have probably all been told at some time to "pull your shoulders back" or "tuck in your stomach", but is this actually how we are designed to stand and move? Or is this perhaps doing us more harm than good?

Posture is best understood by looking through the lens of 'developmental kinesiology' – the study of the mechanics of human movement during early development. Observation and analysis of development from the newborn through to early childhood gives key understanding of how we are truly designed to function, and therefore what resulting posture should look like. We all have the same 'postural' or movement program stored in the brain. This is the program that takes us from flat on our backs at 3 weeks to standing tall at 3 years. This sequence is not learned – babies are not taught to lift their heads, to turn or to crawl. It is an inbuilt program that has been honed over millions of years of evolution. As the baby's brain matures during this developmental period, so does the control of its muscles, eventually allowing for upright posture. By assessing this developmental sequence we can understand what proper function and posture is, and what it isn't.

Good posture follows a few simple guidelines:

1. Proper 'centration' (alignment)

The joints should be aligned correctly both when static and when moving. This will differ slightly depending on individual body proportions, but the same fundamentals hold true for everybody, e.g.:

- Head directly over the shoulders, not poking forwards.
- Shoulders wide and spread, not hiked.
- A 'long' spine (not excessively straight, curved or slumped).
- Knees in line with feet, not collapsing inwards.

However, if the body is functioning poorly (for whatever reason) we will see deviations from these ideals. Deviations follow common patterns, both in the developmental period and later in life. Illustrations of 'good' and 'bad' postures in a prone position make the parallel clear. In the 'good' postural pattern, baby and adult show the same features: a neutral position of the low back and head, with the shoulders depressed. In the 'poor' postural pattern, baby and adult again match: a sagging lower back, elevated shoulders and jutting of the chin. These postural deviations increase the risk of spine, shoulder and neck injuries.

2. Low muscle tone

There should be good co-ordination and balance between all the muscle groups. Excess 'tone' or tightness in a muscle group signifies imbalance.

3. Proper breathing pattern

A normal breathing pattern is driven by the diaphragm (the muscle that sits under the lungs). With an 'in' breath the diaphragm should contract downwards, inflating the lungs. This filling of the lungs pushes the abdominal organs down, leading to expansion of the abdomen. This is called a 'diaphragmatic' or an 'abdominal' pattern of breathing. In many patients this pattern of breathing is disturbed: instead of the diaphragm expanding the lungs from below, the muscles of the neck and shoulders lift the ribcage up.
Breathing in this fashion overloads the muscles and joints of the neck, which can predispose to pain and injury.

What causes poor posture?

Anything that changes the function of the muscles, joints or nervous system will ultimately be reflected in a change in posture. This can be the result of many insults:

1) Poor Movement Habits

The body will attempt to adapt to any prolonged stress that is put on it. Exercise physiologists call this the 'SAID' principle or "Specific Adaptation to Imposed Demand". This is usually thought of as a good thing; you go to the gym, you lift some heavy weights and your body adapts by building stronger bones and muscles. However, this phenomenon can just as easily have negative effects. If, every time you go to the gym, you only work certain muscle groups or train with poor form, you will be setting yourself up for muscle imbalance and injury down the road.

Another prime example is sitting. When you sit, certain muscle groups tend to shorten (e.g. the pectoral muscles of the chest, the hip flexors and the hamstrings). If you are in the habit of sitting for extended periods, these muscles will eventually adopt a permanently shortened condition which will wreak havoc on your movement quality and will predispose you to injury.

2) Non-Optimal Early Development

The quality of development during the first few years has a big impact on structure and movement patterns in later life. As discussed above, brain maturation during development leads to changes in how a baby's muscles control its body, and these changes in muscle control lead to changes in bone structure, resulting in the typically shaped adult skeleton. However, if development is compromised, which is estimated to occur in around 20-30% of the population, newborn patterns of muscle function and skeletal shape persist into adulthood. This can lead to typical structural changes such as:
- Flat feet
- Knock knees
- Forward tilted pelvis
- Slumped spine

These structural issues lead to poor posture and movement patterns, resulting in more strain on the surrounding joints and increasing the risk of injury.

3) Protective Patterns

When a part of the body is damaged by injury or trauma, messages are sent from the site of injury to the brain. The brain analyses these messages and, if sufficiently threatening, they are interpreted as "pain" and defence mechanisms are triggered to prevent or to minimise further damage. Part of this reflex defence mechanism includes changing how certain muscles work. Some of the muscles tense and guard whilst others are inhibited (or tuned down). In the short term this works well to protect the area from further damage. However, in some instances, this protective pattern can persist long after the injury has healed. As Janet Travell, M.D., White House physician to John F. Kennedy, famously said: "After an injury tissues heal, but muscles learn. They readily develop habits of guarding that outlast the injury."

If these patterns do persist, movement will be compromised. A good example of this is after knee or ankle injury. We very often see patients who, despite no longer being in pain, show significant residual deficits and poor movement on the previously injured side. A simple screen to assess lower-quarter stability is the single leg balance test. To test this yourself, stand on one leg with your arms crossed; you should be able to hold this position comfortably for 30 seconds with eyes open and 10 seconds with eyes closed. Make sure to check both sides.
4) Genetics and Structure

Our skeleton is shaped not only by development in the first years of life (as discussed above) but also by our genetics. An interesting illustration of this is in the sport of Olympic weightlifting. Eastern Europeans have long been dominant in the sport, and one reason for this, according to Professor Stuart McGill of the University of Waterloo, is the shape of their hip sockets. Their genetically shallow hip sockets allow them to squat deeply without flexing their spine (which would otherwise predispose them to back injury).

5) Stress / Emotional State

Stress elicits a sympathetic ("fight or flight") response. This leads to a variety of changes in the body, including increased tension in certain muscle groups. This pattern of increased tension is similar to that seen in the protective or newborn pattern with, for example, hiking of the shoulders. This is why you will often hear people say that their stress "goes straight to their shoulders". If this stress persists for some time, the body may adapt and change the pattern of muscles it uses to move.

Do I have poor posture?

To assess someone's posture requires a range of tests looking at mobility, strength and movement patterns. However, a quick and easy screen of upright posture that can provide valuable information is the 'Wall Angel' test.

To perform the 'Wall Angel' test:
- Find a clear wall and stand with your back to it. Your head, mid-back and buttocks should all be touching the wall; your feet can be a comfortable distance (a few inches) away.
- Next bring your arms up to the 90/90 position, with the back of your wrists and fingers against the wall.
- Then, without moving your head or hands, try to also flatten your lower back to the wall.

Scoring: This test can be scored from 0-3, with the results as follows:

3 – Perfect!
- You can comfortably achieve the position described.
- Your eyes are horizontal, not looking up, and your chin is not jutting out.
- You can simultaneously flatten your fingers, hands and spine against the wall.
- If you are in this category, well done! A '3' doesn't necessarily mean that you don't have any postural flaws, but it does show you have good global movement in most of the right areas (shoulders, chest, spine). You will likely gain more benefit from focusing on strength and stabilisation training rather than stretching exercises.

2 – Pretty good:
- You can flatten your head against the wall with your eyes horizontal and your chin not jutting out.
- You can flatten your fingers and can almost flatten your wrists, but not quite (<1cm from the wall).
- You can almost flatten your spine to the wall, but not quite.
- A '2' translates to 'good enough' – not perfect, but not worth worrying about too much. Your time would likely be better spent working on other areas.

1 – Work needed:
- You can't flatten your head against the wall, or you can flatten your head but to do so your chin juts out or your eyes are no longer horizontal.
- You can't flatten your fingers against the wall, or your wrists are still a way off (>1cm from the wall).
- You cannot flatten your spine anywhere near the wall.
- A '1' indicates a dysfunction of upright posture. If you have neck or shoulder pain, this is very likely a contributing factor. If you don't currently have pain, you are likely at higher risk of neck, shoulder or low back injury.

0 – Pain
- You experience pain during any phase of the test.
- This test should not cause pain; if it does, it is not normal.
You should have it investigated by a suitable healthcare practitioner.

Can I improve my posture?

In almost all cases the answer is yes. The extent to which it can be improved will be different for everyone. The best method for improvement will also be individual-specific, but there are common considerations that will be important for most people:

1: Manage Sitting Time

In many cases time spent sitting is the primary factor in postural problems. Even if you exercise for an hour every day, if you're sitting for long periods, especially in a poor position, this is a battle you won't win. For those that work at a desk there may be no getting around having to sit for some period of time, but there are variations and alternatives that can help:
- Change your set-up: If you are using a chair, change its set-up throughout the day (height/angle of the seat pan, angle of the back rest, etc.). This will help you avoid continuously stressing the same areas of the body.
- Sit-stand desks: If you work at a desk I would highly recommend an adjustable sit-stand desk. These allow you to move from a sitting to a standing work position. More isn't always better, however, and standing for too long, particularly if you are not used to it, can also be harmful. Alternate between sitting and standing, and change standing postures frequently. Anti-fatigue mats can also be useful when standing.
- Treadmill desks: These allow you to walk whilst working.
- Swiss ball/balance disc: These will help you use different muscles as you sit.
- Squatting: Long before chairs and seats we used to rest in a squat position (in fact many cultures still do). This is a fantastic way to maintain good mobility and strength throughout the whole body.

If you do have to sit for work, make sure your desk set-up is as back-friendly as possible, with a proper chair and desk arrangement.

2: Move More

Walking is the perfect antidote to sitting. Aim for at least 1 hour of walking per day (but this doesn't have to be in one go). When walking remember to:
- Walk upright: Think about someone pulling you tall from the crown of your head.
- Relax your shoulders.
- Swing your arms: This not only gets your spine moving, but research also shows that swinging the arms from the shoulders (not the elbows) reduces spine loading by up to 10%.
- Walk briskly, without over-striding. Small, slow steps result in more spinal load, increasing symptoms in many low back cases.

3: Targeted Exercises

In most cases targeted exercise is needed to either strengthen weakened postural muscles or lengthen short/tight muscles. Visit our resources section for more information on targeted postural exercises or download our free ebook, Three Exercises to Fix the Back Your Desk Broke: learn which areas of your body are affected most by a long-term sedentary lifestyle and 3 easy exercises you can start doing today for less pain, less stiffness and improved health.

Treatment for poor posture

In chronic or stubborn cases, in addition to the strategies outlined above, hands-on treatment may be required. There are a number of effective treatment options available, including:

Manipulation and mobilisation

Changes in posture are often accompanied by restriction of specific joints. Joint restriction is most commonly seen in the upper neck, mid-back, pelvis and feet. Chiropractic manipulation is a controlled, specific force applied to a restricted spinal or extremity joint.
It is often associated with a 'clicking' or 'popping' noise, similar to what you might hear when a wet glass is lifted from a table – this is simply caused by a release of gas from the joint as movement is restored. Manipulation is a very quick and effective approach to restore proper movement to restricted joints, improve posture and reduce pain. Mobilisation also works to free restricted joints but involves slower movements, so it is not usually associated with 'clicking' or 'popping'.

Soft tissue work

Changes in posture also affect the muscles and other soft tissues (ligaments, tendons, etc.), with some muscles prone to tightness and others prone to weakness. A variety of soft tissue techniques can be employed to address short or tight muscles to improve mobility and posture and to reduce pain. Common soft tissue techniques include ischaemic compression, cross friction, active release, Graston, and pin and stretch.

Dry needling is a therapeutic technique using an acupuncture needle to penetrate the skin and stimulate the underlying tissue. It is very effective at treating deep trigger points (knots) in the muscles which may be limiting movement or causing pain.

- Meholjic, A. (2010). Can a motor development of risky infants be predicted by testing postural reflexes according to the Vojta method? Materia Socio Medica, 22(3), 127-131.
- Michaud, T. C. (2011). Human locomotion: the conservative management of gait-related disorders. Newton Biomechanics.
- McGill, S. (2007). Low back disorders: evidence-based prevention and rehabilitation. Human Kinetics.

Figures reproduced from:
- Liebenson, C., Journal of Bodywork and Movement Therapies.
- Kolar, P. (2014). Clinical Rehabilitation. Alena Kobesová.
- McGill, S. (2007). Low back disorders: evidence-based prevention and rehabilitation. Human Kinetics.

This page was written by Steffen Toates. Steffen is a chiropractor at Dynamic Health Chiropractic in Jersey, Channel Islands.
In this article, I will discuss the connection between herpes and STDs.

As recently as the 1960s, the term venereal disease referred to five well-known diseases. Syphilis and gonorrhea were the most prevalent. These diseases were reasonably well understood from a medical perspective.

The rise of STDs

With the 1970s, however, came a sexual revolution. Venereal disease (VD) became known as a sexually-transmitted disease (STD). STD was thought to be a term less judgmental and more to the point. Fortunately, with the sexual revolution came the knowledge revolution. Grouped under the term STD are those infections that people used to talk about under their breaths. We used to think of VD as a disease that is picked up by virginal youngsters using public toilet seats. Today, STDs are in the public eye to a much greater extent than in the past. People now understand that these diseases are caused by sexual contact and not contaminated toilet seats.

The U.S. Public Health Service estimates that there are two million new cases of gonorrhea; 500,000 new cases of genital warts; 100,000 new cases of syphilis; and 400,000 to 500,000 new cases of genital herpes each year. Media coverage of genital herpes may be responsible for generating a positive effect on the STD statistics. At the least, the widespread fear of genital herpes may be a slowing force in the growth of the numbers.

The difficulty in controlling STDs stems from the age-old embarrassment people feel when they contract such a disease, which keeps many from telling their sexual partners. Since most of those partners also require treatment, matters only get worse. Society as a whole pays the price for a general lack of communication and cooperation.

Until genital herpes rose to its existing heights of infamy, syphilis and gonorrhea were the two best-known STDs. We will take a brief look at several of these other sexually transmitted diseases. This will help you better understand our target disease.

Syphilis

Although it is no longer the wholesale killer it once was, syphilis can still be very dangerous. Once detected, however, it is an easily treated disease. Like herpes, it can inhabit the body for years without showing symptoms. When the symptoms of advanced syphilis finally do appear, they can be quite serious.

Syphilis is caused by the bacterium Treponema pallidum. It can affect any tissue or vascular organ in the body and can be passed from mother to fetus. In acquired syphilis, the bacteria enter the body through the mucous membranes or abrasions of the skin. Also like the herpes virus, it does not grow on artificial media and cannot survive for long outside the human body. Infection is usually transmitted by sexual contact, including orogenital and anorectal sex.

Syphilis has four distinct stages: primary, secondary, latent and late or tertiary. During the primary stage, a lesion or chancre generally appears within four weeks of infection. This lesion will heal in four to eight weeks if untreated. At the site of inoculation, it develops as a red, pimple-like sore that soon becomes a painless ulcer. The base of the chancre is hard. The sore does not bleed on abrasion but gives off a clear serum containing the bacterium. It is usually single but may be multiple. The lymph nodes of the area become enlarged but normally remain painless. Primary chancres may occur on the penis, anus, and rectum in men and the cervix, vulva, and perineum in women. Chancres may also be found in the mouth, on the tonsils, or on the fingers.
In the secondary stage of syphilis, rashes usually appear within six to 12 weeks after inoculation. They are most noticeable after three to four months. These lesions may disappear in a matter of days or may persist for several months. Mild, flu-like symptoms may accompany this stage. Syphilitic skin rashes may imitate a variety of dermatologic conditions.

The third stage of syphilis, the latent stage, may last for a few years. The patient at this point will appear normal. About one-third of these patients will develop tertiary syphilis. Tertiary syphilis can take many different courses in its destruction of the body. It can affect the eyes, skin, brain, heart, nerves and many other parts of the body. Some of the tertiary symptoms will not appear until 25 years after the initial infection.

There is a host of diagnostic tests for syphilis. Different tests are used for different stages. Early stages can usually be diagnosed with a microscope study and blood tests. Treatment is easy if the disease is caught early. Penicillin is the drug of choice. Penicillin will not be effective against the herpes virus, so it is important to be diagnosed by your health professional.

Gonorrhea

Gonorrhea, also known as "the clap," "the drip," and "the gleet," is caused by a bacterium called Neisseria gonorrhoeae. It is spread by sexual contact. Gonorrhea is an infectious disease of the lining of the urethra, cervix, vagina, and rectum, and may also involve other areas of the body. Women are frequently symptomless carriers of the organisms and are often only identified through sexual contact tracing. Another symptomless carrier is the homosexual male when the bacteria are found in the mouth or rectum. There has been an increase of involvement of the urethras of both heterosexual and homosexual males in recent years.

In men, there is a two-day to two-week incubation period. The onset usually consists of tingling or itching in the urethra. This is followed by a discharge which may be yellow-green in color. As the disease spreads up the urethra, urination may become painful. In women, symptoms usually begin within one to three weeks of infection. Symptoms are generally mild but may occasionally be severe. Rectal gonorrhea is common in either sex. It is usually symptomless, but there may be some discomfort in the anal area. There may be a rectal discharge as well.

No blood test is available for gonorrhea. An accurate diagnosis consists of obtaining material (via a cotton-tipped swab) from appropriate locations. This allows rapid identification in most men. Women are not so fortunate: this inspection under the microscope (called a gram-stain smear) is only about 50% to 60% reliable in women. A culture is performed on patients suspected of having gonorrhea who show a negative gram-stain smear. This is done because of the unreliability of the gram-stain smear. As with herpes, cultures must be taken when symptoms are present.

Because these diagnostic techniques were sometimes inconvenient, past practice often involved treating suspicious symptoms with penicillin. This approach has been modified by some. The emergence of penicillin-resistant strains of gonorrhea has resulted in the use of tetracycline drugs as initial therapy, because they are effective against the disease.

Post-Gonococcal Non-Specific Urethritis

A common complication of gonorrhea in men is post-gonococcal non-specific urethritis. In simple terms, this means that the discharge returns after a week or so.
This may be due to the presence of other organisms which were simultaneously acquired with the gonorrhea. These organisms may have longer incubation periods and may not respond to penicillin. Tetracycline becomes the drug of choice in these cases as well. Patients should, as with other STDs, abstain from sexual activity until a cure is confirmed. Men are advised not to squeeze the penis in search of urethral discharges.

Salpingitis and PID

Most of the economic, physical, and emotional burden of gonorrhea is borne by women and their offspring. About 10% to 20% of women with gonococcal infection will suffer from salpingitis and PID (pelvic inflammatory disease). Salpingitis occurs predominantly in women under age 25 who are sexually active. It is the result of infection transmitted most commonly by intercourse, less often by childbirth or abortion. Patients with intrauterine devices are thought to be more vulnerable. The principal causative agent is Neisseria gonorrhoeae, the same bacterium responsible for gonorrhea.

Symptoms include severe lower abdominal pain, vomiting, and high fever. Discharge from the cervix is common. Diagnosis consists of gram stain and culture. A patient may have had a recent intrauterine device insertion, childbirth or abortion. Because any of these raises a red flag for the diagnosing physician, a history is important. Treatment should not be delayed. If it goes unchecked, the disease may cause infertility. Treatment consists of penicillin or tetracycline. The physician must have a truthful history, as with other diseases. Examination and treatment of sexual contacts should be done. Some contacts may be found to have non-symptomatic nongonococcal urethritis. Failure to treat male sexual partners is a major cause of recurrent gonococcal salpingitis.

Nongonococcal Urethritis

Nongonococcal urethritis (NGU) is a common sexually transmitted disease in the U.S. today. Some feel it is even more prevalent than gonorrhea. It is caused most frequently by the bacterium Chlamydia trachomatis. The name NGU derives from the common situation in which an obvious infection of the urethra is not caused by gonorrhea. Distinguishing between gonorrhea and something other than gonorrhea is important in determining treatment. Gonorrhea is usually treated with penicillin, but NGU is treated with an antibiotic other than penicillin, normally tetracycline. NGU is more or less discovered as a result of "not finding" gonorrhea. Although Chlamydia trachomatis is known to be responsible for about half of the NGU cases, it is easier to begin treatment with an antibiotic which is known to work in most cases of NGU. This will save the patient time and money. NGU is an entity becoming recognized in cases where gonorrhea is not identified.

Trichomoniasis

Trichomoniasis is caused by a protozoan, Trichomonas vaginalis. It is almost always sexually acquired, and the infection may be asymptomatic and thus go unrecognized. More commonly, it produces a vaginitis (inflammation of the vagina) characterized by frothy vaginal discharge with an unpleasant odor. Males are usually asymptomatic. Sometimes a routine Pap test will indicate the presence of Trichomonas. Microscopic examination can also be used to view these protozoa. It is treated with the drug metronidazole, given orally.

Pubic Lice

Pubic lice are usually transmitted sexually but may also be passed along on towels, bedding, and clothing. This pest is capable of making its way through a college dorm because it is relatively easy to transmit.
The pubic louse is a tiny parasite that thrives in pubic hair, infesting the hair near the anus and genitals. The lice are tiny and not easily seen. They lay eggs which attach to the base of the hair. A sign of infestation is a scattering of minute dark brown specks on undergarments. They can be rapidly cured with a shampoo, lotion or cream called Kwell. Prolonged use of these products should be avoided as they can cause genital dermatitis. Sources of infestation such as bedding should be decontaminated by washing in very hot water or by dry cleaning. Recurrence is common.

Scabies

Scabies is readily transmitted, often through an entire household, by skin-to-skin contact with an infected individual. It is sometimes called the "itch mite" and can also be acquired sexually. Similar to lice, it is spread by clothing or bedding. Scabies will exist for only two to three days when away from human skin. Hence, clothes worn prior to three days before treatment would not have living mites and would be safe to wear.

Scabies is caused by a tiny mite which lives on and around the genitals. The female mite burrows beneath the skin to lay her eggs. The symptoms, itchy lumps and tracks on the skin, become noticeable after four to six weeks' incubation. They can occur between the fingers, on the buttocks, armpits, and wrists as well as the genitals. Treatment is the same as for pubic lice.

Genital Warts

Genital warts are caused by a virus and are usually transmitted sexually. They can also spread as the result of poor hygiene. The incubation period is from one to six months. They occur most commonly on warm, moist surfaces of the genitals. Genital warts normally appear as soft, moist, small, pink or red swellings that grow rapidly. Several of them may be found in the same area, often producing a cauliflower appearance.

Diagnosis is usually a matter of identifying warts by appearance. Such diagnosis should, however, be carried out by a physician, because human papillomavirus (HPV) has in some instances been found in condylomata acuminata. Researchers suspect a connection between HPV and genital carcinogenesis, so HPV should be clinically ruled out.

Genital warts are treated by careful weekly applications of an anti-wart agent of 20-25% podophyllin resin in ethanol or benzoin. Particular attention must be paid to following the exact instructions of the prescribing physician. Genital warts may also be removed by electrosurgery or by a freezing technique utilizing applications of liquid nitrogen until the lesions are gone.

If you suspect you have an STD, it is most important that you see your physician. It can be dangerous, even life-threatening, to wait for the problem to go away on its own. That is just what some of these diseases will lull you into doing. The symptoms disappear, but the infection is very much alive within your body.
Francisco de Quevedo 1580-1645

Represéntase la brevedad de lo que se vive y cuán nada parece lo que se vivió.

1. "¡Ah de la vida!" … ¿Nadie me responde?
¡Aquí de los antaños que he vivido!
La Fortuna mis tiempos ha mordido;
las Horas mi locura las esconde.

5. ¡Que sin poder saber cómo ni adónde,
La salud y la edad se hayan huido!
Falta la vida, asiste lo vivido,
Y no hay calamidad que no me ronde.

9. Ayer se fue; mañana no ha llegado;
Hoy se está yendo sin parar un punto;
Soy un fue, y un será, y un es cansado.

12. En el hoy y mañana y ayer, junto
Pañales y mortaja, y he quedado
Presentes sucesiones de difunto.

Translation:

Note: Translations are notoriously problematic, but especially so in poems such as ¡Ah de la vida!…, whose impact depends on compressed expressions intended both to demonstrate the poet's ingenuity and to underline stylistically the theme. The sonnet is preceded by an epigraph which summarises the theme: Here is depicted the brevity of life in progress and how our past life seems to be nothing.

Hello there, life! Is there no-one answering me?/ Come back, those past years that I have lived!/ Fortune has eaten away my time;/ My madness is hiding the hours.

Without knowing how or where,/ My health and lifetime have fled!/ Life is missing; what I have lived is present,/ And there is no calamity that doesn't haunt me.

Yesterday has gone; tomorrow has not arrived;/ Today is going away without stopping for a moment;/ I am a "was", and a "will be" and a tired "is."

In my today and tomorrow and yesterday, I join together/ Diapers and shroud, and I am left/ An endless sequence of a dead being.

The poem is a sonnet, with each of its 14 lines a hendecasyllable (i.e. 11 syllables per line). It is made up of two quatrains (each quatrain contains four lines) and two tercets (each made up of three lines). Sometimes we talk of the two quatrains together as an octave and the two tercets together as a sestet. If you have read Góngora's sonnet Mientras por competir…, you will recognise that this poem by Quevedo has exactly the same rhyme scheme: ABBA, ABBA, CDC, DCD.

Like Garcilaso de la Vega's En tanto que de rosa… and Góngora's Mientras por competir…, Quevedo's sonnet deals with time. But that is all they have in common. Both Garcilaso and Góngora take female beauty as their starting point; Quevedo removes all that is human to focus on life itself (or rather, its absence). The two earlier poems take their inspiration from the classical themes (topoi) of Horace's Carpe diem ("Enjoy the day") and Ausonius's Collige, virgo, rosas ("Gather, maiden, the roses"). Quevedo's source is not classical; he takes as his starting point a conversational colloquialism, ¡Ah de la vida…!, based on a popular expression, ¡Ah de la casa…! ("Anyone home?"), and follows it with another, Aquí de…

Central to Garcilaso and Góngora's sonnets is the passage of time, which ruins the beauty of the ladies they address. Not so in ¡Ah de la vida…! Quevedo focuses exclusively on the absence of life (Falta la vida, l. 7). There is no progress from youth to old age, from beauty to death, from colour to nothing. For Quevedo life is paradoxically a "living death." ¡Ah de la vida…! is more pessimistic and harder hitting even than Mientras por competir…, which ends with the magnificent climax taking us to the lady's eventual fate: she will end up as nada ("nothing"). Even so, there was a time of beauty and colour that preceded old age, death and nothingness. Quevedo doesn't give us even that consolation.
His sonnet is unrelentingly bleak, predictably so given that life is absent. The opening address or apostrophe, "Hello there, life!", immediately and dramatically launches us into the poem. It demands our attention with the poetic "I" addressing life itself but getting no response. The address is a cry for communication. The "I" is knocking on the door of life, and the following rhetorical question, "Isn't there anyone answering?", underlines the fact that there is no reply. The "I" realises that there is a void where his life should be and wonders where his life has gone.

Alone, the "I" appeals for the return of his past years (l. 2), but as the exclamation mark makes clear, it is a forlorn appeal. Why? Because Fate and his obsession have eaten away and hidden all vestiges of his past (ll. 3-4), leaving the "I" with no idea of how or where his years have fled (ll. 5-6). As a result, life is absent and all that remains is what he has "lived" (asiste lo vivido, l. 7), and what he has "lived" is a succession of deaths (ll. 13-14; which explains why life is not answering his call). This is a complicated idea (conceit), which is what makes the poem difficult to understand.

The sestet is grim and stripped of all human warmth. Time is so relentless that his very being is no more than an expression of time, a "was," a "will be" and a tired "is" (l. 11). His life, compressed to a mere link between birth (pañales) and death (mortaja), is an endless series of deaths (ll. 13-14; i.e. he has been, paradoxically, a "dead man living" throughout his life, from birth to old age). This is the climax leading to the last word, appropriately in this context: difunto ("dead man").

Quevedo's success lies in his use of language. The sonnet is a serious meditation on life (and its absence) and time, which normally would be accompanied by elevated language. This sonnet, however, opens with two unconventional expressions based on colloquialisms: ¡Ah de la vida (l. 1), from ¡Ah de la casa ("Anyone in?"), and Aquí de los antaños (l. 2), from Aquí de los nuestros ("Come and help"). They strike a popular tone, typical of sermons of the time, where the message is relevant to all listeners. The vocabulary is straightforward, with no intrusive Latinisms or neologisms or complex puns, all of which were normally very much part of Quevedo's poetic style.

How exactly does Quevedo convey this idea of life being absent? He does so by creating a kind of poetic "skeleton," i.e. full of verbs, nouns and verbal nouns (fue, será, es), all related to time, and with a notable lack of adjectives and imagery. In fact, there are only two adjectives, cansado (l. 11) and presentes (l. 14). The former suggests exhaustion from knocking at the door, and is linked to the lost health and lifetime of line 6. The latter alludes to the constant presence of death. There is a striking paired metaphor, pañales y mortaja ("diapers and shroud"), alluding to birth and death, with textually no "life" in between. The compressed leap from birth to death in these two juxtaposed words captures superbly the idea that life is absent. Adjectives and imagery create pictures which flesh out a poetic "skeleton." Here their virtual absence underlines the fact that there is no colour, no warmth, and no picture. There is nothing tangible to grasp, the vocabulary being as abstract as time itself.
In the first tercet, time is compressed to "yesterday," "tomorrow," "today," whose juxtaposition brilliantly continues the concept of life having been squeezed out; without life, the "I" is no more than time metaphorically made flesh, a "was," "will be" and "is" (l. 11). "I" is simultaneously past, future and present. In the second tercet, the four conjunctions y underline the leap conveyed by the juxtaposed hoy, mañana, ayer (l. 12) and move us (thanks to enjambement, ll. 13-14) immediately to the "I's" bleak conclusion and, appropriately, the last word of the sonnet: difunto, i.e. he is a dead man living.

The language here is straightforward but concentrated or compressed. So we have to work to make sense of the poem, which was the aim of conceptismo, a major literary development of the Baroque. Conceptismo demonstrated the writer's verbal ingenuity or "wit" (agudeza) at playing with ideas, which in turn required the use of the reader's intelligence to work between the lines or behind the words. Here our common image of life as a passage/journey from birth to death is turned upside down. This and the artistically arranged temporal contrasts and compression ("yesterday," "tomorrow," "today;" "was," "will be," "is;" "today," "tomorrow," "yesterday," ll. 8-11) are intended to produce surprise, astonishment or – to use the term much used in the Baroque – admiratio. The poet who achieved such an effect was greatly esteemed.

¡Ah de la vida…! belongs to those moral poems by Quevedo devoted to the brevity of human life. Its despairing conclusion – that the poetic "I" hasn't lived but experienced a succession of deaths – places it under the umbrella of desengaño ("disillusion"), a major theme of the Baroque. What appears to be life is really no more than illusion, a cover for death. The opening line summarises in many ways the Baroque culture of uncertainty, the questioning of assumptions and beliefs, and the examination of appearance and reality. We find it in e.g. Góngora's Mientras por competir…; in Don Quixote's difficulty in determining what he sees and what others see; in the questioning of honour in El burlador de Sevilla; in the painting of Las Meninas by Velázquez; and in many other works, fictional, religious and political.
Why gaming at the library?

Public libraries have a mission to provide a variety of materials in a variety of formats. Board games, card games, and videogames are stories & information, presented in new formats. Libraries are about stories & information, not books. Or, as Eli Neiburger says, we're in the content business.

Games fit library missions:
- Public libraries have a mission to provide cultural, recreational, and entertaining materials, as well as informational and educational materials. Games provide stories and information as they entertain and educate.
- School & academic libraries have a mission of curriculum support. Games provide stories and information, presented in a new format, that encourage critical thinking and problem solving, accomplish objectives of curriculum frameworks and meet AASL standards.
- Special libraries have a mission to provide resources and support their industry or profession. Games provide stories and information, presented in a new format, that meet business goals and objectives and provide continuing education for employees.
- Games have literary value: you have to know how to read to play.
- Social games encourage language skills through peer learning. In game chat or forums, if "rogue" is misspelled "rouge," the misspeller will be corrected. "Wield" is another word easy to misspell but easy to learn in a game context.
- Games encourage literacy activities like reading, writing & creating content about & around the game.
- Games can enrich vocabulary and expose players to language roots, e.g. fighting the flaming monster Incendius can plant the key to unlock the more ordinary word "incendiary" upon later exposure. Crone, spawn, inquisitor, hydromancer, lore keeper, magister, elemental, tainted, and evocation are other examples of vocabulary builders that can readily be found in games.
- Games meet developmental needs of teens established by the National Middle School Association: they encourage social interaction among peers and non-peers, enforce rules and boundaries, encourage creative expression, reward competence and achievement, and provide opportunity for self-definition.
- Some games have a cathartic effect in releasing emotions. In Grand Theft Childhood, youth reveal that violent videogames in particular help manage anger & frustration.
- Some videogames are healthy! Dance Dance Revolution gets heart rates up to 140 beats per minute, according to "Project GAME (Gaming Activities for More Exercise)," published in Research Quarterly for Exercise and Sport in 2005, and more calories are burned playing Tekken than walking around the block. A 2004 study in West Virginia, "The Effects of a Consumer-Oriented Multimedia Game on the Reading Disorders of Children with ADHD," discovered a correlation between playing DDR and improving reading test scores.

What do gaming events and programs bring to the library?

Gaming programs are primarily social events. It's more about relationship building than gameplay.
- New users (who may not visit the library) attend and gain insight into how the library may be relevant to them.
- Regular users may see the library in a new light.
- All users may be prompted to use other non-gaming library services.
- Ideally, all users have a positive library experience.
- Gaming programs epitomize the library as a "third place," creating a community place between home and work/school to socialize and play.
- Some videogame events are also being used to encourage print literacy.
In Carver's Bay (SC), youth who check out books and write book reviews earn extra gaming time.
- Some videogame events may be educational in nature. Some libraries are teaching game design with local experts or online through Youth Digital Arts Cyber School.

Do public libraries circulate or program with videogames rated "M" for Mature?

Yes. M for Mature means the content is designed for people over the age of 17; it is equivalent to an R rating for a movie. Only 15% of games sold in the US last year were rated M. Some libraries carry M-rated games in their collections for adults, or host programs or services using M-rated games! It depends on the community.
- In NY, a library has started an M-rated collection for adults.
- At the Benicia (CA) Library, teens can play Halo 3 if their parents sign a permission slip. Also, they have hosted two tournaments which included Halo 3 (parents signed permission slips for those under 18).
- Charlotte & Mecklenburg County (NC) hosted a Halo 2 tournament.
- At the City Heights Library in San Diego, Halo 3 is played regularly during its gaming programs. No permission slip is required. However, it is the belief of the librarians there that the M rating of the Halo games is not accurate and that they deserve a T for Teen rating instead.

Is it mostly teenagers that take advantage of these programs?

Yes, which is interesting, because the average age of the gamer is 35 and rising! We are starting to hear about libraries doing intergenerational programs.
- At Cary Memorial Library in Lexington, MA, teens mentor students in grades 4 to 6 during Saturday morning gaming sessions.
- Suffolk (NY) Library has a senior Wii Bowling league, mentored by teens.

Are there people that think that games don't belong in libraries – what are their arguments?

Yes. They may be people who only hear what the mainstream media tells them about videogames, who still believe Dungeons and Dragons can lead to practicing witchcraft, who think games are too recreational for libraries. They may be people who have not played games. They may be people who do not have children, or whose children don't (or didn't) play games. They may feel:
- Games are fluff or junk entertainment: Some are! So are many books. There is a serious games initiative in the gaming industry, and many games have an edutainment flair.
- Games don't encourage original thought: Although a gamer may follow a path laid out by a designer, there are often several ways to get to the endgame. Playing a game requires creativity and imagination.
- Games don't offer learning opportunities: Steven Berlin Johnson says that playing a game is like engaging the scientific method: a constant hypothesize/experiment/evaluate process. You learn something new every time.
- Games are competing with books: It's not books OR games, it's books AND games.
- Games are a replacement for traditional print literacy: Literacy is changing – there is a new literacy now. Today's youth must be fluent in visual literacy, media literacy, social literacy…
- All games are violent like Grand Theft Auto: 85% of games have content that is NOT rated M for Mature. GTA represents a very small portion of available videogames. No one objects to chess, the game that has been played in libraries the longest; chess is a war game that involves "killing" your opponent's army and monarchy.
- Games are addictive: Many games are! Some offer immediate rewards and many require concentrated effort. Many encourage self-improvement.
Games may be especially addictive for some personality types: moderating gameplay time, interspersing gaming with other activities, and playing with other people helps. Parents and adults need to set appropriate time limits for youngsters and encourage a balanced media diet.
- Games are too passive: Compared with TV, movies, or even books? Moreover, games like Dance Dance Revolution or the games of Wii Fit can be quite physically demanding.

Is it enough to just put games on the shelves, or should libraries find a way to engage the gaming community further?

Libraries should begin with services to gamers such as:
- Allowing them to play games on the library computers (perhaps in a "club" environment or program) or card/board games at the library tables.
- Purchasing gaming strategy guides for circulation.
- Offering puzzles or board games at the library.
- Treating questions like "When does Spore come out?" or "How do I beat Final Fantasy XII?" like serious reference questions.
- How is this for a unique programming idea? Bring game designers, developers, artists, game-music composers, and other creative thinkers from the professional game industry to talk about what they do and how they do it. Offer workshops in game design. –LizD

Next, libraries should host gaming programs, to bring in the gamers in the community. Building relationships with the gamers creates a panel of experts to query when you are ready to circulate games, and it creates trust: they will be more likely to take good care of the circulating games and respect the library and its collection, resulting in less theft and damage.

What else can librarians do to create rapport with gamers?

Librarians can learn to think like gamers!
- Be fearless in risk taking, for we learn from our mistakes, and can always hit the "reset" button.
- Embrace change! Look forward to it! Find small ways to create a constantly changing environment in the library (hint: beta programs & services).

Librarians can use games to connect patrons to books:
- Use games to do reader's advisory: ask "What games do you play?" to get a sense of the types of stories, characters, and settings the gamer prefers.
- Create "readalike" displays – if you liked this GAME, you may like these books/movies/CDs/games.

Does this trend of putting games in libraries point to a larger trend?

Yes. Libraries are looking for ways to reach beyond their traditional patron base. Libraries are striving to deliver what patrons want. Libraries continue to struggle for relevancy in a world where people are willing to pay money for commercial commodities that libraries deliver for free (Netflix, for example).

More links and articles with talking points for library gaming:
- Buchanan, Kym and Vanden Elzen, Angela M. "Beyond a Fad: Why Video Games Should Be Part of 21st Century Libraries" (2012). Library Publications and Presentations, Paper 1. http://lux.lawrence.edu/cgi/viewcontent.cgi?article=1000&context=lib_pp
- Trudeau, Michelle. "Video Games Boost Brain Power, Multitasking Skills" (2010). http://www.npr.org/2010/12/20/132077565/video-games-boost-brain-power-multitasking-skills?sc=fb&cc=fp
- Ash, Katie. "Games Evolve As Tools for Teaching Financial Literacy" (2009). http://www.edweek.org/ew/articles/2009/11/18/12financegames.h29.html?tkn=PUVFO0nJqSandhzRG7IURhhm3eePLVOPc67k&print=1
- Timothy, Adam. "Addiction vs. Reflection: Unlocking the Potential of Games" (2013). http://www.edutopia.org/blog/addiction-vs-reflection-gaming-potential-adam-timothy
- Ludgate, Simon.
“The Solution to Stagnant Games? Librarians!” (2012). http://www.gamasutra.com/blogs/SimonLudgate/20120808/175566/The_Solution_to_Stagnant_Games_Librarians.php
Did your ancestors suffer from fatigue? If they did, you wouldn't be reading this blog. In the paleolithic era, fatigue = death.

So, if you're struggling with fatigue, you may want to think about aligning your nutrition with that of your ancestors. In other words, aligning your diet with your genetics. For thousands of years, your ancestors only had access to carbohydrates at certain times during the year. Over that time, these genes perfectly adapted to the environment. The genes that make up you and me have not had the time to adapt to the modern, agricultural era of food production. Today, you have access to all the carbs you want year-round. This increase in carbohydrate type and amount could be the hidden reason you're experiencing fatigue!

What exactly is a carb? And how do they make you tired?

Have you noticed fatigue or brain fog shortly after eating? An intolerance to carbohydrates may be contributing to your low energy levels! Nearly half of the food a typical North American eats comes from carbohydrates. This includes both refined and unrefined carbohydrates.

Unrefined carbohydrates are eaten in their natural form and include:
- Whole grains

Refined carbohydrates have been processed in some way and include:
- All forms of sugar
- Fruit juices
- Flours (whether they contain gluten or not)

While you likely know you should eat more food from the first group, it's safe to say that for most of you in the first world, your carbohydrate intake is primarily of the refined variety. Compare this with your ancestors. If your ancestors were of mixed European descent, it's likely that they got only 30% of their calories by eating unrefined carbohydrates. Remember, you're likely getting 50% of your calories from carbohydrates. That's twenty percent more calories from carbohydrate sources than your ancestors ate. And it's likely that a lot of those carbohydrates are coming from refined sources.

How many carbs are you eating?

For those of you unsure what I mean by the typical North American getting half of their calories from carbohydrates, imagine your daily food intake consists of 1,700 calories total. Of this, 50%, or 850 calories per day, comes from carbohydrate sources. If you're keen on finding out exactly how many carbohydrates you're eating, download an app like Cronometer or MyFitnessPal. Simply punch in the foods you eat for a few days and you'll see precise percentages of proteins, fats, and carbohydrates.

If even the thought of tracking your food in an app makes you fatigued, the eyeball method can help give you a ballpark estimate of your carb intake. At your next meal, take a good look at your plate.
- How much of your plate is filled with protein? (beef, pork, fish, chicken, etc.)
- How much of your plate is filled with fat? (nuts, seeds, butter, yogurt, oils)
- How much of your plate is filled with carbs? (potatoes, rice, grains, breads, fruits, vegetables)

Estimate the percentages of proteins to carbohydrates to fats based on how much of your plate they cover. Do this for each meal of the day. This exercise should help you ballpark what percentage of your meals is made up of carbohydrates. (A rough version of the same arithmetic is sketched in the code example below.)

How many carbs should you eat to overcome fatigue?

Well, it depends… Carbohydrates made up less than 5% of calories in traditional Inuit cultures that depended on hunting for survival. However, the Kitavan tribe in the South Pacific had near year-round access to fresh fruits and vegetables, so nearly 70% of their calories came from carbohydrates.
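To make the percentages above concrete, here is a minimal sketch in Python of the arithmetic a tracking app performs. It assumes the standard conversion factors of roughly 4 calories per gram of protein, 4 per gram of carbohydrate, and 9 per gram of fat; the gram counts in the example day are made up purely for illustration.

```python
# Rough macro-percentage calculator, assuming the standard factors:
# 4 kcal/g protein, 4 kcal/g carbohydrate, 9 kcal/g fat.

KCAL_PER_GRAM = {"protein": 4, "carbs": 4, "fat": 9}

def macro_percentages(grams: dict) -> dict:
    """Return each macronutrient's share of total calories, in percent."""
    kcal = {m: g * KCAL_PER_GRAM[m] for m, g in grams.items()}
    total = sum(kcal.values())
    return {m: round(100 * k / total, 1) for m, k in kcal.items()}

# A hypothetical day of eating: 90 g protein, 215 g carbs, 55 g fat.
day = {"protein": 90, "carbs": 215, "fat": 55}
print(macro_percentages(day))
# {'protein': 21.0, 'carbs': 50.1, 'fat': 28.9}
# Total energy: 90*4 + 215*4 + 55*9 = 1715 kcal, with carbs supplying
# roughly half - in line with the "850 of 1,700 calories" example above.
```

Swapping in your own gram counts from a few days of tracking gives the same ballpark figure the eyeball method aims for, just with less guesswork.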
As I mentioned earlier, if you're of mixed European ancestry, your ancestors likely got about 30% of their calories from carbohydrate sources. But keep in mind that their carbohydrate intake would vary greatly depending on the season (remember, this was well before agriculture). During the late summer and early autumn, your ancestors would likely have consumed more than 30% of their daily calories from carbohydrates, because fruits and vegetables would be plentiful. But in the late fall, winter, and early spring, there would be minimal carbohydrates available. In these times, your ancestors would likely have subsisted on high-fat, low-carbohydrate or ketogenic-style diets. During these months, your ancestors' carbohydrate intake would be well below 30% – perhaps even as low as 5% – similar to traditional Inuit cultures.

For optimum health, do as your ancestors did. Align your diet to mimic theirs. This is how you beat fatigue.

Diets and fatigue

You are a unique snowflake. Your friend who completely stopped eating carbs while following the ketogenic diet may have amazing amounts of energy – but the same diet may leave you intensely fatigued and nauseous. And the high-carb diet that gives some people the energy to CrossFit five times a week may make others so tired they can barely function.

The point I'm trying to make is that some carbs will make you tired. Others will give you energy. But exactly which foods those are is unique to you. This is the problem with diets. Diets cause fatigue because they don't consider you as a unique individual. The only thing diets consider is calories in vs. calories out.

Diets offer templates – general guidelines to follow. However, you need to be cautious to avoid getting trapped in the rules of a particular diet plan. A proper diet is about far more than calories. It's about individualized nutrition. It's about aligning your food with your genetics. Outside of a high-carb or low-carb diet is a nutrition plan that is just right for you. This plan will allow you to comfortably achieve your wellness goals and reach your optimum energy levels. However, to develop this nutrition plan and beat fatigue, you'll need to do some searching to discover your ideal carbohydrate intake. I show you how to do exactly this in my eCourse, Stop Feeding Fatigue. In the course, you'll craft a personalized nutrition plan that is perfectly aligned with your genetics. It's designed to help you identify exactly which foods give you energy and which foods take it away – all within sixty days!

The fatigue-carb connection

In an incredible study, researchers continuously monitored the blood sugar levels of more than 800 participants. Between the participants, more than 46,000 meals were tested to see the effect on each individual's blood sugar. Through this study, researchers found that blood sugar readings between individuals varied widely – even if they ate the exact same meal! (3)

In the book Wired To Eat, Robb Wolf cites a study where one participant had a dramatic increase in blood sugar after eating a banana. Yet when this same participant ate a cookie, his blood sugar readings remained stable. The blood sugar readings in another participant were the exact opposite – low blood sugar readings after eating a banana and high blood sugar readings after eating a cookie. Common knowledge would lead you to believe that bananas are good and cookies are bad.
But in this example, bananas would actually contribute more to this individual's weight gain and fatigue levels than cookies. In theory, this individual could be quite healthy if he avoided bananas and ate cookies (in moderation, of course). I know this is an extreme example, but it illustrates my point: a personalized approach to nutrition needs to be put in place.

How exactly do carbs cause fatigue?

The fatigue-carb connection comes about through the relationship between insulin and cortisol. When you eat a carbohydrate source that your body doesn't tolerate, you'll experience a rapid rise in blood sugar levels shortly after eating. In response to high blood sugar levels, your body will release a hormone called insulin. Insulin helps to lower blood sugar levels. Unfortunately, your body will often release too much insulin. When this happens, you'll experience hunger, shakiness, weakness, fatigue, sweating, and anxiety. This phenomenon is called rebound hypoglycemia: a low blood sugar reading that occurs shortly after eating.

Low blood sugar (hypoglycemia) is a tremendous stress to your body. In fact, your body cannot tell the difference between types of stress. As far as your body is concerned, stress = stress. It responds in a similar manner whether you run into a bear or your blood sugar drops to very low levels. In response to this stress, your body will release a different hormone called cortisol. You probably know cortisol as the stress hormone. It's released in times of high stress – like when you run into a bear on a hiking trail. To combat stress, cortisol pulls sugar out of your cells and back into your blood. This raises your blood sugar and (hopefully) alleviates those uncomfortable low blood sugar symptoms. If you see a bear, moving sugar into your blood primes your body for the fight-or-flight response. Your blood sugar goes down, but cortisol brings it back up.

What's the big deal?

If this was a one-time deal, it wouldn't be an issue. That's a small stress that your body can handle. The real problem occurs when this happens daily, maybe even three (or more) times a day. Each time you eat a carbohydrate source that your body doesn't tolerate, it has to release cortisol to help re-balance your blood sugar. If you're eating carbs that you don't tolerate on the regular, your body is forced to release cortisol on a daily basis. Imagine running into a bear every day. How stressful would that be? It is this chronic release of cortisol that eventually causes fatigue.

Are you familiar with adrenal fatigue? When your body is consistently releasing cortisol, it is preparing for a stressful event. Over the long term, daily preparation for small stresses (like the blood sugar irregularities caused by eating carbs your body cannot tolerate) results in your brain decreasing cortisol production. What's the main symptom of lowered cortisol levels? Fatigue.

This is how carbohydrates make you tired. If you change your diet to consume only well-tolerated carb sources, you're going to overcome fatigue. Let me show you how to identify exactly which carbohydrates you tolerate and which ones are silently making you tired – join my Stop Feeding Fatigue eCourse today!
William Penn's persecution and fight for the rights of Quakers, and how they found peace in the city of brotherly love: Philadelphia, Pennsylvania.

When the two prisoners walked into the courtroom on September 3, 1670, the bailiff snatched the hats from their heads. The Lord Mayor of London, Sir Samuel Starling, who presided over the trial, ordered the court's officer to replace the hats, then fined the two gentlemen for failing to uncover their heads in his court.

William Penn and William Mead stood charged with violation of the Conventicle Act, which prohibited all worship gatherings except those of the Church of England. They were in fact guilty. The pair had intentionally held a Quaker worship outside the locked doors of the Gracechurch Street Meeting House in London. The Quakers intended to prove that their meetings were politically harmless and thus gain immunity from the law. When the constables came to arrest Penn and Mead, the two went peacefully. However, as they were being led away, an altercation broke out among the onlookers, and the charge was elevated to conspiracy to incite a riot. This was a serious offense that demanded a trial by jury.

As the trial progressed, Sir Samuel allowed only prosecution witnesses to testify and refused the prisoners any opportunity to cross-examine them. Penn argued logically and passionately against the legality of the Conventicle Act, and this, combined with the blatant prejudice of the judge, prompted the jury to return a verdict of not guilty for Mead. They found Penn guilty only of preaching in the street. Sir Samuel rejected their verdict and told the jury to go back and deliberate again. They did, and returned with the same decision. Again the judge told them to reconsider. The jury returned and announced that they had indeed changed their verdict -- this time they declared both men innocent of all charges.

The enraged judge berated the jury and shouted, 'Gentlemen, you shall not be dismissed till we have a verdict the court will accept and you shall be locked up without meat, drink, fire and tobacco....' 'You are Englishmen; mind your privilege, give not away your right,' exhorted Penn. 'Nor will we ever do it,' returned the jury foreman Edward Bushell. The jury stuck to their decision despite being confined overnight without food, drink or heat. The irate Sir Samuel fined each juror for contempt of court. The jury refused to pay their fines, and Sir Samuel ordered them all into Newgate prison. Penn and Mead also refused to pay their fines, and they, too, went to jail. However, Penn's father, Admiral William Penn, lay dying and posted the money for both men. The admiral died less than a week later.

This was not Penn's first imprisonment for his Quaker beliefs, which were still in their infancy in 1670. The Society of Friends had begun little more than two decades earlier with George Fox, who had had no intention of founding a new sect. As a youth, Fox saw clergy and many of his contemporaries give way to alcohol and tobacco, showing little sign of self-control or integrity. For his part, Fox merely wished to experience God in a true, untainted way, so he sought the advice of learned clergy. But he came away unsatisfied. After much soul-searching, Fox experienced an epiphany, which he described in his journal: But as I had forsaken the priests, so I left the separate preachers also, and those esteemed the most experienced people; for I saw there was none among them all that could speak to my condition.
And when all my hopes in them and in all men were gone, so that I had nothing outwardly to help me, nor could tell what to do, then, oh, then, I heard a voice which said, 'There is one, even Christ Jesus, that can speak to thy condition'; and when I heard it my heart did leap for joy. Then the Lord let me see why there was none upon the earth that could speak to my condition, namely, that I might give Him all the glory; for all are concluded under sin, and shut up in unbelief as I had been, that Jesus Christ might have the preeminence who enlightens, and gives grace, and faith, and power. Thus when God doth work, who shall [hinder] it? And this I knew experimentally.

What developed from this was a belief that God inhabited all people and communicated with the individual who acknowledged his presence and submitted to his will. Given that, everyone was equal in God's eyes, so members of the Society of Friends (as Fox's followers came to be called because they greeted everyone as 'friend') refused to recognize social superiors. They did not bow or curtsey; they did not remove their hats before their betters -- even the king; nor did they use formal language. Instead, they took to exclusively using the informal 'thee' and 'thou.'

George Fox began preaching his gentle philosophy in 1648. Two years later, Fox and his followers acquired a new name after he was arrested for blasphemy and stood before a judge, whom Fox exhorted to 'tremble at the word of the Lord.' The judge derisively dubbed the group 'Quakers,' and the name stuck. Nevertheless, Fox continued preaching and his simple eloquence won many converts. He spoke of living without extravagance and of nonviolence, and he encouraged his followers not to bear arms. Fox also spoke against the incongruity of taking an oath, which acknowledged the presumption that honesty necessitated a prescribed guarantee. Quakers advocated absolute truth in everyday life.

All of this was considered radical thought, especially the manner of Quaker services, or 'meetings.' These were silent affairs during which any individual, even a woman, who was moved by the Spirit could speak. Moreover, the sect saw no need for ordained clergy, church ceremonies, sacraments or a formal church building. Yet perhaps their most damning aspect in the eyes of other Christians was the Society of Friends' refusal to pay the mandatory tithe to support the clergy of the Anglican Church. Persecution and imprisonment followed.

Parliament wanted to be rid of Catholics and all the nonconformist groups that had sprung up in the religious turmoil of 17th-century Europe. The new sects challenged authority and were filling the courts and prisons, making nuisances of themselves. Parliament therefore passed legislation, collectively known as the Clarendon Code, which included the Conventicle Act and the Five Mile Act, the latter prohibiting any nonconformist preacher from coming within five miles of any town. Quakers constantly ran afoul of those strictures. Then came the Test Act of 1673, which required public officials to affiliate with the Church of England and to swear allegiance to the king. Friends were frequently arrested on trivial charges and made to demonstrate their loyalty by swearing allegiance to the crown. Their refusal was interpreted as disloyalty and an indication of papist leanings, and the jails filled with Fox's followers. Approximately 1,000 Friends had been imprisoned by 1657.
Fox, too, saw the inside of a jail many times during his life. Other Quakers withstood beatings and torture for their beliefs, and in 1675 the sect began the Meeting for Sufferings to keep a record of their persecutions.

As the son of an admiral and a friend of the royal family, William Penn suffered far less hardship than his fellow Quakers. Born on October 14, 1644, Penn joined the Society of Friends in 1667, and by September of that year, he was in prison. Young Penn quickly dispatched a letter to a local nobleman and was released. Thereafter he traveled the countryside preaching, writing pamphlets and working to liberate Quakers from prison, as well as spending time in jail himself. During one seven-month stint in the Tower of London, Penn drafted his first version of No Cross, No Crown, one of his most notable works, in which he argues against worldliness and advocates virtuous simplicity.

Penn grew in prominence in the Society and in time even stood as a substitute for George Fox when needed, as in the fall of 1671 when Fox went to the American colonies to help organize the meeting structure there. Individual Quakers had been emigrating to the colonies since the 1650s. Full-scale migration came in 1675 when the first full shipload of Quakers arrived and settled in West Jersey. Within six years approximately 1,400 members of the Society had emigrated there. Penn had served as a trustee of the West Jersey endeavor, and his participation fed the idea of creating a colony of religious freedom. He envisioned a haven from persecution and a place where Quakers could live in harmony, 'love and brotherly kindness,' as an example for all Christians. Indeed, 'there may be room there,' he wrote, 'though not here, for such a holy experiment.'

The government owed Penn's father money. In lieu of payment, on June 1, 1680, Penn formally petitioned King Charles II for a land grant west of the Delaware River between New York and Maryland. The king granted the request in 1681 with the stipulation that the new province be named in honor of Admiral Penn. Thus, the Quaker became the proprietor of Pennsylvania, an area larger than the present commonwealth of Pennsylvania.

As soon as everything was settled, Penn began advertising for the sale of land tracts and sent his cousin William Markham to the colony to act as deputy governor. He instructed Markham to form a preliminary government that granted the right to vote to virtually all free inhabitants. Penn later drafted laws that promised public trials where 'justice shall be neither sold, denied, nor delayed.' Verdicts would be delivered without harassment. All court proceedings would be conducted in English, instead of Latin, and 'in ordinary and plain character, that they may be understood.' Bail would be allowed in all but capital cases. Mindful of his own experience with English jails, Penn also wanted to ensure the humane treatment of prisoners. To that end, he scrapped the traditional practice of charging detainees fees for food, heat and lodging in favor of a system that incorporated rehabilitation. Perhaps most noteworthy, unlike the New England colonies, the new province assured religious tolerance, although only Christians (including Catholics) could vote or hold office. Penn's laws also regulated marriage and outlawed a long list of items that included 'swearing, cursing, lying, profane talking, drunkenness, drinking of healths, obscene words...[and] mayhems....'
Stage performances, May Day dances, cards, dice and anything else that might 'excite the people to rudeness, cruelty, looseness, and irreligion' were banned.

Markham was also charged with finding a location for a town that would be called Philadelphia, meaning the 'city of brotherly love,' after the ancient city that is praised for its faithfulness in the New Testament book of Revelation. Penn dreamed of a 'great town' built in a grid formation, unlike the sprawling, congested cities of Europe, which had grown up without planning and where fires could wreak havoc. He later gave instructions for laying out the town, calling for 'every house [to] be placed, if the person pleases, in the middle of its plot...so there may be ground on each side for gardens or orchards or fields, that it may be a green country town, which will never be burnt and always be wholesome.'

Large numbers of migrants began pouring into the province. The year 1682 saw 23 ships bring some 2,000 colonists to settle in Pennsylvania. Ninety more ships followed during the next three years, and by 1715 approximately 23,000 emigrants had relocated there. Most were either Quakers or Quaker sympathizers. By 1750 the Society of Friends was the third-largest denomination in Britain's American colonies.

Relations with colonists

Penn did not have good relations with the colonists. He seemed incapable of selecting suitable representatives to govern the colony, and a series of incompetent choices created friction with the province's inhabitants and threatened Penn's credibility and authority there. Furthermore, his stand on nonviolence didn't sit well with New York's governor, who expected aid against Indian attacks. Penn's woes included frequent financial straits and a boundary dispute with Lord Baltimore to the south.

Nor were troubles confined to the New World. Political anglers in London looked to consolidate crown authority, and on more than one occasion Penn came dangerously close to losing his colony. This became an even greater concern with the 1685 death of Charles II and the subsequent bloodless revolution that saw the removal of Charles' brother and successor, the Catholic King James II, just three years later. James' very Protestant daughter and Dutch son-in-law, Mary and William, ascended the throne, but they were not fans of Penn. Because of his friendship with James, Penn was arrested late in 1688, and again in 1691, and charged with conspiring to commit treason. He was quickly released on both occasions, but trouble in London and in Pennsylvania continued to plague him until a stroke in 1712 crippled his mind. Death followed six years later.

Despite the difficulties, Penn's is a success story. George Fox's philosophy and William Penn's determined vision proved a powerful combination that had lasting effects. As Penn wrote to the Pennsylvania colonists in 1681: 'You shall be governed by laws of your own making, and live a free and, if you will, a sober and industrious people. I shall not usurp the right of any, or oppress his person.' The Quaker promised, 'Whatever sober and free men can reasonably desire for the security and improvement of their own happiness I shall heartily comply with....' Penn offered the dream of a harmonious, peaceful and self-governing alternative to the raucous Cavaliers to the south and the repressive, puritanical society to the north. All the colonists had to do was live it. And they did -- for eight decades, Quaker society dominated the Delaware Valley.
The reality fell short of the dream, but the culture of burgeoning freedom and pluralism made a lasting impression. Many of Penn's ideals live on in the Declaration of Independence and the Constitution. As Brooks Adams wrote of Quakers in The Emancipation of Massachusetts, 'We owe to their heroic devotion the most priceless of our treasures, our perfect liberty of thought and speech.' There is no doubt that the story of William Penn and the Quaker migration is a fascinating one.

* Originally published in March 2016.
Common Nutrient Deficiencies
Signs You May Need to Supplement

It probably comes as no surprise that Americans eat an unhealthy diet. If you're reading this magazine, you represent a facet of the population that takes health more seriously. Keep up the good work! While this magazine focuses on optimizing the health of those suffering from celiac disease and non-celiac gluten sensitivity, it is interesting that most of the nutrients found to be deficient in celiacs are not too different from those found to be deficient in the average American.

Why are those with an autoimmune disease that temporarily destroys the lining of their small intestine (celiac) not so different nutritionally from the average American? If you're thinking it's all about diet, the health of the gut, and exposure to environmental chemicals, toxins and pesticides, you're right.

Let's begin with nutrients for those of you recently diagnosed with celiac. If this is you, there's still a lot of inflammation in your intestinal lining and damage that requires healing. If you are a celiac suffering from diarrhea, you most likely have some acute deficiencies. You should consider getting tested for and supplementing with the following:

- Vitamin B12
- Vitamin D

You may also need to address your food intake to ensure you're getting adequate calories and protein, considering the malabsorption associated with celiac and/or if you are over the age of 65.

There are additional nutrients to consider if you've suffered with diarrhea and fatty stools. When you malabsorb fats, your stool will float due to the excess fat in the stool itself. It can also be a paler brown to yellow color instead of the normal dark brown, and be greasy looking and/or smelly. If this is the case, supplementing with the fat-soluble vitamins A, D, E and K is a good idea, while ensuring you're working with a clinician who monitors your levels and healing.

If you're the parent of a child recently diagnosed with celiac, look at supplementing B vitamins, iron, and folate (vitamin B9), as those are some of the most common deficiencies reported. But if your child suffers diarrhea and weight loss, getting adequate protein, calories, and fiber should also be stressed. A 2002 study found that delayed puberty in children suffering from celiac could be related to low levels of B vitamins, folate, and iron. Another study tracked adult celiacs following a gluten-free diet for 10 years. Half of them had poor vitamin status despite their gluten-free lifestyle. The authors checked homocysteine levels, a blood marker for folic acid, vitamin B6, and B12 deficiency, to evaluate and confirm their poor nutrient status. The conclusion was that clinicians should more carefully evaluate celiac patients' nutritional status. Elevated homocysteine is not only a gauge of nutrient levels, but also a marker for increased heart disease risk.

The Centers for Disease Control and Prevention's 2017 Nutrition Report concludes that Americans are lacking significantly in key nutrients. The advent of unhealthy foods and trendy diets apparently leaves nine out of 10 Americans suffering from an imbalanced diet. What are those nutrients, and how do they differ from those commonly deficient in celiac patients? Of the 10 nutrients commonly deficient in celiacs, 6 are also commonly deficient in Americans at large. The remaining nutrients – zinc, magnesium, and the B vitamins niacin and riboflavin – are frequently fortified in cereals and flours containing wheat, perhaps explaining why they are less deficient in those consuming gluten.
What this means is that whether or not you have celiac disease or gluten sensitivity, it is very important to evaluate your nutrition status and improve your diet. It's always best to get your nutrients from fresh, whole foods. The best diet? Whole foods, including ample fresh vegetables, fruits, beans, peas, lentils, nuts, seeds, and a small amount of low-mercury fish (once or twice per week). Avoid all sugar, trans fats, artificial sweeteners, pre-packaged/pre-prepared foods, junk food, and fast food. Supplement accordingly, depending on what nutrient testing reveals.

Symptoms associated with common deficiencies include:

Calcium
Calcium contributes to strong bones, but it's also associated with nerve, muscle and heart health. If you tend to drink soda or eat a diet with few dark green leafy vegetables, you're at a higher risk of deficiency. Supplement, if required, as part of a multiple vitamin-mineral. Deficiency symptoms include fatigue, anemia, weakened immune system, and depression.

Vitamin D3
Vitamin D3 regulates your calcium absorption, making calcium usable for your body. It plays a role in immune system strength and cancer protection. Symptoms of a deficiency include a low immune system, fatigue, osteoporosis/osteopenia, hair loss, and joint and muscle pain. Americans are often deficient, making it worthwhile to measure your levels via a blood test. When you do supplement D3, consider adding vitamin K2, another chronically deficient fat-soluble vitamin. D3 regulates calcium absorption, but K2 ensures the absorbed calcium is delivered to the correct areas – your bones – rather than inappropriately deposited in arteries, the gall bladder, or the kidneys.

Iron
Iron is abundant in foods of both animal and plant origin. If you are deficient, or anemic, you can feel quite exhausted and lethargic. You can also feel light-headed and suffer shortness of breath, chest pain, and headaches. Women with heavy periods, or those bleeding internally for whatever reason, can suffer low iron. During pregnancy, it's important to have stable levels due to iron's role in brain development.

Folic acid
Folic acid, or vitamin B9, plays a role in keeping red blood cells healthy. It's critical during pregnancy to prevent spina bifida. It helps prevent anemia and protects you from heart disease. Seven to nine servings of fresh fruits and vegetables should be adequate, but few Americans reach that target. Folic acid is so important that breads and cereal products are fortified with the nutrient. Certain medications or imbibing excess alcohol can create deficiency. Deficiency symptoms are similar to those caused by lack of iron, with the addition of mouth sores and irritability.

Vitamin B12
B12 is commonly deficient among vegans, but it's also on the list of the most frequent deficiencies of all Americans, not to mention those with celiac, making B12 an equal opportunity deficiency! B12 helps to form red blood cells, is critical for nerve function, and provides a foundation for hormones, protein, and DNA. Symptoms associated with deficiency include anemia, fatigue, shortness of breath, memory loss, and tingling feet. The best blood test to measure your B12 levels is methylmalonic acid. It's an effortless supplement to take; most people enjoy the sweet, pink liquid taken sublingually a few times per week. Those with compromised intestinal tracts, including celiacs, may need B12 injections until their gut heals.

Fiber
If you strive to eat seven to nine servings of vegetables and fruits, while enjoying some legumes, your fiber quotient should be addressed nicely.
Fiber is believed to help heal the gut and feed the "good guys" – the 10 trillion organisms making up your microbiome. Signs you may need more fiber include bloating, gas, feeling overly full, constipation or diarrhea, and weight gain.

The remaining nutrients – magnesium, zinc, riboflavin, and niacin – are more commonly deficient in celiacs, but may also be deficient in the general population.

Magnesium
This nutrient can be found in fruit, nuts, seeds, spinach, and avocado, and is abundant in healthy plant foods. Deficiency is likely less about inadequate intake than about poor absorption. Symptoms of a magnesium deficiency include muscle cramps or twitches, fatigue, muscle weakness, irregular heartbeat, and high blood pressure. Issues with diabetes, diarrhea, malabsorption, and celiac are associated with increased risk of deficiency.

Zinc
Zinc is abundant in a variety of foods, from meat and shellfish to legumes, seeds, nuts, and whole grains. Foggy thinking, nausea, diarrhea, trouble sleeping, or a weak immune system, resulting in you getting ill more often, are key signs of a zinc deficiency. Those most at risk are pregnant women, dieters, vegans, the elderly, and those consuming too much alcohol.

Riboflavin and niacin
Beyond tuna, which is too high in mercury to be on a recommended list, mushrooms, peanuts, avocados, and green peas will provide you with these specific B vitamins. They are also found in enriched cereals and grains. Riboflavin deficiency can create redness and swelling inside the mouth and throat, and cracks on the outside of the lips or corners of the mouth. Niacin deficiency can cause canker sores, fatigue, depression, and indigestion. Issues with malabsorption can cause deficiency.

I prefer whole food, but it's a good idea to augment your diet with a well-made multiple vitamin-mineral to help prevent these common deficiencies. However, if you're not absorbing nutrients adequately, you will still have a problem. Find a clinician who can measure your nutrient values and follow up on anything that seems less than ideal. If altering your diet and supplementation isn't helping, the next step is identifying why your gut is not doing its job.

Don't Eat Gluten-Free Junk Food

It would be remiss of me not to urge you to avoid gluten-free junk food. Regardless of whom you talk to about health, whether they encourage a whole-food plant-based diet, keto, Paleo, etc., all doctors agree refined carbohydrates and sugars are unhealthy. I know it's fun to discover a gluten-free brownie mix or other dessert you've been missing, but gluten-free doesn't make a sugary dessert any healthier. It's still full of sugar and refined grains, and may contain other unhealthy ingredients such as stabilizers, preservatives, etc. Much of what we need to focus on to regain optimal functioning is the restoration of our gut health. Sugar and refined grains do not support a healthy gut microbiome. Enjoy home-made desserts utilizing dates or coconut sugar, as an example. There is an abundance of great ideas in this magazine.

References

- Hallert C, et al. Evidence of poor vitamin status in coeliac patients on a gluten-free diet for 10 years. Alimentary Pharmacology & Therapeutics. 2002;16:1333-1339.
- Bona G, et al. Mechanisms of abnormal puberty in coeliac disease. Hormone Research. 2002;57(Suppl 2):63-5.
- Pietzak M. Celiac Disease, Wheat Allergy, and Gluten Sensitivity: When Gluten Free Is Not a Fad. Journal of Parenteral and Enteral Nutrition. 2012;36:68S-75S.
- Marcason W. Is there evidence to support the claim that a gluten-free diet should be used for weight loss? Journal of the American Dietetic Association. 2011;111(11):1796.
Dr. Vikki Petersen, winner of the "Gluten-Free Doctor of the Year" award, is a Doctor of Chiropractic, Certified Clinical Nutritionist, internationally published author, speaker, and co-founder of Root Cause Medical Clinic. She is the author of The Gluten Effect, a best-seller on gluten sensitivity and celiac disease.
BY FAZALE RANA – SEPTEMBER 7, 2016

Good things can come from bad circumstances. This idea is beautifully illustrated by the research efforts of a team of Australian scientists.

Climate change has triggered the excessive melting of ice and snow in western Greenland. This loss of snow and ice concerns many people, but, on the other hand, it has been a boon for the scientific community. It has exposed a new outcropping of rocks, giving geologists first-time access to a rare window into the earth's distant past. As it turns out, these rocks harbor what appear to be the oldest fossils on Earth—stromatolites that date to around 3.7 billion years in age.1

Image: Stromatolites in western Australia

This latest insight has important implications for understanding the origin of life. In fact, on the day researchers from Australia reported this discovery in the scientific literature, it made headlines in news outlets around the world.2

Evidence for Early Life on Earth

As Hugh Ross and I discuss in Origins of Life, geochemists have unearthed a number of chemical markers in the Isua Supracrustal Belt (ISB) of western Greenland that strongly hint at microbial life on Earth between 3.7 and 3.8 billion years ago. But origin-of-life researchers debate the bio-authenticity of these geochemical signatures, because a number of potential abiotic processes can produce similar geochemical profiles. Most scientists doubted that fossils would ever be unearthed in the Isua rock formations, because these outcrops have undergone extensive metamorphosis, experiencing high temperatures and pressures—conditions that would destroy fossils. But these newly exposed formations contain regions that have experienced only limited metamorphosis, making it possible for fossils to survive.

Careful microscopic and chemical characterization of the Isua stromatolites affirms their biogenicity. These analyses also indicate that they formed in shallow-water marine environments. These recently discovered stromatolites (and the previously detected geochemical life signatures in the Isua formations) indicate that a complex and diverse ecology of microorganisms existed on Earth as far back as 3.7 billion years ago.

Prior to the discovery of the 3.7-billion-year-old stromatolites, origin-of-life researchers widely agreed that microbial life existed on Earth around 3.4–3.5 billion years ago, based on the recovery of stromatolites, microbial mats, microfossils, and geochemical signatures in rock formations found in western Australia. Many origin-of-life researchers have expressed amazement that complex microbial ecologies were present on Earth as early as 3.4 billion years ago. For example, paleontologist J. William Schopf marveled: "No one had foreseen that the beginning of life occurred so astonishingly early."3

The researchers who recovered and analyzed the Isua stromatolites expressed similar surprise: "The complexity and setting of the Isua stromatolites points to sophistication in life systems at 3,700 million years ago, similar to that displayed by 3,480–3,400 million-year-old Pilbara stromatolites."4 From a naturalistic perspective, the only way for these researchers to make sense of this discovery is to conclude that life must have originated prior to 4 billion years ago.
They state: "This implies that by ~3,700 million years ago life already had a considerable prehistory, and supports model organism chronology that life arose during the Hadean (>4,000 million years ago)."5

Implications for Evolutionary Models

However, the researchers' explanation for the appearance of a complex, diverse microbial ecosystem at 3.7 billion years ago is problematic when the natural history of the early Earth is considered. Traditionally, planetary scientists have viewed the early Earth as hot and molten, from the time of its formation (4.5 billion years ago) until ~3.8 billion years ago. This era of Earth's history is called the Hadean. Accordingly, oceans were not present on the early Earth until around 3.8 billion years ago. They believe a number of factors contributed to the hellish environment of our early planet, chief of which were the large impactors striking the earth's surface. Some of these impact events would have been so energetic that they would have volatilized any liquid water on the planet's surface and rendered the surface and subsurface molten.

In light of this scenario, it would be impossible for life to originate much earlier than 3.8 billion years ago. To put it another way, if the traditional understanding of early Earth history is correct, then it looks as if complex microbial ecologies appeared on Earth suddenly—within a geological instant. It is impossible to fathom how the explosive appearance of early life could happen via evolutionary mechanisms.

More recently, a number of planetary scientists have proposed that the early Earth remained molten only for the first 200–300 million years of its history, after which time oceans became permanent (or maybe semi-permanent) features on the planet's surface. The basis for this view has been the discovery of zircon crystals that date to between 4.2 and 4.4 billion years ago. Geochemical signatures within these crystals are consistent with their formation in an aqueous setting, implying that oceans were present on Earth prior to 3.8 billion years ago.

But this revised scenario doesn't help the evolutionary approach to life's origin. Around 3.8 billion years ago, a gravitational perturbation in the early solar system sent asteroids toward Earth. Some estimates have the earth experiencing over 17,000 impact events during this time. This event, called the late heavy bombardment (LHB), was originally regarded as a sterilization event. If so, then any life present on Earth prior to the LHB would have been obliterated. That being the case, again, it appears as if complex microbial ecologies appeared on Earth suddenly, within a geological instant.

Recently, some planetary scientists have challenged the notion that the LHB was a sterilization event. They argue that life on the planet's surface would have been destroyed, but life in some environments, such as hydrothermal vents, could have survived. In other words, there would have been refugia on Earth that served as "safe houses" for life, ushering it through the LHB. Yet the latest discovery by the Australian scientists doesn't fit this scenario. The Isua stromatolites formed at the earth's surface in a shallow-water environment. In fact, the research team generated data that effectively ruled out stromatolite formation near hydrothermal vents. But if the refugium model has validity, the Isua fossils should have formed in a high-temperature milieu.
Finally, pushing life's origin back to more than 4 billion years ago doesn't solve the problem of a sudden origin of life—it merely displaces it to another window of time in Earth's history. Origin-of-life researchers have geochemical evidence suggesting that life was present on Earth between 4.2 and 4.4 billion years ago. Given that the earth was molten for (minimally) the first 200–300 million years of its existence, that doesn't leave much time for life to originate. No matter the scenario, a naturalistic, evolutionary approach to the origin of life can't seem to accommodate the sudden appearance of life on Earth. On the other hand, if a Creator brought life into being, this is precisely the mode and tempo expected for life's appearance on Earth.

Implications for Creation Models

While the discovery of 3.7-billion-year-old stromatolites confounds evolutionary explanations for life's origins, it affirms RTB's origin-of-life model. This model is derived from the biblical creation accounts and makes two key and germane predictions: (1) life should appear on Earth soon after the planet's formation; and (2) first life should possess intrinsic complexity. Both of these predictions are satisfied by this latest advance.

Resources

- Origins of Life: Biblical and Evolutionary Models Face Off by Fazale Rana and Hugh Ross (book)
- Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for the Creator by Fazale Rana (book)
- "Life May Have Begun 300 Million Years Earlier Than We Thought" by Fazale Rana (podcast)
- "Early Life Was More Complex Than We Thought" by Fazale Rana (article)
- "When Did Life First Appear on Earth?" by Fazale Rana (article)
- "Insight into the Late Heavy Bombardment and RTB's Creation Model" by Fazale Rana (article)
- "Origin-of-Life Predictions Face Off: Evolution vs. Biblical Creation" by Fazale Rana (article)

Endnotes

- Allen P. Nutman et al., "Rapid Emergence of Life Shown by Discovery of 3,700-Million-Year-Old Microbial Structures," Nature, published electronically August 31, 2016, doi:10.1038/nature19355.
- For a detailed discussion of this discovery and its implications for the creation/evolution controversy, listen to "Fossils Indicate Early Life Was Metabolically Complex and Diverse," Apologia (Ex Libris), podcast audio, August 31, 2016, https://www.reasons.org/podcasts/apologia-premium/fossils-indicate-early-life-was-metabolically-complex-and-diverse.
- J. William Schopf, Cradle of Life: The Discovery of Earth's Earliest Fossils (Princeton, NJ: Princeton University Press, 1999), 3.
- Allen P. Nutman, "Rapid Emergence of Life."
*Clement Chinedu Azodo, **Oseremen Gabriel Ogbebor

Objective: To compare evoked emotions, feelings and reactions to body and mouth odour among undergraduates.

Methods: This questionnaire-based cross-sectional study was conducted among undergraduates of the University of Benin, Benin City, Nigeria.

Results: Nearly one-quarter of the participants reported taking body odour (23.3%) and mouth odour (24.7%) into account on meeting people on an often/always basis. About half of the participants stated being very disgusted on perception of body odour (52.7%) and mouth odour (52.0%). About one-quarter (24.0%) of the participants expressed anger when in contact with someone with body or mouth odour. About two-thirds (64.5%) and 76.0% of the participants reported being slightly/very unhappy having a classmate/roommate with body or mouth odour respectively. About 12.0% and 12.7% agreed that students with body and mouth odour respectively should be expelled from the university. Assessing reactions to someone with body or mouth odour in a commercial vehicle, 13.3% versus 10.7% changed position and 7.3% versus 8.7% dropped off the vehicle respectively. The majority of the participants felt that body odour or mouth odour negatively influences good employment potential, marriageability and marital relationships, but there was no difference between the two. A low proportion of the participants reported avoidance behaviour as their preferred way to help someone with body (10.0%) or mouth (9.3%) odour.

Conclusion: Data from this study revealed no differences in the evoked emotions, feelings, perceptions and reactions toward body and mouth odour sufferers among the participants.

Keywords: Emotions, feelings, body odour, mouth odour, reactions.

INTRODUCTION

Olfaction helps humans in locating food for survival, appreciating food flavour for palatability, and selecting mates for procreation.1 It also assists in individual recognition, kin detection, impression formation and the promotion of societal life.2-4 Offensive odours constitute a huge impediment to social interaction by hampering attractiveness, pleasantry and seduction wishes. These offensive odours may be a general body odour or a mouth odour. Body odour unpleasantness was generally associated with socially undesirable traits, while oral malodour evoked a sickening feeling.5 Freedom from disabling odour, whether of the mouth or of the body, is one of the indicators of social well-being.6 Artificial fragrances have been used for thousands of years to manipulate odour intensity and pleasantness to enhance attractiveness in a complementary fashion.7-9 Perception of pleasant odour plays a significant role in human interaction in terms of acquaintance, friendship, dating relationships and marriage. Studies have evaluated reactions to body and mouth odours independently, but none has compared evoked emotions, feelings and reactions to body and mouth odour.5,10-14 Hence, the objective of this study was to compare evoked emotions, feelings and reactions to body and mouth odour among undergraduates of the University of Benin, Benin City, Nigeria.

MATERIALS AND METHODS

This cross-sectional study was conducted among undergraduates of the University of Benin, Benin City, Nigeria. The data collection tool was a self-administered, validated questionnaire. The questionnaire elicited information on demographic characteristics, contact with body and mouth odour sufferers, evoked emotions, feelings and reactions to body and mouth odour, perceived social effects, and perceived ways to help body and mouth odour sufferers. The questionnaires were hand delivered. Informed consent was obtained from the participants. Participation was voluntary and no incentive was offered. The importance the participants attached to their own body, the body of others, their own mouth and the mouth of others was assessed using a single item on a scale of 0-10, where 0 meant not important and 10 very important. This importance scale was a modification of one used in a previous halitosis study.15 A higher score indicated a higher attachment of importance. The scores were categorized into low and high importance based on scores of 0-5 and 6-10 respectively. The obtained data were subjected to McNemar's test using IBM SPSS version 21.0. Statistical significance was set at P<0.05.
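The paper names McNemar's test as the analysis for these paired body-versus-mouth responses. For readers who want to reproduce this style of analysis without SPSS, the sketch below shows an equivalent test in Python using statsmodels. The counts in the table are hypothetical placeholders, not the study's data, and the variable names are ours, not the authors'.

```python
# Hypothetical sketch of a McNemar's test on paired yes/no responses,
# mirroring the body-odour vs. mouth-odour comparisons described above.
# The counts below are invented for illustration; they are NOT the study's data.
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired answers from the same participants:
# rows = reaction to body odour (yes/no), columns = reaction to mouth odour (yes/no)
table = [
    [30, 5],    # yes/yes, yes/no
    [8, 107],   # no/yes,  no/no
]

# McNemar's test considers only the discordant pairs (5 and 8 here);
# exact=True uses the exact binomial distribution, appropriate for small counts.
result = mcnemar(table, exact=True)
print(f"statistic = {result.statistic}, p-value = {result.pvalue:.3f}")

# The paper's significance threshold
if result.pvalue < 0.05:
    print("Significant difference between reactions to body and mouth odour.")
else:
    print("No significant difference, consistent with the paper's conclusion.")
```

McNemar's test is the appropriate choice here because each participant answers both questions, so the two proportions are paired rather than independent; a chi-square test of independence would ignore that pairing.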
RESULTS

The majority of the participants were 21-25 years old (49.3%), male (61.3%) and studying science-related courses (52.0%). A majority of the participants attached high importance to their own body (86.7%), the body of others (55.7%), their own mouth (87.7%) and the mouth of others (63.3%) (Table 1).

The contact experience with individuals with body odour was 70.7%, while that for mouth odour was 73.3%. Three-tenths (30.0%) reported having relatives with body odour, while 50.0% reported mouth odour in relatives. Nearly one-quarter of the participants reported taking body odour (23.3%) and mouth odour (24.7%) into account on meeting people on an often/always basis (Table 2).

About half of the participants stated being very disgusted on perception of body odour (52.7%) and mouth odour (52.0%). Nearly one-quarter (24.0%) expressed anger when in contact with someone with body odour or mouth odour. About two-thirds (64.5%) of the participants reported being slightly/very unhappy having a classmate/roommate with body odour, while 76.0% reported being slightly/very unhappy having a classmate/roommate with mouth odour. About 12.0% and 12.7% agreed that students with body and mouth odour respectively should be expelled from the university (Table 3).

Assessing reactions to someone with body odour or mouth odour in a commercial vehicle, 8.3% informed the body odour sufferer while 15.3% informed the mouth odour sufferer. About 13.3% versus 10.7% reported changing position, and 7.3% versus 8.7% reported dropping off the vehicle, as reactions to body odour and mouth odour sufferers respectively (Table 4).

The majority of the participants felt that body odour or mouth odour negatively influences good employment potential, marriageability and marital relationships, but there was no difference between the two (Table 5). A low proportion of participants reported avoiding someone with body odour (10.0%) or mouth odour (9.3%) as their preferred way to help the sufferer.

DISCUSSION

In this study, a reasonable proportion of participants claimed to have had contact with individuals with body and mouth odour, with a significant proportion of them reporting that the exposure to body and mouth odour was from relatives. This may imply that both body and mouth odour are highly prevalent in the study setting, or that participants in this study are highly sensitive people, as they dominantly attached high-level importance to their own body and mouth and those of other people.

The agreement with expulsion of students with body and mouth odour among approximately one-eighth of the participants may be explained by the different degrees of unhappiness expressed about having a roommate or classmate with body or mouth odour. The high prevalence of the negative emotions of anger and disgust (an emotional response of revulsion to something considered offensive, distasteful, or unpleasant) among the participants may be an additional explanation.
The influence of pleasant and unpleasant odour perception on cognition and emotion has been reported in terms of mood improvement, anger reduction, working memory impairment and facilitated recognition of disgusted facial expressions.16-20

Contact with body odour or mouth odour sufferers in a commercial vehicle triggered varied reactions from the participants, ranging from informing the person, tolerating the person and changing position, to outright dropping off the vehicle. The appreciable number informing the sufferer may be explained by the finding that many Nigerians in a previous study wished to be informed if they had mouth odour, because they believed that letting them know would be very helpful.21 The fact that the majority of the participants in this study reported readiness to inform friends and non-friends with body odour or mouth odour about the condition as a way to help them is a welcome development in healthcare, as this will facilitate their seeking care for the condition. The tendency of an odour sufferer to be unaware of the condition is high because of the adaptive ability of the nose's olfactory function; alerting the affected person will therefore possibly prompt improved self-care and professional consultation. Avoidance behaviour was also reported as a perceived way of helping a body odour or mouth odour sufferer. The populace should be educated that the positive effects of informing an odour sufferer about the condition outweigh the negative consequences.

The majority of participants opined that individuals with body or mouth odour will have difficulty getting good jobs, will have difficulty getting married, and will face marital disharmony if married. This may be related to the findings that faces are rated as significantly less attractive when presented with an unpleasant ambient odor in comparison to a no-odor condition, and that fragrance affects impressions of people in professional contexts.22 The occupational implication of mouth odour was highlighted in a Libya-based study23 which reported difficulty in interacting with mouth odour sufferers at the workplace. The difficulty getting married among body or mouth odour sufferers opined by the majority of the participants in this study may be explained by the dating difficulties reported by 29.9% of individuals with self-reported halitosis due to relational difficulties.24 The sagging or spacing in the relationship between partners reported in one study as an impact of mouth odour on the marital relationship concurs with this study's participants' belief in the adverse effects of body odour or mouth odour on marital relationships.14

CONCLUSION

Data from this study revealed no differences in the evoked emotions, feelings, perceptions and reactions to body and mouth odour sufferers among the participants. Further studies comparing the effects of body odour and mouth odour intensity on the evoked emotions, feelings and reactions are, however, recommended.

REFERENCES

- Keller M, Pillon D, Bakker J. Olfactory systems in mate recognition and sexual behavior. Vitam Horm. 2010; 83:331-50.
- Havlicek J, Roberts SC, Flegr J. Women's preference for dominant male odour: effects of menstrual cycle and relationship status. Biol Lett 2005; 1:256-9.
- Yamazaki K, Beauchamp GK. Genetic basis for MHC-dependent mate choice. Adv Genet. 2007; 59:129-145.
- Lundstrom JN, Boyle JA, Zatorre RJ, Jones-Gotman M. Functional neuronal processing of body odors differs from that of similar common odors. Cereb Cortex. 2008; 18:1466-1474.
- Yaegaki K, Takano Y, Suetaka T, Arai K, Masuda T, Ukisu S. Investigation of people's attitudes and reactions towards oral malodour.
A preliminary survey conducted on dental hygienics students. Shigaku. 1989; 77(1):171-8. - Nadanovsky P, Carvalho LB, Ponce de Leon A. Oral malodour and its association with age and sex in a general population in Brazil. Oral Dis 2007; 13(1):105-9. - Dematte ML, Österbauer R, Spence C. Olfactory cues modulate facial attractiveness. Chem Senses 2007; 32:603-610. - Craig Roberts S, Little AC, Lyndon A, Roberts J, Havlicek J, Wright RL. Manipulation of body odour alters men’s self-confidence and judgements of their visual attractiveness by women. Int J Cosmet Sci 2009; 31(1):47-54. - Milinski M, Wedekind C. Evidence for MHC-correlated perfume preferences in humans. Behav Ecol 2001; 12:140-149. - Schiffman SS, Suggs MS, Sattely-Miller EA. Effect of pleasant odors on mood of males at midlife: comparison of African-American and European-American men. Brain Res Bull 1995; 36:31-37. - Rétiveau AN, Chambers IV E, Milliken GA. Common and specific effects of fine fragrances on the mood of women. J Sens Stud 2004; 19:373-39. - Seubert J, Rea AF, Loughead J, Habel U. Mood induction with olfactory stimuli reveals differential affective responses in males and females. Chem Senses 2009; 34:77-84. - de Jongh A, van Wijk AJ, Horstman M, de Baat C. Attitudes towards individuals with halitosis: an online cross sectional survey of the Dutch general population. Br Dent J 2014; 216(4):E8. doi: 10.1038/sj.bdj.2014.101. - Sedky NA. Perceived impact of halitosis on individual’s social life and marital relationship in Qassim Province, KSA. J Am Sci 2015; 11(3):187-96. - Azodo CC, Ogbebor OG. Dental anxiety, halitosis and expected social outcomes. Nig J Dent Res 2017; 2(2):72-80. - Schiffman SS, Suggs MS, Sattely-Miller EA. Effect of pleasant odors on mood of males at midlife: comparison of African American and European-American men. Brain Res Bull 1995; 36:31-37. - Rétiveau, AN, Chambers IV E, Milliken GA. Common and specific effects of fine fragrances on the mood of women. J Sens Stud 2004; 19:373-94. - Schneider F, Gur RC, Koch K, Backes V, Amunts K, Shah NJ, Bilker W, Gur RE, Habel U. Impairment in the specificity of emotion processing in schizophrenia. Am J Psychiatry 2006; 163:442-7. - Habel U, Koch K, Pauly K, Kellermann T, Reske M, Backes V, Seiferth NY, Stöcker T, Kircher T, Amunts K, Jon Shah N, Schneider F. The influence of olfactory-induced negative emotion on verbal working memory: individual differences in neurobehavioral findings. Brain Res 2007; 1152:158-70. - Seubert J, Kellermann T, Loughead J, Boers F, Brensinger C, Schneider F, Habel U. Processing of disgusted faces is facilitated by odor primes: a functional MRI study. Neuroimage 2010; 53(2):746-56. - Adeyemi BF, Kolude BM, Arigbede AO. Attitude and perception of mouth odour in 213 respondents. Niger Postgrad Med J 2012; 19(2):97-101. - Sczesny S, Stahlberg D. The influence of gender-stereotyped perfumes on leadership attribution. Eur J Soc Psychol 2002; 32:815-28. - Eldarrat A, Alkhabuli J, Malik A. The prevalence of self-reported halitosis and oral hygiene practices among Libyan students and office workers. Libyan J Med 2008; 3(4):170-176. - Troger B, Almeida Jr HL, Duquia RP. Emotional impact of halitosis. Trends Psychiatry Psychother 2014; 36(4): 219-221.
On this page you'll find a summary of the various yoga styles as they are taught according to their original traditions. This will help you gain a deeper perspective and understanding.

First, it helps to understand the meaning of the term yoga. Yoga is an ancient Sanskrit word. It derives from the root 'yuj', which means 'to yoke or connect.' Yoga therefore means 'union.' But with what? Union with your true self, the whole world, or the divine, or all of the above. When this happens, you experience inner peace, harmony, and divinity. It's a blissful experience. Traditionally, this is the goal of yoga: the experience of divinity within a world of apparent chaos.

Then why practice all those physical postures? Originally, it was simply for the purpose of toning your body's muscles so that you could sit still and painlessly during long hours of meditation and prayer. Sitting still was your first step towards quieting your mind and observing your thoughts. This was the first baby step in your journey towards union and inner, divine bliss.

Nevertheless, yoga is great for your body and health! There is no reason not to practice it exclusively for those reasons, if you so wish. Spirituality is not the only thing that makes our world a better place. So do healthy people. Healthy people can help others.

Ashtanga Yoga

'Ashtanga' is a Sanskrit word that means 'eight limbs.' In other words, it is a yoga tradition that follows eight steps. It was expounded by the famous yogi Sage Patanjali more than 2000 years ago. Patanjali's eight-fold school of yoga became the gold standard that most later yoga traditions echo in many ways to the present day. But it is often wrongly assumed that Patanjali was the Father of Yoga. More accurately, Patanjali was the classical compiler of previously scattered yoga practices. In this sense, his work is seminal. Nevertheless, Patanjali was preceded by the Shadanga ('six limbs'), Asparsha ('untouched'), and Vedic ('of knowledge') traditions, and possibly other yoga traditions which are now lost. These three traditions are extremely ancient and rarely practiced today. I will explain them to you last.

In traditional scholarship, the very first expounder of yoga is considered to be a mystical being called Brahma. This being represents the entire transcendent foundation of the physical universe!

Before the arrival of Patanjali's Ashtanga yoga, there seem not to have been any comprehensive written documents describing the various criteria required for a successful experience of inner harmony. Patanjali brought the various techniques together in an authoritative manual. It is called the Yoga Sutra. It comprises nearly 200 short verses. They outline the eight limbs or steps.

Right at the beginning of the Yoga Sutra, in verses 2 and 3, Patanjali clarifies two important things: the definition of yoga and its aim.

1. Definition: "Yoga is the complete restriction of the whirls (activity) of the mind."
2. Aim: "When that restriction is achieved, one's divine nature appears."

Traditionally, the ultimate aim of all yoga is to dissolve the mundane ego – the source of all misery. Our mundane ego comprises a false identity that has been concocted by the story of our physical life. When this false ego is dissipated, your true, divine, and infinitely blissful self shines through. Your authentic self is revealed.

To experience this blissful nature you need to pay a relatively small price, says Patanjali: climb up the eight steps of a physio-psychological mountain:

1. Moral Discipline
2. Voluntary Observances (like fasting)
3. Correct Posture
4. Controlled Breathing
5. Sense Withdrawal (of our 5 senses)
6. Concentration
7. Meditation
8. Ecstasy

The first step, Moral Discipline, is considered indispensable. It includes five subcategories: non-harmfulness, truthfulness, non-stealing, chastity, and non-greed. You'll notice that the very first of these moral disciplines is non-harmfulness. In Sanskrit the word is 'ahimsa.' It is often mistranslated as non-killing. But it means much more than that. It actually means 'non-hurting' of any creature whatsoever through thought, speech or action!

The second criterion is Voluntary Observances. There are five subcategories: bodily purity, contentment, austerity, study of scripture, and devotion to the Lord. This devotion to a higher Being is of great importance in Patanjali's Ashtanga yoga. Without the Lord's grace, completing the eight steps is considered impossible. Ultimately, the experience of the divine self and of God within the spirit is a matter of God's grace alone, says Patanjali. The belief that one is divine, immortal, and pure bliss, but not God, is an important component of Ashtanga yoga. God is within your soul and within everything.

'Correct Posture,' the third step, concerns the various physical exercises. They are not very vigorous or strength-building. Their focus is to make the body supple so that one can sit comfortably in meditation. Meditation leads to natural and internal Ecstasy. It's about inner balance, peace, and wisdom. Reversal of many health problems is a happy by-product. This is the core lifestyle of an Ashtanga Yogi or Yogini (female).

Hatha Yoga

'Hatha' is a Sanskrit word that means 'forcefulness' or 'strength.' It also represents the combining of dualities to create that strength: 'ha' means sun and 'tha' means moon. (The combination of these dualities is akin to the combination of yin and yang.)

Hatha yoga is a relatively recent yoga tradition. It appeared around the 15th century. Its major proponent was a yogi called Swatmarama. The major difference between his school and that of the much more ancient eight-fold Ashtanga yoga is that it does not emphasize the first two steps: moral discipline and voluntary observances. As such, Hatha yoga does not explicitly advise its practitioners to lead a moral life, live austerely, or worship a higher Being or God. Its aim is simply to experience the divine self. Nevertheless, some later texts of Hatha yoga do advise that a successful practice may be impeded without a moral life.

The separation of yoga from moral teachings and a devotional outlook has made Hatha yoga popular amongst many people. This can be viewed positively or negatively depending on a person's personal perspective. No harm, no foul seems to be the unspoken rule of Hatha yoga.

Another major difference between Hatha and Ashtanga yoga is that the former aims to build strength and stamina through its various postures. Moreover, it recommends various intensive procedures for inner and outer body cleansing. Originally, the practice of Hatha yoga was supposed to lead to the divinization of the body: the physical body itself would be transformed from a defiled, mortal bag of skin into a divine and immortal light. This belief, however, is no longer taught in Hatha yoga classes, for obvious reasons.

Comparatively, Ashtanga yoga is more moderate – some would say more balanced. It focuses on creating a more flexible and supple body for the purpose of sitting in long periods of meditation.
Additionally, its practitioners would transcend their defiled and mortal body to discover their inner, divine, and immortal selves. This more ‘reasonable’ belief espoused by Patanjali has spread throughout all yoga traditions today, including Hatha.

Hatha yoga comprises the following steps:

1. Posture
2. Breath Control
3. Sense Withdrawal
4. Concentration
5. Meditation
6. Ecstasy

Hot or Bikram Yoga

This is a modern representation of Hatha yoga (‘hot’ is apparently a play on the term hatha). It was conceived by Bikram Choudhury. However, depending on the local teacher, there isn’t always an emphasis on spirituality or moral teachings as there is in Ashtanga; nevertheless, such values can be taken as a given in almost any yoga class. Bikram Yoga adheres to a specific sequence of 26 postures and 2 breathing exercises. It is also well known for the requirement that it be practiced in a very warm environment, recreating the sometimes very warm environment in which yoga was developed in India. The heat induces sweating, which helps purge the body of toxins. Additionally, the extra heat softens your muscles, making them more flexible and conducive to practicing the various yoga postures.

Vinyasa Yoga

This is also a modern representation of Hatha yoga. ‘Vinyasa’ is a Sanskrit word that literally means ‘connection.’ Simply then, Vinyasa yoga means ‘connected yoga.’ Instead of each posture being practiced in isolation, each posture is made to lead, or “connect,” to the next, so that every pose flows out of the one before it and into the one after. The postures follow a rhythm – in particular, the rhythm of one’s breathing. Typically, whilst bending down one breathes out, and whilst straightening up one breathes in. This coordination of breathing and movement constitutes a type of dance and also doubles as a slow workout. Calming, meditative music is often played in the background. Vinyasa yoga is becoming increasingly popular for its fun and relaxing nature.

Iyengar Yoga

This adheres to a more traditional form of Hatha yoga, but with new inclusions: the use of physical apparatus to help practitioners balance and accurately align their bodies. The instruments or props used are usually very simple: belts, blocks, and blankets. The advantage of Iyengar yoga is that many postures that would be well beyond the ability of most people come within reach, at least partially. Elderly or injured people are able to benefit from helpful postures that they would otherwise be unable to perform.

Power Yoga

This is similar in theme to Vinyasa yoga but is modeled on a few Ashtanga yoga postures. It is more akin to a vigorous workout and is often referred to as ‘gym yoga.’

ANCIENT YOGA TRADITIONS

Vedic Yoga

This is possibly the most ancient form of yoga known. It began in India by at least 1500 BC, and perhaps as early as 3000 BC. It was a very mystical and secretive tradition. The Sanskrit word ‘Veda’ means knowledge. All that we can find about Vedic yoga is a few passages in the ancient Rig Veda literature that speak cryptically about yoking the mind to the Divine. To this extent, Vedic Yoga continues to be practiced today through Patanjali’s Ashtanga yoga.

Shadanga Yoga

Shadanga is a Sanskrit word that means ‘six limbs.’ In other words, it is a yoga tradition that follows six steps. It is a very early yoga teaching that is spiritual in character, like the Vedic and Ashtanga traditions. Shadanga yoga is found in the Maitrayaniya Upanishad (6.18-19): “The rule for effecting this union with the self is this:

1. Breath Control
2. Sense Withdrawal
3. Meditation
4. Concentration
5. Contemplation
6. Absorption

Such is said to be the sixfold yoga. When a seer sees the brilliant Maker, Lord, Person, the source of the creator God Brahma, then, being a knower, shaking off good and evil, he reduces everything to unity in the supreme imperishable.”

Yoga by this name is rarely practiced today. Like Ashtanga, however, it is devotional. This six-step practice is not philosophically identical to Hatha yoga: Shadanga explicitly mentions the existence of God the creator.

Asparsha Yoga (Wisdom or ‘Jnana’ Yoga)

‘Asparsha’ is a Sanskrit word that literally means ‘untouched.’ It implies something that is intangible, nonphysical, divine, and transcendent. This tradition is based on the Mandukya Upanishad and an ancient commentary called the Mandukya Karika. Asparsha yoga relies on the recitation and philosophical meaning of the mantra Om. The school is synonymous with that of Jnana yoga – in other words, the belief that “transcendent wisdom is itself yoga.” This philosophical method of practicing yoga is very much alive today. Its wisdom refers to the experiential knowledge of one’s true divine nature; it doesn’t refer to those who parrot knowledge without personal experience. Additionally, it commonly doesn’t recognize the existence of a creator God who is distinct from one’s self. As a predominantly philosophical and mystical tradition of yoga, it doesn’t actively incorporate the usual exercises and postures many people are interested in today.

CONCLUSION – Which yoga tradition is for you?

Ashtanga, Hatha, Hot, Vinyasa, or Iyengar? The answer depends on two things:

1. What do you want to gain from your practice? Health, tranquility, divine experience, or all of the above?
2. How authentic and skilled is your teacher?

a) If you simply want better health, any of these traditions – Ashtanga, Hatha, Hot, Vinyasa, or Iyengar – will help. Remember, always consult your doctor first.

b) If you want to experience profound peace, then you will need to practice breath control and meditation. Again, all of these traditions (except Vedic and Asparsha) offer these.

c) If you want to experience your divine, blissful self, and possibly a higher Being or God within – the divine source of the universe, whether you call it God, Jesus, Buddha or anything else – Ashtanga yoga will probably be the best way to go. Simple sitting postures, breath control, meditation, and devotion are emphasized. Breath control calms the mind. Meditation focuses the mind. After that, devotion takes you beyond the mind to the transcendental.

Whatever your goals, if your teacher is authentic, he or she will explain the various criteria and beliefs of the tradition they teach. The good news is, there are many great teachers out there. Good luck!
Most sewing in the industrial world is done by machines known as sewing machines. Equipped with a complex set of gears and arms, each machine pierces thread through layers of cloth and interlocks the thread. The machine can be electrically or mechanically operated, but electric machines are far more common. The sewing machine produces results similar to hand sewing but at a much faster pace. It is used primarily to produce clothing and household furnishings such as curtains, bedclothes, upholstery, and table linens. It may also be used to stitch other flexible materials, such as canvas and leather.

The invention and manufacture of the sewing machine has played an important role in the industrial revolution. On the one hand, it has saved countless hours of work and has greatly enhanced the quality of human life. On the other hand, sewing machines are also part of the history of the exploitation of human labor, as people were forced to work at them for long hours at low wages.

Sewing is an ancient art involving the stitching of cloth, leather, furs, or other materials, using needle and thread. Its use is nearly universal among human populations and dates back to Paleolithic times (30,000 B.C.E.). Sewing predates the weaving of cloth.

Before the invention of a usable machine for sewing or dress design, everything was sewn by hand. Most early attempts at a machine tried to replicate this hand sewing method and were generally failures. Some looked instead to embroidery, where the needle is used to produce decorative rather than joining stitches. The embroidery needle was altered to create a fine steel hook—called an agulha in Portugal and an aguja in Spain. In France it was called a crochet, and it could be used to create a form of chain stitch. This was possible because when the needle was pushed partly through the fabric and withdrawn, it left a loop of thread; the following stitch would pass through this first loop whilst creating a loop of its own for the next stitch. The result resembled a chain—hence the name.

The first known attempt at a mechanical device for sewing was by the German-born Charles Weisenthal, who was working in England. He was awarded British Patent No. 701 in 1755 for a double-pointed needle with an eye at one end. This needle was designed to be passed through the cloth by a pair of mechanical fingers and grasped on the other side by a second pair. This attempt to recreate the hand sewing method suffered from the problem that the needle passed right through the fabric, meaning the full length of the thread had to follow. The mechanical limitations meant that the thread had to be kept short, needing frequent stops to renew the supply.

In 1790 British Patent No. 1764 was awarded to Thomas Saint, a cabinetmaker of London. Because of several other patents dealing with leather and products to treat leather, the patent was filed under "Glues & Varnishes" and was not discovered until 1873, by Newton Wilson. Wilson built a replica to the patent's specifications, and it had to be heavily modified before the machine would stitch—suggesting that Saint never actually made a machine of his own. Saint's design had the overhead arm for the needle and a form of tensioning system, which was to become a common feature of later machines.

Various patents were awarded for chain stitch machines of differing types from 1795 to 1830, none of which achieved any degree of success—many did not work correctly at all. A French tailor, Barthelemy Thimonnier, made the next major breakthrough.
He did not try to replicate the human hand stitch, looking instead for a stitch that could be made quickly and easily by machine. His machine worked by using a horizontal arm mounted on a vertical reciprocating bar; the needle-bar projected from the end of the horizontal arm. The cloth was supported on a hollow, horizontal fixed arm with a hole on the topside, through which the needle projected at the lowest part of its stroke. Inside the arm was a hook, which partly rotated at each stroke in order to wrap the thread (fed from the bobbin onto the hook) around the needle. The needle then carried the thread back through the cloth with the upward motion of its stroke. This formed the chain stitch, which held the cloth together. The machine was powered by means of a foot pedal. The easiest way to describe this is to picture the machine working upside-down compared with how sewing machines are generally thought of today—the stitch was formed on the top of the cloth, not the bottom as with most other chain stitch machines made since. Thimonnier was awarded a French patent in 1830, and 80 of these machines were installed in a factory in Paris to stitch soldiers' clothing. Other tailors, concerned for their livelihood, invaded the factory and smashed the machines.

Chain stitch has one major drawback—it is very weak and the stitch can easily be pulled apart. A stitch more suited to machine production was needed, and it was found in the lock stitch. A lock stitch is created by two separate threads interlocking through the two layers of fabric, resulting in a stitch that looks the same from both sides of the fabric.

Although the credit for the lock stitch machine is generally given to Elias Howe, Walter Hunt developed it first, over ten years earlier, in 1834. His machine used an eye-pointed needle (with the eye and the point on the same end) carrying the upper thread, and a shuttle carrying the lower thread. The curved needle moved through the fabric horizontally, leaving a loop as it withdrew; the shuttle passed through the loop, interlocking the thread. The feed mechanism let the machine down, requiring it to be stopped frequently and set up again. Hunt grew bored with his machine and sold it without bothering to patent it.

Elias Howe patented his machine in 1846, using a method similar to Hunt's except that the fabric was held vertically. The major improvement he made was to put a groove in the needle running away from the point, starting from the eye. After a lengthy stint in England trying to attract interest in his machine, he returned to America to find various people infringing his patent. He eventually won his case in 1854 and was awarded the right to claim royalties from the manufacturers using ideas covered by his patent.

Isaac Merritt Singer has become synonymous with the sewing machine. Trained as an engineer, he saw a rotary sewing machine being repaired in a Boston shop. He thought it clumsy and promptly set out to design a better one. His machine used a flying shuttle instead of a rotary one; the needle was mounted vertically and included a presser foot to hold the cloth in place. It had a fixed arm to hold the needle and included a basic tensioning system. This machine combined elements of Thimonnier's, Hunt's, and Howe's machines. He was granted an American patent in 1851, and it was suggested he patent the foot pedal (or treadle) used to power some of his machines; however, it had been in use for too long for a patent to be issued.
When Howe learned of Singer's machine he took him to court. Howe won, and Singer was forced to pay a lump sum for all machines already produced. Singer then took out a license under Howe's patent and paid him $15 per machine. He went on to enter a joint partnership with a lawyer named Edward Clark, and together they formed the first hire-purchase (time payment) scheme to allow people to afford their machines.

Meanwhile, Allen Wilson had developed a reciprocating shuttle, which was an improvement over Singer's and Howe's. However, John Bradshaw had patented a similar device and was threatening to sue, so Wilson decided to change tack and try a new method. He went into partnership with Nathaniel Wheeler to produce a machine with a rotary hook instead of a shuttle. This was far quieter and smoother than the other methods, and the Wheeler and Wilson Company produced more machines in the 1850s and 1860s than any other manufacturer. Wilson also invented the four-motion feed mechanism, which is still seen on every machine today: a forward, down, back, and up motion that draws the cloth through evenly and smoothly.

Through the 1850s more and more companies were formed, many of them trying to sue each other. Charles Miller patented the first machine to stitch buttonholes (US10609). In 1856 the Sewing Machine Combination was formed, consisting of Singer, Howe, Wheeler and Wilson, and Grover and Baker. These four companies pooled their patents, meaning that all other manufacturers had to obtain a license and pay $15 per machine. This lasted until 1877, when the last patent expired.

In 1822 J. Makens Merrow purchased a powder mill in Mansfield, Connecticut, for the manufacture of gunpowder. The mill was destroyed shortly thereafter by a gunpowder explosion. J. M. Merrow then founded one of the first knitting mills in the United States in partnership with his son, Joseph B. Merrow, under the name J. M. Merrow and Son; this knitting mill was located on the site of the old gunpowder mill. In the 1840s a machine shop was established at the Merrow mill to develop specialized machinery for the knitting operations, and in 1877 the world's first crochet machine was invented and patented by Joseph M. Merrow, then-president of the company. The crochet machine was the first production overlock sewing machine. The Merrow Machine Company went on to become one of the largest American manufacturers of overlock sewing machines, and continues as a global presence in the twenty-first century as the last American overlock sewing machine manufacturer.

James Edward Allen Gibbs (1829-1902), a farmer from Raphine in Rockbridge County, Virginia, patented the first chain-stitch single-thread sewing machine on June 2, 1857. In partnership with James Wilcox, Gibbs became a principal in the Wilcox & Gibbs Sewing Machine Company. Wilcox & Gibbs commercial sewing machines are still used in the twenty-first century. In 1905 Merrow won a lawsuit against Wilcox & Gibbs for the rights to the original crochet stitch.

Sewing machines continued to be made to roughly the same design—with ever more lavish decoration—until well into the 1900s, when the first electric machines started to appear. At first these were standard machines with a motor strapped on the side, but as more homes gained electric power they became more popular, and the motor was gradually integrated into the casing.

An overlock stitch sews over the edge of one or two pieces of cloth for edging, hemming or seaming.
Usually an overlock sewing machine will cut the edges of the cloth as they are fed through; such machines are called "sergers." Some overlock sewing machines are made without cutters. The inclusion of automated cutters allows overlock machines to create finished seams easily and quickly. An overlock sewing machine differs from a lockstitch sewing machine in that it utilizes loopers fed by multiple thread cones rather than a bobbin. Loopers serve to create thread loops that pass from the needle thread to the edges of the fabric, so that the edges of the fabric are contained within the seam. Overlock sewing machines usually run at high speeds, from 1,000 to 9,000 revolutions per minute (rpm), and most are used in industrial settings for edging, hemming and seaming a variety of fabrics and products. Overlock stitches are extremely versatile, as they can be used for decoration, reinforcement, or construction. Overlocking is also referred to as "overedging," "merrowing" or "serging." Though "serging" technically refers to overlocking with cutters, in practice the four terms are used interchangeably.

The use of sewing machines has grown over the years and has long outpaced sewing by hand. Modern machines may be computer controlled and use stepper motors or sequential cams to achieve very complex patterns. Most are now made in Asia, and the market is becoming more specialized as fewer families own a sewing machine.

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License, which may be used and disseminated with proper attribution; credit is due to both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation.
There is clear evidence of an association between low vitamin D status and pain in the general population, and chronic back pain in particular. Sometimes back pain is evidence of something more: a vitamin D deficiency. Scientific studies have demonstrated that inadequate vitamin D levels are linked with back pain, especially in older populations and in those with lower than average bone density. Low back pain is an extremely common health problem, and a major cause of lost activity in Asian communities, which is why several researchers have investigated whether vitamin D deficiency contributes to chronic low back pain specifically. Deficiency has also been associated with headache, abdominal pain, knee pain and persistent musculoskeletal pain. A study published in Pain Physician in 2013 found that severe pain was associated with very low vitamin D levels, and a study of women found an association between low vitamin D levels and chronic lower back pain.

Why would a vitamin deficiency cause back pain? Vitamin D helps your body absorb calcium, so without enough vitamin D you won't get enough calcium, and without enough calcium your bones can weaken, potentially leading to bone, joint and musculoskeletal pain. Vitamin D also acts as a steroid hormone in the body, has anti-inflammatory and pain-moderating properties, and has receptors in the spinal cord, the discs and the nerve roots. Before vitamin D was routinely added to foods such as milk, severely deficient children were at risk of rickets; in adults, severe deficiency leads to osteomalacia, which causes soft, weak bones, bone pain and muscle weakness, and contributes to osteoporosis (loss of bone mass). Vitamin D is essential, along with weight-bearing exercise, calcium, magnesium and good overall nutrition, for strong, healthy bones. People with vitamin D deficiency are also more likely to experience infection and insulin resistance, and deficiency has been linked with depression.

For many people the signs of a vitamin D deficiency are subtle and difficult to spot. Common manifestations include symmetric low back pain on both sides of the spine, proximal muscle weakness, and a deep, chronic aching in the bones that is often most noticeable in the knees and back. Other signs include extreme tiredness, frequent infections, hair loss, low immunity, low calcium levels and weight gain. One estimate puts the share of Americans who are deficient in vitamin D at around 60%. A blood test is the only true gauge of a vitamin D deficiency: if you think you may have one, see a GP and get your blood levels checked.

If the root cause of your back pain is a vitamin D deficiency, you can reduce the pain by taking vitamin D supplements as prescribed by your doctor. Sunshine is one of the best sources of vitamin D, but there are several more. For severe back pain, talk to your doctor, take any prescribed pain medication, and perform the prescribed physical therapy exercises to reduce your symptoms.
EROI, the Energy Return on Energy Invested, is the ratio of the energy an energy source delivers over its life span to the energy necessary to build the plant, supply it with fuel, operate it, and take it apart at the end of its life span. The same principle applies to any other energy source we can possibly imagine, in terms of its output-to-input ratio.

Fukushima is a trauma for Japan. It is a disaster of epic proportions and will have dramatic repercussions for Japan's energy landscape for many, many years to come. Fukushima also had a measurable impact on Germany's nuclear policy, and Japan itself underwent a significant change in its energy policy, considering geothermal energy in the aftermath of the nuclear catastrophe.

Keep in mind that nuclear energy and uranium have a very high EROI. As a starting point for what I will discuss below, nuclear energy has an EROI of around 75. The exact figure depends on the type of nuclear power plant, the security measures that have been put in place, and nuclear waste management, which I discussed previously; as a consequence, the EROI varies between 40:1 and 80:1. We also see that many Generation III+ reactors, which are light water reactors with improved economics, are currently being built.

On the question of waste: EU countries frequently export their nuclear waste to countries like Russia, where, depending on the exact terms of the agreement, it can be stored temporarily. Can nuclear waste be stored forever? Other countries like Australia have debated whether accepting nuclear waste for long-term storage is an option, based on an assessment of Australia's geology. Every country has to assess the risk nuclear waste poses individually. On the other hand, a lot of nuclear waste is stored temporarily on site at the nuclear power plant.

If we compare nuclear power plants purely on the basis of their energy potential, they compare very well with oil and gas production. Of course, we have to be very careful in assessing the long-term impact of storing nuclear waste and the energy needed to cool it. That is particularly true since the EROI of oil production has fallen considerably. Oil production in the United States had an EROI of 80:1 at the lower end, going up to 100:1 – but that was back in the early 20th century. Seen through that lens, the United States invested 1 barrel of oil in return for 80 to 100 barrels of oil. A great ratio! The EROI for Norwegian oil production in the North Sea was somewhere between 40:1 and 50:1 until recently. It is hard to imagine any other energy resource in the 20th century that could possibly compare to that.

In the United States, nuclear energy was commercialized from the late 1950s onward; the first commercial nuclear power plant, at Shippingport, Pennsylvania, became operational in 1957. The engineers and managers were aware that for electricity generation they had to compete with fossil fuels (oil, gas, coal). Everyone knew that renewable energy had a pretty low EROI, lower than that of nuclear energy and fossil fuels. Hydroelectric power was the exception, possessing an EROI of 35:1 to 50:1, depending on the exact location and the capacity to store water in reservoirs.
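To make the ratio concrete, here is a minimal sketch of the EROI arithmetic as defined at the top of this article. The lifetime figures are illustrative assumptions chosen only to reproduce the rough ratios quoted in the text; they are not measured data.

```python
def eroi(energy_delivered, build, fuel_supply, operation, decommission):
    """EROI = lifetime energy delivered / lifetime energy invested."""
    energy_invested = build + fuel_supply + operation + decommission
    return energy_delivered / energy_invested

# Illustrative lifetime energy figures in petajoules (assumed, not measured):
nuclear = eroi(energy_delivered=3000, build=10, fuel_supply=15,
               operation=10, decommission=5)   # 3000 / 40 -> 75:1
hydro = eroi(energy_delivered=1750, build=30, fuel_supply=0,
             operation=15, decommission=5)     # 1750 / 50 -> 35:1

print(f"nuclear EROI ~ {nuclear:.0f}:1, hydro EROI ~ {hydro:.0f}:1")
```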
That low EROI is the first reason renewable energy was not pursued at the time. Reason number two is that renewable energy was extremely costly, which was certainly the case in the 1960s, when it was not yet clear that renewable energy would become as competitive as it is today. Serial production of solar panels and wind turbines all came much later.

Without oil we depend more on nuclear energy

We are losing the world's oil reserves! We have crossed the Rubicon. Geologists have pointed out that there is a very good chance we have already reached peak oil globally. Analyses show that oil production in the United States peaked around 1970, and geologists have found that other world regions have peaked as well. If we take all countries together and combine their production curves in one graph, what we get is a bell-shaped curve resembling a normal distribution – the well-known Hubbert curve. Production rises initially, reaches a peak, and then declines steadily.

The implications of this are even more dramatic. Oil reserves are not as abundant as they were in the past. The large oil companies have exploited the world's most productive oil fields for many years. As investors, these oil companies often had a leg up on state-owned oil companies in designing and operating oil rigs. In the case of Great Britain, one could see how the most advanced equipment was put to use in the North Sea basin to produce more oil and natural gas in less time. That efficiency and effectiveness meant the oil fields were used up faster.

The oil (and gas) industry now faces some major changes. Oil wells that have been in use for many years will go first, as their flow rate slows down. Once the older oil wells have been exploited, less profitable oil wells remain. From a financial point of view, what we are left with are oil wells whose cash flow is not as good as that of the older ones. We have to keep in mind that investments in the oil industry are weighed against investments in other sectors and industries, including renewable energy installations. We are left with oil fields that carry high capital and maintenance costs, considerable expense to keep up the complex technical infrastructure, and difficulty commercializing the project. The main problem for oil producers is the high capital cost and operating expense required to maintain all of that modern infrastructure on oil rigs.

We are left with two options.

Option number one: we exploit smaller reserves, that is, oil wells with a slower flow rate. Financing is then often much more difficult to acquire, and many oil fields are not even profitable when the price of crude oil is low. Crude oil prices (Brent and West Texas Intermediate) have hovered around 60 US dollars per barrel since 2015, and they are unlikely to go much higher in the near future: the world economy is unlikely to absorb a crude price above 100 US dollars per barrel. We should also keep in mind that the flow rate of shale oil and shale gas wells rises quickly, but then diminishes just as quickly.

How does this affect us? The increased use of high tech implies that the remaining oil reserves have a much lower EROI. Looking at oil exploration, we clearly see that complexity comes at a huge cost. The remaining oil fields worth exploring are mostly in OPEC countries, most often non-Western countries, or in countries that do not belong to OPEC, such as Russia. It is hard to compete on that basis if your own oil fields are diminishing rapidly.
The cash flow back home is no longer sufficient, but with new technology Western companies can take on new projects and explore oil fields in OPEC countries. Western oil companies have the advantage of possessing technical know-how that is of great interest to OPEC countries and other oil producing nations; Western knowledge in the mining sector is also of interest. What we see is that many oil companies have opted to enter cooperative agreements with state-owned oil companies in OPEC countries and in oil producing countries like Russia that still have undiscovered recoverable oil reserves worth exploring. Russia has many interesting locations for oil exploration in the north of the country.

Option number two: worldwide oil exploration becomes centered on ever fewer oil producing nations. Among them are OPEC nations such as Iran, Saudi Arabia, Venezuela and Nigeria, and other oil producers such as Russia, which for historical reasons did not join OPEC.

The global oil business has changed. There is healthy competition between the growing East Asian economy and the European economy, which grows at a much slower pace. Both world regions rely on energy imports – they are dependent on oil exporting countries – and both need those imports to maintain their trade with each other and with the rest of the world. I did not include the United States, because the United States has been able to increase oil production and its share of world oil markets. At least in the short term, the United States can draw on shale oil and shale gas, and it has access to additional reserves in the NAFTA region. American shale oil and shale gas production, however, requires much higher oil prices to sustain investment, and it is possible that investments in shale oil and shale gas will decline over the coming years because quite a lot of them are not profitable.

I see the problems somewhere else. They are mainly centered on the long-term viability of oil prices on commodity markets. Western and Asian nations are in direct competition with one another, and their imports sustain countries in the Middle East. Many OPEC countries would need prices above 100 US dollars per barrel to avoid a recession in their national economies. I think that is a huge risk: lower prices lead to instability in oil producing nations.

We can conclude from this: the EROI of fossil fuels (oil, gas, coal) depends on the geological conditions of the rock strata and the fossil fuels compressed in the rock. Geological conditions influence overall capacity, flow rate and chemical composition. The production of fossil fuels (I have mostly discussed oil) follows a bell-shaped, Hubbert-style curve, and that means the EROI of fossil fuels can only decrease in the future. Due to market pressure, many oil companies will have to explore new sites and tap into oil fields that are less profitable in the long term. That might include Canadian tar sands.

From my point of view, there are three reasons why they will do so regardless. Reason number one is that interest rates are near record lows. Reason number two is that there are tax incentives to invest in oil exploration and production. Reason number three is that oil reserves are real assets. The problem is that shale oil and shale gas in particular depend on a higher oil (and gas) price to be profitable on the world market, and prices are low at the moment. As I have said, that doesn't help with the EROI.
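Before turning to solar and wind, here is a small sketch of the bell-shaped production profile described above, assuming the common Hubbert form, which is the derivative of a logistic function. The parameters are made up for illustration and only loosely echo the US conventional-oil peak around 1970.

```python
import math

def hubbert_production(t, urr, peak_year, width):
    """Annual production at year t for a logistic (Hubbert) profile.

    urr:   ultimately recoverable resource (total area under the curve)
    width: steepness parameter; smaller values give a sharper peak
    """
    x = math.exp(-(t - peak_year) / width)
    return (urr / width) * x / (1.0 + x) ** 2

# Made-up parameters for illustration (units: billion barrels, years):
for year in (1930, 1950, 1970, 1990, 2010):
    p = hubbert_production(year, urr=230.0, peak_year=1970, width=15.0)
    print(year, round(p, 2))  # rises to a peak in 1970, then declines
```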
What about solar and wind energy?

Solar energy is worthwhile, but photovoltaics really depend on the exact location. The yield (kWh for every kWp) of solar panels in regions with low solar radiation, as is the case in Europe, is much lower than in regions near the equator. In addition, cloud coverage in Europe is higher than almost anywhere else on the planet. At first glance, photovoltaic installations appear less suitable for Europe; seasonal variations and the Earth's tilt contribute to this.

Take Desertec, for example. Desertec was supposed to provide base load electricity to the European electricity grid. The plan was to generate electricity in North Africa and deliver it to Central Europe as high voltage direct current, across the Mediterranean. Transmission losses would have meant that a significant portion of the electricity was lost on its way to Europe. It is particularly noteworthy that, until now, the project could not be financed.

It is also the case that Northern Europe and Germany aren't ideal locations for photovoltaic installations. In Northern Europe, wind energy makes a lot more sense. Wind velocity in the North Sea is much higher than on mainland Europe, and it is easier to predict. But whoever takes a look at the map of Europe realizes that the countries located along the North Sea basin (the Netherlands, the UK, Belgium, Germany, Denmark, Norway) are all pretty close to each other. That is not an ideal situation, because when the wind doesn't blow in England, it is likely to be a similar situation in the Netherlands. For Europe, it is crucial to avoid weather-dependency and intermittent electricity production. The North Sea basin is not the only region suited for offshore wind energy: coastal waters off Massachusetts are ideally suited for it, and offshore wind energy there could make a significant contribution to electricity production. In fact, coastal waters along the eastern seaboard of the United States are known for high wind velocities, making them ideal sites for offshore wind parks.

Let us take another look at the transmission system operators (TSOs for short): to make use of offshore wind energy, one has to connect the offshore wind park to the grid network, which can be very costly. Very often it is not clear who has to pay for connecting offshore wind parks to the grid on land, especially when the investors in the offshore wind parks are not the same as those behind the grid network.

All of this lowers the EROI significantly for solar and wind energy; often the EROI is less than 10:1 without energy storage. Solar and wind energy are intermittent energy sources, which means their EROI is lower than that of energy sources available at any time of the day, and they cannot serve as base load for transmission network operators. Also of consequence is the fact that wind parks on land have a much lower EROI than offshore wind parks. That is not ideal.

To compare, hydropower is available every day of the week, the whole year round, and remains relatively impervious to short-term weather changes. Water reservoirs make hydropower an ideal energy solution: as far as the electricity grid is concerned, we can respond to these changes ahead of time.
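The storage point can be made concrete with a back-of-the-envelope adjustment: if part of an intermittent source's output has to pass through storage, the round-trip losses and the energy embodied in the storage hardware both push the effective EROI down. What follows is a minimal sketch with assumed numbers, not an established buffered-EROI methodology.

```python
def buffered_eroi(raw_eroi, stored_fraction, round_trip_eff, storage_embodied_share):
    """Effective EROI once part of the output is routed through storage.

    stored_fraction:        share of output passing through storage (0..1)
    round_trip_eff:         storage round-trip efficiency (0..1)
    storage_embodied_share: energy embodied in the storage hardware, as a
                            share of the plant's own lifetime energy investment
    """
    # The stored share of the output shrinks by the round-trip losses...
    delivered = (1 - stored_fraction) + stored_fraction * round_trip_eff
    # ...while the invested energy grows by the storage hardware.
    invested = 1 + storage_embodied_share
    return raw_eroi * delivered / invested

# Assumed values: wind at 10:1, 30% of output buffered at 80% round-trip
# efficiency, storage hardware adding 25% to the energy invested.
print(round(buffered_eroi(10.0, 0.30, 0.80, 0.25), 1))  # -> 7.5
```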
Seen from this angle, nuclear power and hydropower can serve as future alternatives to fossil fuels

I may have touched on this previously: scientific studies suggest that industrial societies require an EROI of maybe 15:1 to 10:1 to function smoothly. Some authors are even more conservative in how they measure the EROI of solar and wind energy, and put the EROI necessary to maintain modern civilization at around 10:1 or even 9:1. With energy storage in batteries and the like, solar and wind energy are just at that tipping point where they provide enough energy to sustain most aspects of modern industrial life. Should storage options for solar and wind energy not improve significantly in the near future, we face considerable damage to our economy.

Let us take another look at the EROI of renewable energy sources. Renewable energy cannot guarantee the living standard we have come to enjoy in the Western world; the wealth of industrial nations relies on an energy surplus. We cannot possibly build enough hydropower plants in Europe to compensate for the loss of fossil fuels. To put the actual importance of hydropower in global energy markets into perspective: not all countries possess water resources sufficient to meet the needs of their local population, and many nations struggle to provide drinking water to their people. Hydropower can seem like a mirage disguised as an energy solution. Water, agriculture and energy issues intersect, and hydropower dramatically lowers groundwater levels downriver. Yet many of these countries require energy solutions with a high EROI ratio.

Let us think of it this way: not all countries possess enough water resources for agricultural production, and many face difficulties just allocating water between agriculture and other purposes. Declining groundwater levels are a serious issue in some of these countries. Many countries located along the equator face less of an energy problem and more of a water problem: water resources often do not meet local demand, and energy must be expended converting salt water into fresh water. For that, nuclear energy would be a good option, but it would come with certain risks in some of these locations, including security concerns about how nuclear power plants are handled.

The risk of technical failure remains with us for a long time to come; human error cannot be excluded completely from nuclear power stations. Civilian use of nuclear power stations is on the rise globally and has prevailed as a technological solution in developing countries. That being said, nuclear power has some major drawbacks when it comes to safety, as has been shown by the nuclear incidents at Three Mile Island, Fukushima and Chernobyl.

Fossil fuel use is not without its risks either. The transportation business is closely linked to fossil fuel use and, looked at factually, there simply is no energy resource without associated risks. Statistically, you face quite a high risk from traffic and from individual exposure to fumes and dioxins, NOx and other volatile compounds simply by walking down the street. The health risk from diesel engines can be much greater, even though it is quite difficult to demonstrate that an individual's health has suffered specifically from car emissions or diesel exhaust. Nuclear incidents loom large because they are single events that often have an immediate effect.
So in effect, one should not underestimate the risks of nuclear incidents, though we are entering a new age, constructing new plants such as Generation III and Generation III+ reactors. Generation IV reactors generally allow for a much better level of safety, and less radioactive waste is produced in the process. The actual problem with nuclear power plants is the radioactive waste being generated. In a previous article, I went into more detail on this topic and examined this particular issue, so that we may gain a proper understanding of what we can do to resolve our energy predicament.

The good news first: we do have alternatives to the conventional reactors we currently use for generating electricity from nuclear power stations. Dual fluid reactors can use thorium, but they do not have to. A considerable amount of research analyzes the use of thorium for generating electricity in an environmentally friendly way, with most research activity conducted in Asian countries. Generally, the focus is on specific aspects of the design, and researchers examine whether it is possible to commercialize dual fluid reactors, using molten salt in various forms, among other things for its moderating effect. For that purpose, thorium can be used as breeder material. It appears the EROI exceeds that of conventional nuclear power plants by multiples; nevertheless, it has to be measured more precisely for a proper assessment.

Radioactive decay of thorium is minimal compared to uranium, and thorium is an ideal breeder material. We would have two separate systems, which means safety is greatly improved: one system for the breeder material and another for the cooling process. The EROI depends, among other things, on the design of the reactor and on the safety measures that have to be taken. There are different waste disposal options available for molten salt reactors, so the EROI should be above that of Generation III+ reactors, because waste disposal is less energy intensive – especially the cooling of nuclear waste. Thorium is a byproduct of rare earth mining; there is a considerable cost associated with processing the material. So we see that the EROI for nuclear power can vary considerably.

Many thanks for the shared interest in the energy world! For any inquiries and/or networking opportunities you can contact me at the following email address:
- Crucial role of scientific knowledge and research output in national development
- Lack of government support and funding the biggest hindrance to solving specific problems confronting Africa
- Impact of brain drain remains acute
- Greater scientific research collaboration within Africa an imperative
- Virtual collaboration a viable and cost-effective counter to hindrances
- Significant benefits of an accessible and continuously updated Pan-African database of research facilities and their equipment inventories

In March 2016 I attended the Next Einstein Forum Global Gathering in Dakar, Senegal. It was the first global scientific gathering ever held in Africa, and it was driven by the African Institute for Mathematical Sciences. The event showcased young African scientists who are conducting world class research within and outside the continent, and provided an opportunity to make government officials and investors more aware of the crucial role of science in national development. Workshops were arranged beforehand to facilitate research collaboration between our researchers and counterparts from other parts of the world. It was declared that Africa will be the home of the next Albert Einstein.

While such optimism is welcome, there is no cause for complacency. Africa constitutes 15% of the world's population but produces only about 1% of scientific knowledge. According to the UNESCO Science Report 2010, Africa has 164 scientists per million inhabitants, compared to 656 in Brazil, 4,180 in Europe and 4,663 in the USA. It is also clear from this report that there is a direct relationship between the availability of scientists and the research output of a particular region. The statistics might make a curious mind ask whether Africans are not particularly interested in science, or whether the current research environment fails to actively encourage the acquisition of scientific knowledge.

The notion that Africans are not interested in science is patently false. However, the lack of support by our governments remains the single biggest obstacle plaguing scientific research output. This may sound like a convenient explanation – "blame the government" – but the truth is that the promotion of science and scientific research requires very good laboratories and teaching aids in public schools and universities. While such facilities exist in pockets here and there, investment in education generally, and in scientific education specifically, is devastatingly low throughout Africa.

Some efforts have been made to improve scientific research output, for example by setting up exchange programmes between our universities and foreign institutions. The impact of such initiatives on total output has been negligible. One explanation for this is that most researchers who travel abroad for training do not return home. The so-called "brain drain" remains acute. More than 300,000 highly qualified Africans live abroad, of whom one in ten has a doctorate. I do not wish to castigate fellow researchers who decide to work elsewhere, simply to emphasise that policy rarely favours the promotion of indigenous research. Many trained researchers who do return home often leave again, frustrated that their skills cannot be more effectively deployed in their home countries.
They seldom find that they are able to apply their skills to solving problems critical to the continent, due to the lack of basic infrastructure, inadequate funding and support, political instability, misplaced government priorities or myriad other reasons. So now we have a scenario where researchers who are trained outside Africa and do not return are likely to focus on research problems that may not necessarily benefit the development of the continent.

How to diminish the effects of brain drain is a topic of frequent and wide debate. I would like to focus on just one associated question: are there ways of ensuring that scientific researchers trained within Africa are as competent as those trained abroad? I am certain the answer to this is affirmative.

A very good example is the Regional Initiative in Science and Education (RISE), established in 2008 by the Institute for Advanced Study's Science Initiative Group and funded by the Carnegie Corporation. RISE led to the creation of five research networks throughout Africa. I was a beneficiary of one of these – the African Materials Science and Engineering Network (AMSEN) – and was its representative at the Next Einstein Forum. The others are the Southern African Biochemistry and Informatics for Natural Products Network (SABINA), the Sub-Saharan Africa Water Resources Network (SSAWRN), the African Natural Products Network (RISE-AFFNET) and the Western Indian Ocean Initiative (WIO). The Pan African Materials Institute (PAMI) at the African University of Science and Technology in Abuja, Nigeria, is another example of an emerging initiative to improve science literacy and reduce brain drain.

These institutions are providing much-needed support for researchers to solve specific problems confronting Africa. RISE has enabled the acquisition of state-of-the-art research equipment: the Federal University of Technology Akure (FUTA), where I completed my Master's, was able to acquire a potentiostat, an optical microscope, a polishing machine and other essential equipment for research in my field, metallurgical engineering. It has also facilitated networking opportunities for researchers that have boosted their confidence in their research findings and their contribution to scientific knowledge.

Despite these initiatives, research collaboration within Africa is still very weak. We, as African research scientists, urgently need to remind ourselves of the benefits of collaboration and ask ourselves how we can advance it.

The need (and demand) for collaboration

The best approach to problem solving and knowledge creation is multi- and interdisciplinary research. This was acknowledged by the Vice Chancellor of the University of Cambridge at the Next Einstein Forum Global Gathering in Dakar when he said: "Africa's search for the new Einstein seems to be a misplaced priority because the most efficient way of solving the challenges at hand is through vibrant collaborations". A typical example is the observation of gravitational waves through the facility called LIGO. Gravitational waves were predicted by Albert Einstein's general theory of relativity in 1915. One hundred years later, it took around 1,000 scientists from over 130 global institutions to measure these waves. This confirms what can be achieved through collaboration. All continents were represented in this important discovery except Africa. No African university took part.
A number of recent articles by African researchers have focused on the imperative to collaborate, but none really address how this can be achieved. Research scientists know all too well about the lack of funding and basic infrastructure, the unequal distribution of research facilities on the continent, the difficulty in obtaining study visas and the expense of travel from one country to another. The prevailing conditions for conferences, workshops and symposiums – where collaboration is often initiated and fostered – are not propitious. But there is a way to counter these hindrances – through virtual collaboration.

My fellow students and I knew that the facilities we needed existed in America, Asia and Europe. What we did not know at the time – because most of the literature in my field is published in American, Asian and European journals – was that South Africa had some of the facilities.

Today, I am a doctoral student in the Centre of Excellence in Strong Materials at the University of the Witwatersrand in South Africa. It was not so difficult for me to register for the programme because FUTA and the University of the Witwatersrand are both part of AMSEN, and I sent my research proposal to potential supervisors in South Africa. I am currently developing less expensive titanium alloys for land-based applications, part of an effort by the South African government to set up a robust titanium industry.

On average, I receive three calls a month from students and lecturers who want to carry out experiments using facilities that are not available in their own countries. Most of the calls are from Nigeria, but also from Kenya and Ghana. Some callers work in the same field as me; others are asking if I know any institution with research facilities for chemistry. Nigerian and South African friends at other universities here tell me they receive similar inquiries. One thing that is striking is the willingness of the research students and their supervisors to pay for conducting experiments. Funding for research in Africa may be inadequate, but it is clearly not non-existent.

In 2015 I attended the African Materials Research Society Conference in Accra, Ghana. There I met colleagues who are currently lecturing at Kwara State Polytechnic in Nigeria. It was surprising to learn that a state-owned institution now has a scanning electron microscope in good working order. I asked how often they use the microscope and the response was “not very often, because people are not aware of the availability of such a microscope”. Although the microscope is not as powerful as those available in South Africa, good quality images at higher magnifications can still be taken.

Other research institutes in Nigeria have acquired facilities that research students who need them do not know about. A friend of mine working in one of the institutes under the National Agency for Science and Engineering Infrastructure (NASENI) told me about new facilities they have in their laboratories. How many researchers in my field – in Nigeria or elsewhere in Africa – know that NASENI’s Prototype Engineering Development Institute (PEDI) and Engineering Materials Development Institute (EMDI) have a CNC lathe, a CNC laser cutting machine, an Instron universal testing machine, an X-ray diffraction facility and a vibrating sample magnetometer? I only discovered this by word of mouth.
I also learned from my friend that much of the new equipment is either under-utilised or not used at all.

Recently, I had the privilege of assisting my former supervisor at FUTA by commissioning X-ray diffraction (XRD) analysis for him in South Africa. This was much cheaper than if he had come all the way from Nigeria himself. However, I subsequently discovered that the XRD measurement could have been carried out at the University of Ghana, and that it would have cost at least 20% less if the samples had been sent to Ghana rather than South Africa. The irony is that the University of Ghana and FUTA are both AMSEN nodes. The information should have been readily available.

The experiences above have emphasised to me the widespread demand for research facilities, the ability of many researchers to pay for experiments and the ready availability of some of the required facilities within Africa. The major stumbling block is lack of up-to-date and accessible information.

Towards a virtual resource network

All African research institutions have access to the internet, albeit at varying speeds and cost. Academic “aggregator” websites exist, such as researchgate.org and academia.edu. So why not create an online platform to which research institutions can upload a list of their facilities, with details of their availability, costs for use and the person responsible for the equipment? (A sketch of what a single listing might look like appears at the end of this article.) In this way:

- underutilised facilities would be better used and funds generated for maintenance or the acquisition of new equipment (n.b. frequency of use supports grant applications for new equipment and cases for laboratory expansion)
- researchers could more easily carry out the high quality research required for their work to be publishable in high impact scientific journals, thereby raising their profile and enabling them to connect more easily with global networks
- researchers could negotiate for recognition as co-authors of papers if they assist in experiments by third parties conducted in the laboratory at their institution
- proposals for research grants would be strengthened if the expertise and facilities of researchers in other African institutions could be called on when multidisciplinary research was required

Finally, a virtual network of research facilities would make the organisation of intra-Africa conferences and scientific events to influence government policies more straightforward and more likely to yield results.

The virtual network I propose sounds simple enough. Implementation is more problematic. A commitment from all universities and other research institutions to provide up-to-date information about their research facilities and equipment would not be easily realised. But it is possible and necessary. We can either continue to bemoan the state of scientific research in Africa, complain about the lack of government support and hope for generous assistance from abroad. Or we can start to address the problem by collectively making the most of the research facilities and expertise we already have.

Michael Oluwatosin Bodunrin is a PhD student in the School of Chemical and Metallurgical Engineering at the University of the Witwatersrand, South Africa.

Any institution interested in assisting to set up – or host – a virtual network of African scientific research facilities can contact Michael Bodunrin through: Edward Paice, Director of ARI [email protected]
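By way of illustration only, here is a minimal sketch of the facility-listing idea proposed above. The platform does not exist; every class name, field and value below is a hypothetical assumption of mine, not part of any real system. In Python, a single facility record and a simple search over records might look like this:

# Hypothetical sketch of one listing for the proposed virtual resource network.
# All names, fields and values are illustrative assumptions, not a real schema.
from dataclasses import dataclass

@dataclass
class FacilityListing:
    institution: str            # e.g. "University of Ghana" (hypothetical entry)
    country: str
    equipment: str              # e.g. "X-ray diffractometer"
    in_working_order: bool      # is the equipment currently usable?
    cost_per_sample_usd: float  # indicative cost charged to external users
    contact_name: str
    contact_email: str
    notes: str = ""             # turnaround time, sample preparation, etc.

def find_equipment(listings, keyword):
    """Return every listing whose equipment description mentions the keyword."""
    return [l for l in listings if keyword.lower() in l.equipment.lower()]

# Example use: a researcher searching the network for XRD facilities.
listings = [
    FacilityListing("University of Ghana", "Ghana", "X-ray diffractometer",
                    True, 40.0, "Laboratory manager", "xrd@example.org"),
]
print(find_equipment(listings, "x-ray"))

Even a flat, regularly updated table of such records, hosted on one of the existing aggregator sites, would remove the word-of-mouth bottleneck described above.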
Every June, after the rainy season ends in the grassy highlands of southern Peru, the residents of four villages near Huinchiri, at more than 12,000 feet in altitude, come together for a three-day festival. Men, women and children have already spent days in busy preparation: They’ve gathered bushels of long grasses, which they’ve then soaked, pounded, and dried in the sun. These tough fibers have been twisted and braided into narrow cords, which in turn have been woven together to form six heavy cables, each the circumference of a man’s thigh and more than 100 feet long.

Dozens of men heave the long cables over their shoulders and carry them single file to the edge of a deep, rocky canyon. About a hundred feet below flows the Apurímac River. Village elders murmur blessings to Mother Earth and Mother Water, then make ritual offerings by burning coca leaves and sacrificing guinea pigs and sheep. Shortly after, the villagers set to work linking one side of the canyon to the other.

Relying on a bridge they built the same way a year earlier—now sagging from use—they stretch out four new cables, lashing each one to rocks on either side, to form the base of the new 100-foot-long bridge. After testing them for strength and tautness, they fasten the remaining two cables above the others to serve as handrails. Villagers lay down sticks and woven grass mats to stabilize, pave and cushion the structure. Webs of dried fiber are quickly woven, joining the handrails to the base. The old bridge is cut; it falls gently into the water. At the end of the third day, the new hanging bridge is complete. The leaders of each of the four communities, two from either side of the canyon, walk toward one another and meet in the middle. “Tukuushis!” they exclaim. “We’ve finished!”

And so it has gone for centuries. The indigenous Quechua communities, descendants of the ancient Inca, have been building and rebuilding this twisted-rope bridge, or Q’eswachaka, in the same way for more than 500 years. It’s a legacy and living link to an ancient past—a bridge not only capable of bearing some 5,000 pounds but also empowered by profound spiritual strength.

To the Quechua, the bridge is linked to earth and water, both of which are connected to the heavens. Water comes from the sky; the earth distributes it. In their incantations, the elders ask the earth to support the bridge and the water to accept its presence. The rope itself is endowed with powerful symbolism: Legend has it that in ancient times the supreme Inca ruler sent out ropes from his capital in Cusco, and they united all under a peaceful and prosperous reign. The bridge, says Ramiro Matos, physically and spiritually “embraces one side and the other side.”

A Peruvian of Quechua descent, Matos is an expert on the famed Inca Road, of which this Q’eswachaka makes up just one tiny part. He’s been studying it since the 1980s and has published several books on the Inca. For the past seven years, Matos and his colleagues have traveled throughout the six South American countries where the road runs, compiling an unprecedented ethnography and oral history. Their detailed interviews with more than 50 indigenous people form the core of a major new exhibition, “The Great Inka Road: Engineering an Empire,” at the Smithsonian Institution’s National Museum of the American Indian.

“This show is different from a strict archaeological exhibition,” Matos says.
“It’s all about using a contemporary, living culture to understand the past.” Featured front and center, the people of the Inca Road serve as mediators of their own identity. And their living culture makes it clear that “the Inca Road is a living road,” Matos says. “It has energy, a spirit and a people.”

Matos is the ideal guide to steer such a complex project. For the past 50 years, he has moved gracefully between worlds—past and present, universities and villages, museums and archaeological sites, South and North America, and English and non-English speakers. “I can connect the contemporary, present Quechua people with their past,” he says.

Numerous museum exhibitions have highlighted Inca wonders, but none to date have focused so ambitiously on the road itself, perhaps because of the political, logistical and conceptual complexities. “Inca gold is easy to describe and display,” Matos explains. Such dazzling objects scarcely need an introduction. “But this is a road,” he continues. “The road is the protagonist, the actor. How do we show that?”

The sacred importance of this thoroughfare makes the task daunting. When, more than a hundred years ago, the American explorer Hiram Bingham III came across part of the Inca Road leading to the fabled 15th-century site of Machu Picchu, he saw only the remains of an overgrown physical highway, a rudimentary means of transit. Certainly most roads, whether ancient or modern, exist for the prosaic purpose of aiding commerce, conducting wars, or enabling people to travel to work. We might get our kicks on Route 66 or gasp while rounding the curves on Italy’s Amalfi Coast—but for the most part, when we hit the road, we’re not deriving spiritual strength from the highway itself. We’re just aiming to get somewhere efficiently.

Not so the Inca Road. “This roadway has a spirit,” Matos says, “while other roads are empty.” Bolivian Walter Alvarez, a descendant of the Inca, told Matos that the road is alive. “It protects us,” he said. “Passing along the way of our ancestors, we are protected by the Pachamama [Mother Earth]. The Pachamama is life energy, and wisdom.” To this day, Alvarez said, traditional healers make a point of traveling the road on foot. To ride in a vehicle would be inconceivable: The road itself is the source from which the healers absorb their special energy.

“Walking the Inca Trail, we are never tired,” Quechua leader Pedro Sulca explained to Matos in 2009. “The llamas and donkeys that walk the Inca Trail never get tired … because the old path has the blessings of the Inca.” It has other powers too: “The Inca Trail shortens distances,” said Porfirio Ninahuaman, a Quechua from near the Andean city of Cerro de Pasco in Peru. “The modern road makes them farther.” Matos knows of Bolivian healers who hike the road from Bolivia to Peru’s central highlands, a distance of some 500 miles, in less than two weeks.

“They say our Inka [the Inca king] had the power of the sun, who commanded on earth and all obeyed—people, animals, even rocks and stones,” said Nazario Turpo, an indigenous Quechua living near Cusco.
“One day, the Inka, with his golden sling, ordered rocks and pebbles to leave his place, to move in an orderly manner, form walls, and open the great road for the Inca Empire… So was created the Capac Ñan.”

This monumental achievement, this vast ancient highway—known to the Inca, and today in Quechua, as Capac Ñan, commonly translated as the Royal Road but literally as “Road of the Lord”—was the glue that held together the vast Inca Empire, supporting both its expansion and its successful integration into a range of cultures. It was paved with blocks of stone, reinforced with retaining walls, dug into rock faces, and linked by as many as 200 bridges, like the one at Huinchiri, made of woven-grass rope, swaying high above churning rivers. The Inca engineers cut through some of the most diverse and extreme terrain in the world, spanning rain forests, deserts and high mountains.

At its early 16th-century peak, the Inca Empire included between eight million and twelve million people and extended from modern-day Colombia down to Chile and Argentina via Ecuador, Bolivia and Peru. The Capac Ñan linked Cusco, the Inca capital and center of its universe, with the rest of the realm, its main route and tributaries radiating in all directions. The largest empire in its day, it also ranked among the most sophisticated, incorporating a diverse array of chiefdoms, kingdoms and tribes. Unlike other great empires, it used no currency. A powerful army and an extraordinary central bureaucracy administered business and ensured that everyone worked—in agriculture until the harvest, and doing public works thereafter. Labor—including work on this great road—was the tax Inca subjects paid. Inca engineers planned and built the road without benefit of wheeled devices, draft animals, a written language, or even metal tools.

The last map of the Inca Road, considered the base map until now, was completed more than three decades ago, in 1984. It shows the road running for 14,378 miles. But the remapping conducted by Matos and an international group of scholars revealed that it actually stretched for nearly 25,000 miles. The new map was completed by Smithsonian cartographers for inclusion in the exhibition. Partly as a result of this work, the Inca Road became a UNESCO World Heritage site in 2014.

Before Matos became professionally interested in the road, it was simply a part of his daily life. Born in 1937 in the village of Huancavelica, at an altitude of some 12,000 feet in Peru’s central highlands, Matos grew up speaking Quechua; his family used the road to travel back and forth to the nearest town, some three hours away. “It was my first experience of walking on the Inca Road,” he says, though he didn’t realize it then, simply referring to it as the “Horse Road.” No cars came to Huancavelica until the 1970s. Today his old village is barely recognizable. “There were 300 people then. It’s cosmopolitan now.”

As a student in the 1950s at Lima’s National University of San Marcos, Matos diverged from his path into the legal profession when he realized that he enjoyed history classes far more than studying law. A professor suggested archaeology. He never looked back, going on to become a noted archaeologist, excavating and restoring ancient Andean sites, and a foremost anthropologist, pioneering the use of current native knowledge to understand his people’s past. Along the way, he has become instrumental in creating local museums that safeguard and interpret pre-Inca objects and structures.
Since Matos first came to the United States in 1976, he has held visiting professorships at three American universities, as well as ones in Copenhagen, Tokyo and Bonn. That’s in addition to previous professorial appointments at two Peruvian universities. In Washington, D.C., where he’s lived and worked since 1996, he still embraces his Andean roots, taking part in festivals and other activities with fellow Quechua immigrants. “Speaking Quechua is part of my legacy,” he says.

Among the six million Quechua speakers in South America today, many of the old ways remain. “People live in the same houses, the same places, and use the same roads as in the Inca time,” Matos says. “They’re planting the same plants. Their beliefs are still strong.” But in some cases, the indigenous people Matos and his team interviewed represent the last living link to long-ago days.

Seven years ago, Matos and his team interviewed 92-year-old Demetrio Roca, who recalled a 25-mile walk in 1925 with his mother from their village to Cusco, where she was a vendor in the central plaza. They were granted entrance to the sacred city only after they had prayed and engaged in a ritual purification. Roca wept as he spoke of new construction wiping out his community’s last Inca sacred place—destroyed, as it happened, for road expansion.

Nowadays, about 500 communities in Ecuador, Peru, Bolivia and northwestern Argentina rely on what remains of the road, much of it overgrown or destroyed by earthquakes or landslides. In isolated areas, it remains “the only road for their interactions,” Matos says. While they use it to go to market, it’s always been more than just a means of transport. “For them,” Matos says, “it’s Mother Earth, a companion.” And so they make offerings at sacred sites along the route, praying for safe travels and a speedy return, just as they’ve done for hundreds of years.

That compression of time and space is very much in keeping with the spirit of the museum exhibition, linking past and present—and with the Quechua worldview. Quechua speakers, Matos says, use the same word, pacha, to mean both time and space. “No space without time, no time without space,” he says. “It’s very sophisticated.”

The Quechua have persevered over the years in spite of severe political and environmental threats, including persecution by Shining Path Maoist guerrillas and terrorists in the 1980s. Nowadays the threats to indigenous people come from water scarcity—potentially devastating to agricultural communities—and the environmental effects of exploitation of natural resources, including copper, lead and gold, in the regions they call home. “To preserve their traditional culture, [the Quechua] need to preserve the environment, especially from water and mining threats,” Matos emphasizes.

But education needs to be improved too. “There are schools everywhere,” he says, “but there is no strong pre-Hispanic history. Native communities are not strongly connected with their past. In Cusco, it’s still strong. In other places, no.” Still, he says, there is greater pride than ever among the Quechua, partly the benefit of vigorous tourism. (Some 8,000 people flocked to Huinchiri to watch the bridge-building ceremony in June last year.) “Now people are feeling proud to speak Quechua,” Matos says. “People are feeling very proud to be descendants of the Inca.” Matos hopes the Inca Road exhibition will help inspire greater commitment to preserving and understanding his people’s past.
“Now,” he says, “is the crucial moment.”

This story is from the new travel quarterly, Smithsonian Journeys, which will arrive on newsstands July 14.
The Incarceration of African Americans

In the October cover story, Ta-Nehisi Coates explored America’s history of mass incarceration over the past 50 years. He traced the intellectual basis of the policy in part to Daniel Patrick Moynihan’s 1965 report on “The Negro Family.”

The article was every bit as harrowing, illuminating, and infuriating as its famous predecessor, “The Case for Reparations” … Coates is one of the great social writers of our time, and singularly qualified to do work of this scale and ambition, which changes how Americans view their own history and how they view themselves. The Atlantic, for its part, is nearly singular in its willingness and ability to approve, finance, and publish this kind of work.

As beautiful a writer as he is, what makes Coates’s writing so powerful and so radicalizing is his reporting and research. His telling of history is nauseating precisely because it amounts to no more than the arresting arrangement of iron facts … There is very little that can be described as controversial in his pieces. The only controversy comes in how Americans react to them. In one significant respect, I think Coates fails his readership and fails to represent something vital about African Americans—his writing lacks hope … I suspect it’s this deepening despair that coaxes Coates into making two lamentable errors in “The Black Family in the Age of Mass Incarceration.”

First, Coates repeats the significant failure he recognizes in an earlier Moynihan. Coates tells us that the fatal flaw in Moynihan’s infamous report was Moynihan’s decision to omit specific policy solutions. Having seen that so clearly, it’s odd that Coates should repeat that failure so often in the important writing he now undertakes. A mind as formidable as Coates’s ought not stop with descriptive analysis, however compelling its portrayal of the problem. It should push itself to hazard a prescription, to call for some specific redress. But such solution sharing requires hope.

Second, I suspect it’s this hopelessness that tempts Coates to reject “respectability politics” perhaps too quickly or too sweepingly.

Pastor Thabiti Anyabwile
Excerpt from a TheAtlantic.com article

The mother of Odell Newton (whose story of serving a life sentence for murder Coates tells) feels like she’s been in prison with her son for the past 41 years. Does the mother of Newton’s victim, Edward Mintz, feel like she is in the grave with Edward? Where is the photo of Edward Mintz’s family—how did they pick up the pieces of a life cut short senselessly?

One thing that struck me is the lack of input from families suffering from having a loved one murdered by a previously violent criminal who was released after a five- or 10-year sentence, or by someone who was never imprisoned despite a life of criminal violence. Such an omission is nearly always the case when dealing with this topic from the perspective of the suffering families of those imprisoned for life. If someone wishes to make the argument that most violent offenders, once they get into their 60s, have aged out of their violent tendencies, that’s a debate worth having. But to simply ignore the percentage of murders committed by people who are younger than that, and who have a previous history of engaging in felonious violence, is, well, incomplete.

What moral law tells you that when one life is taken or destroyed, so must another? Odell Newton murdered a man when he was 16.
Surely the family of the murdered cabdriver feels that pain acutely every day, but why should that mean the end of the 16-year-old’s life as well? Our justice system is supposed to be about rehabilitation. Do my fellow readers not believe that to be worth pursuing?

Finally, readers who bring up the cabdriver’s family should remember that Mr. Newton is not paying a debt to that family. He is paying a debt that we decided he owed us, meaning society. So it is not enough to bring up the sorrow of the family of the victim. You must also answer the question “What does society gain by keeping this man in prison?” I hope people have a better answer to that than self-righteousness.

“Noting that fear of crime is well grounded does not make that fear a solid foundation for public policy,” Coates argues. Actually, “fear of crime” does make a solid foundation upon which policies addressing that crime must be constructed. Emptying our prisons is not one of them. Nor is blaming police a substitute for personal responsibility. Not even Bernie Sanders is going to fulfill the author’s wish list of government make-work jobs, a guaranteed minimum income, and billions of dollars in reparations. Good thing, because 50 years of Great Society–government free stuff have emasculated the black man’s self-worth more than the old Jim Crow laws. “Lower-class behavior in our cities” continues “shaking them apart,” as Daniel Patrick Moynihan warned all those decades ago.

Mass incarceration is an abomination that has disproportionately harmed African Americans. The immediate goal must be to abolish it. The U.S. needs to bring its incarceration rate in line with those of other Western countries and to where it was before the great confinement took off in the 1970s.

Coates draws a straight line from slavery to today’s carceral state. There are important parallels between the abolition of slavery and the potential abolition of mass incarceration. If you were back in 1850, would you choose the unconditional abolition of slavery? Or would you prefer a phased-in abolition premised on working out the details of how to transition to civil and political equality, and to 40 acres and a mule for former slaves?

The crime crisis is directly related to deeper structural problems in ways that the crisis of the carceral state is not. The only legitimate long-term solution to the crime crisis is another Reconstruction—one that is more durable than the first Reconstruction, after the Civil War, and the second Reconstruction, during the civil-rights movement—even if it takes a long time and requires a major political struggle. The U.S. will finally have to spend what it takes—politically and financially—to rectify the abhorrent consequences for African Americans of centuries of cruel and unequal treatment.

Author, Caught: The Prison State and the Lockdown of American Politics
Excerpt from a TheAtlantic.com article

Ta-Nehisi Coates makes a powerful case against what he calls the “carceral state,” but provides virtually no evidence linking his foil—Daniel Patrick Moynihan—to it. He neither claims Moynihan’s 1965 report on the African American family was responsible for subsequent incarceration policies nor accuses Moynihan of orchestrating them. Instead, he locates one issue beside the other, inviting—all but goading—the reader to draw the causal inference he does not explicitly make. Coates implies. He generalizes. He kneads anecdotes into impressions.
In short, he does to Moynihan what he falsely accuses Moynihan of doing to the African American family. This irony is compounded by the fact that Moynihan preceded Coates to this criticism of incarceration policy. His 1993 essay “Defining Deviancy Down” warned that “we are building new prisons at a prodigious rate” and that there was “something of a competition in Congress to think up new offenses for which the death penalty seemed the only available deterrent.” He fought tirelessly for treatment over criminalization for drug offenders at the height of the incarceration craze, including helping to pass a 1988 law on the subject.

Coates’s distorted portrait of Moynihan is both unfair to its subject and unnecessary to the essay’s purpose, which is exploring the effects of the mass incarceration of African Americans. The portions of Coates’s essay that pertain to Moynihan could have been excised without detracting from, or even altering, his observations about incarceration, which serves only to accentuate the extent to which he traffics in guilt by editorial association. That such incaution is precisely the accusation he levels against Moynihan suggests that if Coates was insensitive to the unfairness of his portrayal, he might at least have been attentive to its irony.

Author, American Burke: The Uncommon Liberalism of Daniel Patrick Moynihan
Worcester, Mass.

The article by Ta-Nehisi Coates on the mass incarceration of black men was predicated on a gratuitous smear of Daniel Patrick Moynihan, echoed in James Bennet’s editor’s note. Coates asserts that the Moynihan Report (“The Negro Family: The Case for National Action”), printed in 1965, was significantly responsible for the surge in incarcerations of black men for drugs and other crimes decades later. But in expressing his anguish and anger about a deplorable situation, Coates has tortured history. He is entitled to his own opinion but not—as Moynihan used to say—to his own facts. He offered no facts to support his claim, because there are none. In subsequent comments on TheAtlantic.com, Coates cited selectively from other writings for what he maintains is evidence of Moynihan’s racial biases. Most of these writings were not made public until recently, long after the problem of incarceration arose.

Pat Moynihan’s extraordinary private memos and diary entries will long be viewed, studied, and debated as artifacts of a turbulent period of history. But the argument that they led to the mass imprisonment of black men is scurrilous and without foundation. Moynihan spoke out often against imprisoning rather than treating drug offenders. Few public officials in modern political history did more to advance the cause of strengthening black families through income support and other programs than Pat Moynihan.

Steven R. Weisman
Editor, Daniel Patrick Moynihan: A Portrait in Letters of an American Visionary
Bethesda, Md.

Ta-Nehisi Coates replies: Steven R. Weisman charges that I have “tortured history” and “cited selectively” from Moynihan’s writings. What specifically is the context that would exonerate Moynihan for claiming that the black poor were “unusually self-damaging, that is to say, more so than is normal for such groups” or that crime among the black poor has “given the black middle class an incomparable weapon with which to threaten white America”? What specifically is the context that would make it inoffensive for Moynihan to claim that there is “a virulent form of anti-white feeling” among the black middle class?
Weisman offers none, because there is none. The notion that these writings were private and thus, somehow, innocent is absurd—as though dehumanizing people behind their back is somehow more honorable than doing it to their face. Weisman neglects to mention Moynihan’s vote for the 1994 crime bill, which helped drive mass incarceration. In what universe are senators not responsible for the legislation they support? In what world does the fact that one’s advice to presidents was private render it somehow less risible?

As I have said repeatedly, Daniel Patrick Moynihan has a complicated legacy. In the main, his intentions were noble. But intentions are not enough. Mass incarceration is built on a view of black people as particularly criminal. Moynihan, as an aide to President Nixon, promoted this view. And then, as a senator, he voted for legislation that emerged from this view. The need to canonize Moynihan is no more intelligent than the need to brand him an intractable racist.

For his October cover story on mass incarceration and its effects on black families, Ta-Nehisi Coates spent time in Maryland with the family of Odell Newton, a man serving a life sentence for murdering a cabdriver in 1973, when Newton was 16. Coates explained that Newton, who suffered from lead poisoning as a young boy, had been recommended for release three times, but each time, the governor rejected the Parole Commission’s decision.

As the article went to press, Newton’s lawyers filed a motion in court arguing that his sentence violated state law. In Maryland, a murder conviction carries a mandatory life sentence. The judge who sentenced Newton was told by the prosecutor and by Newton’s own lawyer at the time that he could not suspend part of the sentence—but it turns out that he did have that authority. “We argued that because of fundamental errors that occurred at the time he was sentenced, he was entitled to a new sentencing,” one of Newton’s lawyers, Sonia Kumar of the Maryland ACLU, explains.

The state’s attorney’s office agreed to settle the case. At a hearing on October 8, the state noted that family members of the murder victim, Edward Mintz, “had expressed forgiveness and wished Odell well,” according to Kumar. Odell Newton was resentenced to time served and released from prison that same day. “It’s a blessing to be out,” Newton told The Atlantic. “I love my family, and they’ve been showing it while I was in prison.” Although he must serve five years of probation, Newton said of his release, “It’s a good feeling.” In Maryland, Kumar says, “anytime you have someone serving a life sentence coming home, that’s exceptional.”

The October issue’s Very Short Book Excerpt, “RFK Was a Crummy Lawyer,” highlighted a passage from James Neff’s Vendetta: Bobby Kennedy Versus Jimmy Hoffa that discussed Robert F. Kennedy’s shortcomings in the courtroom.

The book excerpt in the October issue repeats the view, attributed to Edward Bennett Williams, that Kennedy “failed to understand that every man deserved a defense if the system was to work.” That opinion fails to recognize what Robert Kennedy did on behalf of poor defendants, starting soon after he became attorney general. He appointed a distinguished national committee to study the problem of poverty and the administration of federal criminal justice. He created the first Office of Criminal Justice in the Department of Justice. He convened a national conference on bail reform.
And he sent to Congress a bill, based on the committee’s work, that became the Criminal Justice Act of 1964, creating the role of public defenders, who have represented people accused but without funds for the past half century. This should not be merely the “Department of Prosecution,” he once said, but truly the Department of Justice.

Special assistant to the attorney general, 1961–66
New York, N.Y.

Our Fragile Constitution

In October, Yoni Appelbaum argued that America’s Founders, taking their inspiration from Britain’s Stuart monarchy, established a fundamentally flawed system of government.

There are so many errors in Yoni Appelbaum’s article about our “fragile” constitution that it’s hard to know where to start any critique. Appelbaum calls our constitutional system a “mixed monarchy” and a “presidential democracy” (examples of presidential power including the ability to make treaties and—a very low bar indeed—to name his own Cabinet). In fact, no treaty amounts to more than scrap paper unless the Senate approves it, and while the president can tell the Senate who he’d like in his Cabinet, the Senate will decide. Every major power of the federal government is held by Congress; the president is the head of state but not the head of government, and while he can veto legislation, he can make few things happen unless Congress agrees. Any reasonable analysis of how well or poorly our federal government functions must at least start with an understanding of the imbalance between Congress’s considerable authority under Article 1 of the Constitution, and the president’s comparatively small charter under Article 2 (the order is no accident).

Member of U.S. Congress representing Oklahoma, 1977–92

“Moving to Mars,” by Alana Semuels (November), cited the lack of surface water on Mars as one obstacle to living there. Shortly after the article went to press, NASA announced that new evidence shows the planet does in fact have briny water flowing on its surface. “It took multiple spacecraft over several years to solve this mystery, and now we know there is liquid water on the surface of this cold, desert planet,” said Michael Meyer, the lead scientist for NASA’s Mars Exploration Program, while announcing the discovery. “It seems that the more we study Mars, the more we learn how life could be supported and where there are resources to support life in the future.”

The Big Question: What science-fiction gadget would be most valuable in real life?

(On TheAtlantic.com, readers answered October’s Big Question and voted on one another’s responses. Here are the top vote-getters.)

5. Mr. Fusion from Back to the Future Part II, so all our garbage could be turned into energy to heat our homes and run our cars. — John McDougall

4. I’ll go with a time machine, despite H. G. Wells’s cautionary tale. I would like to go to 2021 to see how Deflategate turns out—it should be wrapped up by then. — Gary Vallely

3. The neuralyzer from Men in Black would be equivalent to the “undo” command for computers. If I were arguing with my wife and started losing, I could erase her memory and try again. — Fernando Nunez-Noda

2. Without question, the replicator from Star Trek: The Next Generation. Not only would I have any type of food and drink ready in an instant, I’d never need to spend any time looking for my car keys or that one crucial Lego piece my sons can’t seem to find. — Toby Wahl

1. Easily, the transporter from Star Trek.
Not only could you instantly beam yourself anywhere, but you would avoid TSA lines—and you wouldn’t need to take your shoes off. — Doug Garr

Because of an editing error, “Amateur Hour” (Jonathan Rauch, November) misstated the “14-Year Rule.” The rule observes that no one gets elected president who needs longer than 14 years to get from his or her first gubernatorial or Senate victory to either the presidency or the vice presidency. The rule does not make any predictions about who will be elected vice president.

To contribute to The Conversation, please e-mail firstname.lastname@example.org. Include your full name, city, and state.
Amphibious tanks helped Indian troops wage a lightning war in a land full of rivers

by SÉBASTIEN ROBLIN

This is the second in a two-part series on the PT-76 amphibious tank. Part one described the characteristics of the lightly armored vehicle, and detailed its exploits and defeats in the Vietnam War.

In the late 1960s, the lightly armored, Soviet-made PT-76 presented a shock to U.S. troops in Vietnam. A thousand miles to the west, the tank would soon play a role in the fate of what was then East Pakistan — today, Bangladesh.

By the fall of 1971 the Indian military was actively assisting the Mukti Bahini insurgency that took up arms following a brutal crackdown by West Pakistan earlier that March. The Indian government of Indira Gandhi expected full-scale war to break out — and it needed a way to provide armored support for its troops despite the rivers of the Ganges Delta that stood in the way.

As a result, the Indian Army concentrated its two regiments of PT-76s in the area — the 45th Cavalry Regiment and the 69th Armored Regiment — as well as two independent squadrons, the 1st and 5th. Facing them were five Pakistani squadrons of M24 Chaffee light tanks, totaling 66 in all, and three platoons of PT-76s, some of them captured from India during a war in 1965.

Before hostilities officially commenced, the Indian Army on Nov. 21 infiltrated the 800 men of the 14th Punjab Battalion across the border near the hamlet of Garibpur to secure a key highway leading to Jessore. Fourteen PT-76s of the 45th Cavalry’s C Squadron rode in support. The Pakistani army was aware of the Indian presence, and counterattacked early the following morning with a full brigade of 2,000 troops, supported by dozens of M24 Chaffees.

The M24 was an American World War II-era light tank armed with a 75-millimeter gun — shared with the Sherman tank — and protected by thin armor that did not exceed 38 millimeters in thickness. For once, the PT-76 faced an armored opponent on a relatively even footing, though the Pakistanis outnumbered the Indian force roughly three to one.

But the element of surprise is everything in warfare, and the Indian tankers had fortunately anticipated the attack, digging their vehicles into ambush positions and setting up anti-tank recoilless rifles. As the Pakistani tanks rolled through thick early morning fog, the Indian tank crews scrambled to their vehicles under the command of Major D.S. “Chiefy” Narag. The approaching M24s only became visible at ranges as short as 30 or 50 meters.

The PT-76s opened fire, destroying 10 Chaffees during an intense 30-minute engagement. The Pakistani tanks struggled to spot their adversaries in the mist. Narag personally destroyed two tanks before being killed by a burst of machine gun fire. The Indian tankers later wiped out a second attack by a platoon of M24s. By the time the mists cleared in the afternoon, the Indian squadron counted 14 Chaffees destroyed or abandoned for the loss of six of its amphibious tanks. The accompanying infantry and their recoilless guns inflicted further losses and repelled the Pakistani force.

Pakistani F-86 Sabre jet fighters swooped down to the battleground to provide air support. But Indian Gnat fighters intercepted the Sabres at 3:00 p.m. that afternoon, shooting down two of the Pakistani aircraft and damaging a third. The victory by the outnumbered force at Garibpur boosted Indian morale before the war was even formally declared.

Hostilities commenced in earnest after a fizzled Pakistani preemptive strike on Dec. 3, 1971.
The encircling Indian Army lunged across East Pakistan’s border, aided by native Mukti Bahini guerrillas. The Pakistani army had entrenched itself in one fortified city after another, separated by large rivers that posed formidable obstacles to Indian tanks and heavy weapons. Nonetheless, India intended to wage a Blitzkrieg-style campaign of rapid advances to cut off and surround the Pakistani strongpoints, relying on Mi-4 transport helicopters and PT-76 tanks to ferry troops across the enormous rivers.

However, not all of India’s attempts to use the PT-76’s amphibious capabilities panned out. The 5th Squadron’s tanks repeatedly bogged down in marshes and fell behind the infantry. When they tried to ford the Meghna River on Dec. 12, the tanks’ hull seals proved leaky, forcing them to take an overland route instead. Furthermore, the lightly armored vehicles suffered losses to Pakistani 106-millimeter recoilless guns, even when making successful attacks.

Elsewhere, the amphibious tanks showed their worth. When Indian troops were delayed at Gobindganj by a battalion of Pakistani defenders reinforced with tanks and artillery, the 63rd Battalion executed a flanking maneuver across 55 kilometers of rivers and marshy terrain. Riding on top of the PT-76s were 12-man squads of Nepalese Gurkha troops renowned for their close-quarters fighting skills with curved kukri knives. Not only did the combined tank-infantry team take the town in a surprise attack, knocking out a Chaffee and overrunning a battery of 105-millimeter howitzers, but a detached squadron of PT-76s set up a roadblock behind the enemy lines, capturing the defenders as they fled.

Earlier, charging tanks of the 1st Squadron had ejected a stubborn Pakistani infantry company from the town of Mian Bazar on Dec. 4, losing four vehicles to recoilless rifles in the process. Five days later, the same unit stormed the city docks of Chandpur, again with Gurkhas hitching a ride, where they encountered three Pakistani gunboats on the Meghna River. The tanks sank all three boats in a furious exchange of fire, rescuing 180 survivors out of the 540 troops and crew onboard. Two days later, the tankers encountered another gunboat and pounded it with 54 76-millimeter shells until it grounded ashore. The amphibious tanks then began ferrying infantry and equipment back and forth across the vast river, though their engines occasionally overheated in the middle of the water, requiring towing by civilian boats.

The Pakistani tankers did have a chance to claim revenge on the 45th Cavalry’s A Squadron on Dec. 9 as they approached the town of Kushtia. Maj. Sher Ur Rahman set up his two platoons of M24s and a supporting infantry company in ambush positions facing a road on a raised embankment surrounded by open ground. Six Indian PT-76s accompanying a battalion of 22nd Rajput infantry advanced into the open kill zone. The Pakistani guns unleashed hell, blasting one of the Indian tanks in the opening volley. Four of the PT-76s held their ground, knocking out an M24 before being destroyed one after another, while the lead vehicle retreated at top speed, sowing panic in the accompanying infantry.

It took two days for the Indian Army to organize a full assault on Kushtia — only to discover its defenders had already quietly extricated themselves. The 45th Cavalry was soon back in the action, swimming down the Bhairab River in order to seize the ferry at Syamganj, resulting in the capture of 3,700 fleeing troops.
The tankers got their hulls wet again when A Squadron crossed the Madhumati River on the night of Dec. 14 with infantry on top to capture the Kumarkhali ferry, bagging 393 more prisoners in the process. Two days later, the commander of Pakistani forces in East Pakistan surrendered the capital of Dacca, leading to the creation of the new state of Bangladesh.

The Indian Army had advanced with shocking swiftness across the rivers of the Ganges Delta, a victory the PT-76 had ably supported. The lightly armored tanks suffered heavy losses — one source claims 30 destroyed or damaged — and did not always prove reliable. However, by aggressively flanking enemy positions, cutting off retreating troops, and working in close cooperation with infantry, the Indian tankers got impressive results out of their thinly armored mounts.

Crossing the Suez Canal — in both directions

Syria and Egypt also fielded the PT-76 in their wars with Israel, the latter losing 29 to Israeli tanks in the Six-Day War. But Cairo invested in more of the amphibious tanks, as it had a specific role in mind for them — participating in the epic crossing of the Suez Canal, the heavily fortified frontier between Egypt and Israeli-held Sinai, in the opening assault of the Yom Kippur War.

In actuality, the PT-76 occupied a modest role in the crossing of 90,000 Egyptian soldiers and nearly 1,000 tanks. Following a heavy Egyptian artillery bombardment, at 2:00 p.m. on Oct. 6, 1973, 20 PT-76s of the 130th Marine Brigade swam across the Great Bitter Lake, escorting a thousand marines mounted in amphibious BTR-50 armored personnel carriers. The Israeli army hadn’t built fortifications or sand ramparts on the far shore of the lake, so the Egyptian marines made it across without opposition by 2:40 that afternoon and began clearing nearby minefields. Two hours later, the marines repelled a counterattack by an Israeli armored company, knocking out two tanks and three APCs with the help of Sagger anti-tank missiles.

The mechanized brigade proceeded to conduct drive-by raids on the Israeli air base of Bir El Thamada and nearby radar stations. The brigade’s 603rd Marine Battalion then peeled off to capture and hold Fort Putzer, seizing the unoccupied position on Oct. 9 and holding it until the end of the war despite repeated counterattacks. Meanwhile, the 602nd rolled eastward, where it had the misfortune of bumping into a battalion of 35 Israeli Patton tanks late at night on Artillery Road. The night fight didn’t go well for the battalion’s 10 outgunned PT-76s, which were blinded by the Pattons’ xenon searchlights. The Israeli tanks devastated the battalion, forcing the survivors to retreat back to Egyptian lines.

However, the tale of the PT-76 and the Suez Canal does not end there, as the Israel Defense Forces had two dozen of their own PT-76s, captured during the Six-Day War and refitted with American-made engines and machine guns. Several were reportedly used in Operation Raviv in 1969, an amphibious hit-and-run raid using captured armor against new Egyptian radars and surface-to-air missile sites on the Suez Canal during the War of Attrition.

A week after the Egyptian crossing, the IDF had stabilized the Suez front line but still faced the bulk of the Egyptian 3rd Army on the Israeli side of the canal. Rather than tackle the army head-on, Gen. Ariel Sharon struck its flanks, pushing an armored spearhead through to the canal so that he could cross over to the Egyptian side.
Seven IDF PT-76s and eight amphibious BTR-50s of the 14th Armored Brigade swam across the canal on Oct. 14. Once on the far shore, they began marauding down the line of Egyptian support installations, blowing up lightly defended logistical bases, surface-to-air missile sites and radars, allowing Israeli air power to come fully into action. A CIA report even notes that the tanks had Arabic-speaking drivers and Egyptian markings to better sow confusion behind enemy lines.

The vehicles were soon joined by many heavier Israeli tanks, which crossed using two captured bridges and motorized rafts. These proceeded to encircle the Egyptian 3rd Army in the following weeks, spurring the United States to impose a ceasefire which brought the war to an end on Oct. 25.

The PT-76 would be involved in numerous other conflicts. Over a half century, the Indonesian army used its PT-76s to invade East Timor, patrol against the Aceh secessionists, and suppress unrest on the island of Ambon. Angolan PT-76s dueled South African Ratel armored cars in the Angolan Civil War. Iraqi amphibious tanks fought in the Iran-Iraq war and were hammered by U.S. aircraft in 1991 and 2003. Multiple factions in the Yugoslav civil wars fielded the vehicle. China’s derivative, the Type 63, fought in Vietnam during the Sino-Vietnamese war of 1979, suffering heavy losses to rocket-propelled grenades. Type 63s also saw combat in the Sri Lankan civil war. Russian PT-76s even saw combat in Chechnya.

In fact, Russia’s naval infantry force only retired its last 30 upgraded PT-76Es in 2015. These had 57-millimeter dual-purpose autocannons, new engines and modern targeting systems. Hundreds of PT-76s remain in service across the globe today, so the story of a 60-year-old tank that seemed under-gunned and under-protected from the day it left the factory floor may not be over yet.
The series “Characters in the Bible” is based on catechism classes I teach to teenagers in my church in 2019-20. However, the blog posts contain additional information.

Adam and Eve are the first two human beings who lived on the earth. The name Adam simply means “human being”. His wife is called “woman” (Heb. isha) until Gen 3:20, when her husband names her Eve (Heb. hava), which means “life”. The names of Adam and Eve clearly designate their importance. Adam functions as the representative of the human race; and in Eve we find the profound ability of the woman to carry new life and bring forth generation upon generation of new human beings.

Because of these roles of Adam and Eve, it is difficult to speak of them as individuals without speaking more generally about human beings. But that is the point of the Biblical narrative in Gen 1 through 3: in Adam and Eve we see our ancestors and therefore, by representation and example, ourselves.

According to the Bible, Adam and Eve were the first human beings. There is no room for the theory of “humanoids”, where human beings developed from almost-human animals. Genesis 2 says that God “formed the man of the dust of the ground” (2:7) and the woman from one of his ribs (2:21).

In the creation story, human beings stand out in three ways. First, they are created last. The case can be made that all prior creation, of the light and the sky and the soil and the plants and the sun and the moon, was meant to provide and furnish a home for human beings. Genesis 1 describes the transformation of a cold and dead planet into one where people can live fulfilling lives.

Second, their creation seems more deliberate than that of the other features of the earth. According to Gen 1:26, God said: “Let us make (hu)man in our image …” There has been much discussion about the plural, “us”. Some think it refers to the angels (or “eons”) who were God’s helping servants in the work of creation. Others consider this the first sign of the Triune nature of God. Indeed, apart from the speaking God we also find the Spirit of God (1:2), and John 1:1-3 identifies the Son of God as “the Word” … “through whom all things came into existence.” Whatever the case may be, all of God’s attention and wisdom is involved in the design and creation of people.

Third, the creation of man and woman is immediately followed by a blessing: “Be fruitful and multiply, fill the earth and subdue it, and have dominion” over the animals. The first part of this blessing, fertility, humans share with the animals (see 1:22). The second part of the blessing, dominion, they share with the celestial bodies (see 1:18).

What is a human being? If we take as starting point the poem of Gen 1:27, two essential characteristics stand out. First, human beings are created in the image of God. This quality does not apply only to Adam and his wife, but also to his offspring (see 5:3). Theologians have discussed at length exactly how we are the image of God. Some have even distinguished between the image of God and the likeness of God (the two words in Gen 1:26). It seems fruitful to speak about the “image of God” in two senses, the broader sense and the narrower sense.

In the broader sense, as human beings we are the image of God in our intelligent thought, our consciousness, our role as moral agents, and our creativity. In all these characteristics, we all reflect our Creator-God in a way that the animals don’t.
This essence of human beings as image bearers makes human life especially precious and inviolable. This is why it is such a serious offense to kill a human (compared to killing an animal): “Whoever sheds the blood of man, by man shall his blood be shed, for God made man in his own image.” (Gen 9:6)

In the narrow sense, human beings were created with moral perfections. In the words of the Heidelberg Catechism (q&a 6), “God created man good and in his image, that is, in true righteousness and holiness, so that he might rightly know God his Creator, heartily love him, and live with him in eternal blessedness to praise and glorify him.” This perfection Adam and Eve possessed for a little while before their Fall (see below). The rest of the history of mankind may be understood in terms of the dire consequences of the loss of this image, and the quest for recovering it. In fact, the New Testament describes salvation as “to be conformed to the image of [God’s] Son” (Rom 8:29), and as putting on a “new self, which is being renewed in knowledge after the image of its creator” (Col 3:10).

The second essential feature of humans, according to Gen 1:27, is their sexuality. “Male and female he created them,” the Hebrew words emphasizing the biological functions of men and women. There are, of course, many significant differences between people, but the creation narrative mentions only one: their sexual differentiation. This presentation of sexuality in the Bible is difficult to square with the modern relativism concerning gender and sex.

While the entire earth had been created “very good”, Adam and Eve were initially placed in a special region, a garden planted in the land of Eden (2:8). What was their life like? The job description in Gen 2:15 is “to work and keep the garden”. This combination of verbs is interesting, as both are also used in the Bible for priests working in the sanctuary. We can think of the garden of Eden as a sanctuary, a sacred grove, in which Adam and Eve were the priests.

The blessing of Gen 1:28 has been called the culture mandate: “Be fruitful and multiply, fill the earth and subdue it, and have dominion…” This describes an important part of what people do (and are supposed to do) on earth. But I would say that the highest mandate of humankind is to be priests: they worship the Lord on behalf of all creation and distribute his gifts and blessing to the whole earth. The description in Gen 3:8 suggests that Adam and Eve had intimate fellowship with the LORD in a tangible way, as he “walked” with them in the garden like a friend and tutor.

The relationship of God with Adam and Eve in the garden is sometimes called the covenant of works. “Covenant,” because it follows the pattern of the rest of the Bible, of an unequal, beneficent partnership that may be summarized as: “I will be your God, and you will be my people.” The addition “of works” suggests that Adam and Eve maintained the covenant on their part by faithfully doing their work. The Westminster Confession of Faith explains: “life was promised to Adam; and in him to his posterity, upon condition of perfect and personal obedience” (WCF 7.2).

(I am not fond of the expression “covenant of works”. Rather, I would emphasize that, just as God’s people today, they were included in a gracious covenant. The creator of the world condescended to have fellowship with them and love them, asking nothing in return but loyalty. While the commandment in Gen 2:16-17 may be understood as a formal condition, it is not the heart of the covenant.)
The idyllic situation of the Garden of Eden—or Paradise, as the Jews would call it later—did not last long. Just as Genesis 1 is necessary to understand our essence and Genesis 2 our culture, so Genesis 3 is needed to understand our imperfection, failure, and misery. In Gen 3 we encounter evil in the form of the Serpent; the question of how this agent of evil could be present in a good creation is not answered in the Bible. The Serpent places before the woman, and indirectly the man, the core temptation: will they be content being humans serving as priests to God, or will they desire to be elevated to gods? The rest, as they say, is history: through Adam and Eve, humankind has chosen to be like gods. This choice was rebellious, violating the boundary between Creator and creature and disrespecting the covenantal relationship. It was also disastrous, because human beings cannot flourish apart from fellowship with God, let alone make god-like decisions concerning right and wrong.

Beforehand, God had warned Adam and Eve about the consequence of this sin: “You will surely die” (2:17). Death is both a natural effect and a divine punishment; and the fact that the man and his wife did not die immediately shows God’s grace, as well as his plan to provide a solution for this evil.

The consequences of sin for Adam and Eve are hard, and they explain many aspects of our lives today. First, they now recognize and experience the existence of evil, both within themselves and in the world around them. They can no longer stand the presence of God but hide behind the bushes. They no longer have the innocence in which they walked around naked, but feel exposed and vulnerable, and want to protect themselves with clothing. Second, they can no longer live in the perfection of Paradise, in the highest sanctuary of God on earth. They are banished from the Garden of Eden to the world outside, where life is hard work. Both man and woman are condemned to “hard labor”: the man will make his living by sweaty work, and the woman will give birth in serious pain. Third, all of the world now lies under a curse because of their rebellion. In Rom 8:20-21, Paul summarized this by saying that “the creation is subject to futility”, in “bondage to corruption”. Fourth, although Adam and Eve did not die immediately, they became mortal and died eventually. They passed their mortality on to their offspring, as brought out by the refrain in Genesis 5: “… and he died.” Not only is mortality passed on, but also sin; we confess that we are “conceived and born in sin,” that we inherit both guilt and sinful tendencies from our parents. In the words of Romans 5:12, “sin came into the world through one man, and death through sin, and so death spread to all men because all sinned.”

To say that Adam and Eve messed up seriously is an understatement. The contrast between the perfection of Eden and the cursed, death-infested life outside is too big to fathom. Our first ancestors had every reason to be depressed and despondent. But they had hope. They received that hope when the LORD also cursed the Serpent, that agent of evil among them. Genesis 3:15 is a riddle, but it was clear enough to give Adam and Eve the conviction that God would turn the situation around.

“I will put enmity between you and the woman, and between your offspring and her offspring; he shall bruise your head, and you shall bruise his heel.” (Gen 3:15)

This verse has been called the mother promise or the proto-evangel, because it is the first good news of salvation for fallen humankind.
In this verse, “I” is the LORD and “you” is the Serpent; and so the LORD declares war between humankind and evil. While evil will be able to “bruise the heel” of humankind, eventually a man will crush the head of the Serpent, taking the power away from evil and removing the sting of death. In this verse, the Christian church has always seen the first announcement of the Savior, our Lord Jesus Christ. The Bible calls him the “second Adam”, because like Adam he will be the representative of humankind; not the original, fallen human beings, but those who will be saved by the grace of God and through faith. Therefore, as one trespass led to condemnation for all men, so one act of righteousness leads to justification and life for all men. (Rom 5:18)
Nobody is confused by the fact that we don’t use a Ferrari 458 Spider sports car as a dump truck. Nobody is astonished that a Toyota Prius did not qualify for the Indianapolis 500 race this past May. And nobody whom I know drives a Caterpillar earth-moving truck back and forth from home to work (…but, I have to admit, it might be really cool to try – Outta my way, I’m coming through!). We’re not confused by these things because most of us have automobiles and we are generally familiar with the notion of different vehicles being designed, built, and used for different purposes. In a number of different articles I’ve repeatedly stressed the notion that form follows function and function follows mission requirements. The mission requirements for a Ferrari are different than those of a Prius or those of a giant piece of mining equipment, and so the resulting products are dramatically different.

The same concept of differentiation applies to rocket engines. That’s obvious, right? On one end of the spectrum, you have something like the F-1 engine used for the Saturn V launch vehicle. It had a thrust level of 1.5 million pounds-force and a specific impulse of about 260 seconds (sea level). It stood nineteen feet high, was over twelve feet in diameter at the base of the nozzle, and it weighed over nine tons. On the other end of the spectrum (at least the spectrum that we deal with within LEO), you have the RL10 which, depending on the specific configuration, puts out less than 25 thousand pounds-force of thrust but has a specific impulse over 450 seconds (vacuum). If you have an RL10 without the big nozzle extension, the engine is just over seven feet tall, about four feet in diameter, and it weighs less than 400 pounds.

Yes, that’s a flood of numbers, but let me make it a bit more graphic. If we wanted to get the same thrust level using RL10 engines as was obtained on the S-1C stage of the Saturn V (which used a cluster of five F-1 engines), then you would need 336 RL10 engines. That would be an interesting vehicle configuration indeed. Alternatively, try to imagine the Centaur upper stage – the typical use for the RL10 – with something as big as an F-1 hanging off the end. The whole stage weighs less than five thousand pounds (dry) and is just over forty feet long. If you tried to apply 1.5 million pounds-force of thrust to something like that, then in just fractions of a second, the whole stage would be a shiny metal grease spot in space.
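To make those numbers concrete, here is a quick back-of-the-envelope sketch in Python. The thrust and mass figures are the approximate values quoted above, and the RL10 thrust is inferred from the 336-engine comparison, so treat the outputs as illustrations rather than specifications.

```python
# Back-of-the-envelope comparison of F-1 and RL10 thrust classes,
# using the approximate figures quoted in this article.

F1_THRUST_LBF = 1_500_000   # F-1 sea-level thrust, pounds-force
RL10_THRUST_LBF = 22_300    # RL10 vacuum thrust (configuration-dependent, inferred)
CENTAUR_DRY_LB = 5_000      # Centaur dry mass, pounds (approximate)

# How many RL10s would it take to match the five F-1s on the S-1C stage?
s1c_total_thrust = 5 * F1_THRUST_LBF
rl10_count = s1c_total_thrust / RL10_THRUST_LBF
print(f"RL10 engines needed to match five F-1s: {rl10_count:.0f}")  # ~336

# And why bolting an F-1 to a Centaur would end badly:
# thrust (lbf) divided by weight (lb) gives acceleration in g's.
accel_g = F1_THRUST_LBF / CENTAUR_DRY_LB
print(f"Acceleration of a dry Centaur under one F-1: ~{accel_g:.0f} g")  # ~300 g
```

Three hundred g's on a forty-foot balloon-tank stage is the grease spot, expressed numerically.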
So that brings me to the subject of this article. I want to compare the RS-25 and J-2X engines, currently the two primary products of the Liquid Engines Office. These two engines are not as radically different from each other as are the F-1 and the RL10, but the differences are substantial and meaningful. Here is a quickie table that will give you many of the basic characteristics of the two engines:

I know, I know. That all looks like a meaningless, banal listing of numbers. Specifications rarely seem interesting unless or until you know the stories behind the facts. So, let’s discuss the stories.

First of all, they’re both hydrogen engines. Why? Because they both need to have high specific impulse performance at high altitude and in space. The difference between these two engines is that the RS-25 is a sustainer engine whereas the J-2X is an upper stage engine. The RS-25 sustainer mission is to start on the ground and continue firing on through the entire vehicle ascent to orbit. The J-2X upper stage engine mission is to start at altitude, after vehicle staging, and propel the remaining part of the vehicle into orbit. Also, an upper stage engine can sometimes be used for a second firing in space to perform an orbital maneuver. This difference in missions accounts for the difference in raw power. The RS-25 is part of the propulsion system lifting a vehicle off the ground. It needs to be pretty powerful. The J-2X is the propulsion system for a vehicle already aloft and flying quickly across the sky.

The difference in missions is also the largest part of the explanation for the different engine cycles used. In past articles, I’ve discussed the schematic differences between a gas-generator engine like the J-2X and a staged-combustion engine like the RS-25. The staged-combustion engine is more complex but it generates very high performance. You may look at the two minimum specific impulse values, 450.8 versus 448 seconds, and say that these are not very different, but remember that the J-2X cannot be started on the ground. If we tried to start the J-2X on the ground, the separation loads in the nozzle extension would rip it apart. The RS-25 achieves this very high performance without a nozzle extension because, well, it had to in order to fulfill the mission. Note that a ground-start version of J-2X would have a minimum specific impulse of something like 436 seconds.

Something else that is significantly different between the two engines is their throttling capabilities. The J-2X can perform a single step down in thrust level. This capability can be used to minimize vehicle loads or as part of a propellant utilization system since the throttle is accomplished via a mixture ratio shift. The RS-25, on the other hand, has a very broad throttle range. Why? Two reasons. First, because during the first stage portion of any launch vehicle ascent, the vehicle experiences what’s known as a “max Q” condition. Perhaps if you’ve ever listened to a Shuttle launch in the past you’ll have heard the announcer talk about “max Q” or “maximum dynamic pressure.” This is the point at which the force of the air on the structure of the vehicle is greatest. It is a combination of high speed and relatively dense air. Later, the vehicle will be flying faster, but at higher altitudes, the air is thinner. Thinner air means less pressure (the equation – thank you Mr. Bernoulli – says that dynamic pressure is proportional to the air density and to the square of the vehicle velocity: q = ½ρv²). Thus, to minimize structural loads on the vehicle, the engines are throttled down deeply for a short period of time, and then brought back to full power. An upper stage engine operating only at high altitudes never has to face a max-Q condition. Second, a sustainer has to be big enough to contribute to lifting the vehicle off the ground, but at higher altitudes, after the vehicle has been emptied of most of its propellants, too much thrust gets you too much acceleration. If you had no way to throttle back the engine thrust levels, then the vehicle would accelerate beyond the capacity of the astronauts to survive. An upper stage engine does not generally start out with as much oomph, so the throttling needed to lessen acceleration loading is not as great.
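The max-Q effect is easy to reproduce numerically. The toy model below is my own illustration, not a real trajectory: an exponential atmosphere and a vehicle accelerating straight up at a constant rate. Density falls while velocity rises, so q = ½ρv² peaks partway through the climb and then falls away.

```python
import math

RHO0 = 1.225         # sea-level air density, kg/m^3
SCALE_HEIGHT = 8500  # approximate atmospheric scale height, m

def dynamic_pressure(altitude_m: float, velocity_ms: float) -> float:
    """q = 1/2 * rho * v^2, with a simple exponential atmosphere."""
    rho = RHO0 * math.exp(-altitude_m / SCALE_HEIGHT)
    return 0.5 * rho * velocity_ms ** 2

# Toy ascent: constant 30 m/s^2 acceleration straight up (illustrative only).
# At time t: velocity = 30*t, altitude = 0.5*30*t^2.
q_max, t_max = max(
    (dynamic_pressure(0.5 * 30 * t**2, 30 * t), t)
    for t in range(1, 120)
)
print(f"max Q of ~{q_max / 1000:.0f} kPa occurs at t = {t_max} s")
```

In this toy case the peak lands around 24 seconds into the flight; a real vehicle's max Q depends on its actual trajectory, but the shape of the effect is the same, which is exactly why the sustainer throttles down through that window.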
Lastly, let’s talk about differences in engine control. Engine control typically refers to the parameters of thrust level and mixture ratio (i.e., the ratio of propellants, oxidizer to fuel, being consumed by the engine). When we talk about thrust, we are talking about throttling as discussed above, yes, but also thrust precision, i.e., the capability of the engine to hold tightly to a particular thrust level. When we talk about mixture ratio, we’re generally talking only about the notion of precision (but below in a post-script I’ll tell you a little more with regards to the RS-25).

Well, what would cause an engine to stray away from a fixed operational condition? Two things: boundary conditions and internal conditions. The most obvious boundary conditions are the pressure and temperature of the propellants coming into the engine. A sustainer engine can see a wide variation in propellant inlet conditions due to variations in vehicle acceleration. This is most dramatic during staging activities. An upper stage engine won’t typically see these wide variations. This is why it was very, very useful (almost necessary) for the RS-25 to be a closed-loop engine. A closed-loop engine uses particular measurements for feedback to control valves that, in turn, control engine thrust level and mixture ratio to tight ranges. The RS-25 holds true to the set thrust level and mixture ratio regardless of propellant inlet conditions. The J-2X, on the other hand, is an open-loop engine. The thrust and mixture ratio for J-2X will stray a bit with variations in propellant inlet conditions. Note that this “straying” is predictable and is built into the overall mission design. Because the upper stage engine won’t see the same wide variations in propellant inlet conditions, this is a plausible design solution.

The different control schemes for the two engines are also the reason why the noted thrust and mixture ratio precision are different in the table above for the RS-25 and the J-2X. Every engine runs slightly differently from firing to firing. These are usually small variations, but they are there. This is part of the “internal conditions” factor in terms of an engine straying from a fixed operational condition. A closed-loop control engine can measure where it is with regards to thrust and mixture ratio and make corrections to accommodate and compensate for slightly different internal conditions. An open-loop engine like the J-2X cannot make these accommodations and so it will have a wider run-to-run variability even if everything else remains the same.

Note that we could have made the J-2X a closed-loop engine. We made the specific decision not to go that way based upon a cost-benefit analysis. Simply put, closed-loop is more complex and, therefore, more expensive to develop and implement. We conducted a trade study, in conjunction with the stage development office, and decided that the benefits in overall stage performance did not justify the additional development and production cost. For the RS-25, given its mission, it really had to be closed-loop from a technical perspective to enable the Space Shuttle mission. Plus – thank goodness for us today – the RS-25 control algorithms are validated and flight-proven as we head into the Space Launch System Program. That’s a nice feature of using a mature engine design.
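To make the closed-loop versus open-loop distinction concrete, here is a toy simulation. It is entirely my own construction, and a real engine controller is far more sophisticated than a single proportional gain, but the point survives: both "engines" see the same drifting inlet conditions, and only the closed-loop one measures its thrust and trims a valve in response.

```python
import random

random.seed(1)

TARGET = 100.0  # commanded thrust, arbitrary units
GAIN = 0.8      # proportional gain for the closed-loop trim (invented value)

def thrust(valve: float, inlet_bias: float) -> float:
    """Toy engine model: thrust scales with valve position,
    shifted by drifting propellant inlet conditions."""
    return valve * (1.0 + inlet_bias)

valve_open = valve_closed = 100.0  # both start at the nominal setting
err_open = err_closed = 0.0
bias = 0.0
for step in range(200):
    # Inlet conditions wander slowly (a random walk, clamped to +/-10%).
    bias = max(-0.10, min(0.10, bias + random.uniform(-0.01, 0.01)))

    t_open = thrust(valve_open, bias)      # open loop: valve never moves
    t_closed = thrust(valve_closed, bias)  # closed loop: measure, then trim
    valve_closed += GAIN * (TARGET - t_closed)

    err_open += abs(t_open - TARGET)
    err_closed += abs(t_closed - TARGET)

print(f"mean open-loop thrust error:   {err_open / 200:5.2f}")
print(f"mean closed-loop thrust error: {err_closed / 200:5.2f}")
```

The open-loop "engine" drifts with the inlet conditions (a few percent here), while the closed-loop one hugs the target, which is the whole argument for accepting the extra complexity on a sustainer that must ride through staging transients.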
So, that’s a top-level comparison of our two engines that we’re managing for the SLS Program. They have a number of common features, which is not surprising given that the SSME design grew out of the original J-2 experience forty-some years ago and the J-2X was developed, in part, with thirty-some years of SSME experience behind us. But they are also quite different machines because they were designed for different missions. No, this is not a case analogous to the comparison of a Ferrari and a dump truck. It’s more like, perhaps, a Bugatti Veyron and a Lamborghini Aventador. Each is just a remarkable creation in its own right (…and I’d probably be reasonably happy with either in my garage…).

Post-Script. A quick note about mixture ratio control and the RS-25. You will note that the mixture ratio is shown as a range in the table of characteristics up above. The RS-25 can be set to run at any mixture ratio within that range. This is a nice accommodation for stage design efforts today as part of the SLS Program, but that’s not why the range exists. The original design requirements for the SSME included not only the provision for variable, controllable thrust level in run, but also for independently variable and controllable mixture ratio during engine firing. This fact, in turn, explains the rather unusual engine configuration of having two separate preburners, one for the fuel pump and one for the oxidizer pump. I’ll tell you why. Think back to basic algebra. Remember when you had to solve for a number of variables using several equations? The mathematical rule of thumb was that you had to have as many independent equations as there were variables or else you could not arrive at unique solutions for each variable. The same principle is applicable here. With two separate, independently controlled preburners (and therefore independently controlled sources for turbine power), you can resolve to independently control two output parameters, namely thrust and mixture ratio. That’s pretty cool. But here’s the interesting historical part: we never actually used the shifting mixture ratio in flight. As the vehicle matured, it was decided that the mixture ratio shifting capability was not needed. But the design and development of the engine was too far down the road to backtrack and simplify. Thus, we have a dual-preburner, staged-combustion RS-25 engine.

Post-Post-Script. A number of years ago as part of the SSME project, for some specialized development testing, we did actually invoke the capability to shift mixture ratio in run on the test stand. So we have demonstrated this unique capability on an engine hot fire. There just wasn’t ever any reason to use it as part of a mission. Interesting.
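To see the post-script's "two equations, two unknowns" argument in action, here is a sketch with a linearized toy model. The influence coefficients are invented purely to illustrate the algebra and are not taken from any real engine: the point is only that an invertible 2x2 system gives a unique pair of preburner settings for any commanded thrust and mixture ratio change.

```python
# Toy linearized model: fuel- and ox-preburner power changes (u_f, u_o)
# each influence both thrust (dT) and mixture ratio (dMR):
#
#   dT  =  0.7*u_f + 0.5*u_o
#   dMR = -0.2*u_f + 0.4*u_o
#
# The coefficients are made up for illustration.
a, b = 0.7, 0.5
c, d = -0.2, 0.4

def solve_controls(dT: float, dMR: float) -> tuple[float, float]:
    """Invert the 2x2 system by Cramer's rule."""
    det = a * d - b * c  # nonzero determinant => the two controls are independent
    u_f = (d * dT - b * dMR) / det
    u_o = (a * dMR - c * dT) / det
    return u_f, u_o

# Command: +10 units of thrust at an unchanged mixture ratio.
u_f, u_o = solve_controls(10.0, 0.0)
print(f"fuel preburner: {u_f:+.2f}, ox preburner: {u_o:+.2f}")
```

With only one independently controlled preburner you would have one equation in one unknown: you could hit the thrust target or the mixture ratio target, but never both at once.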
The basic principle of the taxi is to move an individual or group of individuals, along with their associated luggage, from point A to point B. There is no specific restraint on the number of people or items involved or the distance they travel. Nor are there any specific requirements for the form of transportation, except that it needs to, ideally, be capable of covering the distances involved, or at least a part thereof. For the sake of clarity this does not include other means of public transport, like buses, coaches, trains, etc. Taxis, while capable of carrying small groups, tend to be much more private and usually do not carry gatherings larger than four or five people.

Point of Interest - the Root of the Word 'Taxi'

The term 'taxi' is derived from 'taximeter', the instrument used to determine the distance covered in a journey to ensure that an accurate fare is requested.

Types of Taxi

The primary difference between official taxis and privately operated taxis is the legal right for the former to ply their trade - to pick up passengers from anywhere without the necessity for prior arrangement, such as London Black Cabs. Privately operated taxis - like minicabs - can only provide passage on a booking basis. The world over, virtually every state or county runs a system of registration that provides this level of authority, or licence, to a certain number of vehicles.

A taxi can normally be hired simply by waving one down from the edge of the street or by finding an organised taxi rank where taxis without customers will come to stop and await fresh business. For the average pedestrian neither option is a guarantee of transportation, but the latter is probably less dangerous and prone to roadside conflict. In many cities flagging a taxi down on the kerb means contending with several other people trying to do exactly the same thing. This can result in much pushing, shoving, shouting and individuals virtually throwing themselves out into the road. On the other hand, taxi ranks present a supposedly controlled supply of taxis. However, in many countries being at the front of a queue doesn't necessarily mean that you'll be entitled to step into the first cab that comes along. Often it's a case of every man for himself and it may be simpler, and a lot less physically testing, to hire a cab by phone.

Official taxis also tend to cater for a much higher level of internal security to protect the driver - usually in the form of a clear, toughened plastic or wire mesh barrier. Passengers sit behind the barrier and the driver often has complete control of the rear locks so that travellers don't leave before the fare is paid. Payments are usually made through a small hole in the barrier or by way of a small drawer into which notes/coins are placed and pulled through to the driver's area for retrieval.

Minicab is a term used to describe both unlicensed illegal taxi touts and licensed Private Hire Vehicles (PHVs) which provide a similar service to licensed taxis. Never use unlicensed illegal taxi touts: apart from the obvious risks to personal safety, there are also issues to do with lack of suitable insurance and poor maintenance which could put you at risk. The rules and regulations relating to licensed Private Hire vary with each licensing authority (normally the local council). As a rule of thumb, Private Hire Vehicles will be ordinary cars that can carry nine people or fewer and which must be pre-booked before the start of the journey, although there are local exceptions to this.
In some parts of the UK Private Hire Vehicles have to have specific paint schemes or be accessible vehicles, and in some parts they will be metered, in others unmetered; either way, get an estimate when ordering your Private Hire Vehicle. In some parts (but not all) of the UK drivers of Private Hire Vehicles have to undergo a local knowledge test, but wherever you are it should be expected that the licensing authority has carried out background checks on your licensed driver and regularly inspects licensed Private Hire Vehicles. The last place in England and Wales to license Private Hire Vehicles was London, and as of April 2005 all Private Hire Vehicles in England and Wales should be licensed.

The limousine service is effectively a minicab company using much newer and far classier vehicles to complete the same task as any other taxi. Celebrities, politicians and significant business people commonly use limousines. They are generally hired for a specific period of time with a specific fee negotiated for this period. They may be used to simply travel between two locations or employed to travel around multiple points based on the wishes of the passenger or the requirements of their itinerary.

A variant upon the standard types of taxi, airport transportation exists both as hire and self-hire options. Taxis of this type are usually larger vehicles, like people carriers - vehicles that combine the characteristics of a car and a small bus with room for half-a-dozen passengers and a fair quantity of luggage. They travel between the home location of the passenger and a specified airport, and can, if the charge is reasonable, represent a cheaper option than using airport parking.

The parental taxi is a common form of taxi for anyone with a family car. Naturally this kind of transportation is not licensed and doesn't involve the payment of a fare, although considerable begging and pleading may be in order. Parental lifts are occasionally organised, but more than likely they are requested late at night from unusual locations under strange circumstances - for example: Hi, Dad... yeah, my van broke down after hitting a cattle gate in the fog. We tried to hitch a lift to the train station, but apparently it's been closed since the Fever and we can't translate everything the locals are saying. Can you pick us up...?

When parents aren't around and driving lessons are still a distant dream, there is the option to turn to those friends and acquaintances that do have access to a car to give you a lift somewhere. Of course, they might be going to the same place already (refer to 'Car Sharing' below) but more often than not they aren't - so what you are after is a lift to somewhere close to your desired destination or, by cashing in a favour or through appropriate blackmail techniques, getting them to go completely out on a limb and take you to the right place.

Car sharing, also known as 'car pooling', is essentially an environmentally friendly type of personal taxi service. Most often used to go to and from a place of work, many places around the world promote this mode of transport by building High Occupancy Vehicle lanes on roads that can only be used by public transport and cars with a certain number of passengers. These lanes are generally nearly empty, therefore providing a great way to avoid traffic congestion. However, for many people the concept of sharing a car is something akin to being asked to share your under-garments - there is just something not quite right about it.
Hitch-hiking, wandering around thumbing down random vehicles, might be considered freeform taxi hailing. Hitch-hikers effectively turn any vehicle into an ad-hoc taxi. Unlike official taxis, of course, there is a complete lack of security between driver and passenger. However, drivers who are uneasy with the situation are unlikely to have offered a lift in the first place.

Chauffeured hire is car pooling with a chauffeur. Companies will often hire single cars to take employees to the same location or several locations in very close proximity. This tends to apply to training events or trips to and from airports.

Private Driving Instruction

While driving lessons are most commonly completed in a circular fashion, picking up and dropping off at the same place, it's also practical to use a driving lesson as a means to travel from Point A to Point B without needing a driving licence. This is admittedly rather an expensive option.

A common form of transportation in the Orient, the rickshaw is basically a long tricycle with a wicker basket on the back into which a couple of passengers can fit. Rickshaws serve the same function as a taxi, with the 'driver' sitting at the front providing the pedal power. The novelty value of the rickshaw means that it has found its way into various cities across the world as a lure for tourists. A variation is the pedicab, really just a modern version of the rickshaw, with a reasonably comfortable two-seater carriage sitting on something like a massive tricycle frame. These are common across the world, from San Diego to the Philippines, and are available for hire through various private firms.

The official taxis of the world use a taximeter to measure the distance travelled and therefore generate a specific fare. Fares are generally charged per fraction of a mile or kilometre, with additional charges added for various reasons - usually because of extended waiting periods during the hired period. Non-official taxis may also use taximeters, but these may not undergo the rigorous checking required of official taxis. Minicabs, and other non-official taxis, may run on the basis of a set charge for any given journey. Passengers are strongly advised to negotiate these charges in advance to ensure that an unreasonable fare is not forced upon them once the destination is reached.

Payments of Gratitude

Where friendly lifts, car-sharing or hitch-hiking are concerned, the payment will often be based on the individuals involved having the conscience to offer something towards the cost of the fuel and the hassle caused to the car owner. In some countries, or with some individuals, the thought of paying for the journey may not necessarily occur without heavy prompting from the driver.

Reasonable Advice For Hassle-Free Taxi Travel

There are a few basic considerations that should make handling taxi travel less of a chore for both passenger and driver. Keep these in mind and you will find the journey from A to B a lot less troublesome.

Before you set out, make sure to have the exact address you need to travel to, as well as the cross streets in places like American cities. This might seem obvious, but so many times a journey can be turned into torture if nobody knows exactly where the target location is. Driving around randomly can also be very expensive.

When trying to hire cabs on the street, don't trouble cabs that aren't for hire. There is usually a local system to show this is the case, from a brightly lit 'For Hire' sign to a combination of lights on the roof of the car.
Familiarise yourself with the system to save on the embarrassment.

Do not give your destination before you enter the cab. Wait until you're inside. This saves on time, but most importantly it means that you've laid claim to the cab before the driver has any opportunity to object to your destination. They still might stop and ask you to leave, but it's more of a problem if you're already firmly seated in the back of the car. In New York, for example, taxis on the outskirts will often flatly refuse journeys 'Downtown' during peak hours.

Before you step out of the back of the cab, pause for a moment and check that you haven't left anything behind. Wallets, purses, briefcases, small children, etc are all commonly forgotten and not everyone is as honest as you might be.

When hailing for cabs in the street, keep an eye out for 'claim-jumpers'. There are some people who treat other people's efforts to hail a cab as some form of service that saves them the trouble. If you're not on your guard you may find that your ride has been hired and gone before you even have a chance to react.

Unusual Taxi Journeys

The longest hired taxi journey in the world was 14,414 miles - more than 23,000 kilometres. The journey involved almost two weeks of travel. A couple hired a taxi in Nokia, Finland and travelled down through Scandinavia and Europe to Spain, and finally completed a round-trip back to Nokia. The journey cost 70,000 FIM (Finnish Marks) - approximately £9,000 or somewhere in the region of US$14,000.
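For the curious, the taximeter arithmetic described above boils down to something like the following sketch. The rates here are invented for illustration only, since real tariffs vary wildly by city, time of day and licensing authority.

```python
import math

def taxi_fare(distance_km: float, waiting_min: float,
              flag_fall: float = 2.50,
              rate_per_fifth_km: float = 0.40,
              waiting_per_min: float = 0.50) -> float:
    """Toy taximeter: a flag-fall charge, a rate per *started* fifth
    of a kilometre (fares charge per fraction of a unit, so partial
    units round up), and a surcharge for time spent waiting."""
    fifths_started = math.ceil(distance_km * 5)
    return (flag_fall
            + fifths_started * rate_per_fifth_km
            + waiting_min * waiting_per_min)

# A 3.3 km journey with 4 minutes stuck at lights: 2.50 + 17*0.40 + 2.00
print(f"{taxi_fare(3.3, waiting_min=4):.2f}")  # 11.30
```

The rounding-up of partial distance units is the detail that makes the rigorous checking of official taximeters matter: an uncalibrated meter that starts a new 'fraction' early is quietly overcharging on every journey.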
There’s a common misconception that when you sit at your computer, browsing the internet, it’s pretty much all done in private. Not in a super secure, top secret way, just the expectation that it’s all pretty discreet and your activities are pretty much anonymous. Unfortunately, although this used to be true to some extent, it certainly isn’t nowadays – there is virtually no privacy at all online unless you actively take precautions.

What is important to realise is that everyone now has some sort of digital identity and profile. This includes all sorts of information about your age, sex, location, hobbies, political preferences and a host of other metrics. Information which is valuable to all sorts of people for a variety of reasons. When we connect to any major website, we don’t do it anonymously but are tracked, monitored and analysed via our IP addresses, cookies and browsing histories.

So What Exactly are Proxies on the Internet?

A proxy server is simply a computer configured on the internet to act as a middleman for communication. It sits between you and the resource you are trying to access, receiving and forwarding your requests in both directions. This enables you to access that resource without any direct connection. As you can see, you only have a direct connection with the proxy server, which means that no personal or technical information is accessible to the web site. This is of course dependent on the proxy not forwarding this information automatically. The traditional role of internet proxies is to simply act as a gateway and not modify the traffic in any way.

Using Proxies for Anonymity?

This is an obvious role for a proxy server, simply down to the way they work. They cut the direct link between you and the web site you’re visiting. The web server won’t even see your location, which is generally determined from your device’s IP address. Only the proxy server’s IP address will be visible, so any location information will be determined from that. Indeed, this is often used as a way of bypassing geoblocks and filtering based on location. For example, millions of people across the world access the UK-only BBC iPlayer by simply connecting through a UK-based proxy server first. The proxy effectively becomes an extension of your presence online; by switching proxies you can effectively change your digital location.

Proxies are not a complete solution for privacy online though, and they certainly don’t protect all your information. The core omission is the lack of any level of encryption, which means that much of your data is readable, including your actual location.

- ISP Logs – all requests you make are routed through a gateway normally controlled by your ISP if you’re accessing at home. This includes the web site address, the resource you’re accessing and so on, which is in turn passed on to the proxy server. This information can be logged and accessed by anyone with the appropriate rights (or simply with access to the logs), and these logs will contain a complete list of your online activity.

- No Protection on the Wire – until your request reaches the proxy server, it’s all potentially accessible and readable. The ISP is the obvious vulnerability, but as the internet is a network built on shared hardware, there are many other points too. For those using wireless access there’s another potential point for interception as well.

Of course, nobody’s suggesting that you’re actively under surveillance with bands of covert operatives shadowing your every move. However, much of this, especially at the ISP, is all recorded automatically, and it’s a trivial matter to see an individual’s entire internet activity very easily.
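In practice, pointing software at a proxy is close to a one-liner. Here's a minimal sketch using Python's popular requests library; the proxy address is a placeholder from the documentation IP range, and you'd substitute a server you actually have access to.

```python
import requests

# Placeholder address -- substitute a proxy server you control or rent.
proxies = {
    "http": "http://203.0.113.10:8080",
    "https": "http://203.0.113.10:8080",
}

# The target site sees the proxy's IP address, not yours. Note, though,
# that plain HTTP traffic between you and the proxy is readable in transit,
# which is exactly the "no protection on the wire" gap described above.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.json())  # reports the IP address the destination saw
```

The httpbin.org/ip endpoint simply echoes back the caller's apparent IP, which makes it a handy way to confirm the proxy is actually in the path.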
The solution for fixing these gaps is to use an encrypted connection, which will protect your data while it’s being transmitted and ensure all those logs at your ISP are completely unreadable. The popular method is to use something called a VPN, which is effectively a proxy server with an encrypted connection.

What Can Proxy Servers Be Used for to Make Money?

You might think that just having a server and a different IP address wouldn’t be of that much use for making money; however, you’d be very wrong! Having access to proxy servers means that you can change your digital identity and even run multiple identities online without an issue. If you’ve ever tried running a business or any sort of money making scheme online you’ve probably found having a single fixed digital identity is hugely restrictive. Even running things like multiple searches on Google or Bing to do some research will effectively get your IP address blocked for a period of time as the ‘behaviour looks suspicious’. This can actually happen with manual queries: simply type too many of them in too short a time and you’ll be deemed suspicious. However, working online, you’ll definitely be using some sort of automated tools for promotion, research or posting, and these multiple requests will happen all the time. Google, like many sites, doesn’t want its interface used for commercial means (without it making a cut of course!).

I’ve been working online for well over a decade now and nearly all my useful tools need proxies to work properly. The problem is that a single IP address and identity is simply not enough when you’re trying to run these tools and access different markets. So let’s take a quick run-down of some of my activities and see which ones need proxies to work properly. My business is primarily involved in affiliate marketing, basically generating revenue from web sites I run or manage.

- Keyword Research Tools – I use several, like Keyword Elite, Authority Snooper and Question Forge, to find keywords that I can rank in Google. This is an essential stage in Search Engine Optimization (SEO). Identifying keywords to try and rank for is arguably the most important task in generating traffic. If I did this from a single internet connection I would get a Google search ban within minutes, then I’d have to wait until it was lifted.

- Social Network Posting – to promote my websites I have several accounts on all the major social networking platforms. Whenever I create a new page, I promote it through these accounts. To keep my accounts safe I operate them under separate identities which need different IP addresses. Without using proxies my accounts would be compromised and possibly deleted – some of them are nearly ten years old!

- Social Networking Tools – like many online entrepreneurs I also use automated tools to promote on social media efficiently. Although some of this I do manually, it simply takes too long to do all of it without some sort of assistance. For additional exposure I use a tool called Jarvee to automatically promote my new content on sites like Instagram. You simply cannot use the vast majority of these tools without proxies, or they will decimate your accounts!

- Link Building Tools – I have used so many of these over the years that there’s too many to mention.
What they generally do is build backlinks to your web pages in order to rank them on the search engines. It worked and still works today as long as you’re careful, but again it is almost impossible to operate these tools under a single IP address. Both the sites you’re accessing and the search engines you use to find them will block you. All the best tools ask you to input proxy servers in order to run properly.

- Placing Adverts – one great way to make online sales is of course advertising, yet unfortunately many of the best-value and most accessible sites won’t allow multiple access. Take for example Craigslist, which has localised versions all over the world. These are only accessible to local people, which the site determines via your IP address. So a UK address won’t be able to place an advert on a New York site even though the product might be perfect for that market. However, if you connect through a proxy based in New York everything works fine – see how powerful it is to have multiple identities?

That’s only a quick sample off the top of my head; there are loads of other times when I use proxies almost without thinking. In many specialised areas, especially when you’re dealing with volume, it’s impossible to operate on virtually any level without using proxies to some extent. Here’s a quick list of other uses that I know people who make thousands online use proxies for:

Buying and Reselling Sneakers – this is incredibly lucrative if you know what you’re doing. You wait for rare and limited release sneakers on sites like Supreme and Nike and then buy as many as possible. Then you can resell them almost instantly at a huge profit very easily indeed. However, purchases are normally limited to one per account, so to buy any volume you need to use proxies to appear as multiple buyers. There’s more to it than this if you want to try it, and you can read more about Sneaker proxies on this page.

Buying and Reselling Tickets – virtually the same concept: buy something in short demand and resell it at a profit. The money to be made on some events is huge if you can buy enough tickets. Obviously this won’t make you particularly popular – people who do it are normally referred to as Ticket Scalpers – yet it can make you very wealthy!

Blackhat Promotion Techniques (AKA Spam) – there are lots of ways to promote your websites quickly and efficiently by using automated tools. These normally involve blasting thousands of links from different sites using automated software and bots. The sites won’t last long, as they will eventually be penalized by the search engines once the practice is detected, but they can generate huge amounts of revenue in the interim. These tools simply don’t work without proxies to provide multiple identities and addresses.

The proxy server has come a long way from sitting in a university or company network server room simply acting as a gateway for internet access or a busy application server. It’s amazing to see how many people now use them in their daily work. Having access to proxies won’t generate you any money directly; however, anyone who is making money online will definitely be using them. There’s loads of information on these pages about the different types of proxies you can find, ranging from rotating to mobile servers. Yet ultimately they’re all about owning and maintaining multiple digital identities and hiding your real one. Often it’s a simple matter of scaling up a method or process using these identities.
After all, if you discover you can make X amount of profit by placing one Craigslist advert in the US, it’s obvious that you can multiply those profits by placing loads more!
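As a closing sketch, here's roughly what "scaling up with multiple identities" looks like in code: cycling requests through a pool of proxies so each one appears to come from a different address. The addresses are placeholders, and any real tool layers delays, error handling and per-account sessions on top of this skeleton.

```python
import itertools
import requests

# Placeholder pool -- in practice these come from a proxy provider.
PROXY_POOL = [
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
]
proxy_cycle = itertools.cycle(PROXY_POOL)

def fetch(url: str) -> str:
    """Fetch a URL through the next proxy in the pool, so successive
    requests appear to come from different addresses."""
    proxy = next(proxy_cycle)
    resp = requests.get(url, proxies={"http": proxy, "https": proxy},
                        timeout=10)
    resp.raise_for_status()
    return resp.text

# Three requests, three different apparent identities.
for _ in range(3):
    print(len(fetch("https://example.com")))
```

Swap the round-robin cycle for random selection, add polite delays, and you have the core of most of the automated tools mentioned above.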
“Whose Streets? OUR Streets!” yell rowdy demonstrators when they surge off the sidewalk and into thoroughfares. True enough, the streets are our public commons, what’s left of it (along with libraries and our diminishing public schools), but most of the time these public avenues are dedicated to the movement of vehicles, mostly privately owned autos. Other uses are frowned upon, discouraged by laws and regulations and what has become our “customary expectations.” Ask any driver who is impeded by anything other than a “normal” traffic jam and they’ll be quick to denounce the inappropriate use or blockage of the street.

Bicyclists have been working to make space on the streets of San Francisco for bicycling, and to do that they’ve been trying to reshape public expectations about how streets are used. Predictably there’s been a pushback from motorists and their allies, who imagine that the norms of mid-20th century American life can be extended indefinitely into the future. But cyclists and their natural allies, pedestrians, can take heart from a lost history that has been illuminated by Peter D. Norton in his recent book Fighting Traffic: The Dawn of the Motor Age in the American City. He skillfully excavates the shift that was engineered in public opinion during the 1920s by the organized forces of what called itself “Motordom.” Their efforts turned pedestrians into scofflaws known as “jaywalkers,” shifted the burden of public safety from speeding motorists to their victims, and reorganized American urban design around providing more roads and more space for private cars.

For decades, over 40,000 people have died each year in car crashes on the streets of the United States. This daily carnage is utterly normalized to the point that few of us think about it at all, and if we do, it’s like the weather, just a regular part of our environment. But it wasn’t always this way. Back when the private automobile was first beginning to appear on public streets a large majority of the population, including politicians, police, and business leaders, agreed that cars were interlopers and ought to be regulated and subordinated to pedestrians and streetcars.

It’s almost impossible to imagine the speed with which conditions on urban streets changed at the dawn of the motorized era. Here’s a quote from the California Automobile Association’s Motorland magazine in August 1927 describing the rapid growth in car ownership:

In 1895 there were four cars registered, in 1905 there were 77,400 in use, in 1915 the total had risen to 2,309,000, and in 1925 there were 17,512,000 passenger automobiles on the highways, and the total is now in excess of 20,000,000.

With over two million cars clogging city streets in 1915, and death and injury tolls rising, cities took various measures to address the problem (quoting from “Fighting Traffic”):

From 1915 (and especially after 1920), cities tried marking crosswalks with painted lines, but most pedestrians ignored them. A Kansas City safety expert reported that when police tried to keep them out of the roadway, “pedestrians, many of them women” would “demand that police stand aside.” In one case, he reported, “women used their parasols on the policemen.” Police relaxed enforcement.

The common usage of the streets by all was considered sacrosanct, and attempts by motordom and/or police to regulate people’s use of the streets were widely resisted.
Plenty of police didn’t agree that pedestrian behavior should be criminalized on behalf of motoring:

New York police magistrate Bruce Cobb in 1919 defended the “legal right to the highway” of the “foot passenger,” arguing that “if pedestrians were at their peril confined to street corners or certain designated crossings, it might tend to give selfish drivers too great a sense of proprietorship in the highway.” He assigned the responsibility for the safety of the pedestrian—even one who “darts obliquely across a crowded thorofare”—to drivers… By 1916 “jaywalker” was a feature of “police parlance.” Police use modified the word’s meaning and sparked controversy. “Jaywalker” carried the sting of ridicule, and many objected to branding independent-minded pedestrians with the term… The New York Times objected, calling the word “highly opprobrious” and “a truly shocking name.”

Anti-jaywalking campaigns came to San Francisco too. In a 1920 safety campaign, San Francisco pedestrians who thought they were minding their own business found themselves pulled into mocked-up outdoor courtrooms. In front of crowds of onlookers they were lectured on the perils of jaywalking.

As the 1920s continued, more and more cars were being sold, and the streets were both crowded and contested. Streetcar operators blamed cars for clogging thoroughfares and slowing down their lines, causing late runs and generally inconveniencing passengers. Motorists parked everywhere, jamming curbsides two-deep, when they weren’t weaving through chaotic urban streets. Attempts to regulate and standardize traffic patterns began during this era, with lanes, crosswalks, traffic signals, and parking regulations slowly emerging as “solutions” to the problems created by tens of thousands of private cars filling the streets.

When sales slumped in late 1923 and into 1924, analysts speculated that the market for cars was saturated (at about 7 Americans per car at the time). The car industry consisted of dozens of companies, which began to fail or merge during this first contraction in sales. The industry reorganized its public relations and launched concerted efforts to redefine “saturation”:

There was no “buying-power saturation,” [motordom] said. The real bridle on the demand for automobiles was not the consumer’s wallet, but street capacity. Traffic congestion deterred the would-be urban car buyer, and congestion was saturation of streets.

By the late 1920s, a young graduate student named Miller McClintock had become the nation’s pre-eminent traffic researcher thanks to his 1925 thesis “Street Traffic Control.” His career is a window into the process of private corruption of public interests that riddles American history up to the present.

In his 1925 graduate thesis Street Traffic Control, the old McClintock had maintained that widening streets would merely attract more vehicles to them, leaving traffic as congested as before. The automobile, he wrote, was a waster of space compared to the streetcar, noting that “the greater economy of the latter is marked.” “It seems desirable,” McClintock wrote, “to give trolley cars the right of way under general conditions, and to place restrictions on motor vehicles in their relations with street cars.” He described the automobile as a “menace to human life” and “the greatest public destroyer of human life.” Two years later all had changed. McClintock wrote of “the inevitable necessity to provide more room” in the streets.
He called for “new streets” and “wider streets.”… In 1925 McClintock virtually ruled out elevated streets as expensive and impractical; two years later he urged that they be considered.

What had happened in the two years between the diametrically opposed advice given by McClintock? He had been hired by Studebaker’s Vice President to head up the new “Albert Russel Erskine Bureau for Street Traffic Research,” which was first placed in Los Angeles where McClintock was teaching at UC, but a year later moved by Studebaker to Harvard University, where the car company continued to fund the ostensibly “independent” institute. As the years went by McClintock became one of the foremost authorities on traffic planning, though his organization dropped the “Albert Russel Erskine” from its name when the chairman of Studebaker Motors committed suicide in 1933!

McClintock came to San Francisco early in his career. In the August 1927 Motorland magazine, he penned an article summarizing his research, “Curing the Ills of San Francisco Traffic”: “… it is recognized that an ultimate requirement for the solution of street and highway congestion is to be found in the creation of more ample street area.” And sure enough, it was in this exact period that San Francisco embarked on a series of street widenings throughout the city, including, for example, Capp Street and Army Street in the Mission District.

Interestingly, McClintock’s traffic study shows the predominantly car-free life of San Franciscans at the time:

On a typical business day studied by the traffic survey committee, 1,073,963 persons entered and left [the central business] district during a fourteen-hour period from 6 a.m. to 8 p.m. Vehicles of all types, including streetcars, carried 744,667 people in and out of the district. In addition, 329,296 pedestrians entered and left the district during the same period… In no other city is there such a large pedestrian movement into the central district, nor such a large outrush of people during the noon hour. Both of these conditions may be attributed to the large capacity of apartment houses immediately adjacent to the district…

Incredibly, streetcars were used by 70 percent of the people depending on some kind of transportation to get downtown, while only a quarter used passenger cars, but the latter made up 61 percent of vehicular traffic as compared to 11 percent for the streetcars!

What has been poorly understood in the triumphant narrative of the private automobile is how cars benefited from enormous public expenditures, even when they were being used by a relatively small minority of the population. New infrastructure to accommodate motorists far outstripped any public investment in public streetcar service, let alone any subsidies for the privately owned lines. Meanwhile, electric streetcar companies were slowly going bankrupt, with their fares publicly restricted and the public streets on which they operated slowly being taken over by private vehicles. Traditional use of the streets by pedestrians was being criminalized by new traffic codes. McClintock put forth a new Uniform Traffic Ordinance, adopted by San Francisco’s Board of Supervisors, which was intended to “legislate jaywalkers off the streets,” crowed a Motorland magazine editorial.
In 1915, Ford already had a factory at 21st and Harrison in the Mission making Model-T’s, and by the mid-1920s, the new car business was fully ensconced along Van Ness Avenue in San Francisco.

Miller McClintock continued his work on behalf of the auto industry from his bought-and-paid-for perch at Harvard University:

Miller McClintock [became] the impresario of a new kind of highway road show. In the spring of 1937, the Shell Oil Company combined McClintock’s traffic expertise with the talents of the stage designer Norman Bel Geddes to build a scale model of “the automobile city of tomorrow.”… Others interested in the rebuilding of cities for the motor age adopted Shell’s technique. At the 1939 Golden Gate International Exposition, United States Steel displayed its vision of San Francisco in 1999, with wider streets, cloverleaf intersections, and an elevated highway.

Overshadowed by the far more successful World’s Fair in New York City, and in particular by the tone-setting “World of Tomorrow” exhibit there built by General Motors, the 1939 US Steel vision of San Francisco in 1999 is worth peeking at. Here’s a description of the exhibit by Richard Reinhart in his book on the 1939 Golden Gate International Exposition, “Treasure Island: San Francisco’s Exposition Years”:

Artist Donald McLoughlin had prepared a dioramic view of San Francisco in 1999 for the US Steel exhibit in the Hall of Mines, Metals and Machinery. This prognostic nightmare showed the city stripped of every vestige of 1939 except Coit Tower, the bridges and Chinatown. All maritime activity had disappeared from the Embarcadero. Shipping was concentrated at a super-pier at the foot of 16th Street. North of Market Street every block contained a single, identical high-rise apartment house. South of Market, sixty-story office towers of steel and glass alternated with block-square plazas in a vast checkerboard pattern. Elevated freeways ran through the geometric landscape.

McLoughlin correctly anticipated the removal of maritime activity from San Francisco’s waterfront, though the massive modern piers ended up spread along the Oakland bay shore rather than jutting out prominently from 16th Street. Visions like this, and the better known version in New York, informed the post-WWII population as it fled cities for the suburbs. Those who remained, though, had a different idea of what our cities would become, and thanks to their stopping the highway builders in their tracks in the late 1950s and early 1960s, San Francisco was not crushed in this way.

Interesting to recall that while 30,000 citizens were mobilized to stop freeway building in San Francisco (the very same elevated, pedestrian-free streets McClintock had come to endorse as an industry flack), thousands more, mostly African American and white youth, staged a vigorous civil rights campaign along auto row, demanding that blacks be given equal treatment in hiring by auto dealers, especially Don Lee’s Cadillac dealership.

Contrary to the fervent wishes of today’s motorists, streets have not always been the domain of cars. Clever marketing prior to the Depression led to radical redesign of both the physical streets and our assumptions about how public streets should be used. As we ride to and from work on our bicycles these days, or get together in Critical Mass or Bike Party social rides, we are participating in a new push to redefine how streets are used, and most importantly, how we think about public space.
While we haven’t yet found a new consensus, the rising tide of bicycling, parklets, Sunday Streets, car-free zones, etc., amply demonstrates that the private car’s days are in decline. Add a dollop of global warming and a couple of scoops of cheap fossil fuel scarcity, and the question of Whose Streets is once again a key issue of social contestation. Perhaps at least we can stop blindly accepting death and mayhem as an inevitable and natural consequence of our social transportation choices!

Cartoon by Jean-Francois Batellier, a French artist who sells his art and books on the streets of Paris.
The franchise held by the Minotier family in the Cooper stories is based on this tidbit of Vietnam history:

Vietnam was one of the first stops for Chinese immigrating from overpopulated Kwangtung and Fukien provinces in the late eighteenth and early nineteenth centuries. While the Vietnamese emperors welcomed the Chinese because of their valuable contributions to the nation’s commercial development, they soon found the Chinese opium habit a serious economic liability. Almost all of Vietnam’s foreign trade in the first half of the nineteenth century was with the ports of southern China. Vietnam’s Chinese merchants managed it efficiently, exporting Vietnamese commodities such as rice, lacquer ware, and ivory to Canton to pay for the import of Chinese luxury and manufactured goods. However, in the 1830s British opium began flooding into southern China in unprecedented quantities, seriously damaging the entire fabric of Sino-Vietnamese trade. The addicts of southern China and Vietnam paid for their opium in silver, and the resulting drain of specie from both countries caused inflation and skyrocketing silver prices.

The Vietnamese court was adamantly opposed to opium smoking on moral, as well as economic, grounds. Opium was outlawed almost as soon as it appeared, and in 1820 the emperor ordered that even sons and younger brothers of addicts were required to turn the offenders over to the authorities. The imperial court continued its efforts, which were largely unsuccessful, to restrict opium smuggling from China, until military defeat at the hands of the French forced it to establish an imperial opium franchise.

In 1858 a French invasion fleet arrived off the coast of Vietnam, and after an abortive attack on the port of Danang, not far from the royal capital of Hue, sailed south to Saigon, where the French established a garrison and occupied much of the nearby Mekong Delta. Unable to oust the French from their Saigon beachhead, the Vietnamese emperor finally agreed to cede the three provinces surrounding Saigon to the French and to pay an enormous long-term indemnity worth 4 million silver francs. But the opium trade with southern China had disrupted the Vietnamese economy so badly that the court found it impossible to meet this onerous obligation without finding a new source of revenue. Yielding to the inevitable, the emperor established an opium franchise in the northern half of the country and leased it to Chinese merchants at a rate that would enable him to pay off the indemnity in twelve years.

More significant in the long run was the French establishment of an opium franchise to put their new colony on a paying basis only six months after they annexed Saigon in 1862. Opium was imported from India, taxed at 10 percent of value, and sold by licensed Chinese merchants to all comers. Opium became an extremely lucrative source of income, and this successful experiment was repeated as the French acquired other areas in Indochina. Shortly after the French established a protectorate over Cambodia (1863) and central Vietnam (1883), and annexed Tonkin (northern Vietnam, 1884) and Laos (1893), they founded autonomous opium monopolies to finance the heavy initial expenses of colonial rule.

While the opium franchise had succeeded in putting southern Vietnam on a paying basis within several years, the rapid expansion of French holdings in the 1880s and 1890s created a huge fiscal deficit for Indochina as a whole.
Moreover, the hodgepodge administration of five separate colonies was a model of inefficiency, and hordes of French functionaries were wasting what little profit these colonies generated. While a series of administrative reforms repaired much of the damage in the early 1890s, continuing fiscal deficits still threatened the future of French Indochina. The man of the hour was a former Parisian budget analyst named Paul Doumer, and one of his solutions was opium.

Soon after he stepped off the boat from France in 1897, Governor-General Doumer began a series of major fiscal reforms: a job freeze was imposed on the colonial bureaucracy, unnecessary expenses were cut, and the five autonomous colonial budgets were consolidated under a centralized treasury. Most importantly, Doumer reorganized the opium business in 1899, expanding sales and sharply reducing expenses. After consolidating the five autonomous opium agencies into a single Opium Monopoly, Doumer constructed a modern, efficient opium refinery in Saigon to process raw Indian resin into prepared smoker’s opium. The new factory devised a special mixture of prepared opium that burned quickly, thus encouraging the smoker to consume more opium than he might ordinarily. Under his direction, the Opium Monopoly made its first purchases of cheap opium from China’s Yunnan Province so that government dens and retail shops could expand their clientele to include the poorer workers who could not afford the high-priced Indian brands. More dens and shops were opened to meet expanded consumer demand (in 1918 there were 1,512 dens and 3,098 retail shops). Business boomed.

As Governor-General Doumer himself proudly reported, these reforms increased opium revenues by 50 percent during his four years in office, accounting for over one-third of all colonial revenues. For the first time in over ten years there was a surplus in the treasury. Moreover, Doumer’s reforms gave French investors new confidence in the Indochina venture, and he was able to raise a 200-million-franc loan, which financed a major public works program, part of Indochina’s railway network, and many of the colony’s hospitals and schools. Nor did the French colonists have any illusions about how they were financing Indochina’s development. When the government announced plans to build a railway up the Red River valley into China’s Yunnan Province, a spokesman for the business community explained one of its primary goals: “It is particularly interesting, at the moment one is about to vote funds for the construction of a railway to Yunnan, to search for ways to augment the commerce between the province and our territory…. The regulation of commerce in opium and salt in Yunnan might be adjusted in such a way as to facilitate commerce and increase the tonnage carried on our railway.”

While a vigorous international crusade against the “evils of opium” during the 1920s and 1930s forced other colonial administrations in Southeast Asia to reduce the scope of their opium monopolies, French officials remained immune to such moralizing. When the Great Depression of 1929 pinched tax revenues, they managed to raise opium monopoly profits (which had been declining) to balance the books. Opium revenues climbed steadily, and by 1938 they accounted for 15 percent of all colonial tax revenues, the highest share in Southeast Asia. In the long run, however, the Opium Monopoly weakened the French position in Indochina.
Vietnamese nationalists pointed to the Opium Monopoly as the ultimate example of French exploitation, and some of Ho Chi Minh’s most bitter propaganda attacks were reserved for the French officials who managed the monopoly. In 1945 Vietnamese nationalists reprinted one French author’s description of a smoking den and used it as revolutionary propaganda:

Let’s enter several opium dens frequented by the coolies, the longshoremen of the port. The door opens on a long corridor; to the left of the entrance is a window where one buys the drug. For 50 centimes one gets a small five-gram box, but for several hundred, one gets enough to stay high for several days. Just past the entrance, a horrible odor of corruption strikes your throat. The corridor turns, turns again, and opens on several small dark rooms, which become veritable labyrinths lighted by lamps which give off a troubled yellow light. The walls, caked with dirt, are indented with long niches. In each niche a man is spread out like a stone. Nobody moves when we pass. Not even a glance. They are glued to a small pipe whose watery gurgle alone breaks the silence. The others are terribly immobile, with slow gestures, legs strung out, arms in the air, as if they had been struck dead... The faces are characterized by overly white teeth; the pupils with a black glaze, enlarged, fixed on god knows what; the eyelids do not move; and on the pasty cheeks, this vague, mysterious smile of the dead. It was an awful sight to see, walking among these cadavers.

This kind of propaganda struck a responsive chord among the Vietnamese people, for the social costs of opium addiction were heavy indeed. Large numbers of plantation workers, miners, and urban laborers spent their entire salaries in the opium dens. The strenuous work, combined with the debilitating effect of the drug and lack of food, produced some extremely emaciated laborers who could only be described as walking skeletons. Workers often died of starvation, or, more likely, their families did. While only 2 percent of the population were addicts, the toll among the Vietnamese elite was considerably greater. With an addiction rate of almost 20 percent, the native elite, most of whom were responsible for local administration and tax collection, were made much less competent and much more liable to corruption by their expensive opium habits. In fact, the village official who was heavily addicted to opium became something of a symbol of official corruption in the Vietnamese literature of the 1930s. The novelist Nguyen Cong Hoan has given us an unforgettable portrait of such a man:

Still the truth is that Representative Lai is descended from the tribe of people which form the world’s sixth race. For if he were white, he would have been a European; if yellow, he would have been an Asian; if red, an American; if brown, an Australian; and if black, an African. But he was a kind of green, which is indisputably the complexion of the race of drug addicts. By the time the Customs officer came in, Representative Lai was already decently dressed. He pretended to be in a hurry. Nevertheless, his eyelids were still half closed, and the smell of opium was still intense, so that everyone could guess that he had just been through a “dream session.” Perhaps the reason he had felt he needed to pump himself full of at least ten pipes of opium was that he imagined it might somehow reduce his bulk, enabling him to move about more nimbly.
He cackled and strode effusively over to the Customs officer as if he were about to grab an old friend to kiss. He bowed low and, with both of his hands, grasped the Frenchman’s hand and stuttered, “Greetings to your honor, why has your honor not come here in such a long time?”
Find answers to frequently asked questions about Peoria's Water Services below.

The term "drought" describes an abnormally dry time period for a specific geographic area. Like most of Arizona, Peoria has been in the grip of a serious drought for over 15 years. Yet water continues to flow to Peoria residents, and the City continues to grow economically. California is also experiencing drought, but municipal providers there have been ordered to cut back on water deliveries by 25%. Why the stark contrast? Because Peoria and Arizona have been planning for drought for years.

Will residents always get water first?
Municipal providers, including Peoria, have the highest priority for Central Arizona Project (CAP) water, one of the valley's primary sources of water, delivered from the Colorado River via pump stations and the CAP canal.

Is water conservation working?
We are using less water per person, per household than we did 20 or 30 years ago. In fact, the average household in Peoria is using 15% less water today than it did 10 years ago.

How long do droughts last?
Above-average snowfall for several years can return our lakes to normal levels and provide years of renewable water supplies. We know that even during the longest drought, a year or more of average or above-average precipitation can occur.

Will there be water restrictions because of drought?
Even with the significant growth and dry periods experienced by Peoria in recent decades, the City has not had to restrict your water usage due to supply shortages, and it doesn't expect to do so any time soon.

Is Peoria prepared for a drought?
Yes, we are prepared. Peoria has been planning and preparing for drought and water shortages for decades.

What is Peoria's Drought Management Strategy?
Peoria is preparing for the possibility of more prolonged and persistent drought scenarios by pursuing a strategy of storing enough water underground to carry the City through six years of potential water shortage.

Why did the City turn my water off?
Your water may be turned off for one of the following reasons:
1. An emergency water leak or broken water main: In order to make the necessary repairs, the water may immediately be turned off to several homes. In the case of an emergency we do not always have time to notify everyone affected.
2. Non-payment of utility bill: If your payment is delinquent, the Finance Department will turn your water off. To restore service, contact Customer Service in the Finance Department at (623) 773-7160.
3. Scheduled water leak repair: We leave a flyer on your door advising when the water will be turned off for repairs. On the day of the repair, a member of our crew may knock on your door to give you a time frame. The office staff will also be notified, to keep customers informed.

What causes low water pressure?
In most areas served by the City, the water pressure varies between 50 and 80 psi. During times of peak demand, the pressure may fall as low as 40 psi, which is sufficient for most uses. To help the City during these high-usage periods, monitor your usage and plan high flows, such as watering the lawn, filling the pool and washing vehicles, for other times of the day or night to help the City maintain a uniform demand on the system.

Why is my water cloudy?
Typically, milky, cloudy water is the result of air in the water distribution system. The cloudiness is millions of tiny air bubbles that disappear in a matter of 2-3 minutes. As the bubbles rise to the top, the water becomes clear.

The water is brown; what's going on?
This may happen when a water leak has been repaired or a fire hydrant has been flushed in the area. Sediment is disturbed in the water mains, resulting in brown, rust-colored tap water. This colored water is not a health concern and can be eliminated by letting the water run for a few minutes until it runs clear.

My water smells and tastes funny sometimes; is this safe?
A harmless but unpleasant taste or smell may come from algae that grow naturally in lakes, rivers and canals. Some odor may remain present even after the water has been treated and filtered at the treatment plant. Chlorine, used for disinfecting the water, may also produce a harmless taste and odor. Other causes may be bacteria growing in your water heater if it has been sitting unused, or corrosion of the water heater's internal anode. The most common odor-causing problem comes from the drain: over time, soap, hair, and food can accumulate on the walls of the drain, harboring bacteria that release sewer-smelling gases.

Why is the cold water coming out hot or warm?
This is a very common problem in the summer months; water pipes acclimate to the temperature around them, so when our temperatures get very high in the summer, the water pipes absorb heat from the ground around them and the temperature of the water also increases. The water will never be totally cold in the summertime because of the high temperatures.

Does the City of Peoria fluoridate its water?
The City of Peoria fluoridates its surface water supply treated and produced at its Greenway Water Treatment Plant. Additional surface water treated and provided to the City of Peoria by the City of Glendale is also fluoridated. Groundwater provided by the City of Peoria typically contains naturally occurring fluoride and is not fluoridated. As a public organization with a mission of protecting public health, we rely on, and actively monitor, national health and water industry associations to provide us with a comprehensive and unbiased assessment of fluoride and guidance on protecting public health. We believe that fluoride in drinking water, when used at recommended levels, is a safe practice which reduces tooth decay in the general population. If you have any questions or require additional information, please do not hesitate to contact one of our drinking water professionals at (623) 773-8467.

There always seems to be a water break somewhere around the City; why is this?
Most water breaks or leaks are due to older pipes, construction near pipes (vibration of machines), defective materials and poor workmanship. The City replaced aging water pipes a few years ago, reducing the number of breaks significantly.

What is the purpose of flushing fire hydrants? Isn't that wasting water?
It's necessary to flush fire hydrants to maintain water quality. High-velocity water helps to clean and scour the interior of the pipes.
It flushes accumulated sediments out of the system, removes stale water and restores chlorine residual. It also ensures the operability of the fire protection system.

Who is responsible for sewer backups?
The problem may be in the City's main sewer line; one of our Wastewater maintenance crews will determine this. If this is the case, the City of Peoria will make the necessary repairs, and claims related to property damage will be referred to the Risk Management Division of the City Attorney's office. If the crew finds that the problem is in the sewer lateral, which connects the house with the City sewer main (usually located in the street), they will advise the resident to call a plumber. The sewer lateral, which includes all piping extending from the house to the City sewer main, is owned and maintained by the property owner.

Is the City responsible for exterminating sewer roaches?
The City of Peoria has a contract with a local pest control business; as preventative maintenance, they treat 5,000 of the City's manholes per year, and each treatment is good for two years. If a customer calls with a roach complaint, a maintenance crew will be dispatched to the residence. If they find that the manholes around the area need treating, they will contact the pest control company, which will treat all the manholes in that quarter section of the city. It's a good idea to maintain your own pest control program; when coupled with the City's program, the presence of roaches will be greatly reduced.

Occasionally a foul odor comes out of the sink and tub drains. Is this serious? How can I fix this problem?
On rare occasions the problem can be a serious plumbing flaw, but more often it can be solved easily. Check the water traps for water. They are the P-shaped traps in the drain lines beneath sinks, tubs and showers. The standing water in a trap serves as an excellent barrier against sewer gas. The water may have evaporated due to infrequent use, or the house being vacant. Pour a quart of water into each problematic drain; this is plenty of water to fill the traps and provide a full water seal. Another source of odor is bacteria, dirt, grime, mold and the like passing through the tailpiece on its way to the sewer. Often some is left behind, and over time a thick layer of slime collects on the inside surface of this vertical pipe; mold and bacteria grow there and produce unpleasant odors. A small amount of household bleach poured into each drain will help neutralize any bacteria that may be present and causing odors. If problems persist, pipes can be taken apart and cleaned or replaced. As a last resort, a plumbing professional can quickly diagnose and fix the problem.

Where does Peoria water come from?
North of Beardsley, it is mostly Glendale/Pyramid Peak water, which is Central Arizona Project (CAP) water; CAP water is all Colorado River water. From Beardsley to Bell, it is a mix of Pyramid Peak water and groundwater. South and east of the New River, it is Greenway Water Treatment Plant water, which is on the Arizona Canal of the Salt River Project (SRP) and is a mix of Salt River, Verde River, CAP and groundwater. West of the New River, you could have a mix of all the above. South of Grand to 91st Ave. is a mix of Greenway water and groundwater. West of 91st, south of Grand and Olive, is mostly groundwater.

Is Peoria water harder? Do I need to change the setting on my softener?
Yes. Greenway water is about 13 grains of hardness, groundwater ranges from 2 to 8 grains, and Pyramid Peak water is about 17 grains.
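For readers who want to compare these figures with a water-quality report, which usually lists hardness in mg/L rather than grains, the short sketch below shows the standard conversion. It is an illustrative example only, not a City of Peoria tool; it assumes the commonly used factor of roughly 17.1 mg/L of hardness (as CaCO3) per grain per US gallon.

```python
# Illustrative sketch only -- not an official City of Peoria utility.
# Converts water hardness from grains per US gallon (gpg), the unit most
# softener dials use, to mg/L as CaCO3, the unit on most lab reports.

MG_PER_L_PER_GRAIN = 17.1  # widely used conversion factor

def grains_to_mg_per_liter(gpg: float) -> float:
    """Convert hardness in grains per US gallon to mg/L as CaCO3."""
    return gpg * MG_PER_L_PER_GRAIN

if __name__ == "__main__":
    # Hardness figures quoted in the FAQ above.
    for source, gpg in [("Greenway", 13.0), ("Groundwater (low)", 2.0),
                        ("Groundwater (high)", 8.0), ("Pyramid Peak", 17.0)]:
        print(f"{source}: {gpg:g} gpg ~ {grains_to_mg_per_liter(gpg):.0f} mg/L")
```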
Can you please send me a "will serve" letter?
Please send a description of where the property is located to email@example.com. The description can be either a parcel number (or numbers) or a PDF of a map with the property clearly marked. Make sure that your phone number and address are included in the email. You will receive a response in approximately one week. This letter is only a verification that the property is within the City of Peoria water and wastewater service boundary. It is not a "capacity to serve" letter or a "sewer capacity" letter.

I am working on a project in your City and I need to obtain a Sewer Capacity Letter in order to submit for Maricopa County Environmental Services Department (MCESD) review. What do I have to do to obtain this letter?
The sewer capacity letter is coordinated with the regular plan review done by the Engineering Department. This is the letter required for the Approval to Construct at the County.
Step 1 - The completed Approval to Construct (ATC) for MCESD is requested by the Engineering reviewer, typically during the 2nd or 3rd review (at the discretion of the plans reviewer).
Step 2 - When requested, the Applicant should add the ATC Packet to their next Engineering submittal, and it will be routed internally to Utilities.
Step 3 - Utilities will review the ATC Packet and contact the Applicant if there are any required revisions.
Step 4 - The signed ATC application (for water) and the Sewer Capacity Letter (for sewer) will be released to the Applicant through the Engineering Department when the entire Engineering review is completed.
Choosing a coaching model
Coaching is an important role for a leader, and coaching sessions should be properly structured. The GROW model is a powerful tool for structuring mentoring sessions because it is simple to use and rests on a modest framework. GROW is an acronym that stands for Goal, Reality, Options, and Will. In the first step of the model, goals are established to see where the person wishes to go. The second step is an examination of the current reality. The third step is the exploration of possible options; these options provide ways to reach the objective. In the last step, the person's will is established to see what they want to do now. This comprehensive model starts with setting a goal and ends with making a proper decision after an in-depth analysis of the current reality and the possible alternatives.

Chosen model as best practice for population
The chosen population in this scenario is women. This model will help in empowering women to reach their objectives and achieve their desired goals further in life. Women are often uncertain about decisions and their consequences because they are bound by many responsibilities. Having clear, set objectives in life will help to improve their confidence and self-esteem. This model delivers a comprehensive picture for planning any particular action, and it can also be incorporated into motivational sessions. The things about which a person is confused often only need clearer understanding: if the person sits down and jots out every possible opportunity in light of the given reality, this will help in deciding what should be done next in life.

Two techniques and their practice
The two techniques selected to practice on another person are Reality and Options. In the first technique, the person will be asked to describe the current situation they are facing. This involves a couple of steps, such as analyzing the starting point clearly; if any information is missing, it should be added. The things that are happening now, the steps already underway to reach the goal, and any other conflicting goals the person has should be made clear at the start (Cox et al., 2014). In the second technique, the person will be asked to consider all the possible options. The first activity is brainstorming. Suggestions are also provided from the side of the coach to help select the best possible option. Each of the options should be considered while keeping its pros and cons in mind.

Analyzing strengths and challenges of coaching techniques
The coaching techniques in this model are helpful because of their strong framework. First, the goals are analyzed against the standards of SMART goals. This framework ensures that the goals are Specific, Measurable, Attainable, Realistic and Timely. Once these goals are clearly defined, the check against reality is done. This check helps in understanding the current position, which is a strong point because it shows where the person should move forward in life. The analysis of options is useful because it determines the things that should be done to reach a specific goal. In the end, once the pros and cons of all the alternatives are determined, the person's will is established by choosing the option that is best. The only challenge of this model is that it needs an expert coach. If the coach does not know the right procedure and the temperament necessary for coaching, the model will be of no use (Grant, 2011).
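To make the structure concrete, here is a minimal sketch of how the four GROW stages described above could be laid out as a reusable session outline. This is an illustrative example only, not part of the GROW literature; the stage names come from the model, while the prompt questions and function names are assumptions added for demonstration.

```python
# Illustrative outline of a GROW coaching session.
# Stage names are from the GROW model; the prompts are assumed examples.

GROW_STAGES = [
    ("Goal",    "Where do you want to go? What does success look like?"),
    ("Reality", "What is happening now? What is missing or conflicting?"),
    ("Options", "What could you do? What are the pros and cons of each option?"),
    ("Will",    "What will you do now, and by when?"),
]

def run_grow_session(ask):
    """Walk a coachee through the four GROW stages in order.

    `ask` is any callable that takes a prompt string and returns the
    coachee's response -- for example, `input` in a console session.
    """
    notes = {}
    for stage, prompt in GROW_STAGES:
        notes[stage] = ask(f"[{stage}] {prompt} ")
    return notes

# Example console session:
# notes = run_grow_session(input)
```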
Things I need to know about my coaching career
Coaching is a promising career, but there are a few qualities that need to be developed in one's personality to meet its standards. As a coach, I want to have a clear idea about how I can market my skills. Entering into good contracts is important for me. The area I am focusing on is motivational speaking; the sessions can be for schools, workplaces or even prisons. To progress in my career, I need to narrow my focus and work on improving my knowledge in the areas I am interested in. This will increase my chances of getting good contracts, because having a knowledgeable and communicative personality is essential in a coaching career.

Techniques to build rapport and trust
Building rapport means that both parties are attentive to a common goal and share a level of coordination. The mock session was conducted with a woman who wants to lose weight and needs reinforcement for making a plan. In my session, I built rapport and trust by making her comfortable so that she could relax and feel ready for the discussion. Next, I tried to build some common ground with her by reflecting on our histories and finding something similar. Sharing common experiences made it possible for her to define her problem clearly to me. I tried to see things from her perspective and understand her reasons for wanting to lose weight. This was helpful in the later part of the coaching session because she felt comfortable sharing all the necessary information.

Communication skills to create trust
First, as a coach, I showed sincere interest in her goals so that she would trust that I was serious about listening to her issues. I talked to her with respect and empathy. I told her the truth about my own experiences with her issue, which made her comfortable with the fact that I would not judge her in any way. I only assured her of the things that I was confident about delivering; this made her trust me because there were no false promises. Trust cannot be built right away; it requires time (Lewis & Gates, 2005). In this case, making her believe that I was listening to each detail she provided was the best thing to do. Therefore, I tried my best to make her comfortable enough to trust me by showing respect, empathy, and attention.

Introduction of informed consent
Informed consent is seeking permission from the client to perform any procedure; it is also considered a pledge that the information will be held confidential. In the case under discussion, it was important to introduce informed consent to her. I first made her understand that informed consent is signed so that I can give her complete details about the procedure. This helped in building trust by making it clear that I would tell her all the benefits, alternatives and risks associated with the procedure (Flory & Emanuel, 2004). I introduced it in a way that helped her understand the plan of action. I think that explaining its benefits was the best way to introduce informed consent. She understood the treatment procedure, and all the uncertainties in the procedure were addressed properly.

Skills that can be used in a coaching environment
There is a range of skills that should be present in a coach to help in conducting successful coaching sessions. These qualities are then adapted to the particular situation of the person the coach is working with.
Some of the important skills needed by a coach in a coaching environment are described here, along with their strengths and challenges. The most prominent and significant skill is communication. The main job of a coach is to communicate with clients and provide them with effective solutions after listening to their problems (Certo, 2018). If the communication is not carried out properly, it will result in many misunderstandings, and the real purpose of the coaching session will not be fulfilled. The key to effective coaching is building valuable relationships with the other participants in the activity, which shows that communication is central to coaching. Listening is an important component of communicating and of coaching as well, so the listening skills of a coach should also be excellent. The strength of this skill is that it helps in understanding the situation clearly in order to coach others. The challenge is that not every coach finds it easy to develop: it can be cultivated, but a naturally outgoing person will demonstrate it more readily.

Having an open mind is another useful skill for a coach. This is important because a person with an open mind is open to new ideas and accepts different ways of doing a thing. It is also helpful in acknowledging the coach's own strengths and weaknesses: he will try to overcome the weaknesses and make the strong areas stronger for a better career. There are many structured activities that can be incorporated into coaching to develop future skills, which means that coaches should constantly try to improve their skill level to have a better future. The challenge of this skill is that sometimes coaches think they are equipped with all the necessary knowledge, and this overconfidence hinders them from improving.

Another important skill of a coach is organizing and planning proper coaching sessions. If the coaching sessions are not structured properly, the performers become bored. Having a set of meaningful, planned activities can help improve the performance and outcome of the session. This process is carried out by identifying clear goals and the needs of each performer in the session and then planning the session accordingly. When the planning process is carried forward systematically, the goal can be achieved correctly (Spurk et al., 2015). The strength of this skill is that it increases the interest of performers in the coaching session, because every time they experience something new; their sense of confidence and achievement is also enhanced. However, the challenge is that sometimes the plan is not formed according to the requirements of the audience.

Evaluation and analysis is a key skill needed for coaching. In the initial stages of coaching, all the technical aspects should be analyzed properly to anticipate any issues during the training session. Everything should be prepared on time; not only physically but also mentally, the plans should be sound. There is a need to revise the skills and knowledge of coaches so that they can meet the demands and needs of a changing world. The challenge is unfair analysis of previous sessions, which leads to similar problems recurring in coaching sessions to come. All these things can help improve the level of coaching provided to performers if the challenges encountered in the process are addressed properly. This is best for the well-being of the performer and for the development of the coach as well.
References
Certo, S. C. (2018). Supervision: Concepts and skill-building. McGraw-Hill Education.
Cox, E., Bachkirova, T., & Clutterbuck, D. A. (Eds.). (2014). The complete handbook of coaching. Sage.
Flory, J., & Emanuel, E. (2004). Interventions to improve research participants' understanding in informed consent for research: A systematic review. JAMA, 292(13), 1593-1601.
Grant, A. M. (2011). Is it time to REGROW the GROW model? Issues related to teaching coaching session structures. The Coaching Psychologist, 7(2), 118-126.
Lewis, R. D., & Gates, M. (2005). Leading across cultures. Nicholas Brealey.
Spurk, D., Kauffeld, S., Barthauer, L., & Heinemann, N. S. (2015). Fostering networking behavior, career planning and optimism, and subjective career success: An intervention study. Journal of Vocational Behavior, 87, 134-144.
The full list of projects contains the entire database hosted on this portal, across the available directories. The projects and activities (across all directories/catalogs) are also available by country of origin, by geographical region, or by directory.

In 2013 a new ecosystem monitoring programme, "DiskoBasis", was initiated at Arctic Station on Disko Island, Greenland. The project is partly funded by the Danish Energy Agency. The primary objective of DiskoBasis is to establish baseline knowledge on the dynamics of fundamental physical parameters within the environment/ecosystem around Arctic Station. This initiative extends and complements the existing monitoring carried out at Arctic Station by including several new activities, especially within the terrestrial and hydrological/fluvial field. DiskoBasis includes collection of data on the following sub-topics:
• Gas flux, meteorology and energy balance
• Snow, ice and permafrost
• Soil and soil water chemistry
• Vegetation phenology
• Hydrology - river water discharge and chemistry
• Limnology - lake water chemistry
• Marine - sea water chemistry

This programme collects data on freshwater phytoplankton, phytobenthos, aquatic invertebrates, fish and plants. It aims to gather sufficient data to assess the biological quality of water bodies and to monitor their change over time. The programme is designed to meet the needs of the ecological classification determined by the Water Framework Directive. It is managed by the Finnish Environment Institute (SYKE), the regional Centres for Economic Development, Transport and the Environment (ELY Centres) and the Natural Resources Institute Finland. Observations are made in the water quality monitoring network and in a specially designed network for anthropogenically eutrophicated lakes and rivers. Monitoring frequency varies between locations and measured elements.

MOSJ (Environmental Monitoring of Svalbard and Jan Mayen) is an environmental monitoring system and part of the Government's environmental monitoring in Norway. An important function is to provide a basis for seeing whether the political targets set for the development of the environment in the North are being attained.

Zooplankton make essential links between producers and predators in marine ecosystems, thereby mediating the CO2 exchange between atmosphere and ocean. They can be indicators of climate variability, and changes in zooplankton species distribution and abundance may have cascading effects on food webs. The West Spitsbergen Current is the main pathway for the transport of Atlantic waters and biota into the Arctic Ocean and the Arctic shelf seas. The West Spitsbergen Shelf coastal and fjordic waters, therefore, are natural experimental areas in which to study the mechanisms by which the Atlantic and Arctic marine ecosystems interact, and to observe environmental changes caused by variability in climate. The main objectives of the zooplankton monitoring are: a) to study patterns and variability in the composition and abundance of zooplankton of the West Spitsbergen Current and the West Spitsbergen fjords and coastal waters; b) to identify the environmental factors responsible for the observed patterns and variability in zooplankton, and to understand possible relations between zooplankton and their environment on different space and time scales; c) to observe and monitor the variability in zooplankton in relation to local and global climate changes.
Multidisciplinary investigations at the LTER (Long-Term Ecological Research) observatory HAUSGARTEN are carried out at a total of 21 permanent sampling sites in water depths ranging between 250 and 5,500 m. From the outset, repeated sampling in the water column and at the deep seafloor during regular expeditions in the summer months has been complemented by continuous year-round sampling and sensing using autonomous instruments in anchored devices (i.e., moorings and free-falling systems). The central HAUSGARTEN station at 2,500 m water depth in the eastern Fram Strait serves as an experimental area for unique biological in situ experiments at the seafloor, simulating various scenarios in changing environmental settings. Time-series studies at the HAUSGARTEN observatory, covering almost all compartments of the marine ecosystem, provide insights into the processes and dynamics of an Arctic marine ecosystem and act as a baseline for further investigations of ongoing changes in the Fram Strait. Long-term observations at HAUSGARTEN will significantly contribute to the global community's efforts to understand variations in ecosystem structure and functioning on seasonal to decadal time scales in an overall warming Arctic, and will allow for improved future predictions under different climate scenarios.

The aim of this programme is to observe the long-term effects of land use practices on waters. Monitoring concerns specific locations where diffuse loads of nutrients or pollutants of agricultural and forestry origin pose a significant risk to water quality. Monitoring includes biological and physico-chemical elements. The programme is part of monitoring according to the Water Framework Directive and is coordinated by the Finnish Environment Institute (SYKE).

The IPY project 'COPOL' has a main objective of understanding the dynamic range of man-made contaminants in marine ecosystems of polar regions, in order to better predict how possible future climate change will be reflected in levels and effects at higher trophic levels. This aim will be addressed by four integrated work packages covering the scopes of 1) food web contaminant exposure and flux, 2) transfer to higher trophic levels and potential effects, 3) chemical analyses and screening, and 4) synthesis and integration. To study the relations between climate and environmental contaminants within a project period of four years, a "location-substitutes-time" approach will be employed. The sampling is focused on specific areas in the Arctic representing different climatic conditions. Two areas that are influenced differently by different water masses were chosen: the Kongsfjord on the west coast of Spitsbergen (79N, 12E) and the Rijpfjord north-east of Svalbard (80N, 22E). The main effort is concentrated in the Kongsfjord. This fjord has been identified as particularly suitable as a study site for contaminant processes, due to its remoteness from sources, and for influences of climatic change, due to the documented relation between Atlantic water influx and the climatic index North Atlantic Oscillation (NAO). The water masses of the Rijpfjord have an Arctic origin and serve as a strictly Arctic reference. Variable Atlantic water influx will not only influence abiotic contaminant exposure, but also food web structure, food quality and energy pathways, as different water masses carry different phyto- and zooplankton assemblages.
This may affect the flux of contaminants through the food web to high-trophic-level predators such as seabirds and seals, due to altered food quality and energy pathways.

The Nuuk-Basic project aims to establish a climate monitoring programme on the west coast of Greenland. During two workshops, one of them in Nuuk with a field survey, the framework for a future climate monitoring programme will be established. The programme builds on the concept and institutions already performing climate monitoring in NE Greenland through ZERO (Zackenberg Ecological Research Operations).

The ZERO database contains all validated data from the Zackenberg Ecological Research Operations Basic Programmes (ClimateBasis, GeoBasis, BioBasis and MarinBasis). The purpose of the project is to run and update the database with new validated data after each successful field season. Data will be available to the public through the Zackenberg homepage, linking to the NERI database. The yearly update depends on each Basis programme delivering validated data in the prescribed format.

The aim of this project is to describe and model mercury accumulation up the Arctic food chain, based on existing knowledge from earlier projects and on new measurements made on frozen tissue samples. The project will contribute to a better understanding of the fate of mercury in the Arctic.

This project investigates how solar UV radiation affects planktonic food webs in the Arctic by changing the nutritional quality of the lower trophic levels. UV radiation has been documented to lead to oxidation of polyunsaturated fatty acids (PUFAs) in phytoplankton. These PUFAs cannot be synthesized de novo by zooplankton, but are key molecules for the marine pelagic food web. A combined approach was chosen, with both sampling of field data (physical as well as biological) and experiments, carried out during two field seasons in Ny-Ålesund in 2003 (April/May) and 2004 (May/June). In 2004, the main part of the fieldwork consisted of an outdoor experiment in which phytoplankton was exposed to different irradiation regimes using natural sunlight. Algae from all the different treatments were used for feeding zooplankton in order to trace the transfer of irradiation-induced changes in the fatty acid composition of phytoplankton to the next trophic level. A number of additional parameters will be analysed as well, combined with the results of an extensive measurement series of both PAR and UV light. The experiment was carried out on the old pier (Gamle Kaia), while the laboratory work took place in the Italian station 'Dirigibile Italia'.

The project investigated small-scale biotic interactions between laminated microbial communities and meiofauna at light-exposed sediment-water boundaries of estuarine lagoons. The production and biological structure of these systems are mainly determined by complex processes at the sediment-water interface which depend on finely scaled patterns, requiring an appreciation of how the biota interact within these scales. We tested whether changing light conditions and the active emergence of the harpacticoid species Mesochra lilljeborgi and Tachidius discipes are mediated by the activity of benthic oxygenic and anoxygenic phototrophic microbes. Two hypotheses were tested which address the question of causality between changing light conditions and active emergence of the harpacticoid copepods. (1) The harpacticoid copepods T. discipes and M. lilljeborgi will enter the bottom water during daylight when oxygenic photosynthesis of cyanobacteria and eukaryotic algae is blocked and conditions at the sediment-water interface have turned anoxic.
(2) Both species will not emerge during dark exposures when transferred to sterilized sediments.

The aim of this project was to recognize life cycle strategies linked to adult development and reproduction in the Northern krill, Meganyctiphanes norvegica, in the Gullmarsfjorden population, by sampling krill and analysing the distribution of sex, body size, moult and reproductive development stages.

The physiological and locomotory reactions to factors that influence the environmental behaviour of Northern krill from the Gullmarsfjorden were studied in terms of swimming energetics, predator avoidance and food utilization. In a newly developed experimental approach, individuals were maintained under defined conditions in flow-through chambers and continuously monitored for swimming activity and oxygen consumption. Chemical, physical and biological parameters were applied and the reaction of the krill determined. Stress levels, defined this way, will serve as a reference for unfavourable conditions in the field. Thermal characteristics of digestive enzymes from the midgut gland were furthermore used to identify the optimum conditions for nutrient assimilation. The results will contribute to the understanding of diel vertical migration, dispersion and aggregation of krill, which, in turn, is essential for the interpretation of ecosystem dynamics and trophic interactions.

The general objective of this research concerns the quantitative and qualitative study of particulate matter retained in natural (sea ice and sediment) and artificial (sediment trap) collectors, in order to determine the main origin (autochthonous or allochthonous) and the relative importance of different fractions of particulate matter, and to follow their fate in the environment. To quantify the autochthonous origin of particulate matter, primary production, nutrient uptake, biomass distribution, phytoplankton community structure and fluxes in the first levels of the trophic chain will be investigated. Studies will be conducted in the sea-ice environment and in the water column and compared to the particle fluxes measured both in the water, using sediment traps, and in the sediment, by radiometric chronology, in order to estimate the different contributions of these habitats to carbon export to the bottom. The zooplankton will be identified and counted, and primary production, nutrient uptake and phytoplankton dynamics will be related to the hydrological structure and nutrient availability in the environment. The Kongsfjord is particularly suitable for the main objective of this research, as it is influenced by important inputs of both atmospheric (aeolian and meteoric) and glacial origin and is characterised by a complex hydrological situation which may promote autochthonous productive processes, thus determining important particulate fluxes.

Most studies of energetics in marine filter feeders have focused on animals living under steady-state food conditions. However, copepods experience highly variable access to food because of food patchiness and behavioural avoidance of predators. For small copepods this is especially important, since they lack the potential for energy storage, e.g. in the form of lipids. After a period of food deprivation, Acartia tonsa shows a compensatory increase in ingestion rate, but only temporarily, on the time scale of the gut filling time.
The copepods are thus able to compensate for the lacking food input. On the other hand, longer periods of starvation (6-14 h) induce elevated ingestion rates that last longer than the gut filling time. Under these circumstances, other energetic factors influence the ingestion rate. Consequently, the energetics of the copepods are highly variable in a patchy food environment.

The aim of our visit to Kristineberg was to study the stable carbon and nitrogen isotope fractionation of Meganyctiphanes norvegica in response to different food supplies, and to evaluate the importance of physiological processes (assimilation and growth) in generating the new stable isotope pattern. This calibration will contribute to the evaluation of the stable isotope method as an approach to studying the food sources of animals in the field.

Since nearly all microalgae are associated with bacteria, and some harbor intracellular bacteria, it is most likely that these bacteria are involved in the development or termination of naturally occurring plankton assemblages. The diversity and development of bacteria associated with microalgae cultures and with phytoplankton succession will be described by molecular analysis of the bacterial community structure and by phylogenetic analysis of the microorganisms involved.

This project studies the organisms involved in phytoplankton succession and the key factors involved, including bacteria-algae, algae-zooplankton and zooplankton-fish interactions. Aspects such as algal-grazer defence mechanisms and the digestibility of algae are core topics.

Effects of UV-B radiation on microbial communities in Kongsfjorden are studied in relation to metal and dissolved organic matter availability.
Flour has been a kitchen staple for centuries. Powdered food is typically easy to cook with and has a decent shelf life. The average Western diet includes plenty of bread, pasta and pastries, all of which contain flour. Believe it or not, there are probably more than 50 different types of flour. That's good news, especially for those with gluten sensitivity. Not only do flours have many different uses in cooking, but they also vary widely in nutritional value. In this article, we'll learn about different types of flour, mainly from a health perspective. We won't be afraid to get our hands dirty with a bit of dough either. You might be surprised to learn that flours come from all kinds of plant, seed, nut and even insect sources. Bon appétit!

Long History of Flour
Humans have been grinding wheat seeds since 6000 B.C., so we've known for quite some time how to use flour as food. The word "flour" actually comes from the word "flower", since in the old days it meant the best part of the meal. Fine, powdery white flour is found in supermarkets and kitchens all around the world. The problem with white flour is that it typically has limited nutritional value. Plus, white bread quite often contains many unhealthy additives. For example, if you carefully read the label of a loaf of white bread, you might find:
- Processed salts
- High fructose corn syrup
- Trans fats
- Oxidant chemicals ("treatment agents")
- Reducing agents
When wheat is highly processed, it loses calcium, iron, vitamins and important trace elements. Some evidence suggests it might even increase allergy and asthma symptoms. Plus, the excess carbohydrates can lead to weight gain and insulin resistance.

Aged and Bleached Flour
Before wheat flour reaches the consumer, it is set aside to age through exposure to the air. This process not only makes the flour whiter in color, but it also improves its physical characteristics for baking purposes. A chemical process, called bleaching, can mimic the aging process. Bleaching agents like benzoyl peroxide or chlorine gas artificially accelerate the flour's aging. Unfortunately, this may further reduce the nutritional value of the flour. Plus, a byproduct of bleaching, called alloxan, may cause diabetes.

Before we continue and discover alternatives to wheat flour, it's important to understand the concept of insulin resistance. After a meal, nutrients are absorbed into your bloodstream from your intestines, which causes your blood sugar to rise. This triggers your body to release insulin, which allows the blood sugar to enter your cells. If glucose doesn't get into the cells, it's not useful. If you have insulin resistance, something happens to your cells and they don't respond normally to insulin. It's kind of like a jammed door lock. This means glucose stays in your bloodstream and does not enter your cells. If this situation gets worse, it can lead to Type 2 diabetes. Some symptoms of insulin resistance might be:
- Increased appetite
- Brain fogginess and trouble focusing your thoughts
- Elevated blood sugar
- Weight gain and difficulty losing weight
- High cholesterol
- High blood pressure

Wheat Flour Alternatives
As you can see, wheat flour has many potential health and nutritional issues. Additionally, those who have gluten sensitivity or celiac disease are especially prone to unpleasant gastrointestinal side effects if they ingest wheat-containing foods. Gluten is found in wheat, barley and rye, and oats are often cross-contaminated with it. The good news is that there are several gluten-free types of flour available.
Even if you do not have gluten sensitivity or celiac disease, these alternatives are healthier than wheat flour.

As an alternative to wheat-based flour, coconut flour offers many advantages. For starters, it's a great source of saturated fats, which your body uses more efficiently compared to other fats. Because it's metabolized more slowly, coconut flour doesn't cause a surge in blood glucose levels like white flour does. Foods that are metabolized slowly are said to have a low glycemic index (GI), and a low GI helps prevent insulin resistance. Coconut flour is also rich in dietary fiber, which may help lower bad cholesterol. When cooking with coconut flour, remember that it's very absorbent: 1/4 or 1/3 cup of coconut flour absorbs as much liquid as a cup of wheat flour. Also, coconut flour tends to be dry, so you may need to add more eggs to the mix for moisture.

Chickpeas, also called garbanzo beans, are packed with nutrients. These little morsels are the basic ingredient in the popular Middle Eastern dip called hummus. This legume is full of fiber and also carries a favorable low glycemic index. Plus, chickpeas are rich in many elements like folate, copper, iron and zinc. If that's not enough, the food is also rich in antioxidants, which may help reduce the risk of illnesses such as Alzheimer's disease and some cancers. Chickpea flour can be used to make pancakes, crackers, crepes, cakes, muffins, fritters and even onion rings. The flour tends to be dense, so if you need a lighter dough, mix in a bit of wheat flour. For those who can't eat gluten, rice flour can also make the batter lighter.

Rice flour can be of the brown or white variety. The good news is that all types of rice are gluten-free. However, brown rice is much more nutritious than white rice, since the whitening process strips the rice of vitamins, minerals, fiber and iron. For those who love pasta, rice flour should find its way into your pantry. It's especially useful for making noodles and pancakes, and it is also frequently used as a soup thickener. White rice flour is easier to bake with and is the best choice for lightening other heavy flours like chickpea flour.

A single ounce of almonds contains 6 grams of protein, 4 grams of fiber and significant amounts of vitamin E, magnesium, riboflavin, calcium and potassium. A rich source of antioxidants, almonds may also decrease your LDL or "bad" cholesterol levels. Even though almonds are packed with calories, studies have shown that those who snack on almonds tend to have more control over their appetite and eat less during the day. Almonds have been shown to be beneficial for your heart, skin, digestion and bones. These nuts may also help prevent diabetes and some cancers. Many nutrition experts consider almonds a superfood due to their dense nutritional value. Because the almond isn't technically a starch, almond flour is especially attractive for those on low-carb diets. When shopping for almond flour, make sure the package states that it contains 100 percent almonds, especially if you are gluten-sensitive. You can substitute almond flour for wheat flour at a one-to-one ratio, but you may need to add an egg for better consistency. When you cook with almond flour, expect a nutty taste and less fluffiness when it comes out of the oven.
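As a rough aid, the sketch below turns the substitution guidance quoted above (coconut flour at about 1/4 to 1/3 cup per cup of wheat flour, almond flour at one-to-one) into a tiny calculator. This is an illustrative example only: the numeric factors are simplifications of that guidance, not precise baking science, and real recipes still need moisture adjustments such as the extra eggs mentioned above.

```python
# Illustrative sketch: estimate how much alternative flour replaces a
# given amount of wheat flour. Ratios follow the article's rough guidance.

SUBSTITUTION_RATIOS = {
    "coconut": 0.3,  # ~1/4 to 1/3 cup per cup of wheat flour (very absorbent)
    "almond": 1.0,   # one-to-one, though an extra egg may be needed
}

def substitute(wheat_cups: float, flour: str) -> float:
    """Return the suggested cups of `flour` to replace `wheat_cups` of wheat flour."""
    return wheat_cups * SUBSTITUTION_RATIOS[flour]

# Example: a recipe calling for 2 cups of wheat flour.
for flour in SUBSTITUTION_RATIOS:
    print(f"{flour}: about {substitute(2.0, flour):.1f} cups")
```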
Buckwheat, also gluten-free, is actually a seed, not a grain. This makes it high in protein and fiber, and buckwheat is especially rich in manganese, magnesium and phosphorus. Some evidence shows that buckwheat may reduce the risk of heart disease and diabetes. Not surprisingly, this seed has a low glycemic index and is rich in antioxidants. Many vegetarians consume a lot of buckwheat, since it's a good plant source of protein. Buckwheat flour can be used to make pancakes, brownies and noodles. You can even purchase premade buckwheat noodles, called soba noodles in Japan, which are a common dish in many Asian countries.

If you are seeking a protein-packed alternative flour source, then consider cricket flour. The name isn't meant to be cute; it's literal. Cricket flour, or cricket powder, is made up of finely ground, dried crickets. These critters are packed with protein: up to three times more protein per ounce than beef or chicken sources of protein. Even the almond takes a back seat to crickets in protein concentration. While you may cringe at the idea, consider that millions of people worldwide consume insect sources of food. Plus, if you care about sustainability, crickets are up to 20 times more efficient as a protein source than cattle. Afraid it will taste bad? Think again: taste tests typically show that foods containing cricket powder are considered tasty, even by Western palates. You can buy or make protein bars, smoothies and baked goods with cricket flour. Many preparations of cricket flour come premixed with other flours for easier handling and consistent baking outcomes. Again, gluten-sensitive people should read the labels carefully.

Don't get stuck thinking that wheat flour is your only option. Have fun trying out different flour sources. You'll be sure to find a combination that fits your nutritional goals and taste. If you want to know what foods will help you restore your natural vitality and get slim and stay slim, then check out the Best Foods That Rapidly Slim & Heal in 7 Days program.
Small and playful, Pugs are among the most beloved companion dogs these days. In the beginning they were adored by the Chinese dynasties and even by Tibetan monks, and they quickly became popular throughout Europe and beyond. The Pug has been illustrated in paintings, used as a cartoon character and featured in movies and TV series. Its temperament is balanced and friendly, and it is full of love too. Like any other dog breed, this one is characterized by certain peculiarities. Even so, it is often confused with the French bulldog or with miniature versions of the Mastiff or Bullmastiff. Pugs are also unrelated to the Shar-Pei.

- Dog Breed Group: Companion Dogs
- Height: generally 10 inches to 1 foot, 2 inches tall at the shoulder
- Weight: generally 14 to 18 pounds
- Life Span: 12 to 15 years

There is much debate regarding the origins of the dog breed called the Pug, but the most likely answer is China. There is also discussion regarding its ancestors: some say it is akin to the Pekingese, while others argue it may be linked to the Bulldog and the Mastiff. In terms of history, it is known for sure that this dog breed served as a companion to the Buddhist monks of Tibet. Over time, the Pug became popular in Europe, especially among royalty. For example, Napoleon's wife, Josephine, had such a dog and used it to send secret messages to her husband in prison. Both Pugs and Pekingese were brought to England in 1860, and the breed's popularity has grown continuously since it was recognized as a distinct breed in 1885.

Regarding the breed's name, there are two theories. The first says that "Pug dog" means "dog with the face of a monkey." This is because these dogs became fashionable in Britain at the same time as marmoset monkeys, which have very similar faces; Germans call the dog Mops because that was the UK slang word for marmoset monkeys. The second theory says the name has a Latin origin meaning "clenched fist": in Latin, "pugnus" means fist, and the name refers to the dog's wrinkled head, which looks very much like a fist. Among the other names used for this breed are Chinese Pug, Dutch Bulldog, Mastiff Mini and Carlin.

- A dog similar to the Pug we know today belonged to the Buddhist monasteries of Tibet before 400 BC.
- Originally from China, the Pug was considered a real treasure by Chinese emperors. These dogs had luxurious living conditions, and some even had guards at their disposal who watched their every step.
- They received different names in the European countries where they gained popularity, such as Carlin in France, Dogullo in Spain, Mops in the German kingdoms, Caganlino in the Italian peninsula, Lo-sze in China and Mopshond in the Netherlands.
- The Pug is a small dog with a jovial character, a round and wrinkled head, a short nose, a compact body and a curled tail.
- The Pug is an active dog that needs daily exercise. An hour's walk is enough to maintain its weight and physical condition at normal levels.
- The Mops has an inner calm; it is sociable, sensitive, affectionate and intelligent, but it can also be stubborn, and it tends to snore.
- A malleable dog, funny and cuddly, the Mops is easily kept under control and adapts very well to life in an apartment.
- These dogs love to romp and run, but intense workouts should be avoided. They must also be protected from very hot and very cold weather.
- They are prone to deformations of the mouth and nose, eyelid and eye problems, heat stroke, hip dysplasia, Legg-Perthes disease, hip dislocations and canine encephalitis.
- William Hogarth was the first person to paint a Pug. Goya also painted his famous Pug in 1785.

The Pug's origins have been placed in China, and the breed's age is estimated at over 2,000 years. The first written description of a similar dog, called a "dog with short snout," was recorded around the year 600 BC. The comprehensive dictionary of Chinese characters ordered by Emperor Kang Hsi around the year 950 AD records a "short-legged dog" and a "dog with short head," descriptions which, according to experts, refer to the Pug.

However, there is some controversy about its origins. Some experts believe that the Pug comes from the Low Countries and that Dutch traders actually brought it from the Far East; its ancestor is believed to be a short-haired variety of Pekingese. Another theory says that the Pug is the result of interbreeding between varieties of small bulldogs or French Mastiffs.

From the sixteenth century the Pug was a toy dog for aristocrats at the European royal courts, reaching peak popularity in the Victorian era as a favorite of royalty in Europe and even Japan. By the mid-sixteenth century, the Pug was popular in the Netherlands as well. Finally, when British soldiers attacked the imperial palace in Beijing in 1860, they found several Pugs and Pekingese, which they brought back with them to England. The breed's popularity continued to grow, and specimens were soon imported to the United States. The Pug was admitted to the American Kennel Club in 1885.

Over time, Pugs became famous all over Europe, but under different names, such as Carlin in France, Dogullo in Spain, Mops in the German kingdoms and Caganlino in the Italian peninsula. Pugs also played small but memorable roles in history: one saved Prince William of Orange from Spanish troops by announcing their arrival, and a Pug accompanied King William III and Queen Mary II to the throne in 1688. In 1790, Josephine, wife of Napoleon, used a Pug to send messages to Napoleon while he was imprisoned, hiding the papers in the dog's collar.

As for art, William Hogarth's "House of Cards" is the first painting in which a Pug appeared; the breed's artistic debut depicted it playing cards. In addition, Goya painted a Pug in 1785.

The Pug is categorized as a companion dog, but it is also referred to as a toy dog; it can safely be said that the Pug is the largest of all toy dogs. Females are on average 2 inches shorter than males, measuring between 10 and 12 inches, so male Pugs are around 12 to 14 inches tall. Both weigh between 14 and 18 pounds, but they can easily exceed the maximum because of their tendency to gain weight. They are categorized as small to medium-sized dogs.

The Pug's personality is impressive despite its small size. They are among the funniest dogs in the world. They are also very affectionate, they get attached quickly, and they prefer human company to canine company. They need attention and, in turn, they offer even more attention to their owners, often following them from one room to the other. The Pug makes do with a small living space, but it tends to occupy the whole house if its owner lets it.
Pug dogs are lively, loyal, affectionate and loving, with a happy disposition. They are playful and charming by nature. Intelligent and mischievous, they have a big heart. However, they can be very stubborn at times, as well as arrogant. Because they are intelligent dogs, they get bored easily, especially with the same daily routine. Pugs are very sensitive to tone of voice, so they need patient owners who don't yell at them. They make good watchdogs, but they don't bark excessively.

Pugs are full of life, exuberant, loyal, affectionate and caring, always with an optimistic attitude. They are playful and fascinating, intelligent and mischievous, though they can become stubborn and inflexible at times. Pugs love to socialize and become best friends with everybody. Remember, however, that a Pug requires attention and becomes jealous if its owner ignores it.

The Pug is among the dog breeds at significant risk of congenital spinal abnormalities, along with breeds such as the Bulldog, French Bulldog, Yorkshire Terrier and Boston Terrier. The greatest danger with such abnormalities is the neurological disorders that can result.

- Diseases of the vertebrae. Wedge-shaped vertebrae are the most likely to cause neurological complications when affected. Their faulty development leads to spinal deformities such as kyphosis and scoliosis. The effects become dramatic over time, resulting in general instability of the spine and sprains or fractures of the vertebrae. Symptoms include weakness in the hind limbs, paralysis, urinary and fecal incontinence, and chronic spinal pain. The most probable cause is an incomplete blood supply to the vertebrae, which prevents normal development. Treatment ranges from conservative management to surgical procedures.
- Necrotizing meningoencephalitis. This is an inflammatory disease of the central nervous system that most often occurs in young, small dogs; the Chihuahua, the Pug and the Maltese are the breeds most prone to it. The meninges and brain of affected dogs become inflamed for reasons that remain unknown, though specialists suspect an autoimmune disease. Necrotizing meningoencephalitis is fatal without aggressive immunosuppressive treatment and, unfortunately, the fatality rate remains high even with such treatment.
- Entropion. In this eye condition, the eyelids roll slightly inward so that the eyelashes rub against the surface of the eyeball, irritating it. Sometimes the cause is an irregular spasm of the eyelid; sometimes the process is continuous. The problem is usually inherited and starts at a young age, between 1 and 2 years, and it usually affects both eyes. In acute forms the treatment is surgery, in which excess skin is removed.
- Hip dysplasia. This represents a major problem for Pugs: no less than 60 percent of all Pugs are affected. The Orthopedic Foundation for Animals, which releases these statistics, says the Pug is the second most affected breed. Hip dysplasia is the abnormal development of the hip joint. It is usually seen in large dogs, but the Pug is an exception. The condition is characterized by excessive laxity of the joint, associated with degenerative processes. The factors that cause the disease are varied: rapid growth, excessive exercise, unbalanced nutrition and heredity.
- The Pug is one of the dog breeds most affected by demodectic mange, also known as Demodex. The condition is associated with a weakened immune system. Demodex can be treated fairly easily with appropriate medications and lotions available from veterinary pharmacies. Demodectic mange is caused by a microscopic mite called Demodex canis. All pups are born free of the disease, which, unlike sarcoptic mange, is not contagious; however, mothers carrying the parasite transmit it to their puppies.
- Stenosis of the nostrils and elongated soft palate. This long name simply means that the airway is narrowed, which makes a Pug breathe with difficulty and snore. Only surgery can fix the problem, and it is warranted only when the condition is so severe that the dog cannot breathe properly and its symptoms keep getting worse.
- Obesity. It is often seen in small dogs, especially those with a lower energy level, such as the Pug. Although it may seem harmless at first, obesity is a medical problem that can lead to complications over time, including pancreatic disorders and bone, joint and kidney problems. An overweight dog should eat according to a food plan made especially for its nutritional needs; keeping a Pug slim is much better than having to put it on a diet.

Pugs are dogs with short, strong and straight legs. They need daily walks and enjoy energetic games that maintain their health. A Pug owner should be careful not to overdo it and should stop exercising the dog at the first sign of labored breathing. Although they are considered toy dogs, Pugs need more exercise than other toy breeds because they are prone to obesity. The best way to maintain a Pug's weight is to keep daily walks consistent and take it to the park for a short run.

A Pug's perception of its own size does not match reality: when it comes to size, a Pug feels like an elephant. Training such a dog therefore requires good coordination and cooperation, and a high-quality, intelligent training style is the only one that works with Pugs. These dogs are entertaining to watch, and their actions can win over any owner. Especially during its first months, a Pug is active, temperamental, alert and stubborn. It learns everything it sees its family members doing, and these habits are difficult to correct afterwards.

The Pug is a big dog in a small body, and this must be considered during training. Endowed with a strong will and a sharp intellect, the Pug gets bored quickly if a command is repeated too often. Despite being a small dog, it proves effective as a guard dog without being aggressive in its approach. Any tendencies that look violent or aggressive should be corrected as soon as they are noticed, because they are considered a deviation from the breed standard. The most influential person in the Pug's life should be the one training it: that person gets the most attention from the dog, and the best results can be obtained in the shortest time.

Pugs are dogs that tend to get fat. Since they would happily eat all day long, it is difficult to keep a Pug in great shape. A dog goes through four periods of life: puppy, adolescent, adult and senior, and the recommended amount of food changes with each. A Pug puppy should be fed 3 times per day with 1/3 cup of food each time. An adolescent Pug should be fed twice per day with 3/4 cup of food each time.
As for an adult Pug, it should eat twice per day, half a cup of dry dog food at each meal. Finally, a senior dog should eat less: 1/3 cup, twice per day. Taking these guidelines into consideration is essential for your dog's health. Every type of dog food comes with its own feeding instructions, but vets usually advise against following those blindly, because they are not always accurate.

Pugs have a smooth and shiny coat that is easy to keep clean with a boar-bristle brush. They shed seasonally, but abundantly. As for bathing, they should be washed only when absolutely necessary and only where needed, not all over. After bathing, they must be towel-dried or blow-dried quickly, because they are very sensitive and catch cold easily.

During a Pug's cleaning sessions, special attention should be paid to the folds of skin on its face and neck, because these folds allow external parasites to attach to the skin. They should be carefully cleaned and disinfected with special solutions recommended by the vet; a simple cleaning with water is also effective. Cotton should be avoided when cleaning a Pug's wrinkles because it might irritate the skin. Because a Pug's eyes are so big and prominent, they can get infected very quickly, so its owner should clean them carefully 2 to 3 times per week, or whenever there are secretions around the eyes. In addition, owners should brush their dog's teeth several times per week.

Even though the Pug is an ancient and quite famous breed, people still wonder whether these dogs get along well with children and other pets. Pugs are children's best friends because they are playful and childish; they can play for hours with children or just sit with them. Considering the dog's size, however, it should not be left unsupervised with children, who might hurt it without realizing. On the dog's side there is no problem: Pugs are not a danger and are known not to fight back. A Pug is therefore compatible with family life from every point of view. For safety reasons, puppies should grow up among all family members rather than in isolation. When it comes to interaction with other dogs or pets, the Pug is tolerant and behaves impeccably.

If we take a look around, we can easily see that the Pug has become more popular and sought-after than the Golden Retriever. Once at the top of the list, the Golden Retriever was people's top choice for a friendly, loyal companion dog. Now the Pug comes first, even with its wrinkles and bulging eyes, and even though it is small and sometimes fat. Considered both a classic and in fashion, the Pug has never been as appreciated as it is these days, not even when it served as a lap dog for royalty. This is one of the reasons some breeders trick people and sell them dogs that are not purebred, taking advantage of buyers' lack of knowledge about the breed's characteristics. The Pug is a sickly and sensitive dog, so choose wisely!
Innovative sports venues around the world are increasingly promoting economic development, human wellbeing and renewable energy.

Investment in industry, innovation and infrastructure is a crucial driver of economic growth and sustainable development, both in modern states and in the developing world. And in line with the UN's Sustainable Development Goals (SDGs), the sports industry is becoming an increasingly important contributor to this global effort.

Over the past few years, the struggle to attract host cities for mega sports events like the Olympic Games has been well publicised, and the negative economic impact of infrequently used, concrete stadiums often criticised. But now, venues, architects, urban planners, charities and event organisers are becoming wiser to the needs of the modern world, and designing and upgrading sporting infrastructure with a much longer-term view.

A model of sustainability in the heart of Africa

A shining example of this approach in action lies in the most unlikely of places. Rwanda, the land-locked African country devastated by war and genocide in the 1990s, is now a vibrant, forward-thinking model for economic development, with sport forming a central part of its long-term strategy.

In particular, the lack of a pre-1994 tradition in cricket makes that sport an ideal vehicle for on-going reconciliation there and, recognising that, the British charity Cricket Builds Hope (originally the Rwanda Cricket Stadium Foundation) set about raising around £1m for the construction of a new stadium to host the improving national team.

Situated a 30-minute drive from the capital city Kigali, and the first in a series of infrastructural projects in an area designated for sports, the Gahanga Cricket Stadium was opened by President Paul Kagame in October 2017. The three-vault pavilion, which resembles a bouncing cricket ball as well as the rolling Rwandan hills that form its stunning backdrop, is a symbol not only of the country's renewed sense of optimism, but also of what can be achieved when national policy is aligned with sustainable construction methods.

Indeed, the entire project was embedded within the government's Rwanda Vision 2020, which sets forth the country's goal of transitioning from an agricultural to an industrial economy. Light Earth Designs, an architectural firm that specialises in renewable, locally made projects that are culturally appropriate for communities in the developing world, carried it out over a seven-year period – and the impact on the local area has been striking.

"We tried to find a way to work with that transition of the economy by using local materials to the extent possible, and to try and empower local communities with new skills," explains Dr Michael Ramage, a founding partner at Light Earth Designs, who also leads the Centre for Natural Material Innovation at the University of Cambridge. "Material that was excavated from the foundations went into the roof [of the pavilion], and we taught [the workers] how to build bricks by hand, using a Spanish technique of thin-tile vaulting. They learned it very quickly, and now can very easily move on to other, similar projects."

As well as providing a legacy for local industry – labour was predominantly sourced through the Vision 2020 Umurenge Programme, a government-led social protection programme aimed at the two poorest categories of the population – the venue is also supporting human wellbeing by promoting health, education and female empowerment.
The pavilion, for example, will serve as an HIV testing centre for the community, while Cricket Builds Hope has since rolled out a programme designed to challenge gender stereotypes and gender-based violence in Rwanda through cricket. In 2018, more than 400 women aged 15-25 were taught leadership and financial literacy skills as part of the programme, alongside the basics of how to bat and bowl.

Sustainable venues in the mould of the Gahanga Cricket Stadium can also help raise awareness of a country and its problems and needs, boosting tourism and investment. The captain of the Rwandan men's cricket team, Eric Dusingizimana, is facilities manager of the complex, and doubles up as a tour guide when increasing numbers of visitors take in the stadium en route to seeing the fabled Rwandan gorillas further inland.

While the building is uniquely Rwandan, many elements of the project can be applied to other contexts. Notably, the stadium has been built using earthquake-resilient geogrid technologies – which are also cheaper and easier to transport than steel alternatives – due to Rwanda's status as a moderate seismic zone. "That's a big step forward," says Ramage, "and we can now confidently say that we can design this for other at-risk areas."

Revolution in renewable energy

But it's not just in the developing world that sport is having an impact on industry and infrastructure. In Europe, sustainability leaders are finding new ways to ensure that large, multi-purpose stadiums serve their clubs and wider communities not just once a week, but all year round.

With over half of the world's population now living in cities and consuming more and more electricity, one area of growing urgency for the sports industry is renewable energy. And in a recent development, some high-profile football clubs have re-evaluated their sustainability objectives in order to embrace this shift.

Last November, Arsenal FC became the first Premier League club to install a battery storage system, which is set to provide revenue for the club and renewable energy for the region, while supporting the UK's climate goals. The new battery is charged with electricity supplied by Octopus Energy (and generated from solar power) and was designed by Pivot Power, a company that is working to accelerate the UK's move towards a lower-carbon economy.

"In very simple terms, the battery in the Emirates Stadium is the same type of battery as in a mobile phone," Pivot Power's director of business development, Edward Sargent, tells The Sustainability Report. "It's a lithium-ion battery that can be charged up with electricity and then will release that electricity when it is called for.

"Sporting venues have a relatively unique configuration in that they have large amounts of power capacity available for fixtures but a lot of the time that capacity is not used," he continues. "Utilisation of this 'spare capacity' is a source of potential revenue for the venue and, more importantly, a source of storage potential for renewable energy in the region."

Unpredictable weather previously presented a barrier to the adoption of renewable energy in the UK, due to the lack of any guarantee that sustainable sources of energy, such as solar and wind, will be available during peak demand.
But battery storage removes this barrier by storing electricity for when it's most needed, reducing the use of diesel and gas 'peaking plants', and providing a revenue opportunity for stakeholders through the sale of electricity to the National Grid for frequency balancing.

"It is a perfect blueprint," adds Sargent. "The burning of fossil fuels releases greenhouse gases and is the main cause of rising levels of CO2 in the atmosphere. In order to transition to a low-carbon economy, it is widely agreed that we need to significantly reduce our use of fossil fuels. By using renewable energy, the battery project is helping the UK electricity system do exactly this."

A nucleus for urban development

Meanwhile in the Netherlands, the Johan Cruijff ArenA – home to AFC Ajax and the Dutch national team, which welcomes more than two million fans each year to matches, concerts and other events – has been operating a similar system that predates Arsenal's by four months, and which provides a safe back-up store of energy for the stadium in case of an outage.

This system reuses battery packs provided by Nissan, thereby contributing to the circular economy for electric car batteries. But more importantly, and like its London equivalent, it provides a more efficient and sustainable energy ecosystem for the stadium and its neighbours, reducing the use of fossil fuel-burning generators. The impact of having this kind of system connected to the Dutch electrical grid, so that it can trade the batteries' available storage capacity to help balance rising energy demands in Amsterdam, is "really huge", according to the ArenA's Director of Innovation, Henk van Raan.

"If we use this battery and it's connected to the grid, then the grid operator can shut down and dismantle the coal powerhouses – so that now, the coal powerhouses are producing energy for just a few minutes per day," he explains. "We sell this system to the grid operator, 24/7; and the moment we have a match or event, the operation mode shifts from balancing the grid to back-up power for the venue. Then, when the event is over, we shift the mode from back-up to balancing."

The system has a storage capacity of 2.8 megawatt-hours – enough to fully charge 500,000 smartphones – and with a lifecycle of more than ten years, it will save 116,693 tonnes of CO2, providing benefits across the region and furthering the Netherlands' low-carbon ambitions. The arena also has 4,200 solar panels on its roof that can feed power to the battery system in the rack below, and it is installing more and more bi-directional vehicle chargers in the car park in anticipation of a boom in the popularity of electric cars.

"[Our stadium is] the nucleus for urban development in Amsterdam. We are heavily involved in urban planning and sustainability," says Van Raan. "It's in the interests of every stadium to be linked with citywide development, and renewable energy is a very important movement in which the stadium can play a role."

From East Africa to western Europe, sports venues are implementing new, innovative ideas to stimulate development in their local communities and align with a broader vision put forward by their city or country. And with sustainable blueprints like these for inspiration, the promotion of industry can only be expected to become a bigger priority – with more tangible results – in future sporting infrastructure projects across the globe.
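As a quick plausibility check on the storage figure quoted above, the 500,000-smartphone comparison follows from a simple unit conversion. The per-phone battery capacity used below (roughly 1,500 mAh at 3.7 V) is an assumption, not a number given in the article:

```python
# Sanity check: how many smartphone charges fit in a 2.8 MWh battery?
CAPACITY_MWH = 2.8            # quoted capacity of the ArenA system
PHONE_WH = 1.5 * 3.7          # assumed phone battery: 1,500 mAh at 3.7 V ~= 5.55 Wh

charges = CAPACITY_MWH * 1_000_000 / PHONE_WH   # MWh -> Wh, then divide
print(f"{charges:,.0f} full charges")           # ~504,505, in line with the quoted 500,000
```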
How else can sports venues promote economic development, energy efficiency and human wellbeing in their surrounding communities? Which other sports venues are doing a particularly good job in this regard? Let us know in the comments below.
What Is a Router Password and How Can You Change Your Router Password?

What Is a Router Password and Why Do You Need One?

There is a lot of confusion surrounding router passwords and Wi-Fi passwords. First of all, your router password isn't your Wi-Fi password – these are definitely not the same thing. Your Wi-Fi password is used by guests when they want to go online from your place, while a router password lets you access your router's settings.

When you receive your router, it will probably come with a default password, which is usually easy to remember and easy to guess. Sometimes the router doesn't even have a password, making access to its settings even easier. Default router passwords can usually be found in the manuals you receive with the router itself. What's more, there are lists of router IPs, usernames, and passwords online, allowing you to look up the default password by router name. This makes it even easier for others to log into your router.

After getting your router, it's best to change the router's password for security reasons. In fact, default router passwords are meant to be changed. Otherwise, anyone who has access to the password will be able to change the router's settings, and you may even get locked out of your router. Once someone logs into your router, they can change the router's password and hijack the network, which is by far one of the outcomes users most want to avoid. The best-case scenario of not changing your password is someone else changing your Wi-Fi password or fiddling with the DNS server settings. The worst-case scenario is your internet connection being used for illegal purposes, your computer files being accessed by others, and hackers introducing viruses and other malware into your home network.

Another reason to protect your router with a password is that it will save your bandwidth. With an unprotected router, anyone nearby can exploit your internet connection, which would make your connection slower. For people living in apartment buildings, this is often the case, as they are surrounded by many neighbors who are within range of their internet connection.

Moreover, you do not want to be blamed for a crime you didn't commit. Anyone who is using your internet connection appears to be you online: people connecting through your router share your IP address. If they're doing something illegal, you will seem to be the one responsible. This could lead to fines or even jail time, and it is pretty difficult to plead your case when all routes end at your doorstep. If you're wondering how to change your router password, just keep on reading.

How to Change Your Router Password

Resetting the router's default password is fairly simple and straightforward, and you won't need to be a brilliant IT mastermind to complete the process. Here is how to get it done.

First, reset the router to its default settings. This means all changes will be wiped, and the router will be back in its original state. To reset the router, hold the reset button for a while (10-30 seconds; the exact length depends on your router). Holding the reset button too briefly will simply reboot the router without returning it to its factory settings. If the reset button is recessed inside the router, you might need to use a thumbtack or a pin to reach it and press it.

Next, connect your computer to an Ethernet port in your router.
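The next step will ask for your router's IP address. If the manual isn't handy, you can usually recover the address from your operating system's routing table, where the router appears as the "default gateway". Here is a minimal sketch; it assumes a Linux system with the standard ip tool and uses only Python's standard library (on Windows, look for the "Default Gateway" line in ipconfig output instead):

```python
import subprocess

# Ask the OS routing table for the default gateway, which is
# normally your router's address. Assumes Linux with the `ip` tool.
def default_gateway() -> str:
    out = subprocess.run(
        ["ip", "route", "show", "default"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Typical output: "default via 192.168.1.1 dev wlan0 proto dhcp ..."
    return out.split()[2]

print("Router admin page: http://" + default_gateway())
```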
While many routers let you log in through a browser-accessible administrator page, some disable administration via wireless connections. Therefore, connect to your router with an Ethernet cable before accessing its configuration page.

Afterwards, enter your router's IP address into the browser's address bar. You can find the address in your router's manual or online. Here are some standard IP addresses:
- Apple: 10.0.1.1
- Belkin: 192.168.1.1 or 192.168.2.1
- ASUS: 192.168.1.1
- Buffalo: 192.168.11.1
- Linksys: 192.168.1.1 or 192.168.0.1
- DLink: 192.168.0.1 or 10.0.0.1
- Netgear: 192.168.0.1 or 192.168.0.227

Having done all of that, type in the default admin name and the default admin password. There is usually a sticker on your router (on the side or at the bottom) with the router's default username and password; you can also find them on the manufacturer's website. Often, the username is something as simple as admin, while the password is simply blank.

Finally, change your router's admin password. This is generally done in the security settings, though this may vary. When changing the password, make it strong and complex; the tips below will help.

Tips on Creating a Secure Login

Here are some ways to make sure no one ever figures out your password.

First of all, use longer passwords, meaning between 12 and 15 characters. The longer your password is, the better, because longer passwords are more difficult to crack.

Include capital letters, symbols, and numbers in your password. For instance, you may use $ instead of S, @ instead of A, or simply include symbols such as #, %, or &.

Do not use passwords that can be easily guessed. Social media gives out lots of information about you – your birth date, the names of your pets or children, and so on – so do not use your birth date or a pet's name as a password, as these are easily discoverable.

Using a passphrase has recently been recommended by experts. A passphrase should be quite long, around 20 characters, and formed of random words, including symbols, numbers, and lowercase and uppercase letters. For example, string together words that you will be able to remember, but that others won't be able to guess – something along the lines of Blue34Pasta$nicker$rb00ks#.

You can also use a password manager. These programs create strong passwords for you: you must remember only one password, the one that lets you log in to the program, and the program itself stores and creates all the other passwords for you.

Keep your devices secure. There is malicious software that records your keystrokes and is used to steal people's passwords. Using antivirus software and keeping your operating system up to date will increase your security.
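To make the passphrase and password advice concrete, here is a minimal sketch using Python's standard secrets module, which draws on the operating system's cryptographically secure randomness. The short word list and the formatting choices are illustrative assumptions, not a prescription:

```python
import secrets
import string

# Illustrative word list (an assumption); in practice, load a large
# dictionary file so the passphrase has enough entropy.
WORDS = ["blue", "pasta", "snicker", "books", "falcon",
         "marble", "garden", "tiger", "copper", "river"]

def make_passphrase(n_words: int = 4) -> str:
    """Random words joined by a symbol, plus a number: ~20+ characters."""
    words = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
    symbol = secrets.choice("!@#$%&")
    number = secrets.randbelow(100)
    return symbol.join(words) + str(number)

def make_password(length: int = 15) -> str:
    """A random 12-15 character password mixing letters, digits, symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%&"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_passphrase())  # e.g. Tiger$Copper$Blue$Garden42
print(make_password())    # e.g. q7#Vf&2mZp1xR9w
```

Note that secrets, unlike the random module, is designed for security-sensitive use; random is meant for simulations, not passwords.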
VPN for Routers

One way to keep your router more secure is to invest in a reliable VPN. A VPN (Virtual Private Network) makes sure that your connection is always secure by acting as a protected tunnel between you and the internet. VPNs protect you from censorship, snooping, and interference. What's more, they allow you to change or mask your location, which lets you surf the net anonymously and bypass geographical restrictions. Public Wi-Fi hotspots can lure in hackers who want to acquire your private information (such as your bank account details) and perhaps commit identity fraud; with a VPN, you can use hotspots freely and stay completely safe.

Another danger on the internet is phishing websites, which are full of various malware, not easy to spot, and just waiting to infiltrate your devices. A VPN will notify you about these types of websites before you get the chance to visit them.

Some devices do not support VPN services. These include smart TVs and digital cameras, both of which can be targeted by cybercriminals because of their lack of protection. That's where installing a VPN on your router comes in handy: if your router runs a VPN, all the electronic devices connected to it stay protected as well. This means not only your computer, but also your smartphones, tablets, laptops, and smart televisions. This is great for the home and for businesses, too.

Many VPN services have a strict no-logs policy, which means that they are committed to keeping your privacy. They will hide your IP, encrypt your communications, let you stream your favorite content while avoiding spying, unblock censored websites, and help you fight ads.

In short, here are the main benefits of having a VPN on your router:
- You can use the VPN services on all kinds of devices
- You don't have to switch it on and off – the router will always connect through the VPN
- If you have a VPN on your router, all your devices have it as well

There are also specialized VPN routers, some of which are:
- Linksys WRT 3200 ACM router
- Linksys WRT1900ACS
- Asus RT-AC86U router
- Asus RT-AC5300 router
- Linksys WRT32X Gaming Router
- D-link DIR-885L/R router
- Netgear Nighthawk X4S VDSL/ADSL Modem Router D7800

AV for Routers

To keep your computer and smartphone protected, you need AV (antivirus) software, and you might already have an antivirus on your computer. However, routers are also often targeted by hackers nowadays, which is why they need protection, too. There are numerous examples of people's routers being attacked by cybercriminals, leaving the owners unable to connect to their network and go online. In fact, attacks on routers are becoming more and more common.

Without an antivirus, someone may be able to infiltrate your router, monitor you, and steal your data. What's more, when your router is hacked, attackers also gain access to the other devices on your home network. Every day, you transfer all kinds of data through your devices, including your location, personal messages, passwords, and financial information. Malware can steal your documents or turn your devices into bots that dig up information without your knowledge. VPNFilter is an example of this kind of malware – it infected about 500,000 routers across the world in 2018.

Installing AV software is a solution to these security problems. AV software will protect your router and all the other devices connected to it by recognizing malware and getting rid of it. Naturally, there are free options and paid ones. No-cost options can be very good, but the paid ones usually provide more benefits. For instance, they might offer built-in backup software, antispam protection, protection against phishing websites, and a browser toolbar that warns you if a site you're visiting is hosting malware. When downloading antivirus software, beware of fake sites made by scammers and always download AV software from official websites.

Additional Ways to Secure Your Connection

Besides having a VPN and an antivirus, there are more ways to maximize your Wi-Fi network security.
One thing to do is to turn off features that you do not use. Check the settings on your router and see if there are features you're not using and don't need. If you don't need remote access to your router's control panel, you can turn off remote management. Another thing you can turn off is Universal Plug and Play (UPnP), which lets different devices on your network connect to one another automatically. This can be risky if one of your devices is infected, as the malware can spread to your other devices and infiltrate them as well. You can also disable the guest network (in case you have one and it isn't password-protected) so that strangers do not use your connection for illegal purposes.

Using WPA2 can also improve your network security: it encrypts your wireless traffic and makes it much harder for outside parties to break in by guessing your Wi-Fi password. Some older routers do not support WPA2, so make sure that your router is up to date.

Finally, switch on automatic updates so you don't need to run updates manually. If your router is old and its manufacturer stops releasing updates for it, invest in a new router to keep your home network well protected.

Changing your router password is a must. Since default router passwords are well known and very easy to look up, they really are meant to be changed as soon as possible. If you do not change yours, hackers – or basically anyone who finds your router's default password – will be able to change the settings on your router and lock you out of it. When changing the password, remember to create a strong one that no one will be able to break: use numbers, symbols, and lowercase and uppercase letters. You can even use a passphrase, which consists of random words (as well as numbers and symbols) strung together in a way that you can remember, but that others would never be able to guess.
<urn:uuid:4de8056a-d1da-429e-9f3e-26f7a92f3419>
CC-MAIN-2020-16
https://securethoughts.com/what-is-a-router-password-and-how-can-you-change-your-router-password/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371880945.85/warc/CC-MAIN-20200409220932-20200410011432-00554.warc.gz
en
0.939089
2,909
2.609375
3
A State Lottery is a Notoriously Unstable and Inefficient Revenue Source and Lousy Economics

Recent studies show an Arkansas lottery would raise no more than $50 million or so. Even during good economic times, the lottery has proven to be an unstable and unreliable source of revenue. From 1997 to 1998, 17 of 37 lottery states had declines in lottery revenues from the previous year. From 1999 to 2000 the decline occurred in 19 of 37 states.

Money magazine reported that lottery states collect more in taxes and spend less on schools than non-lottery states. Since 1990, per capita taxes in lottery states have risen more than three times as fast as in non-lottery states.

Convenience stores believe that selling lottery tickets hurts their businesses, citing such problems as employee time spent handling tickets, as well as shoplifting and lost sales due to long lines of lottery customers. 1

The University of Mississippi's Donald Moak has found that state lotteries cost more to operate in southern states than in the denser, more urbanized states of the Northeast and Midwest, because the operating expenses tied to providing access to more isolated rural areas cut into net revenues. 2

Money thrown away on lottery tickets is money that is not saved, invested, or otherwise spent on consumer goods and services. Rather than pump money into the economy, lotteries actually draw money out, as people buy tickets with money that otherwise might have been spent at pre-existing businesses. A survey of 1,200 California stores taken by the California Grocers Association reported an average decline in food sales of seven percent since the imposition of the state's lottery. 3

In 2000, the state of Georgia received only 29.6% of lottery revenues, the state of Louisiana received only 35%, and the state of Kentucky received only 27.8%. 4

Lottery revenue projections do not take into account costs to the state from compulsive gambling and addiction, including unemployment, health care costs, and bankruptcy. 5

A State Lottery Perverts the Role of Government

A lottery causes the state to be an economic predator of its weakest residents rather than a champion for their well-being. No state lottery works unless it is marketed to the poor. In fact, many lottery marketing programs revolve around the times when welfare and Social Security checks are distributed.

Typical advertising by state lottery agencies reveals the aggressive way in which states target the vulnerable. In New York, one billboard touted the lottery with the slogan "A dollar and a dream." The Illinois Lottery provides the most notorious examples of bad taste and predatory marketing practices. The most infamous is an Illinois Lottery advertisement in an impoverished Chicago neighborhood which read: "This could be your ticket out." 6 Another Illinois Lottery advertising campaign in the 1980s consisted of 40 billboards reading: "How to Get from Washington Boulevard to Easy Street." Washington Boulevard and the other streets mentioned in these ads are located in a very depressed Chicago neighborhood. 7

Lottery marketing plans also push the envelope of taste, as with this plan from the Ohio SuperLotto Game: "We recommend that promotional 'pushes' be targeted as early as possible in the month. Government benefits, payroll and Social Security payments are released on the first Tuesday of each calendar month.
This, in effect, creates millions of additional, non-taxable dollars in the local economies of which the majority is disposable." 8

Who Really Plays the Lottery?

Dr. Charles Clotfelter and Philip Cook call state lotteries "the most regressive tax we know". Clotfelter and Cook found that lottery players with incomes below $10,000 spend more than any other income group on the lottery, an estimated $597 per year. 9

In Texas, high school dropouts spend an average of $173.17 per month on the lottery, while those with college degrees spend $48.61. Blacks and Hispanics spend $108.96 and $102.20, respectively, while whites spend $55.02. 10

In 1998, a Georgia State University survey found that Georgia families earning less than $25,000 per year spend two to three times as much on the lottery, as a percentage of their income, as households earning $50,000 or more. 11

The Atlanta Journal-Constitution discovered that Georgia Lottery ticket sales averaged $249 per resident in zip codes where the average annual income is less than $20,000, and only $97 per resident in zip codes with average incomes above $40,000. 12

In New York, a Newsday study showed that those living in the most impoverished areas of the state spent eight times more of their income on lottery tickets than did those living in the most affluent sections. 13

"Heavy players" of the Maryland Lottery (defined as those spending $10 or more a week) included almost half of all lottery patrons without a high school diploma, almost half of those making less than $20,000 a year and more than 60 percent of all African-American players. 14

In one Chicago suburb where the average income is $117,000 a year, the average household spends $4.48 a month on the lottery. In another suburb where the average income is $33,000, the monthly average is $91.82. 15

A study by the Delaware Council on Gambling Problems discovered that lottery machines are strategically placed in poor neighborhoods. There were no machines in the highest-income areas of the state, one for every 17,000 residents in upper-income areas, one for every 5,000 in lower-middle to middle income areas, and one for every 2,000 in the lowest-income areas. 16

More Addictive Than You Realize

43% of callers to the 1-800-GAMBLER national hotline indicated problems with lottery gambling in 1995. 17

34% of persons who entered publicly funded alcohol and drug treatment centers in Texas stated that the lottery was their most problematic gambling activity. 18

The top five percent of lottery players nationally spend nearly $3,400 annually on tickets, accounting for over half of all ticket sales. The top ten percent – who spend an average of $2,250 annually – account for two-thirds of total ticket sales. 19

In Virginia, 20 percent of the state's lottery ticket sales are made to just two percent of its adult population. 20

Dr. Lance Dodes, who runs Massachusetts' largest outpatient treatment center for problem gamblers, says that lottery players comprise 40% of his patients. 21

For Adults Only? Think Again

Based on the experience in Georgia and adjusting for the difference in populations, an Arkansas lottery would annually produce 6,000 teenage problem gamblers or teenagers at risk of becoming problem gamblers. According to research sponsored by the Georgia Department of Human Resources, 62 percent of the state's adolescents have gambled.
The study also found that over a three-year period almost three percent of 13- to 17-year-olds were already problem gamblers, and another 10 percent were at risk of becoming problem gamblers. In other words, a minimum of 56,000 Georgia adolescents were already experiencing severe problems with their gambling or were at risk of developing gambling difficulties. 22

A state-run lottery won't prevent children from participating illegally. In Massachusetts, 47.1% of seventh graders and 74.6% of high school seniors have managed to purchase a lottery ticket. 23 And 27%, 32%, 34%, and 35% of 15- to 18-year-olds in Minnesota, Louisiana, Texas, and Connecticut, respectively, have purchased lottery tickets despite being underage. 24

In 1997, researchers at Louisiana State University-Shreveport surveyed 12,066 Louisiana students in grades six through 12. They found that 86 percent had gambled, many by age 13, making experimentation with gambling more common than drug or alcohol use. Two-thirds (66 percent) indicated they had gambled on scratch-off lottery tickets, and about 32 percent had played Lotto. The survey also found that 10 percent of the state's students are problem gamblers, and another 5.7 percent have been identified as pathological gamblers. In addition, African-Americans and Hispanics were significantly more likely to be identified as pathological gamblers. 25

Long Odds and Low Returns

The odds of winning a lottery vary depending on the game and the cash prize offered. Lottery odds are by far the longest odds in gambling. Some sample odds are:
Powerball Jackpot (Multi-State): 80,089,128:1 26
Georgia "Big Game" Jackpot: 76,275,360:1 27

The Louisville Courier-Journal estimated that a person is seven times more likely to be struck by lightning than to win the Kentucky Lottery. 28

According to the National Safety Council, these are the odds of dying from:
A Car Accident: 81:1 29
Poisoning: 344:1 30
Fire: 1,082:1 31
Being Struck by a Falling Object: 4,873:1 32
Falling into a Hole: 37,089:1 33

Not only are lottery odds ridiculously long, payout rates for lotteries are also very poor, even by gambling industry standards. The payout rate for lotteries hovers around 50%, while roulette pays out 95%, slot machines pay out 75-95%, and horse racing pays out 83-87%. 34

If Microsoft mogul Bill Gates put $30 billion in a typical state lottery and then played off the returns, his wealth would dwindle to $27.94 in just thirty days. 35
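The jackpot odds above follow from simple combinatorics, and the quoted figures are consistent with the game formats of that era: picking 5 of 49 main balls plus 1 of 42 bonus balls for Powerball, and 5 of 50 plus 1 of 36 for the Big Game (formats inferred from the numbers, not stated in the text). A short sketch reproduces both figures and the Bill Gates illustration at a 50 percent payout rate:

```python
from math import comb

# Jackpot odds: ways to choose k main balls from n, times the bonus-ball pool size.
def jackpot_odds(n: int, k: int, bonus: int) -> int:
    return comb(n, k) * bonus

print(jackpot_odds(49, 5, 42))  # 80089128 -> matches the Powerball figure
print(jackpot_odds(50, 5, 36))  # 76275360 -> matches the Big Game figure

# Bill Gates illustration: replaying winnings at a ~50% payout rate
# halves the stake on every cycle; thirty cycles stand in for thirty days.
wealth = 30_000_000_000.0
for _ in range(30):
    wealth *= 0.5
print(round(wealth, 2))  # 27.94
```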
The Georgia Scholarship Lottery and Fading HOPE

In 1997, young people in the poorest counties in Georgia received an average of seven cents in education aid for every dollar spent on the lottery in their counties. By comparison, young people in the ten wealthiest counties received almost twenty cents. 36

From the inception of the HOPE scholarship program, African-American enrollment in Georgia public colleges actually dropped by three percent. 37

Some people believe that what low-income people spend on the lottery they get back in better education for their children. University of Tennessee economist Bill Fox says that this isn't so: "What the research shows is zero impact on lower income college students." 38

Cobb County, Georgia residents, most of whom live in "bedroom communities", paid $154 per resident for lottery tickets and got back only $32 per resident in scholarships. 39

Only 31% of 1994 HOPE scholarship recipients managed to keep their scholarships by their senior year. 40

Since they require a "B" average, HOPE scholarships encourage the pernicious practice of grade inflation in schools. Marietta High School Principal Gordon Pritz says that "there is a tremendous amount of pressure on teachers to hand kids grades they may not have earned" so that they qualify for the scholarships. 41

In 1997-1998, only 36% of HOPE scholarship recipients managed to keep their scholarships for the second year. 42

1 Watson, Tom, "Many convenience stores say lottery sales not a big draw," USA Today, May 4, 1995, cited in "Thinking about the Lottery," Bishop's Task Force on the Lottery, Tennessee Annual Conference of the United Methodist Church.
2 Hill, Dr. John, "Lottery Revenues Not Stable," South Carolina Policy Council.
3 "Not so small change," Los Angeles Times, March 26, 1986, cited in Hill, Dr. John, Theft by Consent, fn 94, Alabama Policy Institute.
4 Georgia Lottery Corporation; Louisiana Lottery Corporation; Kentucky Lottery Corporation.
5 See, for example, NGISC (39), p. 7-21, an analysis of the correlation of gambling and bankruptcy et al.
6 Goodman, Robert, "The lottery mystique: why work at all?" Newsday, June 28, 1991, cited in Reno, Ronald A., "Lotteries in the United States: A Brief Overview".
7 United Press International, January 30, 1986, cited in Reno, Ronald A., "Lotteries in the United States: A Brief Overview".
8 Clotfelter and Cook, Selling Hope: State Lotteries in America (Cambridge, Mass.: Harvard University Press), 1989, cited in Hill, Dr. John, Going for Broke, fn 260, South Carolina Policy Council.
9 The National Gambling Impact Study Commission, Final Report, June 1999, p. 7-10; Clotfelter, Charles, Philip J. Cook, Julie A. Edell and Marian Moore, "State Lotteries at the Turn of the Century: Report to the National Gambling Impact Study Commission," April 23, 1999, Table 10.
10 Rodriguez, Ken, "Surprise, surprise: The lottery rifles the pockets of the poor," San Antonio Express-News.
11 Data produced by Charlotte Steeh, Georgia State University, Applied Research Center, School of Policy Studies, September 10, 1998, cited in Hill, Dr. John, Theft by Consent, Alabama Policy Institute.
12 Walston, Charles, "Has the gamble paid off?" Atlanta Constitution, June 27, 1994, cited in Hill, Dr. John, Theft by Consent, fn 111, Alabama Policy Institute.
13 Fessenden, Ford and Riley, John, "And the poor get poorer…," Newsday, December 4, 1995, cited in Reno, Ronald, "Gambling and the Poor", October 1, 1997.
14 Chinoy, Ira and Babington, Charles, "Low-income players feed lottery cash cow," Washington Post, May 3, 1998.
15 Phillips-Fein, Kim, "Lotteryville, U.S.A.," The Norton Reader: Tenth Edition.
16 Karcher, Alan J., Lotteries (New Brunswick, NJ: Transaction, 1989), p. 58, as cited by Sandeep Manalmurti and Robert A. Cooke, cited in Hill, Dr. John, Theft by Consent, fn 118, Alabama Policy Institute.
17 Council on Compulsive Gambling in New Jersey, "1995 Statistics for 1-800-GAMBLER Helpline," March 20, 1996, cited in Reno, Ronald A., "Lotteries in the United States: A Brief Overview", April 1, 1998.
18 Wallisch, Lynn, "Gambling in Texas: 1995 Surveys of Adult and Adolescent Behavior: Executive Summary," Texas Commission on Alcohol and Drug Abuse.
19 Horstman, Barry, "Lottery sales: Poorest buy most tickets," Cincinnati Post, March 20, 1999.
20 Chinoy, Ira and Babington, Charles, "Low-income players feed lottery cash cow," Washington Post, May 3, 1998.
21 Golden, Daniel and Halbfinger, David, "Lottery Addiction Rises and Lives Fall," Boston Globe, February 11, 1997.
22 Volberg, Rachel A., Gemini Research, "Gambling and problem gambling among Georgia adolescents," report prepared for the Georgia Department of Human Resources, June 25, 1996.
23 Shaffer, Howard J., "The Emergence of Youthful Addiction: The Prevalence of Underage Lottery Use and the Impact of Gambling," Massachusetts Council on Compulsive Gambling, Technical Report (011394-100), January 13, 1994.
24 The National Gambling Impact Study Commission, Final Report, June 1999, p. 3-4.
25 Westphal, James R., Rush, Jill A., Stevens, Lee, Horswell, Ron, and Johnson, Lera Joyce, "Statewide baseline survey: Pathological gambling and substance abuse – Louisiana students, 6th through 12th grades" (Louisiana State University Medical Center, Department of Psychiatry, April 27, 1998).
26 Multi-State Lottery Corporation.
27 Georgia Lottery Corporation.
28 Statistical Assessment Service.
29 National Safety Council, "What are the odds of dying?".
30 National Safety Council, "What are the odds of dying?".
31 National Safety Council, "What are the odds of dying?".
32 National Safety Council, "What are the odds of dying?".
33 National Safety Council, "What are the odds of dying?".
34 South Carolina Policy Council, "The Economic Facts of State-Run Lotteries: Windfall or Hoax".
35 Gardner, David and Gardner, Tom, The Motley Fool: You Have More Than You Think (Fireside Books: New York), 2001, p. 44.
36 Hill, Dr. John, Going for Broke, South Carolina Policy Council, using data from the Georgia County Guide (Athens: University of Georgia, 1998).
37 McMullen, Jr., Edward T., "Georgia's Disappointing Education Lottery," South Carolina Policy Council.
38 "Tennessee senator says lottery will stop the brain drain," The Tennessean, September 13, 2001; United States Department of Education, Digest of Education Statistics 2000, Table 206.
39 Analysis at http://www.georgiastats.uga.edu.
40 McMullen, Jr., Edward T., "Georgia's Disappointing Education Lottery," South Carolina Policy Council.
41 McMullen, Jr., Edward T., "Georgia's Disappointing Education Lottery," South Carolina Policy Council.
42 McMullen, Jr., Edward T., "Georgia's Disappointing Education Lottery," South Carolina Policy Council.
By Charlie Gray

What is Sjogren's Syndrome?

Sjogren's syndrome (pronounced SHOW-grins) is a disease that mainly affects a person's ability to produce enough saliva and tears. With Sjogren's, the body's immune system mistakenly attacks the healthy cells of the lacrimal (tear-producing) and salivary glands. Occasionally, the disease can begin to affect other parts of the body, including the liver, kidneys, blood vessels, pancreas, nervous system, digestive system, and urinary tract. In rare cases, a person can develop lymphoma.

Affecting about 0.5%-1.0% of the overall population, Sjogren's is considered a relatively common condition. About half of the people who develop Sjogren's already have another autoimmune condition like lupus or rheumatoid arthritis. Women between the ages of 45 and 55 represent the vast majority of those diagnosed with Sjogren's, but it can affect people of any age, gender, or ethnic background.

Mild cases are generally manageable. More severe cases, however, can significantly impact quality of life and become debilitating. There's no cure for Sjogren's, so treatment involves keeping symptoms under control and preventing complications. Medication, lifestyle habits, and dietary supplements can all be aspects of an overall treatment plan.

Sjogren's Syndrome Symptoms

Most people with Sjogren's experience dry eyes and mouth to some degree. However, the severity of the dryness and the presence of other symptoms vary. Dry eyes and mouth can be caused by many things other than Sjogren's. Because of this, and because symptoms can be mild and overlooked, getting a proper diagnosis takes about 3 years on average.

Sjogren's syndrome affects the lacrimal glands, which produce tears. Tears lubricate the eyes, protect them from germs and foreign objects, and deliver nutrients to the cornea. An underproduction of tears often makes people feel like their eyes are burning, itchy, or tired, or like there's grit stuck in them. Dry eyes are not only uncomfortable, they can affect vision by causing blurriness or sensitivity to light.

Reduced production of saliva, also known as "xerostomia," imparts a dry, cottony, or chalky feeling in the mouth. Besides providing lubrication for the mouth and throat, the compounds in saliva help keep the teeth and gums healthy; they also fight germs, contribute to our sense of taste, and aid in swallowing and digestion. Having too little saliva can therefore cause quite a few problems, including tooth and gum decay, fungal infections like thrush, and trouble swallowing food. In addition, lack of moisture in the throat can cause a persistent, dry cough. In about half of cases, the parotid gland (one of the three major salivary glands) becomes swollen and tender.

People with Sjogren's might also experience dry sinuses and nasal passages that occur from a lack of moisture in the mucous membranes. This can lead to discomfort, nosebleeds, a burning sensation, and increased risk of infection. When the sinuses become too dry, the tissues can get irritated and inflamed, which in turn causes headaches, sinus pressure, and pain felt in the cheeks.

Body Aches, Pain and Fatigue

As is common with many autoimmune diseases, symptoms of Sjogren's include physical and mental fatigue that can interfere with daily activities. Aches and pains may also flare up in multiple joints and muscles. In some people, Sjogren's affects areas beyond the lacrimal and salivary glands.
When this happens, it can lead to:

- Digestive symptoms similar to IBS (abdominal pain, diarrhea, and/or constipation)
- Bladder irritation
- Vaginal dryness
- Skin rashes and/or sensitivity
- Irritation in the lungs
- Liver or kidney problems
- Lymphoma, though rarely

Sjogren's Syndrome Causes

Primary Sjogren's refers to the symptoms listed above when they aren't associated with another condition. Secondary Sjogren's, also known as Sjogren's-overlap, occurs in the presence of other autoimmune diseases like lupus or rheumatoid arthritis.

With autoimmune disease, inflammatory cells erroneously attack the body's healthy cells. Researchers aren't quite sure what causes the body to do this. One possible contributing factor may be an immune system that doesn't fully "shut off" after fighting a legitimate virus or pathogen like the Epstein-Barr virus. Another factor involves genes: since the tendency to develop autoimmune disease (though not necessarily the same one) tends to run in families, genetics appear to play a role in a person's immune system behavior. Additionally, because a majority of those with Sjogren's are women, the hormone estrogen may play a role in the disease's development.

Sjogren's Syndrome Treatment

Treatment for Sjogren's focuses on managing the symptoms and preventing complications. Taking certain medications, as well as avoiding others that can cause dryness, can also be part of the treatment plan. Finally, some lifestyle habits and dietary supplements may be appropriate to support overall health.

Treating Dry Mouth

Toothpastes and mouthwashes containing the compound betaine anhydrous help reduce the mouth dryness that can occur with standard oral health products. Similarly, a study showed that an oral spray containing malic acid improved the feeling of dry mouth in study participants. Your doctor may be able to recommend or prescribe specific products to use.

In addition to carefully chosen toothpastes and mouthwashes, living with Sjogren's requires diligence about oral health. The following habits are recommended:

- Brush and floss after each meal
- Sip water during the day
- See a dentist regularly
- Use products with fluoride or get professional fluoride treatments to prevent cavities
- Suck on sugarless candy or lozenges

Finally, research shows that acupuncture can stimulate production of saliva in those with dry mouth.

Treating Dry Eyes

Prescription or OTC eye drops or gels, preferably ones that don't contain irritating preservatives, can combat dry eyes. Wearing wraparound glasses when outside might also help. In severe cases, a surgical procedure allows the eyes to produce more tears.

Use a Humidifier

Sleeping with a humidifier helps reduce dryness in the eyes, mouth, and nasal passages.

Try an Elimination Diet

Research shows that an elimination-based diet called the autoimmune protocol (AIP) can reduce inflammation and improve symptoms in people with autoimmune diseases. This spin-off of the Paleo Diet restricts many foods while, in theory, allowing the body to heal from the constant inflammatory triggers found in food and beverages. The diet focuses on meat and non-starchy vegetables and prohibits grains, legumes, nuts, seeds, many kinds of fruit, coffee and tea, alcohol, processed oils, and sugar.

Medications

Non-steroidal anti-inflammatory medications (NSAIDs) such as ibuprofen, as well as acetaminophen, can reduce joint pain. Corticosteroids may also relieve painful inflammation, though long-term use can cause other unwanted side effects.
A group of medications called disease-modifying anti-rheumatic drugs (DMARDs) is often used for conditions like rheumatoid arthritis and lupus, and these drugs have also been shown to help manage Sjogren's syndrome. Examples include methotrexate and azathioprine. In addition, biological therapies such as rituximab are options for treating severe cases.

Supplements for Sjogren's Syndrome

Dietary supplements may support overall health and minimize the symptoms of inflammation that go along with autoimmune conditions like Sjogren's. However, it's always important to speak to your doctor before taking supplements, because they could interact with medication or not be right for your specific situation.

Studies show that taking a combination of DHA and EPA (the omega-3 fatty acids found in fish oil) can reduce the symptoms of dry eyes about as well as eye drops. DHA/EPA provides the additional benefit of lowering inflammation; in other words, it can be part of the overall plan to keep inflammation and symptoms of autoimmune diseases under control. The suggested serving size for fish oil softgels is 2 capsules, taken anywhere from two to three times per day.

Curcumin, the active compound in turmeric, has well-known anti-inflammatory effects. Research shows curcumin can help keep eyes healthy (important with Sjogren's) and reduce painful inflammation throughout the whole body. In fact, one animal study even specifically suggests that it can be an effective intervention for Sjogren's-like disorders. Curcumin is taken in one dose of 1,000 mg or less per day, depending on the intended effect, along with water or a meal.

Oxidative stress caused by molecules called reactive oxygen species (ROS) damages cells and occurs with rheumatic conditions like Sjogren's. Glutathione is an antioxidant that scavenges these ROS. People produce glutathione naturally, but sometimes what's made in the body can't keep up. Supplementing with glutathione can help support overall health, especially in the presence of an autoimmune condition. The suggested serving size ranges from as little as 50 mg per day to as much as 500 mg (the regular serving size) per day, depending on the intended effect. To maximize effectiveness, it should be taken with food.

The Bottom Line

Sjogren's syndrome can occur on its own or secondary to other rheumatic autoimmune conditions like lupus or arthritis. The immune system attacks the glands that make saliva and tears, so the main symptoms are dry eyes and dry mouth. While this may not seem very dire, tears do a lot to protect vision, and saliva plays a big role in oral health and digestion. Dryness can also be very uncomfortable, which impacts quality of life. In addition to dry eyes and mouth, people may experience fatigue and joint pain, problems in other areas of the body, or, in rare cases, lymphoma. Unfortunately, there is no cure for this condition, but treatments such as eye drops, special oral health products, pain relievers, and medication can help. Diet and supplements can also improve symptoms and protect cells from further damage.
2 September 1944: Lieutenant (Junior Grade) George Herbert Walker Bush, United States Naval Reserve, led a flight of four TBF/TBM Avenger torpedo bombers of Torpedo Squadron 51 (VT-51), from the Independence-class light aircraft carrier USS San Jacinto (CVL-30), against a radio transmission station on the island of Chichi-Jima. The Avenger had a crew of three. Along with Lt. (j.g.) Bush were Lt. (j.g.) William G. White, USNR, gunner, and radio operator ARM 2/c John Lawson Delaney, USNR. Each airplane was armed with four 500-pound¹ general purpose bombs. The flight was joined by eight Curtiss-Wright SB2C Helldiver dive bombers of VB-20, escorted by twelve Grumman F6F-5 Hellcat fighters of VF-20, from USS Enterprise (CV-6).

Chichi-Jima is the largest island in the Ogasawara Archipelago of the Bonin Islands, approximately 150 miles (240 kilometers) north of Iwo Jima and 620 miles (1,000 kilometers) south of Tokyo, Japan. The United States Hydrographic Survey described the island in 1920 as "very irregular in shape," approximately 4¼ miles (7.2 kilometers) long and 2 miles (3.2 kilometers) wide. The area of the island is presently given as 23.45 square kilometers (9.05 square miles). Its highest point is 326 meters (1,070 feet) above sea level. The island has a small sea port where midget submarines were based beginning in August 1944. Chichi-Jima was heavily garrisoned, with 20,656 Imperial Japanese Army and Navy personnel and 2,285 civilian workers.²

Lieutenant Bush's flight was scheduled for a time over target of 0825-0830. They encountered heavy antiaircraft fire and Bush's Avenger was hit. With the torpedo bomber on fire, Bush continued the attack and later reported good results. Unable to return to the aircraft carrier, he flew away from the island to limit the risk of the crew being captured by the enemy when they bailed out. Bush and one other crewman (which one is not known) bailed out. While Bush parachuted safely, the second crewman's parachute never opened. The third crewman went down with the airplane. Both Lieutenant White and Radioman Delaney were killed.

The Gato-class fleet submarine USS Finback (SS-230) was stationed near the island on lifeguard duty during the attack. At 0933, Finback was notified of an aircraft down nine miles northeast of Minami-Jima. Escorted by two F6F fighters, the submarine headed for the location. At 1156, Finback picked up Lt. Bush, floating in his life raft. A search for White and Delaney was unsuccessful; their bodies were not recovered. (Later that same day, Finback, while submerged, towed a second pilot and his life raft away from Magane-Iwa, as he held on to the sub's periscope.) Lieutenant Bush and the other rescued pilots remained aboard for the remainder of Finback's war patrol (her tenth), and were then returned to Pearl Harbor. In November he rejoined San Jacinto for operations in the Philippines.

George Herbert Walker Bush was born at Milton, Massachusetts, 12 June 1924, the son of Prescott Sheldon Bush and Dorothy Walker Bush. He attended high school at the Phillips Academy in Andover, Massachusetts. One day after his 18th birthday, 13 June 1942, Bush enlisted as a seaman, 2nd class, in the United States Naval Reserve. He was appointed an aviation cadet and underwent preflight training at the University of North Carolina, Chapel Hill. He was honorably discharged 8 June, and commissioned as an ensign, United States Naval Reserve, 9 June 1943. At the age of 19 years, 2 days, he became the youngest Naval Aviator in history.
(His age record was broken the following month by Ensign Charles Stanley Downey, who was commissioned 16 July 1943 at the age of 18 years, 11 months, 14 days.)

Ensign Bush continued flight training at NAS Pensacola, Florida, and then at the Carrier Qualification Training Unit, NAS Glenview, Illinois. After training with the Atlantic Fleet, Ensign Bush was assigned to Torpedo Squadron Fifty-One (VT-51) in September 1943. He was promoted to lieutenant (junior grade) 1 August 1944. After leaving San Jacinto, Bush was assigned to NAS Norfolk, Virginia, from December 1944 to February 1945. He then joined Torpedo Squadron Ninety-Seven (VT-97) and then VT-153. Lieutenant (j.g.) Bush was released from active duty on 18 September 1945, retaining his commission. He was promoted to lieutenant 16 November 1948. On 24 October 1955, Lieutenant Bush resigned from the U.S. Navy.

During World War II, George H. W. Bush flew 58 combat missions. He flew a total of 1,221 hours and made 126 carrier landings. He was awarded the Distinguished Flying Cross, the Air Medal with two gold stars (three awards), and the Presidential Unit Citation. He would later become the forty-first President of the United States of America.

The airplane flown by Lt. (j.g.) Bush on 2 September 1944 was a General Motors TBM-1C Avenger torpedo bomber, Bu. No. 46214. This was a licensed variant of the Grumman TBF-1C Avenger, built by the General Motors Corporation Eastern Aircraft Division at Linden, New Jersey. The Avenger was designed by Robert Leicester Hall, Chief Engineer and Test Pilot for the Grumman Aircraft Engineering Corporation, Bethpage, New York. The prototype XTBF-1 made its first flight 1 August 1941. It was a large single-engine aircraft, operated by a crew of three (pilot, radio operator and ball turret gunner), and was equipped with folding wings for storage on aircraft carriers. Production of the torpedo bomber began with the opening of a new manufacturing plant, Sunday, 7 December 1941. The first production Avenger was delivered to the U.S. Navy in January 1942.

The TBF-1 and TBM-1 were 40 feet, 11 inches (12.471 meters) long, with a wingspan of 54 feet, 2 inches (16.510 meters) and overall height of 16 feet, 5 inches (5.004 meters). The airplane had an empty weight of 10,545 pounds (4,783 kilograms), and its maximum gross weight was 17,895 pounds (8,117 kilograms). The Avenger was the largest single-engine aircraft of World War II.

The Avenger was powered by one of several variants of the Wright Aeronautical Division Cyclone 14 (R-2600): GR2600B698 (R-2600-8 and -8A); GR2600B676 (R-2600-10); and 776C14B31. The R-2600 was a series of air-cooled, supercharged, 2,603.737-cubic-inch-displacement (42.688 liter), two-row 14-cylinder radial engines. The engines used in the Avengers all had a compression ratio of 6.9:1, supercharger ratios of 7.06:1 and 10.06:1, and a propeller gear reduction ratio of 0.5625:1. The R-2600-8, -8A and -10 had Normal Power ratings of 1,500 horsepower at 2,400 r.p.m. at sea level, and 1,700 horsepower at 2,600 r.p.m. for takeoff. The R-2600-20 was rated at 1,600 horsepower at 2,400 r.p.m., and 1,900 horsepower at 2,800 r.p.m., respectively. Dimensions and weights varied. The R-2600-8 and -8A were 64.91 inches (1.649 meters) long. The -10 was 74.91 inches (1.903 meters) long, and the length of the -20 was 66.08 inches (1.678 meters). The R-2600-8, -8A and -10 were 54.26 inches (1.378 meters) in diameter. The -20 was 54.08 inches (1.374 meters).
The -8 and -8A both weighed 1,995 pounds (905 kilograms). The -10 weighed 2,115 pounds (959 kilograms) and the -20 weighed 2,045 pounds (928 kilograms). The engines drove a three-bladed Hamilton Standard Hydromatic constant-speed propeller.

The TBF/TBM had a cruise speed of 147 miles per hour (237 kilometers per hour) and a maximum speed of 276 miles per hour (444 kilometers per hour) at 16,500 feet (5,029 meters). The service ceiling was 30,100 feet (9,174 meters). Its maximum range was 1,010 miles (1,625 kilometers).

The Avenger was armed with one air-cooled Browning AN-M2 .50-caliber machine gun mounted in each wing, firing forward. Another .50-caliber machine gun was installed in an electrically-operated dorsal ball turret. In the ventral position was a Browning M2 .30-caliber aircraft machine gun in a flexible mounting. The primary weapon of the Avenger was carried in an enclosed weapons bay: one Mk. 13 aerial torpedo,³ or up to 2,000 pounds (907 kilograms) of bombs.

The Grumman Aircraft Engineering Corporation produced TBF Avengers from early 1942 until 1943, when production was taken over by the General Motors Corporation Eastern Aircraft Division. Grumman produced 2,290 TBFs, while Eastern built 9,836 TBMs.

Lieutenant Bush's aircraft carrier, USS San Jacinto (CVL-30), was an Independence-class light carrier. It had been started by the New York Shipbuilding Corporation as a Cleveland-class light cruiser, USS Newark (CL-100), but was converted during construction. Construction took 11 months and the ship was launched 26 September 1943; it was commissioned 15 November 1943. The carrier was 622.5 feet (189.7 meters) long, with a beam of 71.5 feet (21.8 meters) and a draft of 26 feet (7.9 meters). It had a full load displacement of 15,100 long tons (16,912 short tons, or 15,342 metric tons). The ship was powered by steam turbines producing 100,000 horsepower and driving four shafts. San Jacinto was capable of a maximum 31.6 knots (36.4 miles per hour, or 58.5 kilometers per hour). She had a complement of 1,549 men and carried 45 airplanes. For defense, she was armed with 28 Bofors 40 millimeter anti-aircraft guns and 40 Oerlikon 20 millimeter autocannon. San Jacinto was decommissioned 1 March 1947 and was later scrapped. On 7 October 2006, the tenth and final Nimitz-class supercarrier was christened USS George H.W. Bush (CVN-77) in honor of President Bush's service to his country.

¹ The most common U.S. 500-pound general purpose bomb of World War II was the AN-M64. Nominally a 500-pound (227 kilogram) bomb, the munition actually weighed from 516.3 to 535.4 pounds (234.2 to 242.9 kilograms), depending on the explosive used. It contained 266 pounds (120.7 kilograms) of TNT, or 258.5 pounds (117.3 kilograms) of a 50/50 TNT and Amatol mixture. For easy identification, these were marked with a single 1-inch (2.54 centimeter) wide yellow band painted at the nose and tail. Composition B bombs, which were marked with two yellow identification bands, contained 272.7 pounds (123.7 kilograms) of explosive, while the heaviest variant was filled with 278.3 pounds (126.2 kilograms) of Tritonal and was marked with three yellow bands. The bomb, without fins or fuses, was 36 inches (0.914 meters) long. The overall length was 59.16 inches (1.503 meters), including nose and tail fuses. The maximum diameter was 10.9 inches (0.277 meters).

² Personnel numbers as of 3 September 1945.
³ The U.S. Navy Torpedo, Mark 13, was a gyroscopically-steered single-speed anti-ship torpedo designed to be dropped from aircraft. It was 13 feet, 8.55 inches (4.180 meters) long, 1 foot, 10.42 inches (0.570 meters) in diameter, and weighed 1,949 pounds (884 kilograms) ± 20 pounds (9 kilograms). The warhead contained a 400 pound (181 kilogram) TNT explosive charge. The Mk. 13 was driven by a two-stage alcohol-fueled geared steam turbine, turning 10,983 r.p.m., with the coaxial counter-rotating propellers turning 1,150 r.p.m. It was capable of running at 33.5 knots (38.6 miles per hour, or 62.0 kilometers per hour), with a range of 6,300 yards (5.8 kilometers). This same type of torpedo was used by the U.S. Navy's PT boats late in the war.

Thanks to regular TDiA reader Joolz Adderly for suggesting this topic.

© 2017, Bryan R. Swopes
Hematopoietic stem cell transplantation (HSCT) has contributed substantially to survival among patients with severe hematologic disorders, including leukemias and lymphomas, since the first successful bone marrow transplant was performed in 1968. More recently, the development of new immunosuppressive regimens, preconditioning protocols, and better HLA typing has continued to improve posttransplant survival rates.1

Although anything that increases survival among patients with these life-threatening diseases is great news, the downside is that physicians are also seeing a rise in the rates of chronic complications of HSCT, especially graft-vs.-host disease (GVHD), as patients are living longer, according to Martine J. Jager, MD, PhD, at Leiden University Medical Center in the Netherlands. Because GVHD has a multitude of systemic manifestations—which may involve the skin, gastrointestinal tract, liver, musculoskeletal system, and more—its ocular manifestations are too often overlooked,2 both at cancer treatment centers and in eye clinics.

Yet ocular GVHD is not uncommon. "More than a third of the survivors who get allogeneic HSCT develop significant eye disease," said Reza Dana, MD, MPH, at Harvard Medical School. "That's a very considerable disease burden. Patients survive due to bone marrow engraftment, but the 'price' is that these cells can attack the tissues of the recipient."

Ocular GVHD most often presents as severe dry eye and ocular surface disease, which can have a profound impact on quality of life in survivors. Unfortunately, ocular GVHD is often misdiagnosed. Helping these patients involves both raising awareness of the condition at large and tailoring the ophthalmic treatment approach to each affected individual.

An HSCT Primer

Stem cells used for transplantation may be derived from donor bone marrow, peripheral stem cells, or cord blood. HSCT is known as autologous when the cells are harvested from the patient; syngeneic when taken from an identical twin; and allogeneic (allo-SCT) when the donor cells are from either a related or an unrelated individual. HSCT can be used to treat a variety of diseases, but over 80 percent is performed in the setting of cancers such as leukemia or lymphoma. Patients receive either autologous or allogeneic transplantation depending on the type and status of their cancer.

Immunological warfare. Before HSCT is administered, the patient undergoes a preconditioning regimen, in which the patient's own bone marrow—and, accordingly, immune system—is depleted through intensive chemotherapy, with or without radiation therapy. The transplanted donor cells repopulate the recipient's marrow and reconstitute the patient's hematologic profile, including the immune system. GVHD occurs when the donor-derived graft cells, often T cells,2 react to the recipient's autoantigens. Stella K. Kim, MD, at the M.D. Anderson Cancer Center, said, "Patients are cured of their cancers, but often the cure can result in GVHD, which is essentially an autoimmune disease," with a variety of clinical manifestations, often with a protracted course.

Prophylactically curtailing the activity of the donor immune cells could reduce the incidence of GVHD. However, the anticancer efficacy of HSCT requires a vigorous graft-vs.-tumor response; thus, one of the greatest challenges facing clinicians is to modulate the GVHD while maintaining the therapeutic effect of the transplant.2

The Spectrum of Ocular GVHD

GVHD can occur in acute and chronic patterns.
Dr. Kim said, "Acute forms of ocular GVHD can be quite severe, resembling toxic epidermal necrolysis, requiring immediate evaluation and early intervention. These patients are typically in their early course post-HSCT and are often being treated in the inpatient setting. On the opposite end of the clinical spectrum are patients who are years out from their HSCT and doing well systemically but are dealing with severe ocular surface disease, such as cicatricial keratoconjunctivitis, from chronic ocular GVHD." Chronic ocular GVHD is more commonly seen in ophthalmology outpatient clinical practices than is the acute form.3

Principal manifestation: dry eye. Although chronic ocular GVHD spans a range of ocular surface disorders, the published literature and the specialists interviewed agree that severe dry eye is by far its chief clinical manifestation.4 In her clinic, Dr. Jager said, "Ocular GVHD patients are like very severe dry eye patients. They usually have both lack of aqueous tear formation and lack of oil secretion as a result of meibomian gland dysfunction. Some just don't make tears anymore; they're absolutely dry."

Other ocular surface problems. At the worst, Dr. Jager continued, patients go on to develop other forms of corneal and ocular surface disease, including limbal deficiencies, scarring, and symblepharon formation. Eyelid inflammation and scarring may cause cicatricial entropion and trichiasis, further irritating the cornea.

According to Drs. Dana, Jager, and Kim, the treatment approach for chronic ocular GVHD is essentially the same as for other types of severe dry eye, but understanding the status of the patient's systemic GVHD can influence the ocular treatment strategy. For example, all three doctors consider using topical steroids earlier for ocular GVHD than they would for ordinary dry eye.

Basic principles of therapy. The seminal 2006 NIH consensus guidelines listed the four major supportive care goals for ocular GVHD as lubrication, control of tear evaporation, control of tear drainage, and decreasing ocular surface inflammation.4 The 2013 major review coauthored by Dr. Dana had a similar list of treatment goals: lubrication and tear preservation, reduction of inflammation, prevention of tear evaporation, and epithelial support.2 Both sources agree that the treatment needs to be matched to each patient's particular mix of symptoms; the individual's systemic medications also should be taken into account. Ideally, said Dr. Dana, "Treatment regimens should be realistic, starting with readily available things such as topical steroids and punctal plugs. If there's no response, then you step it up to another level." It's important to note that a patient's clinical presentation may require managing some or all of the following problems simultaneously.

Inflammation. Randomized clinical trials have shown that topical steroids remain the most useful treatment overall for chronic ocular GVHD. However, because of their side effects, including increased risk of infections, cataract, and increased IOP, research is continuing on other immunosuppressive agents such as anakinra, tacrolimus, and ultra-low-dose interleukin-2. Dr. Jager said that her patients have had excellent results with topical cyclosporine drops. "It does not have side effects, and it's one of the nicest drugs for chronic GVHD." However, it is not commercially available in Europe and must be compounded, taking it out of the price range for many patients, she said.
She added that U.S. patients are fortunate to have a commercially available form (Restasis). However, Dr. Kim noted, "Depending on the severity of chronic ocular GVHD, topical cyclosporine can have varying degrees of efficacy. More clinical research is needed in this area."

Decreased tears. The first steps in reducing the symptoms of decreased tearing are the old standbys of dry eye therapy: preservative-free artificial tears and punctal occlusion with silicone plugs or cautery. Although oral agents to increase tear secretion, such as pilocarpine or cevimeline, have been tried, there is little clinical or trial evidence to support this therapy in GVHD.2

Tear evaporation/dysfunction. Tear evaporation and break-up can be reduced through the use of warm eyelid compresses to improve meibomian gland secretion. Patients may also benefit from increasing humidity in their home and workplace, using moisture goggles, or trying nutritional supplements such as flaxseed oil or fish oil. Oral doxycycline or minocycline may be useful in meibomian gland dysfunction for their anti-inflammatory as well as antibiotic effects. Given that GVHD patients are often on a multitude of drugs, Dr. Dana cautioned that adding any systemic therapy, including the tetracyclines, should be coordinated with the patient's hematologist-oncologist.

Epithelial damage. Autologous serum eyedrops contain many growth factors and vitamins that support the healing and integrity of the ocular surface.2 These drops have proved beneficial in clinical studies and anecdotally. Dr. Jager said that she finds them very helpful in patients with epithelial problems. Her patients say that the drops "make their eyes feel much, much better, and if they have been off for a few weeks, they beg me for them." The limitation of this therapy is that it may be difficult to obtain except at specialized medical centers. Dr. Dana said, "There are a lot of issues in terms of obtaining or compounding autologous serum tears," in part because, as a blood product, they require specific testing and handling regimens.

Scleral contact lenses, including the PROSE device, have been shown to improve vision and comfort in patients with epithelial damage. "Providing scleral contact lenses can completely change patients' lives," said Dr. Jager. These devices are more effective in reducing patient symptoms than improving epitheliopathy, however.2

Drs. Dana, Jager, and Kim agreed that ocular GVHD should be considered a lifetime condition. Even after the need for intensive intervention has passed, patients should be examined routinely—not just to follow their dry eye but also to monitor for complications such as infections, cataract, or increased intraocular pressure. Dr. Dana added: "Like any chronic disease—from hypertension and cardiac disease to MS or diabetes—ocular GVHD comes in a wide range of flavors. Patients who respond well to treatment can be followed by their local ophthalmologist. It's only the severe or nonresponsive cases that continue to require specialized attention."

Late Diagnosis and Misdiagnosis

Despite the availability of treatments, in many cases the patient's symptoms may be far advanced before the diagnosis of ocular GVHD is established. Such treatment delays cause unnecessary suffering and, in some cases, permanent ocular damage. Dr. Jager recounted the story of her worst patient, who had received HSCT for leukemia: "She was admitted to intensive care, and for six weeks nobody looked at her eyes.
By the time she had survived intensive care, her corneas looked like completely dried-out pieces of leather. Now, three years later, she is doing fine in terms of her leukemia, but she comes to my clinic every week for eye problems. If someone had paid attention to her eyes when she was in intensive care, she wouldn't be in the state she is now."

Why is the timely and accurate diagnosis of ocular GVHD so difficult?

Overshadowed by bigger issues. These patients and their doctors have been dealing with a life-threatening hematologic disease requiring intensive treatment; and after HSCT, they may be coping with multisystem GVHD. Dr. Dana noted that eye disease is a relatively late complication of GVHD, with other forms, such as skin and oral, typically occurring first. Thus, eye conditions do not top the list of medical concerns. Dr. Jager said, "I recently spoke to a hematologist, and he said, 'Eye problems? What are you talking about? I never see any eye problems in my leukemia patients.' But I think he never asked." She added that doctors may be "so thrilled by the survival that everything else seems trivial." By the time patients come to her eye clinic, their ocular disease is severe.

Condition is not widely known; thus misdiagnosed. According to Dr. Dana, "A lot of GVHD patients end up getting misdiagnosed when they see their eye doctor. They're told they have 'an eye infection' or conjunctivitis. Then they mention it to their hematologist, who says, 'No, that's eye GVHD,' and they eventually end up going to the cornea specialist."

Solutions: Education and Awareness

"From a public health standpoint, there's a critical need to educate both optometrists and ophthalmologists about ocular GVHD manifesting as severe dry eye," said Dr. Dana. Distinguishing between GVHD and garden-variety dry eye "is primarily based on the history of the patient's condition"; thus, clinicians need to be aware of a patient's prior HSCT and its ocular implications.

More visibility. Dr. Jager noted that there has been an upswing in conference presentations on ocular GVHD in recent years, and she is hopeful that this increased visibility will continue to raise awareness.

Communication between transplant and ophthalmology teams. Dr. Kim said that "awareness by the primary team and having readily accessible ophthalmology teams can facilitate earlier evaluation of patients." For example, she continued, "because the M.D. Anderson Cancer Center ophthalmology clinic is within the hospital, we are able to treat patients both in the early and late course of their GVHD." She noted that other centers that have an ophthalmology presence or designated individuals with an interest in GVHD are also highly successful in treating HSCT patients with ocular GVHD.

1 Hahn T et al. J Clin Oncol. 2013;31(19):2437-2449.
2 Shikari H et al. Surv Ophthalmol. 2013;58(3):233-250.
3 Dignan FL et al. Br J Haematol. 2012;158(1):62-78.
4 Couriel D et al. Biol Blood Marrow Transplant. 2006;12:375-396.

Reza Dana, MD, MPH, is the Claes Dohlman Professor of Ophthalmology at Harvard Medical School; Senior Scientist at Schepens Eye Research Institute; and Director of Cornea and Refractive Surgery at Massachusetts Eye and Ear Infirmary. Financial disclosure: Consults for Alcon/Novartis, Allergan, Bausch + Lomb, Eleven Biotherapeutics, Genentech, Novabay, and Novaliq.

Martine J. Jager, MD, PhD, is senior medical specialist at Leiden University Medical Center, Leiden, the Netherlands, and guest faculty professor at Peking University Health Center in Beijing.
Financial disclosure: None.

Stella K. Kim, MD, is associate professor and Director of Clinical Research in Ophthalmology, Ophthalmology Section, University of Texas, M.D. Anderson Cancer Center. Financial disclosure: Consults for Bayer, Eli Lilly, Sanofi Aventis, and Seattle Genetics.
This is part 3 of a series on Weird Ruby. Don't miss Weird Ruby Part 1: The Beginning of the End, Weird Ruby Part 2: Exceptional Ensurance, and Weird Ruby Part 4: Code Pods (Blocks, Procs, and Lambdas).

Welcome back to Weird Ruby! This time we're going to talk about Ruby's rarely seen flip-flop operator and how you can use it to confuse and annoy future versions of yourself.

Have you ever longed for a way to execute part of a loop part of the time? Do you also feel like if/else statements are too "clear" and "understandable" for your clever code? Then the flip-flop operator is perfect for you!

A flip-flop is a range operator that compares two conditions inside of a loop. It evaluates to true from when the first condition is met until the second condition is met, and returns false after that. Yes, that's about the best I can do; there isn't a really easy way to explain this without showing you, so here we go:

Let's say you're out shopping for a hoverboard, and you don't want to plunk down your credit card until you find one that's totally rad. You've also decided that after finding a rad hoverboard, if you see an ugly one you're just going to stop caring. This is an ideal opportunity to do some flip-flops on your hoverboards.

    hoverboards = [:blue, :pink, :rad, :lasers, :peuce, :ugly, :double_decker]
    hoverboards_to_buy_maybe = []

    hoverboards.each do |hoverboard|
      if (hoverboard == :rad)..(hoverboard == :ugly)
        hoverboards_to_buy_maybe << hoverboard
      end
    end

So let's start looping over our 7 hoverboards. On the first pass, hoverboard is :blue, which is not rad, so the first conditional fails and we skip our if block; nothing gets added to hoverboards_to_buy_maybe.

    (:blue == :rad)..(:blue == :ugly) # => false
    # hoverboards_to_buy_maybe: []

Next up we find the :pink hoverboard, and while arguably an improvement over that creepy :blue board we're still not in rad territory, so we skip our if block again and move on.

    (:pink == :rad)..(:pink == :ugly) # => false
    # hoverboards_to_buy_maybe: []

On the third pass through our loop we finally find our :rad hoverboard. The flip-flop evaluates to true and we shovel :rad into our hoverboards_to_buy_maybe list. Because we've now seen a :rad hoverboard, our flip-flop operator will continue to return true until we meet the second condition.

    (:rad == :rad)..(:rad == :ugly) # => true
    # hoverboards_to_buy_maybe: [:rad]

Next up is our fourth hoverboard, :lasers, and the flip-flop state is still true because we just met our first condition. The second :ugly condition is not met, because a hoverboard made of lasers is an unimaginably beautiful thing, so the flip-flop state doesn't change. We haven't met the second condition to turn off our flip-flop, so it stays true and we shovel :lasers right into hoverboards_to_buy_maybe.

    (:lasers == :rad)..(:lasers == :ugly) # => true
    # hoverboards_to_buy_maybe: [:rad, :lasers]

Next up we find a :peuce hoverboard, a poorly defined color that is sometimes hideous and other times not, but the important thing to us now is that it's not exactly :ugly. We shovel the :peuce board into our list, and since we still haven't found the :ugly board, our flip-flop remains true as we head into our sixth iteration.

    (:peuce == :rad)..(:peuce == :ugly) # => true
    # hoverboards_to_buy_maybe: [:rad, :lasers, :peuce]

In our sixth pass we finally find our :ugly hoverboard and reach the absolute bottom of our bucket of cares. Since we've tripped our second (hoverboard == :ugly) conditional, our flip-flop is now set to false. We shovel this final monstrosity into our array and move on with our shopping.
    (:ugly == :rad)..(:ugly == :ugly) # => false (the flip-flop has switched off; :ugly was still added on this pass)
    # hoverboards_to_buy_maybe: [:rad, :lasers, :peuce, :ugly]

In our seventh and final iteration we completely ignore the glory that is the :double_decker hoverboard, as we've now decided we don't care. If this board happened to meet our first condition, our flip-flop would have gone back to true, we would have executed our if block, and we would have shoveled the board into hoverboards_to_buy_maybe. It's probably for the best; double decker hoverboards sound incredibly dangerous.

When our loop is finished, hoverboards_to_buy_maybe looks like this:

    [:rad, :lasers, :peuce, :ugly]

To recap, we iterated until the first condition in our flip-flop was met, skipping the if block each time. Once we met the first condition, the flip-flop state became true, so we started adding things to our list. We continued iterating and executing our if block each time until we met the second condition, after which the flip-flop state was set to false and we transformed back into an apathetic lump.

Two-dot, three-dot, red-dot, blue-dot

The flip-flop operator itself is actually a range of conditionals that evaluates to a boolean: true once the first condition is met and until the second condition is met, then false until the first condition is met again.

As you may know there are two types of ranges in Ruby: the two-dot and the three-dot. In our example we're using the two-dot version, which evaluates both of the conditionals for a given iteration. So if we meet our first condition and our second condition in a single pass, the flip-flop will finish the loop set to false, though it will execute the body of the if block exactly once. To demonstrate, let's change our flip-flop so both conditionals check for :rad:

    hoverboards.each do |hoverboard|
      if (hoverboard == :rad)..(hoverboard == :rad)
        hoverboards_to_buy_maybe << hoverboard
      end
    end

When we make our third iteration and hoverboard is :rad, we will meet the first condition and set the flip-flop state to true. We'll add the :rad hoverboard to hoverboards_to_buy_maybe because the state is now set to true, and afterwards we'll check the second condition. The second condition also evaluates to true, so we change the flip-flop state back to false and we move to our next iteration. Since we never find :rad again, the flip-flop state never returns to true and our final hoverboards_to_buy_maybe looks like this:

    [:rad]

We met the first condition, executed the block, and met the second condition in a single iteration.

The three-dot version of this same flip-flop behaves quite differently, evaluating only one of the conditionals for each iteration:

    (hoverboard == :rad)...(hoverboard == :ugly)

The flip-flop starts out set to false, so until we find :rad it will remain false. With the three-dot we only check the first condition until we find the :rad hoverboard and the flip-flop state changes to true. If we don't meet the first condition we never evaluate the second condition. Once we've met the first condition our flip-flop is true, and we will stop evaluating the first condition until the second is met. So on each iteration we now check only for the :ugly board. When we find the :ugly hoverboard the second condition evaluates to true and the flip-flop is set back to false.
We stop evaluating the second condition and continue through our loop evaluating only the first condition, which in our case never evaluates to true again.

The three-dot change in behavior is especially obvious when both conditions are the same:

    (hoverboard == :rad)...(hoverboard == :rad)
    # hoverboards_to_buy_maybe: [:rad]

When we find :rad and the first condition evaluates to true, we execute the body of our if block, adding :rad to hoverboards_to_buy_maybe. Unlike the two-dot version we do not evaluate the second condition on this iteration, so our flip-flop state remains true.

On our next iteration hoverboard is now equal to :lasers:

    (:lasers == :rad)...(:lasers == :rad)
    # hoverboards_to_buy_maybe: [:rad, :lasers]

We don't evaluate the first condition at all since the flip-flop state is true. Instead we check the second condition, which evaluates to false, and we go on our merry way. Since we don't have another :rad in our hoverboards we will never meet the second condition, and the flip-flop stays true until the end of the loop. We end up with this in our final hoverboards_to_buy_maybe:

    [:rad, :lasers, :peuce, :ugly, :double_decker]

We found :rad and flipped to true but didn't check the second condition, because three-dot flip-flops check only one condition on each iteration. By the time we came back again it was too late to find a :rad hoverboard, so we never tripped our second condition and the flip-flop remained true all the way down.

Flipped and flopped

To review, the two-dot version of the flip-flop operator evaluates both conditions on each iteration. The three-dot version will evaluate only a single condition on each pass: the first condition if the flip-flop state is false and the second condition if the flip-flop state is true. You'll need to decide which flip-flop is the right one for your particular hoverboard shopping use case.

I hope that this "sometimes I care" hoverboard example has made clear the value of Ruby's flip-flop operator, because I have no idea why anyone would actually want to use this thing in the real world. I assume that someone made up the flip-flop operator for a reason, so if you have a good application for flip-flop operators please leave a comment in the New Relic community forum. I am incredibly curious.

    if (true)..(false)
      puts "<3 Jonan"
    end
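Postscript: one place flip-flops genuinely do show up in the wild is line-oriented text processing, where they reproduce the sed- and awk-style "range pattern" of selecting everything between two markers. Here is a minimal sketch of that idea, not anything from the post above: the input lines and the "# BEGIN"/"# END" markers are invented for the example.

    # Collect the lines between a BEGIN marker and an END marker, inclusive.
    # The markers and input below are hypothetical; substitute whatever
    # delimiters your files actually use.
    lines = [
      "preamble",
      "# BEGIN",
      "keep me",
      "keep me too",
      "# END",
      "postscript"
    ]

    section = []
    lines.each do |line|
      # The two-dot flip-flop switches on at "# BEGIN" and off at "# END",
      # so only the delimited block (markers included) is collected.
      if (line == "# BEGIN")..(line == "# END")
        section << line
      end
    end

    section # => ["# BEGIN", "keep me", "keep me too", "# END"]

An explicit boolean flag gets you the same effect with less head-scratching, which is probably why the operator stays rare, but for quick one-liners over a file it is pleasantly compact.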
Take control of any hereditary genetic risk for cancer or heart disease with genetic testing for these conditions

Genetic testing, also known as DNA testing, can help you or family members identify whether you are at risk of developing common hereditary cancers or heart conditions. Understanding your genetic risk affords you the opportunity to prevent disease or detect it at an earlier, treatable stage. This is not to be confused with ancestry DNA testing, which helps to identify your origins. The genetic testing we offer helps estimate your chance of developing cancer or cardiac problems in your lifetime and so allows you to take proactive steps.

We offer a variety of packages that cover a wide range of genetic testing:

- BRCA1/BRCA2: detects cancer markers that significantly increase your risk of breast and ovarian cancer.
- Color 30 Gene Panel: analyses up to 30 genes associated with the most common hereditary cancers, including breast, colorectal, melanoma, endometrial, prostate, ovarian, stomach and uterine.
- 77 Cardiac Genes: covers heart disorders such as arrhythmia, cardiomyopathy, arteriopathy and blood pressure.

It's also possible to do combined genetic testing for both cancer risk and heart disease with:

- Extended 30 Gene Panel: combined screening including the Color 30 Gene Panel, BRCA1/BRCA2 and the cardiac profile.

Genetic cancer screening: BRCA and Color 30 Gene Panel

"Both men and women have a one in two risk of being diagnosed with a form of cancer in our lifetime." Cancer Research UK

Proactive genetic testing enables you to learn how your genes could potentially impact your health. It allows you and your healthcare provider to create a personalised health plan designed to prevent, detect and treat cancers early. Today in the UK, women tend to be offered genetic testing for the corrupted gene if cancer runs in the family. By understanding their risk factor, women can be monitored more closely for cancer and, if necessary, have preventative surgery. It can also have a wider-reaching impact and may be important information to share with your relatives. For example, if a man or woman carries a mutation in BRCA1, each of their parents (mother and father), siblings (brothers and sisters) and children (sons and daughters) has a 50% chance of carrying the same mutation.

BRCA gene testing

The name "BRCA" is an abbreviation for "BReast CAncer gene." BRCA1 and BRCA2 are two different genes that have been found to impact a person's chance of developing breast cancer. A small percentage of people (about 1 in 400) carry mutated BRCA1 or BRCA2 genes. A BRCA mutation occurs when the DNA that makes up the gene becomes damaged in some way. When a BRCA gene is mutated, it may no longer be effective at repairing broken DNA and helping to prevent breast cancer. Due to this, people with a BRCA gene mutation are more likely to develop breast cancer, and more likely to develop cancer at a younger age. The carrier of the mutated gene can also pass the mutation down to his or her offspring. For some people, though, the chances of having a BRCA gene mutation are much higher: genes are inherited, which is why knowing your family history is important when determining breast cancer risks. With early detection, the vast majority of breast cancer cases can be successfully treated, and that's true even for people who have a BRCA1 or BRCA2 mutation.

Who should get BRCA testing?
In the UK, around 1 in every 300-400 people carries a harmful BRCA mutation, so everyone can take this test for peace of mind. However, a person is considered at high risk for BRCA mutations if they have a family history of:

- Breast cancer diagnosed before age 50.
- Male breast cancer at any age.
- Multiple relatives on the same side of the family with breast cancer.
- Multiple breast cancers in the same woman.
- Both breast and ovarian cancer in the same woman.
- Ashkenazi Jewish heritage.

Those with BRCA1 mutations are at significantly greater risk of developing certain cancers in their lifetime:

- Chances of getting ovarian cancer range between 10% and 60%, compared to a 2% risk for those without the mutation.
- Women have between a 45% and 90% risk of getting breast cancer, compared to 12.5% for those without the mutation.

Whether you are considered at high risk for BRCA mutations or not, this genetic test might change your life.

Color 30 Gene Panel Test: Hereditary Cancer Test

Understanding your genome structure provides many added benefits even if you don't think there is a hereditary link. The tests we provide offer you the opportunity to get a comprehensive view of your health risks and to work with our consultants to build a proactive health plan for your future. Our tests provide:

- Clear results about the presence or absence of any mutations that increase your risk for developing cancer
- Detailed information on how your mutation status might affect relatives
- Personalised reports tailored to your personal health and family cancer history
- Support before, during and after you get your results.

The Color 30 Gene Panel Cancer Test analyses 30 genes for mutations that could increase your risk for hereditary breast, colorectal, melanoma, ovarian, pancreatic, prostate*, stomach, and uterine cancers.

Genetic testing for cancer predisposition gives you personalised screening and prevention treatment

For most cancers, 10-15% of cases are due to inherited genetic mutations. Using just a sample of your saliva, we can now analyse for certain gene mutations that contribute to many common cancers. Knowing you have a mutation that increases your risk allows you and your healthcare provider to create a personalised plan designed to prevent cancers like breast, ovarian, colorectal and pancreatic, or to detect them at an earlier, more treatable stage. Prevention can be life-changing for you and your family! Knowing you have a genetic mutation may be a piece of important information to share with your relatives. For example, if a man carries a mutation in BRCA1, each of his children has a 50% chance of carrying the same mutation.

"Genomics has the potential to transform the delivery of care for patients, which is why the NHS has prioritised it in its Long Term Plan." Prof Dame Sue Hill, Chief Scientific Officer of NHS England

Genome Testing & Heart Disease Prevention

Heart disease in many cases is caused by a combination of lifestyle choices: being overweight, having an unhealthy diet, smoking and drinking all increase your risk of suffering from this disease. However, studies show that for one in 200 people it can have a genetic basis. As research and technology advance, we are beginning to identify more of the genetic mutations that cause these heart conditions. Knowing if you are at risk allows you to take early action on lifestyle change and can help you reduce your risk of developing this disease.

What is Inherited Heart Disease?
Inherited heart disease incorporates a range of conditions affecting the heart and circulatory system, can affect people of any age, and can be life-threatening. As there are many different types of inherited heart conditions, diagnosing the exact disease and gene mutation is key to providing effective treatment.

The cardiac test covers 77 genes that are associated with an increased risk of the following types of heart disorders:

- Arrhythmia
- Cardiomyopathy
- Arteriopathy
- Genetic forms of high blood pressure and high cholesterol

Genetic testing in this area provides you with the opportunity to determine if you have an increased risk of developing any of these conditions. The information gathered from your results can then be used in combination with family and personal health history to create a personalised health plan.

How Does DNA Testing Work?

The process for gathering a sample for your test is quick, simple and painless: all we need is a saliva sample. Below are step-by-step instructions for completing your test.

Request your kit
Contact our team and request a BRCA1 and BRCA2 test, 30 Gene Panel or Genetic Cardiac saliva test kit. We can then post it to you, or you can pick one up from the Centre at your convenience.

Provide a saliva sample
Use the tube in your kit to provide a saliva sample and complete the request and consent form. You can do this at home or at the Centre.

Return the kit
You can either return the kit to the Centre, where we will be happy to post this for you, or you can post the sample pack free of charge at any post box.

Receive your results
Your sample will be sequenced and your genes analysed. Results are usually reported within 4-5 weeks after receipt in the laboratory. Urgent BRCA results are delivered in 2 weeks.

Who can use the DNA test kit

Please be aware that these tests are only available to those aged 18 years or above.

How to read the genetic cancer screening results
<urn:uuid:37e26801-e72c-42d1-957d-0b00fac7cdb9>
CC-MAIN-2020-16
https://www.womenswellnesscentre.com/genetic-dna-testing/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371880945.85/warc/CC-MAIN-20200409220932-20200410011432-00553.warc.gz
en
0.933186
2,212
2.796875
3
TAX LAWS - NETHERLANDS

From 1 January 2001, the Netherlands has a new tax system, involving substantial changes to income tax. The new system creates a robust taxation system with a broader base and lower rates: a taxation system that addresses future developments. The objectives of this revision include:
- stimulation of employment opportunity, and strengthening of the Netherlands' economic structure and international competitive edge;
- reduction of the burden of taxation on labour;
- promotion of sustainable economic development ('greening');
- creation of a balanced and just burden of taxation;
- broadening and strengthening of the taxation base, through reduced and amended deductions;
- promotion of emancipation and economic independence;
- simplification of the taxation system.

In order to stimulate the economy and employment opportunity, the basic rates of taxation are lowered. Work is made more attractive by the introduction of an 'employment rebate': people in paid employment enjoy a tax advantage in the form of a fixed non-taxable deduction. The reduction in taxation on labour is financed by reductions in expenditure and by increases in indirect taxes, such as VAT and environmental levies. By lowering the taxation on income from employment, together with a shift from direct to indirect taxation, the Netherlands' economic structure and its international competitive edge will be strengthened. Greater emphasis on environmental levies will make a significant contribution towards achieving sustainable economic development.

TAXES ON INCOME AND PROFITS

Income tax is a tax on a natural person's annual income, levied at a progressive rate. Personal circumstances are taken into account when assessing the amount of tax to be paid, and certain expenses are tax-deductible. The scheme provides for a personal allowance, the amount of which depends on individual circumstances. There are four tax rates: 32.35%, 37.60%, 42% and 52%. The first two rates include both tax and social security contributions; the last two rates consist solely of tax.

Income tax has two advance levies: a payroll tax and a dividend tax. The payroll tax and the social security contributions are levied jointly on earned income or benefits. The employer or body paying the benefit deducts the tax and contributions directly from the pay or benefit, and remits these to the Tax Department. Many natural persons pay only payroll tax and are not subject to an income tax assessment. For natural persons with a high income or many tax-deductible items, the payroll tax serves as an advance levy, and they are subsequently issued with an income tax return and an assessment.

The other advance levy for income tax is the dividend tax. Under the present Income Tax Act, taxation on income from investments is based on the assumption that people have a taxable return of 4% on their stocks and other shares; shareholders are not separately liable for income tax on the actual dividend they receive. For non-residents, the dividend tax levied on a dividend is in principle a final levy. Tax conventions generally provide for a rate lower than the standard 25% dividend tax rate.
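To see how a four-rate progressive levy of this kind works in practice, here is a minimal sketch. The rates are those quoted above; the bracket boundaries are hypothetical placeholders, since the article does not state the actual 2001 band limits, and the `income_tax` function name is my own.

```python
# Minimal sketch of a progressive tax using the four rates quoted above.
# NOTE: the bracket upper limits (in guilders) are hypothetical
# placeholders; the article does not give the actual 2001 band limits.
BRACKETS = [
    (32_000, 0.3235),       # first two rates include social security
    (65_000, 0.3760),       #   contributions as well as tax
    (110_000, 0.42),        # last two rates consist solely of tax
    (float("inf"), 0.52),
]

def income_tax(taxable_income: float) -> float:
    """Tax each slice of income at its band's rate and sum the slices."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

# Under these placeholder bands, an income of NLG 80,000 is taxed at
# 32.35% on the first 32,000, 37.60% on the next 33,000, and 42% on
# the remaining 15,000.
```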
Corporation tax is levied on the taxable profit of both private and public companies. Foundations (called 'stichtingen' in Dutch) may also be liable for corporation tax. An important feature of corporation tax is the participation exemption, which ensures that corporation tax is levied only once on the profit obtained within a group: a company receiving dividends does not have to pay corporation tax on them, since the tax has already been paid by the company distributing the dividends. Corporation tax is levied at a rate of 35%; the first NLG 50,000 (EUR 22,689) of taxable profit is taxed at a rate of 30%.

The Inheritance Tax Act provides for two taxes: inheritance tax and gift tax. These taxes are, in general, to be paid by the recipient. There are substantial exemptions from both inheritance tax and gift tax. There are no exemptions from inheritance tax payable upon the inheritance or donation of specific assets, for example property. The rates are the same for both taxes, and depend on the value of the assets received and the relationship between the giver and the recipient. There is a minimum and a maximum rate.

Tax on games of chance
The tax on games of chance is levied on prizes that exceed NLG 1,000 (EUR 454). The rate is 25%. The organization awarding the prize generally pays the tax, and the winner receives a net prize.

TAXES AND DUTIES ON GOODS AND SERVICES

Import duty is levied on goods imported from countries outside the EU. It usually amounts to a percentage of the value of the goods being imported. Various rates are applicable, determined by the EU; the rates are usually lower for minerals or raw materials, and higher for finished products. The revenue is destined for the EU.

Value added tax
Value added tax (VAT) is a general consumer tax included in the price consumers pay for goods and services. Consumers pay this tax indirectly, and companies remit the tax to the Tax Department. All companies pay VAT, although there are a few exceptions. The VAT paid by one company to another may be reclaimed from the VAT to be paid to the Tax and Customs Administration. There are three rates of VAT:
- a general rate of 19%;
- a lower rate of 6%, applicable mainly to food and medicines;
- a zero rate, applicable mainly to goods and services in international trade, so that goods can be exported free from VAT.
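The reclaim rule above is what makes VAT a tax on value added rather than on gross turnover: each company remits the VAT it charged on sales minus the VAT it already paid on purchases. A minimal sketch, using the 19% general rate from the list above with invented invoice amounts:

```python
import math

# Sketch of the VAT remittance mechanism described above: a company
# remits output VAT (charged to customers) minus input VAT (paid to
# suppliers and reclaimable). Rate from the article; amounts invented.
GENERAL_RATE = 0.19

def vat_to_remit(sales_ex_vat: float, purchases_ex_vat: float,
                 rate: float = GENERAL_RATE) -> float:
    output_vat = sales_ex_vat * rate      # charged to customers
    input_vat = purchases_ex_vat * rate   # reclaimable on purchases
    return output_vat - input_vat

# A company with NLG 100,000 of sales built from NLG 60,000 of inputs
# effectively pays VAT only on the NLG 40,000 of value it added:
assert math.isclose(vat_to_remit(100_000, 60_000), 40_000 * GENERAL_RATE)
```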
On private cars and motorcycles
The tax is included in the price a buyer pays when purchasing a new private car or motorcycle, and is usually paid by the manufacturer or importer. The tax rate depends on the net listed value of the private car or motorcycle; a higher rate applies to private cars with a diesel engine. For imported vehicles the tax percentage is reduced according to their age. The minimum tax rate is 10% of the net listed value of the vehicle, unless it is more than 25 years old.

There are several environmental taxes in the Netherlands. Suppliers or users of mineral oil and other fuels have to pay fuel tax. Taxes have been levied on the withdrawal of groundwater and the disposal of waste since 1 January 1995. A regulatory energy tax came into force on 1 January 1996, and a tax on tap water supplies was introduced on 1 January 2000.

OTHER IMPORTANT TAXES AND DUTIES

Social security contributions
In addition to income tax, everyone pays social security contributions on their income. The contributions are deducted or levied at the same time as the payroll tax and income tax: employers deduct these contributions directly from employees' pay, while self-employed persons must pay income tax and social security contributions themselves.

There are also employee insurance schemes for persons in paid employment. Social security schemes are applicable to the entire population, and everyone can make use of the facilities funded by the contributions. Contributions are made to three social security schemes:
- the Widows' and Orphans' Benefits Act (ANW): persons who have been widowed or orphaned receive an ANW benefit;
- the General Old Age Pensions Act (AOW): everyone who reaches the age of 65 receives an AOW pension;
- the Exceptional Medical Expenses Act (AWBZ): persons who incur medical expenses not reimbursed by a health insurance fund or a private medical insurance scheme receive an AWBZ benefit.

Under the WAZ (Invalidity Insurance Self-Employed Persons Act), which has been in effect since 1998, entrepreneurs unable to work as a result of an illness or handicap are entitled to benefit. Contributions to the WAZ are collected by the Tax Department. Employers make contributions to the disability benefits schemes for their employees.

Excise duty is levied on certain consumer goods, i.e. petrol and other mineral oils, tobacco products, and alcohol and alcoholic beverages. A special consumer tax is levied on non-alcoholic beverages. Excise duty, like VAT, is included in the price consumers pay for these goods; the manufacturers and importers of the goods liable to excise duty remit the tax.

Taxes on legal transactions
Three taxes on legal transactions are levied in the Netherlands: transfer tax, insurance tax and capital duty. Transfer tax is levied on the acquisition of property located in the Netherlands, at a rate of 6% of the market value of the property. Insurance tax is levied on insurance premiums at a rate of 7%; the following types of insurance are exempted: life insurance, accident insurance, invalidity insurance, disablement insurance, medical insurance, unemployment insurance and transport insurance. Capital duty is levied when capital is contributed to companies located in the Netherlands whose capital consists of shares. The rate is 0.55%, and the tax due is calculated on the value contributed (assets less liabilities) or on the nominal value of the shares, whichever is higher. In certain circumstances, an exemption is made for mergers or reorganizations.
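The three transaction taxes reduce to flat-rate rules, with capital duty applying a higher-of comparison. Here is a minimal sketch of those rules; the rates come from the paragraph above, while the function names and arguments are my own illustration:

```python
# Sketch of the three taxes on legal transactions described above.
# Rates are from the article; names and arguments are illustrative.

def transfer_tax(market_value: float) -> float:
    """6% of the market value of property acquired in the Netherlands."""
    return 0.06 * market_value

def insurance_tax(premium: float, exempt: bool = False) -> float:
    """7% on premiums; the life, accident, medical and similar
    insurance types listed above are exempt."""
    return 0.0 if exempt else 0.07 * premium

def capital_duty(assets: float, liabilities: float,
                 nominal_share_value: float) -> float:
    """0.55% on the contributed value (assets less liabilities) or
    the nominal value of the shares, whichever is higher."""
    return 0.0055 * max(assets - liabilities, nominal_share_value)
```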
Motor vehicle tax
With the exception of buses, motor vehicle tax is paid on vehicle ownership. The amount depends on the type and weight (sometimes gross weight) of the vehicle and, in the case of private cars, also on the type of fuel the vehicle uses. Furthermore, the amount for private cars and motorcycles depends on the province in which the owner is resident or established. Buses are instead charged a levy on the use of the roads.

Tax on heavy vehicles
The tax on heavy vehicles (also known as the eurovignette) is a tax on vehicles with a gross weight of 12,000 kg or more, levied for the use of motorways in the Netherlands. The tax has to be paid before the vehicle uses the motorway. There are two rates of tax, based on the number of axles: one for three axles or fewer, and another for four axles or more. Both rates are further divided into three rates according to the engine's emission class: non-EURO, EURO I, and EURO II and cleaner. The tax can be paid daily, weekly, monthly or annually. A similar tax, based on a directive of the European Union and a treaty, is levied in Belgium, Denmark, Germany, Luxembourg and Sweden.

Corporation tax is levied on companies established in the Netherlands (resident taxpayers) and on certain companies not established in the Netherlands which receive income in the Netherlands (non-resident taxpayers). In this context, the term 'company' includes companies with a capital consisting of shares, cooperatives, and other legal entities conducting business. The main types of companies referred to in the Corporation Tax Act are the public company (NV) and the private company with limited liability (BV). Whether a company is deemed to be established in the Netherlands depends on the individual circumstances; relevant factors include the location of the effective management, the location of the head office, and the location of the shareholders' general meeting. Under the Corporation Tax Act, all companies incorporated under Dutch law are regarded as being established in the Netherlands.

TAX BASE AND RATES

Corporation tax is levied on the taxable amount, which is the taxable profit made by the company in a particular year less deductible losses. The taxable profits are the profits less tax-deductible donations. In principle, the profits should be calculated in accordance with the provisions laid down in the Income Tax Act for determining the business profits of natural persons; in certain cases, additional stipulations in the Corporation Tax Act also apply. The taxable profit has to be computed in guilders or euros, although under certain conditions taxpayers are allowed to compute their taxable profit in another currency (the 'functional currency') for a period of at least 10 years. Corporation tax is levied at a rate of 30% for taxable profits up to NLG 50,000 (EUR 22,689), and at 35% to the extent that taxable profits exceed that level.
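The two-bracket rule just described translates directly into a small calculation. A sketch using the figures above (amounts in guilders; the function name is my own):

```python
# Sketch of the corporation tax computation described above:
# 30% on the first NLG 50,000 of the taxable amount, 35% on the rest.
LOWER_BAND = 50_000.0  # NLG (EUR 22,689)

def corporation_tax(taxable_profit: float,
                    deductible_losses: float = 0.0) -> float:
    """Taxable amount = taxable profit less deductible losses."""
    taxable_amount = max(taxable_profit - deductible_losses, 0.0)
    lower = min(taxable_amount, LOWER_BAND)
    return 0.30 * lower + 0.35 * (taxable_amount - lower)

# A taxable amount of NLG 80,000 bears 0.30*50,000 + 0.35*30,000:
assert corporation_tax(80_000) == 0.30 * 50_000 + 0.35 * 30_000
```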
Taxpayers: residents and non-residents
Under the present Income Tax Act, residents are liable for income tax on their worldwide income, while non-residents are taxed only on income from a limited number of sources in the Netherlands. Non-residents residing in an EU Member State, or in a country with which the Netherlands has concluded a double taxation convention providing for the exchange of information, may opt to be taxed under the provisions of the Income Tax Act that apply to residents. The Netherlands has concluded many double taxation conventions to prevent the double taxation of worldwide income; if no convention is applicable, tax relief may be obtained on the basis of the Unilateral Decree for the prevention of double taxation. (If certain requirements are met, foreign employees temporarily posted to the Netherlands may request the application of a special tax arrangement known as the 30% rule, see 4.4.)

The legal definition stipulates that someone's place of residence is determined 'according to circumstances'. Several factors are relevant when deciding whether someone maintains personal and economic ties with the Netherlands, including a family home, employment, or registration in a municipal register. Nationality is not a determining factor, but it may be relevant in some cases. The law also provides for a number of special cases: the crews of ships and aircraft with a home harbour or airport in the Netherlands are deemed to be residents of the Netherlands unless they have established residence abroad, and Dutch diplomats and other civil servants serving abroad remain residents of the Netherlands. Foreign diplomats and the staff of certain international institutions are exempt from Dutch income tax.

People pay tax individually as far as possible. Partners therefore pay tax on their own income and can only use their own deductible items. However, some income and deductible items are joint. Joint income and deductible items can be divided between the partners in any proportion they choose, as long as 100% of the income and deductible items is declared. This choice applies, among other things, to the notional rental value for owner-occupiers and the deductible items relating to the owner-occupied dwelling, childcare expenses, and items that come under the personal deduction. Partners who are married or have registered their partnership at the Records Office are automatically each other's partners for tax purposes (unless they are permanently separated); partners living together have to meet certain conditions in order to be treated as partners for tax purposes.

TAX RATES AND TAX CREDITS

The amount of tax owed is calculated by applying the tax rates to the taxable income; the result is then reduced by one or more tax credits. Everyone is entitled to a general credit on the tax owed, the general tax credit. Additional credits over and above this are available, depending on personal circumstances. The general credit is NLG 3,473 (EUR 1,576). For individuals with income from current employment, the credit is increased by a maximum of NLG 2,027 (EUR 920). For taxpayers with children under 27 living at home, the credit is increased by NLG 2,779 (EUR 1,261); for single parents in paid employment with children under 12 living at home, this amount is increased by a further maximum of NLG 2,779 (EUR 1,261). For people aged 65 and over, the credit is increased by NLG 520 (EUR 236), unless their income in boxes 1, 2 and 3 exceeds NLG 61,052 (EUR 27,704).
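Putting the figures above together: gross tax comes out of the rate bands, and the applicable credits are then subtracted. A minimal sketch under the amounts quoted above (in guilders); the function and flag names are my own, and the employment credit is modelled at its maximum even though the actual amount is income-dependent:

```python
# Sketch of how the tax credits listed above reduce the tax owed.
# Amounts (NLG) are from the article; for simplicity the employment
# credit is taken at its maximum, though it is income-dependent.
GENERAL_CREDIT = 3_473           # everyone
EMPLOYMENT_CREDIT_MAX = 2_027    # income from current employment
CHILD_CREDIT = 2_779             # children under 27 living at home
SINGLE_PARENT_EXTRA_MAX = 2_779  # in work, children under 12 at home
OVER_65_CREDIT = 520             # unless box 1-3 income > NLG 61,052

def tax_owed(gross_tax: float, *, employed: bool = False,
             child_at_home: bool = False,
             working_single_parent: bool = False,
             over_65_below_limit: bool = False) -> float:
    credits = GENERAL_CREDIT
    if employed:
        credits += EMPLOYMENT_CREDIT_MAX
    if child_at_home:
        credits += CHILD_CREDIT
    if working_single_parent:
        credits += SINGLE_PARENT_EXTRA_MAX
    if over_65_below_limit:
        credits += OVER_65_CREDIT
    return max(gross_tax - credits, 0.0)

# e.g. an employed taxpayer with a child at home and NLG 20,000 of
# gross tax owes 20,000 - (3,473 + 2,027 + 2,779) = NLG 11,721.
```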
In recent decades, there has been a growing tendency in the politics and philosophy of ecology that can only be described as anti-humanist. While this doesn't constitute a coherent movement as such, there are several ideological assertions that crop up repeatedly across the entire spectrum of anti-humanist thought. The anti-humanist argument tends to include the following ideas in one shape or another:
- Humans are uniquely destructive.
- The impact of human civilization on the environment is "unnatural."
- Humans (both as a species and as individuals) bear a moral responsibility (or guilt) for their environmental impact.
- Any view that sees humans as exceptional due to their intelligence is hubristic.

In its most extreme forms – which are far more common than one would hope, particularly on the Left – this takes on the shape of "humans are a virus" and "the world would be better off without humans."

Before I continue, I'd like to make it perfectly clear that I take the scientific reality of anthropogenic climate change as a given. I'm not interested in questioning whether human civilization is having an impact on the environment of Earth; but I am interested in the historical, scientific and ethical context of this impact. Moreover, I think there is an unassailable case to be made for the idea that humanity is exceptional, precisely because only human beings are capable of making an ethical assessment of their environmental impact.

But let us begin with historical context. In fact, let us go back to the beginning of life on our planet: the origin of oxygen.

When we discuss anthropogenic climate change, we're discussing changes in the chemical composition of the environments of the Earth (such as the atmosphere and the oceans) due to the activity of a species, and the impact of that activity on other species. We're talking about terraforming. But terraforming the Earth is hardly a uniquely human activity.

One of the characteristics we associate most closely with the life-rich environment of modern Earth is its oxygen-rich atmosphere. But when life first developed on this planet, there was no free oxygen on Earth at all. That changed with the Great Oxygenation Event, approximately 2.3 billion years ago.

Oceanic cyanobacteria, having developed into multicellular forms more than 2.3 billion years ago (approximately 200 million years before the GOE), became the first microbes to produce oxygen by photosynthesis. Before the GOE, any free oxygen they produced was chemically captured by dissolved iron or organic matter. The GOE was the point when these oxygen sinks became saturated and could not capture all of the oxygen that was produced by cyanobacterial photosynthesis. After the GOE, the excess free oxygen started to accumulate in the atmosphere.

The increased production of oxygen set Earth's original atmosphere off balance. Free oxygen is toxic to obligate anaerobic organisms, and the rising concentrations may have wiped out most of the Earth's anaerobic inhabitants at the time. Cyanobacteria were therefore responsible for one of the most significant extinction events in Earth's history. In other words: long before any humans evolved on this planet, cyanobacteria radically terraformed the Earth (triggering an ice age) and simultaneously caused an "ecocide" that utterly dwarfs the extinctions caused by human activity.
This – although other examples of organisms spreading suddenly or developing features destructive to others are plentiful – by itself demolishes the first argument, of humanity's unique destructiveness. In fact, humanity is merely one of many species to outcompete others on a large scale, altering its environment and causing extinctions. This is not a moral judgement; it is merely historical fact.

It is worth investigating the moral question, however. Given their actions, do we:
- hold cyanobacteria morally responsible for their impact?
- consider cyanobacteria to be "unnatural" because of this outsized impact?

The commonly given answer to both of these questions appears to be "no." Why?

In the case of the first question, the answer would appear to be that cyanobacteria are simply not moral agents. Lacking intelligence, they cannot have or be asked to have a code of ethics; they are incapable of morality or immorality. However, if cyanobacteria are held to be blameless, and humans are not, then there must be some significant difference in the moral nature of human beings and cyanobacteria. In fact, if we extend this to other species that have caused terraforming or extinction, and of all these find only humans to be morally responsible, then clearly human beings must somehow be extraordinary.

As for the second question, we must ask ourselves: what is unnatural? Is this term actually philosophically useful? The case of the Great Oxygenation Event illustrates the problem: if the impact of humans and cyanobacteria is in some way comparable, why is one natural, and the other unnatural? It cannot be their destructiveness, as they have that in common. It cannot be their changing of the environment, as they also have that in common. Are human beings not animals? Why would they exist outside of nature – in fact, how would it even be possible to exist outside of nature? Neither the building of colonies, nor the production of tools, nor the changing of the environment to fit one's needs, nor an accidental impact on the environment, nor indeed the destruction of other species are unique to human beings. Either cyanobacteria are unnatural, or human beings are natural, or the term is useless.

Speaking of nature, where does the idea originate that the extinction of species is immoral? Anti-humanists will frequently deride human beings as not only a pest, a virus, a plague, but also as criminals; as murderers. But if we observe the history that preceded us – if we, that is, take a non-human-centric view of planetary history – then it becomes abundantly obvious that extinction is the order of the day. More species are extinct than live today, yet no other species are seen as particularly morally bankrupt for their participation in the continual holocaust that is the process of evolution.

Just as importantly, extinction events occur without any interference from living beings at all. In anti-humanist writing, reference is frequently made to humans upsetting a natural "balance." Yet 66 million years ago, the Cretaceous-Paleogene extinction event wiped out 75% of all species on Earth. Before that, 252 million years ago, the Permian-Triassic extinction event wiped out a stunning 90-96% of all species. (The latter may have partially involved microorganisms known as methanogens.) How do we assess the morality of such events?
If we assert the existence of some kind of natural order, or even the existence of some sort of controlling force ("Mother Nature"), then we must come to the conclusion that mass extinction and terraforming are, in fact, part of the plan. If we attempt to derive a code of ethics from pre-human natural history, that code cannot classify the actions of modern-day human beings as crimes. If such a force as "Mother Nature" truly existed, human beings would be perfectly in keeping with her previous methodology.

But what if we do, in fact, assert that the destruction of other species is not desirable? What if we express the concept that life, in its beauty and diversity, is valuable? As we have seen above, it is impossible to attribute such ethical concepts to Nature. There has never been a balance for humans to disrupt; the history of life is the history of constant disruption, extinction, destruction. Therefore we must acknowledge that the love of Life – not of individual life, but of the concept itself – is a wholly human trait. What distinguishes us from other species is not our destructiveness (which is very common) or our tendency to terraform (also very common), but our utterly unique ability to question and evaluate our impact on the biosphere.

It is important to acknowledge that human beings do occupy a different moral position than cyanobacteria. But that does not make humans transgressors against some imagined natural order. Humans are not setting out to cause destruction; we are, like so many species before us, setting out to live, to thrive. Like the cyanobacteria before us, we are very successful at it, and that's having side effects. But what's truly remarkable, what's truly unique, is that we human beings can use our capacity to reason to observe material processes and seize control of them in order to change outcomes. We may not be unique in our ability to destroy, but we are unique in our ability to care.

The anti-humanist meme that "humanity is a virus" has little to offer us. Those who ahistorically and unscientifically suggest that humanity is a unique threat to the biosphere are perpetuating the same old myth of Original Sin that has so long been used to stifle the ingenuity of Homo sapiens by those who profit from scarcity and fear.

There is a much better metaphor for what human intelligence could truly mean for this planet. For billions of years, this planet has been ravaged by extinction events. From bolide impacts to supervolcanoes to, yes, the effects of particularly successful species, the biosphere has been violently assaulted. Life on this little rock is under constant threat – by threats far worse than human beings. If you truly value Life, and are not merely lost in a haze of misanthropy, then you cannot simply shrug off these threats. If you truly value Life, then you must recognize that humanity's capacity for reason does not represent a threat, but a possibility.

The dinosaurs didn't have a space programme. Bacteria cannot tell what their chemical processes are doing to the atmosphere. No other species even has the concept of "protecting the environment." Humanity's unique capacity for moral judgement on an abstract level, our equally unique productive capabilities, our insights into the origins of natural phenomena – all these signify that for the very first time in the history of the planet, there exists a species that can express belief in the value of Life and take meaningful action to protect Life. We are not a virus. We're an immune system.