On 30 March 1976, Palestinian citizens of Israel declared a general strike and held large demonstrations against land expropriations by Israeli authorities in the Galilee.
Now observed annually as Land Day, these events marked the first organized popular rebellion by Palestinians inside present-day Israel, a community that had undergone three decades of disenfranchisement and intimidation.
In 1948, Zionist militias, which would later constitute the Israeli army, occupied the majority of historic Palestine.
Using force and the threat of force, some 750,000 Palestinians were expelled.
Those who remained in the territory then unilaterally declared as Israel were granted Israeli citizenship, but the new authorities imposed military rule on them that was not lifted until 1966.
Even after military rule, systematic Israeli attempts to squelch Palestinian dissent and colonize both land and minds continued.
The Zionist project is fixated on controlling as much land as possible with as few Palestinians on it as possible. It has used both naked violence and legal frameworks to gradually reduce Palestinian land ownership in present-day Israel to just a tiny fraction of what it was before 1948.
Land Day was an act of resistance to an Israeli government plan to confiscate thousands of acres in the north of historic Palestine.
But it was also a form of collective defiance against attempts to erase Palestinian identity. The workers and farmers Israel had tried to turn into obedient subjects took to the streets en masse on 30 March to fight for their lands and to take control of their destiny.
The Palestinian villages of Sakhnin, Arraba and Deir Hanna — known as the Land Day Triangle — were the most affected by the confiscation plans and witnessed the most violence.
In total, six Palestinians were murdered by Israeli police on that day.
They were Khadija Shawahna, a 23-year-old farmer who was killed by an Israeli bullet while looking for her brother among the demonstrators; Khader Khalayleh, shot with a bullet to the head as he tried to help a wounded teacher and protester; Khayr Yasin, shot dead by Israeli soldiers during an unarmed protest in Arraba; Raja Abu Rayya, killed by soldiers after defying a curfew to protest the killing of Khalayleh; Muhsin Taha, a 15-year-old boy killed during a large protest in the village of Kufr Kana near Nazareth; and Rafat Zuhairi, a student and refugee killed by soldiers who raided the town of Taybeh before a demonstration.
The grievances and injustices that sparked the protests in March 1976 linger throughout Palestine today. But those injustices are not exclusive to Palestinians.
Four decades later, on the other side of the globe, at least three prominent indigenous land defenders were assassinated for resisting the onslaught of multinationals on their rivers and forests.
The assassination of indigenous Honduran environmental activist Berta Cáceres on 3 March captured international attention. She was the cofounder of the Civic Council of Popular and Indigenous Movements of Honduras.
She was well known for organizing campaigns against hydroelectric power projects, particularly against construction of the Agua Zarca dam on the territory of the indigenous Lenca people.
Cáceres was shot dead in her home at La Esperanza one day shy of her birthday. She had long complained of death threats from the police, army and corporations.
But she would not be the only victim of state-backed corporate and police brutality that month.
Less than two weeks after her death, fellow activist Nelson Garcia was shot in the face and killed by unidentified gunmen after spending the day with the Río Chiquito community.
More than one hundred Honduran police and military officers had evicted dozens of Lenca families from their land.
On 12 March, another activist, Walter Méndez Barrios, was assassinated near his home in Guatemala. He had been a prominent environmental leader who fought against deforestation and hydroelectric projects, and for community-based, sustainable forest management.
Forefront of struggle
Indigenous activists in Latin America are at the forefront of the struggle to save Mother Earth and prevent the privatization of natural resources, the dispossession of rural communities and the exploitation of the most vulnerable under the guise of growth and development. And thus they bear the brunt of repression.
A report published by the group Global Witness found that of the 116 environmental activists known to have been killed in 2014, 40 percent were indigenous and three-quarters were in Central and Latin America, amid disputes over mining, agri-business and hydroelectric power.
These activists pay the price for leading the fight against a deadly neoliberal assault, protected by state terror and on many occasions directly backed by the United States, as in the case of Honduras.
In fact, Berta Cáceres had singled out Hillary Clinton for her support as secretary of state of the 2009 coup in Honduras and the subsequent whitewashing of atrocities in its wake.
The social movements sprouting in Central and Latin America grasp the multiple facets of their fight and the need to connect the struggle against the corporations seizing their lands with resistance to capitalism, imperialism, patriarchy, militarism and environmental destruction.
A feminist, an anti-capitalist and a staunch opponent of US imperialism, Berta Cáceres was acutely aware of the intersection of these battles and repeatedly called for solidarity between social movements around the world.
Her internationalist perspective was not mere rhetoric, but resulted in action, as it led her to create bonds between her movement and other grassroots movements outside Honduras.
Her perspective has expanded even beyond Latin America.
For Palestinians, Honduras and Guatemala might seem too distant, even too irrelevant for our struggle. And while there are some apparent stark differences in our lived realities and in the faces of our oppressors, there are commonalities as well.
In Palestine as well as in many parts of Central and Latin America, the oppression is directly sponsored by US military and financial aid. And in all these places our collective survival rests upon defending and preserving our land.
Similarly, our struggle for self-determination is inseparable from the struggle against capitalism and militarism.
This does not mean that the forms of resistance employed in Central and Latin America should simply be copied in Palestine or vice versa. Rather, it means that we can create strategic alliances that draw from our respective experiences and build a global movement.
To survive, repressive regimes collaborate with one another and to defeat them, oppressed peoples have to create networks of solidarity.
Transnational corporations are so good at blurring borders to increase their profits; we should break those same borders to create a decolonized, more humane, just and diverse world.
“Berta was a force rooted in the past and imagining a different, decolonized future, free of the three systemic forces she routinely identified as her true enemies: capitalism, racism and patriarchy,” freelance journalist and friend Jesse Freeston said of Cáceres.
To honor her, and to confront those who murdered her, it is necessary to step up the fight and to revive the spirit of Land Day in Palestine and Honduras and throughout the Middle East and the Americas.
Pragmatic speech refers to the social or conversational skills of a person. It is a person’s ability to use language appropriately with consideration for their audience and the situation. It is also referred to as “pragmatics.” Speech therapy for pragmatic skills can help individuals improve their ability to communicate effectively and appropriately in social situations, which can help improve their relationships with others and their quality of life.
Speech therapy for pragmatic skills is a type of speech therapy that focuses on helping people learn to communicate more effectively with other people in a way that is natural and appropriate for the given situation.
Pragmatic skills include nonverbal communication, such as facial expressions, gestures and body language, as well as verbal communication, such as using an appropriate tone when communicating.
Speech therapy for pragmatic skills focuses on improving social communication and the ability to express oneself. Pragmatic skills are an important part of daily life and can be challenging to learn if one has a condition that makes it difficult to understand the nuances of human interaction, such as autism spectrum disorder. Therapy sessions typically focus on group discussions and role-playing exercises where individuals engage in deliberate practice.
Pragmatic skills cover a wide range of real-world communication tasks, including social interactions and storytelling. Speech therapy can be an effective way to improve pragmatic skills in children who have difficulty with them.
Speech therapy for pragmatic skills is a style of speech therapy that focuses on communication skills. It works on nonverbal cues as well as verbal ones. It also focuses on higher-level skills like inferencing and perspective taking.
Speech therapy for pragmatic skills is a treatment approach geared toward improving a person’s ability to communicate in their daily lives. It’s often used to address problems with social interaction, like difficulty asking for help or making friends.
Speech therapy for pragmatic skills can be helpful for people who have had a stroke, brain injury, or other neurological condition that has affected their ability to communicate. It can also be useful for people with developmental disorders, including autism spectrum disorder (ASD).
Speech therapy for pragmatic skills is a type of speech therapy that focuses on the practical application of language. In the United States, it is often used to help students with autism spectrum disorders or other developmental disabilities.
Speech therapy for pragmatic skills is aimed at correcting a client’s pragmatic skills, which relate to social communication. The skills include eye gaze, body posture, gestures and facial expression, vocal tone and pitch, turn-taking, and more. Speech therapists work with clients on these skills in order to help them communicate more effectively in a variety of contexts.
Pragmatic skills are social skills that are used to communicate with others. These skills include following rules, understanding non-verbal cues and knowing how to start, maintain and end a conversation.
These skills help individuals to successfully participate in social interactions, which is an important part of our everyday lives. Individuals who have difficulty with pragmatic skills may be perceived as being rude or socially awkward. This can result in decreased self-esteem and cause difficulties making friends, being employed and participating in the community.
Speech therapy for pragmatic skills involves learning rules that help individuals improve their interpersonal communication skills. The speech therapist will work with the individual to create goals that relate to improving the ability to interact with others.
I’ve always had a special space in my heart for Two Bad Ants by Chris Van Allsburg because of the memories associated with it. Somehow, my teacher knew how to reach me on my first truly horrible big kid bad day. She helped me calm down with a picture book, a new book that had just come into the library. I think it’s a good reminder of how picture books can touch the lives of big kids. Just because we can read chapter books does not mean we need to cut out the picture books!
Not only did this book help me when I first read it, but it also has great writing to learn from. Two Bad Ants is a great example of how a picture book (with illustrations that tell half of the story) can help older students learn to write with creative word choice.
Examining Word Choice in Two Bad Ants
Two Bad Ants is so much fun because the illustrations are essential to truly understanding the story. In his words, Chris Van Allsburg explains an adventure the ants go on: through the woods and up a mountain to find a sparkling treasure. The true journey and destination, however, are very clear to the reader who also has the illustrations at his fingertips: the “woods” are really blades of grass. The mountain is really a brick wall.
Just as with A Tale of Two Beasts that I discussed yesterday, this is a perfect opportunity to discuss different perspectives. This book is told from the ants’ perspective. To the ants, the grass stems are (what we would call) the woods. The wall is like a mountain.
What other ways could your students describe the items in the kitchen from the ant’s perspective? Brainstorm with them to get the ideas flowing. For example, the sparkling crystal could be a “diamond,” the strange red glow could be “fire” and the waterfall could be a river.
Creating a New Adventure with Strong Word Choice
As a creative writing prompt, I think Two Bad Ants has a lot of potential. What other characters would have a similar foreign experience in a setting that is unfamiliar to them? What creative words could students use to describe the new setting?
Although this post is referring to word choice a lot, what the book and the writing prompt are doing is encouraging the students to think in terms of metaphor. A metaphor is defined as “a word or phrase for one thing that is used to refer to another thing in order to show or suggest that they are similar” (Merriam-Webster.com).
Download the Lesson Idea
Once again, I’ve put together some simple worksheets to go along with the process I describe above. If you’d like it, download it from my shop!
Welding refers to all processes for joining metals by localized fusion of the parts to be joined. It can be carried out with or without filler material added to the parts, and the filler may be of the same type as the base metal or a different one. All of these systems have reached a high degree of technological maturity, producing reliable, durable joints with highly repeatable quality. According to how the fusion is produced, they are grouped into the following categories:
- Torch welding: performed using the flame of a torch to melt a filler metal at the joint, and the flame can also melt the joint area of the parts themselves. The filler can be the same metal as the parts or a rod with a high silver content (low melting point), used for welding thin sheets, delicate areas or parts made of different metals.
- Arc welding: fusion is produced by striking a high-current, low-voltage electric arc between the area to be joined and an electrode. It can be done with filler material, either with a consumable electrode that melts and is deposited in the weld area, or with a non-consumable electrode of a high-melting-point metal (TIG) and filler supplied on rods; it can also be done without filler, under an atmosphere of inert gas (TIG).
- Resistance welding: consists of compressing a small area of two thin sheets between two electrodes through which an electric current is passed under controlled parameters, melting and fusing the two sheets together at the point where the electrodes apply pressure.
- Laser welding: fundamentally a weld without filler material, offering high accuracy and excellent finish and quality. It allows welding in three dimensions.
No one is born with the ability to ride a skateboard, surf or even stand on their tiptoes. Unlike other mammals, human beings have no balance at birth – virtually no capacity to walk or even stand. Before that can happen, their vision, hearing, muscles, bones and brain must develop. This takes months, and for some activities, even years.
Infants typically begin rolling over when they’re 6 months old. They generally start to crawl by 9 months, and stand around a year old. By 18 months old, most can walk alone and go up steps. By age 2, toddlers can perform more complex tasks, such as kicking a ball. By 3 years old, most children run well and can walk up and down stairs with one foot on each stair. Some children reach these milestones faster, and some are slower, and that’s normal.
Balance is a skill
As you get older, you may notice that some people are really good at keeping their balance. They can dance well, jump rope and do somersaults. But they were not born with this ability. Instead, it took practice. Balance is a skill – the more you practice any skill, the better you become, though some people may be more naturally adept at it.
As a physical therapist for over 15 years, I’ve seen patients of all ages who struggle with balance, and I’ve learned that it takes three of the body’s systems working together to keep a person in good balance: the visual, somatosensory and vestibular systems.
The visual system includes the eyes, the optic nerves that connect the eyes to the brain, and the brain’s visual cortex. Babies are born nearsighted, able to see only about 10 to 12 inches away. As their visual system develops, their brain learns how to process visual information, so they get better at moving and balancing.
The somatosensory system registers sensations detected by the muscles, joints, skin and the body tissues that connect them, called the fascia. These perceptions of touch, pressure, pain, temperature, position, movement and vibration travel via pathways in the spinal cord, brain stem and thalamus – a small, egg-shaped structure in the middle of the human brain – where they are integrated and analyzed.
For example, when a baby tries to stand, their brain processes the feelings coming from their feet, legs and hands to help them balance.
The vestibular system, which is the body’s system of hearing as well as balance, consists of five distinct organs in the ear. Inside these organs there is fluid, which moves when the body and head move. As this fluid moves, it sends signals to the brain, which in turn makes a person aware of their position and helps them balance.
Healthy individuals rely roughly 70% on somatosensory information, 20% on vestibular system information and 10% on vision to maintain balance on firm surfaces.
Abnormality in any one of these three systems may result in balance problems. But when one system is affected, the other two can be trained to compensate.
There are many ways to lose one’s balance. Standing on slippery ice, the sensory receptors in the feet are unable to send appropriate signals to the brain quickly enough for the brain to activate muscles to maintain balance.
For many people, walking in the dark means risking a fall because the brain is receiving so little visual information about the environment. People with poor or no eyesight learn to rely more on the other two sensory systems to maintain balance.
When something knocks a person off balance, such as being bumped while walking or running, it can cause something called a “vestibulospinal reflex.” The vestibular and somatosensory systems send signals to the brain, which in turn activate the appropriate muscles to save the person from falling.
As people get older, their balance often declines due to age-related changes to their muscle strength and vision, as well as other causes. This increases their risk of falling. In fact, falls are a leading cause of physical injuries for adults 65 years and older. Older adults can work on balance, strength and flexibility exercises as a way to prevent falls.
People can also have trouble with balance due to neurological problems, arthritis and joint injuries.
Learning better balance
All of this explains why it’s necessary to practice if you want to improve your balance. For example, gymnasts who practice walking on narrow beams continuously challenge their somatosensory and vestibular systems. This trains their brains to respond to very subtle changes, which means they get better and better at staying on their toes.
People are sometimes born with disorders or developmental problems, such as cerebral palsy, that affect the visual, vestibular or somatosensory systems. Infants with such issues ideally start physical therapy very early, which allows them to achieve developmental milestones – from holding their heads up to standing and moving independently.
When I treat people with balance problems, I begin by evaluating whether their somatosensory system is working properly, and I ask about injuries to muscles or bones. Depending on what the problem is, we may do simple exercises such as standing or marching in one place, and progress to more difficult exercises such as walking fast or walking while talking.
Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.
And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.
Gurpreet Singh, Assistant Professor of Physical Therapy, Binghamton University, State University of New York
Globally, 690 million people suffer from hunger and three billion cannot afford healthy, nutritious food. Despite this, the Food and Agriculture Organization of the United Nations (FAO) estimates that one-third of all food fit for consumption is either lost during the production or shipping process or wasted. Access to healthy food is another issue altogether, with many cities in particular facing an increase in the number of urban areas known as “food deserts”, areas with limited access to affordable, nutritious food.
Not all solutions to our global food crisis require advanced technological solutions. It may not be entirely obvious to many of us but some of the most nutrient-rich foods originate from forests: nuts, berries, mushrooms and plants are just some examples. What if we used this invaluable forest ecosystem in a whole different setting, our cities, to provide nutritious, healthy and locally produced food for urban residents?
In the run-up to the Food Systems Summit, UNECE and FAO are releasing a new video introducing the concept of “food forests” – a way of combining agriculture and forestry in an urban environment to create edible landscapes. By mimicking how plants grow naturally on multiple layers within a forest, food forests consist of a canopy with tall fruit and nut trees, shrubs and bushes which bear fruit, a layer including herbs and vegetables, and ground-hugging plants, vines and roots. In addition to being less maintenance-heavy than crops, these food forests boost biodiversity, contribute to food security, and help build more sustainable and resilient communities.
Cities worldwide are the most active in experimenting with food forests and urban gardens to tackle food availability and food deserts. The city of Atlanta, USA, for example, has a local food forest which provides a variety of nuts, fruits, vegetables, and mushrooms, all freely available to its residents. Many European cities are also taking action. For example, in Switzerland, Greece, Spain, and the United Kingdom, local authorities are increasingly turning to community orchards and urban gardening as a way to connect residents to local food sources and build awareness of how the food on our tables every day is produced. These edible landscapes are not only an integral part of our food systems but bring together diverse stakeholders for more sustainable food production and consumption, increased biodiversity in our cities and environmental benefits for years to come.
Learn more about food forests by watching the video.
Learn more about the Food Systems Summit here.
Professor Philip Williamson from our Department of History shares his insights on the history of royal funerals.
Great royal events in the United Kingdom are often a mix of old and new, and the commemoration and funeral of Queen Elizabeth II will be no exception. While there will be several strikingly new features, the seemingly traditional elements are not as old as they may appear, and some newer elements are revivals from the past.
The modern history of royal occasions is one of innovation and tradition to preserve the monarchy’s popularity and relevance. Public service and the monarch’s ability to represent the whole nation have become the main themes.
The organisation of public mourning for Elizabeth II, which began with her death on September 8 and will end after her funeral on September 19, is a huge national undertaking. However, the funerals of sovereigns have not always been public spectacles.
Since the 18th century, all British monarchs were buried at Windsor and for a long period, funeral ceremonies took place within Windsor Castle.
Changes began with the death of Queen Victoria in 1901. In part, this was in recognition of her long reign of 63 years. But it was also a culmination of efforts, noticeable at her jubilees in 1887 and 1897, to make the monarchy more public. This was to encourage greater popular attachment towards the royal family in a society which was becoming more democratic – and potentially more critical of an ancient and privileged institution.
The day of Queen Victoria’s funeral was proclaimed a day of national mourning, during which all work ceased. This was done in the expectation that many people would attend memorial church services – which then was the chief means to express public grief and respect.
For the first time after a monarch’s death, the Church of England issued special commemorative services for use in all its local places of worship, and the leaders of most other religious communities in the United Kingdom also encouraged the organisation of local memorial services.
Everywhere, church and chapel services were crowded. That Queen Victoria died at her home on the Isle of Wight created the opportunity for further mass demonstrations of grief, as her coffin, on its route to Windsor, was carried in a long and slow procession across London, through streets lined with huge crowds. Public processions have remained central to later funerals of monarchs, though now focused around Westminster.
Following the deaths of Victoria’s successors, further measures were taken to involve the public.
When Edward VII died in London in 1910, a public lying in state at Westminster Hall was introduced. His son, George V, insisted that access should be “democratic” and nearly 300,000 members of the public paid their respects by filing past the coffin. He also asked that all local memorial services on the day of the funeral begin at the same time as the service at Windsor, to create simultaneous national participation in the prayers of commemoration.
For George V’s own funeral in 1936, the day of mourning was replaced by a national two-minute silence to avoid loss of work during a time of economic depression. The silence also linked the king’s death with the annual mass ritual of remembrance of the dead of the first world war.
His lying in state was attended by over 750,000 people. Radio broadcasts created a vast audience for the public ceremonies, in a new form of mass participation.
For the commemoration in 1952 of King George VI, who had achieved great public prominence during the second world war, two further additions were made.
After the funeral at Windsor, a special remembrance service was held at St Paul’s Cathedral, attended by members of the government, parliament and other national leaders. The memorial services and funeral procession in London became the first royal events to be broadcast by television as well as radio.
Many facets of royal commemorations since 1901 remain integral to the arrangements in 2022 but there are new elements.
Some of these features result from advances in television and electronic media, others are a tribute to an even longer reign than Queen Victoria’s.
The state of the union of the United Kingdom has also influenced the plans that the civil service and Buckingham Palace have maintained and regularly revised since the 1930s for “the demise of the Crown” – known more recently under the code name “Operation London Bridge”.
The union has weakened since 1952, with the development of independence parties and devolved administrations. The plans include events to help sustain the monarchy’s position in the different parts of the union during the delicate transition between sovereigns. As such, the new king and queen consort will attend “national” memorial services in Scotland, Wales and Northern Ireland.
The unexpected element was the Queen’s death in Scotland, which has enabled the organisation of a well-publicised and televised journey through numerous communities. It has also led to a procession and an additional public lying-at-rest of the coffin in Edinburgh to supplement the memorial service in St Giles’ Cathedral.
Another factor that has influenced new additions is the public expectation that royalty should be more accessible and visible, which they have become under Queen Elizabeth II.
The transition from Elizabeth II to Charles III seemed likely to be delicate because of recurrent criticisms of the royal family, including the new king. Recent troubles relate to the Duke and Duchess of Sussex and the Duke of York, but deeper concerns date from the breakdown of the King’s first marriage, the popularity of Diana and the outpourings of grief after her death in 1997.
Consequently, additional opportunities have been created for national leaders and the public to emphasise their respect for the monarchy. The return of state funerals to Westminster Abbey, which had been common until 1760, was probably long planned.
The Abbey can accommodate a larger congregation than St George’s Chapel Windsor and its central location allows more people to watch the procession – as was witnessed at Diana’s funeral and the funeral of Queen Elizabeth the Queen Mother in 2002.
Moving the national service of remembrance at St Paul’s Cathedral from after the monarch’s funeral to the day after the Queen’s death has provided a sharper focus for the start of national mourning. The new king’s broadcast address, the first broadcast of an accession council and the unusually early televised message of the king’s receipt of the condolences and congratulations of Parliament were all designed to ease the change of sovereign in the public consciousness.
There will now be a one-minute silence on the Sunday evening before the funeral, as well as a two-minute silence on the funeral day itself. A revival of the national day of mourning will also increase public involvement, allowing huge audiences to watch the televised funeral ceremonies and bringing massive crowds to the procession route and screening points in London.
Both the great popular admiration for the late Queen and the successful presentation of her commemoration can be measured by the extent to which members of the public are prepared to express their respect. Huge numbers of people are expected to queue for many hours, perhaps even overnight, to attend her lying-in-state for five days in Westminster Hall – just as there were long queues for the lying-at-rest in St Giles’ Cathedral. On the day of the funeral, even greater numbers are predicted to travel to London to witness the processions and ceremonies in and around Westminster Abbey to say goodbye.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Moose are among the world’s largest and most majestic creatures. Despite their impressive size, moose inhabit a surprisingly wide range of environments around the world. This article will explore where these animals live, discussing both their natural habitats as well as areas where they have been introduced.
Moose can be found in many different types of ecosystems, from boreal forests to wetlands and even mountain ranges. They prefer habitat that provides access to abundant food sources such as shrubs and aquatic plants for browsing, with open meadows or clearings for bedding down during winter months. In addition to this preferred habitat type, moose may also inhabit areas near human populations if appropriate conditions exist.
The presence of moose is not only limited to North America; across Europe and Asia there is evidence of historical occurrence as well as recent reintroduction efforts by wildlife conservationists. Additionally, isolated pockets of wild moose can still be found throughout Siberia and Mongolia today.
Ultimately, understanding the habitats favored by this species helps us better understand its needs and how we can help support healthy populations into the future.
Moose are one of the largest species in the deer family, with males weighing up to 1,800 pounds. They inhabit many regions across North America and Eurasia, giving them a wide habitat range that is estimated at 9 million square miles.
Although they live in a variety of habitats types including boreal forest, tundra, aspen parkland and taiga, moose prefer natural habitats such as swamps and wetlands due to their marshy nature which provides greater access to food sources.
Moose are also able to adapt quickly to changing environments; for example, some populations have been observed living near urban areas where there is an abundance of vegetation available for grazing.
The moose’s range extends from Alaska through northern Canada into parts of New England in the United States. The most southern point within their habitat range is located in Colorado and Utah; however, this area does not typically provide suitable conditions for moose during winter months. In addition to these locations, isolated pockets of moose can be found throughout Europe ranging from Scandinavia all the way down to Spain.
Given its vastness and global presence, it’s no surprise that moose are widely studied by wildlife biologists who seek to better understand their behavior and how best to conserve their natural habitats. By understanding more about this iconic species we can ensure their continued survival in our increasingly urbanized world.
Alaska Moose: Giants of the North – Discover the awe-inspiring Alaska moose, one of the largest species of deer on the planet. Explore its habitat, behavior, and role in Alaskan ecosystems.
Moose are usually browsers, which means that they use their long muzzle and prehensile upper lip to pluck vegetation from trees, shrubs and other plants. Moose have a wide range of dietary requirements and feed on over sixty different species of browse.
Common items in the moose diet include willow, aspen, birch twigs, maple leaves, aquatic plants and grasses. In addition to these common items, moose may also consume conifer needles or fruit during certain seasons.
The feeding behavior of moose varies greatly depending on time of year and location. During summer months when food is plentiful, moose can be found grazing throughout the day while browsing at night.
When winter approaches and resources become scarce due to snow cover, moose must adjust their foraging strategies accordingly by selecting foods with a higher nutrient content such as evergreens. Additionally, some individuals will migrate to areas where food sources are more abundant or move into deeper parts of forests in order to find shelter from extreme weather conditions.
In sum, moose possess diverse diets consisting mainly of browse species combined with foraging strategies adapted for seasonal resource availability and weather conditions. Their ability to rapidly select high-quality plant material has enabled them to thrive across many ecosystems around the world.
What Do Moose Eat? Discovering Their Diet – Unravel the dietary preferences of the magnificent moose. Explore their herbivorous diet, including their favorite plants and browse, and understand the importance of food availability for their survival.
Moose can be found across many parts of the Northern Hemisphere, from North America to Europe and Russia. Interestingly, moose have even been reported in Kazakhstan.1 As such, it is important to understand not just where moose live but also their migratory behavior and patterns.
Studying moose migration can provide useful insight into the species’ ecology as well as its ability to adapt to different environments over large geographical areas. Moose are capable of long-distance seasonal movements that often depend on food availability or climate conditions.2 Migration routes differ depending on location; however, some things remain consistent:
- Moose will typically migrate between summering grounds and wintering grounds;
- Females tend to migrate shorter distances than males;
- Longer migrations occur when there is an abundance of food resources available at both ends of the journey.3
Migration events demonstrate a complex interplay between environmental factors, individual physiology, and social interactions within herds. In addition, they may help facilitate genetic exchange among populations separated by distance while allowing individuals to take advantage of local environment features such as nutrition sources or shelter sites.
Ultimately, understanding these dynamics provides greater knowledge about how the species has adapted through time and space.
What Eats Moose? Exploring Their Predators – Delve into the predators that pose a threat to the moose, the largest member of the deer family. Learn about the natural enemies of moose and the dynamics of predator-prey relationships in North America.
Reproduction And Mating
Moose reproduce during the late summer and early fall. During this period, known as the rut, bull moose compete with one another to mate with cow moose in order to produce offspring. Breeding behavior typically begins when bulls are two or three years old and continues until they reach full maturity at age four or five.
- Bulls: compete for mating opportunities; begin breeding at two to three years old and reach full maturity at four to five; show high testosterone levels during peak breeding activity in late summer/early fall.
- Cows: attracted to older bulls and prefer mates with a larger body size; selective of partner and more selective than males.
Bulls have higher testosterone levels during mating season which drives increased aggression towards other males; dominant bulls will establish territories that cows must enter if they wish to breed. Cows tend to be more selective than males when it comes to choosing a mate, often preferring larger bodied bulls as mates.
As peak breeding activity approaches, male moose become increasingly vocal in order to attract females and warn off any potential rivals. In general, reproduction is an essential part of the life cycle of these large mammals, allowing them to persist across different environments around the world.
What Are Moose Senses Like? Understanding Their Perception – Gain insights into the sensory world of moose and explore how their senses of sight, hearing, and smell contribute to their survival. Learn about their remarkable perception and detection abilities.
Anatomy And Characteristics
Moose are large and powerful mammals with a distinct physical appearance. Their bodies are covered in thick fur coats of brown or dark grey, while they have long legs that help them traverse through various habitats.
Moose also possess hoofed feet and the males feature antlers atop their heads which can span up to six feet wide. In addition, moose boast two large nostrils for breathing and sensing danger; this allows them to detect predators from afar more easily than other animals do.
The size of an adult male moose can range between 800-1,200 pounds and stand as tall as 6-7 feet at the shoulders when fully grown. Females tend to be smaller, usually weighing around 500-800 pounds and standing 5-6 feet high at the shoulder.
Both sexes exhibit similar coats of fur across their body but only male moose grow antlers each year during mating season. These antlers may reach lengths of over four feet, making them one of the most impressive features on any species within the deer family.
Overall, it is evident that moose possess many unique characteristics which aid them in survival within different environments throughout North America and parts of Eurasia.
They use their keen sense of smell to protect themselves against predators, while their immense size helps ward off potential threats from afar. Furthermore, their thick coat provides insulation against cold temperatures and protects them from water damage caused by rain or snowfall.
Moose Lifecycle: From Birth to Adulthood – Follow the fascinating journey of a moose from birth to adulthood. Explore their reproductive behavior, growth stages, and the challenges they face as they transition through different life phases.
Predators Of Moose
Moose, the largest members of the deer family and a staple in North American wildlife, have several predators. Wolves, lynx, coyotes, bears and cougars all prey on moose throughout their natural habitats. Knowing more about these animals helps us to better understand how they interact with each other within ecosystems.
Throughout much of its range, wolves are one of the most common predators for moose calves. As apex hunters that specialize in taking down large game such as elk or bison, wolf packs can easily take down an adult moose if there isn’t enough snow cover preventing them from getting up close.
Lynx also frequently hunt small mammals like hares and grouse but will occasionally take down a young moose calf when food is scarce during winter months.
Coyote populations are increasing across northern forests so they too pose a serious threat to both adult and newborn moose alike. While normally targeting smaller prey like rabbits, coyotes can become quite bold when hungry – even making attempts at hunting fully-grown adults!
Bears are another predator of concern and while they don’t actively seek out moose as food due to their massive size, they often scavenge off kills made by other predators or go after weakened individuals struggling through harsh winters. Lastly, big cats like mountain lions typically prefer deer over any other type of animal; however some studies suggest that this species does go after young moose calves every now and then if given the opportunity.
These fierce competitors offer challenge for survival among North America’s iconic ungulates; it’s important not only to know which animals are preying on them but where these predators live in order to protect vulnerable herds from concentrated predation events over time.
Moose Characteristics: An Insight into Their Traits – Discover the physical and behavioral traits that define moose. Learn about their antlers, size, weight, and other unique characteristics that make them iconic symbols of North American wildlife.
The conservation status of moose is variable across their range. In some areas, populations are stable or increasing due to successful management and protection efforts. However, in other regions, the species faces serious threats from habitat loss, climate change and overharvesting. As a result, many subpopulations have become endangered or critically endangered.
In recent years, global conservation efforts have been launched to protect the species and help prevent further population declines. These initiatives include implementing sustainable harvest policies, protecting habitats through land-use zoning regulations, promoting public awareness campaigns, reintroducing captive-bred individuals into the wild and studying population trends to inform future conservation strategies.
It is clear that concerted action must be taken if this iconic animal is to avoid extinction in certain parts of its range. The next step involves assessing current conservation efforts for effectiveness and introducing new methods where necessary to ensure long-term survival of moose around the world.
Where Do Moose Live? Habitat and Distribution – Discover the diverse habitats where moose can be found in North America. Learn about their range, preferred ecosystems, and the factors that influence their distribution.
Moose are an important and impressive species to observe in the wild. Though they are large, powerful animals, their habitats, diets, mating rituals, and conservation statuses can be quite complex and diverse.
While moose live primarily in boreal forests of North America as well as northern Europe and Asia, some migrate seasonally between lowland areas in search of food during winter months.
Furthermore, these mammals have adapted to a variety of climates including mountainsides and tundra regions. Their diet consists mostly of aquatic plants and other vegetation which is why they live near wetlands or water sources when possible.
During mating seasons males will battle for dominance while females give birth after 8-9 months gestation periods. Additionally, moose possess unique characteristics such as long faces with antlers that often reach up to 6 feet wide on mature bulls.
Predators like wolves or bears may hunt them although humans present the most danger due to hunting activities or vehicle collisions. Finally, moose populations remain stable throughout much of their range though there has been concern about decreasing numbers in certain areas from overhunting or habitat destruction.
In conclusion, moose play influential roles within ecosystems around the world through their presence alone yet also by their interactions with prey items and predators alike. Research into this species continues today so we can better understand how best to conserve this important animal for future generations to enjoy.
How Big Are Moose? Understanding Their Size – Marvel at the impressive size of moose and gain a deeper understanding of their physical dimensions. Explore their height, weight, and other measurements that make them one of the largest land mammals.
Bryan Harding is a member of the American Society of Mammalogists and a member of the American Birding Association. Bryan is especially fond of mammals and has studied and worked with them around the world. Bryan serves as owner, writer, and publisher of North American Nature.
Last weekend, northern Greenland’s temperature reached 15 degrees Celsius (60 Fahrenheit), 10 degrees warmer than what it usually is this time of year, according to CNN.
Between July 15-17, Greenland’s ice sheet lost 18 billion tons of ice to melting, with scientists warning that sea level rise is continuing to accelerate.
That ends up being six billion tons per day.
According to the US National Snow and Ice Data Centre's data, the most recent ice melting equates to 7.2 million Olympic-sized swimming pools.
That’s enough to cover the whole of West Virginia in a foot of water.
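As a rough sanity check on those figures, the arithmetic works out as sketched below. The Olympic-pool volume of about 2,500 cubic metres, West Virginia’s area of about 62,755 square kilometres, and the approximation that one tonne of meltwater occupies one cubic metre are assumptions used for illustration, not figures from the article.

```python
# Back-of-the-envelope check of the melt figures quoted above (illustrative assumptions).
melt_tonnes = 18e9            # reported melt for 15-17 July, in tonnes
days = 3

tonnes_per_m3 = 1.0           # ~1 tonne of liquid water per cubic metre (assumption)
pool_m3 = 2_500               # Olympic pool: 50 m x 25 m x 2 m deep (assumption)
wv_area_m2 = 62_755 * 1e6     # West Virginia: ~62,755 km^2 (assumption)
ft_per_m = 3.281

melt_m3 = melt_tonnes / tonnes_per_m3

print(f"Melt per day: {melt_tonnes / days / 1e9:.0f} billion tonnes")           # ~6
print(f"Olympic pools filled: {melt_m3 / pool_m3 / 1e6:.1f} million")           # ~7.2
print(f"Depth over West Virginia: {melt_m3 / wv_area_m2 * ft_per_m:.2f} feet")  # ~0.94
```

Under those assumptions the numbers reproduce the article’s claims: roughly six billion tonnes per day, about 7.2 million Olympic-sized pools, and just under a foot of water over the state.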
Senior research scientist at the National Snow and Ice Data Centre based at the University of Colorado Ted Scambos told the outlet: "The northern melt this past week is not normal, looking at 30 to 40 years of climate averages.
"But melting has been on the increase, and this event was a spike in melt."
Senior scientist in the National Centre for Atmospheric Research’s Climate and Global Dynamics Laboratory William Lipscomb told USA TODAY: "In recent years, we've seen a lot of heat waves in Greenland, this recent warming of it being one example.
"Any temperature above freezing can cause some surface melting."
Last year, Greenland’s ice sheet also experienced unprecedented levels of melting as it reached an all-time record-breaking temperature of 19.8 degrees Celsius (67.6 Fahrenheit), according to The Guardian.
This time last year, the ice sheet lost so much water that it could cover the whole of Florida in two inches (five cm) of water.
Marco Tedesco, a glacier expert at Columbia University and adjunct scientist at NASA, said: “It’s a very high level of melting and it will probably change the face of Greenland, because it will be a very strong driver for an acceleration of future melting, and therefore sea-level rise.”
He added: “We had these sort of atmospheric events in the past but they are now getting longer and more frequent.”
According to a report conducted by the UN's Intergovernmental Panel on Climate Change, even if humankind significantly reduces greenhouse gas emissions, sea levels are expected to rise by half a metre (19.6 inches) by the end of the century.
Getting a vaccination: What were the sensations and what sensory receptors detected them? What is around you? Are you smelling alcohol from how the site of the shot was prepared? How are the smells detected and processed by your olfactory system?

QUESTIONS TO ANSWER:
- Describe the experience in general terms. How would you describe the stimulus and how did it make you feel or behave (act)? You will provide more information as you answer each question below.
- What are the physical characteristics of the physical stimulus? (You might need to do research to find out more about the stimulus, for example, if it was gas burning from a stove, you might want to learn more about the gas and how that gas is processed by the olfactory system.)
- How does the physical stimulus interact with the sensory receptor? How is the stimulus energy transduced into neural energy?
- How is activation of the receptor conveyed to the central nervous system (CNS)?
- Which features of the stimulus (for example, wavelength or intensity or other) are conveyed to the CNS?
- How does the transduced neural signal get to your cerebral cortex? (Include the pathways from receptor, spinal cord, brainstem, thalamus, to the specific lobe of cerebral cortex.)

Feel free to provide images and be sure to explain the relevance of each image to your sensory experience. Cite all sources in full APA format.
Any event that causes damage to brain tissues is considered a brain injury. There are two general categories of brain injury: congenital (before or during birth) and acquired (injury after birth). Damage can occur suddenly (as in trauma or stroke), or more gradually (as in diseases). Damage can result from a traumatic brain injury (TBI), stroke, brain tumor, lack of oxygen to the brain (anoxia or hypoxia), degenerative diseases, encephalopathy, or other causes.
In a stroke, the damage is often localized to a specific area, whereas in anoxia/hypoxia, encephalopathy, degenerative diseases or a TBI, damage is much more widespread which means the symptoms will be more severe and more complex.
The brain is the "executive" of the body, with ultimate control over all functions, including speaking, thinking, moving, swallowing, and breathing. It receives messages, interprets them, and then initiates and monitors responses. Brain injuries are as individual as the people who are injured. The short and long-term effects of a brain injury vary widely depending on the cause, location of the injury, and severity. Understanding a person's personality and cognitive abilities before the injury is vital towards understanding any changes that may have occurred after the injury.
Physical, cognitive, perceptual/sensory and behavioral/emotional changes are common. Even if the symptoms are mild or atypical, every brain injury is a serious medical condition that requires prompt attention and diagnosis. That diagnosis can be complex, as numerous other conditions (such as depression, epilepsy, and post traumatic stress disorder) can have similar symptoms. For a more complete list, please see "Common Symptoms."
The term "traumatic brain injury" (TBI) refers to an injury to the brain that results from a blow to the head (such as in a motor vehicle accident or a fall); it can also be from a penetrating injury such as a gunshot wound. The injury may occur at the site of impact, on the opposite side as the brain rebounds against the skull, or diffusely throughout the brain as a result of twisting and turning on its axis. Injury may occur at the time of impact or it may develop afterwards as a result of swelling and bleeding within the brain.
Physical changes which occur from brain injury, such as weakness and visual changes, are much more visible and therefore more recognized than changes in cognition or behavior. What that often means is that a person who does NOT have overt physical signs of injury may NOT seek help from a doctor or hospital initially; or if he/she does, the hospital stay is often relatively short and may not include any rehabilitation or therapy. Many routine assessments done in the emergency room (CT scan, MRI) will not show any evidence of brain injury - but that does NOT mean there IS no injury!
Cognitive problems may not initially be recognized. However, as a person returns to normal activities these problems may impact how that person functions in daily life. It is not uncommon for a person to attempt to return to work after the initial injuries have healed, only to find he/she is unable to concentrate, remember, self-organize, and complete tasks as easily as before the injury. If the brain injury is not properly diagnosed and treated by professionals trained in brain injury rehabilitation, the person may not be able to return to normal activities.
Sometimes the difficulty is misdiagnosed as psychiatric or even "laziness". The importance of early and proper assessment and treatment by members of a rehabilitation team - physiatrist (M.D. who specializes in rehab medicine), neuropsychologist, physical therapist, occupational therapist, speech therapist, and other team members - is vital to helping survivors of brain injury return to their place in the community. Without this help, it is easy to see how these problems can lead to job loss, changes in relationships, and depression.
Authored by the BIC Team
About Brain Injury was written by Shirley Wheatland, CCC, MS, senior speech pathologist in the acute rehabilitation department of Alta Bates Summit Medical Center and Matthew Harris, PhD, licensed clinical neuropsychologist and an assistant professor at UNC Hospitals in Chapel Hill, NC.
Emily Dickinson Biography, Facts About Her, American Poet
Emily Dickinson, born on December 10, 1830, in Amherst, Massachusetts, is widely regarded as one of America’s greatest poets. Despite being relatively unknown during her lifetime, her poetry gained immense recognition and critical acclaim after her death in 1886.
Dickinson’s literary career began in the 1850s, but she was largely unknown during her lifetime. She wrote nearly 1,800 poems, but only a handful were published while she was alive. It wasn’t until after her death in 1886 that her work gained recognition and started to receive critical acclaim.
Dickinson led a reclusive life, rarely venturing beyond the confines of her family home. She remained unmarried and lived with her parents and younger sister Lavinia for the majority of her adult life. This seclusion allowed her to devote herself entirely to her craft, resulting in a remarkable body of work.
Although Dickinson published only a handful of poems during her lifetime, she left behind nearly 1,800 poems, which were discovered and published posthumously. Her unique style and unconventional use of punctuation and capitalization set her apart from her contemporaries. Her poems often explored themes of death, nature, love, and the human experience, showcasing her profound insight and introspection.
Despite her limited engagement with the outside world, Dickinson corresponded extensively with friends and family through letters. These letters not only provide insight into her personal life but also offer a glimpse into her creative process and the depth of her intellect.
Dickinson’s reluctance to seek publication during her lifetime remains a subject of debate among scholars. Some attribute it to her reclusive nature, while others suggest that she may have been dissatisfied with the prevailing literary conventions of her time. Regardless of the reasons, her decision to keep her work private allowed her to preserve the integrity and authenticity of her poetry.
It was not until the 1890s, when her sister Lavinia discovered her extensive collection of poems, that the world became aware of the literary treasure hidden within the confines of her home. Publication of her poems began in 1890, and since then, Dickinson’s unique voice and poetic style have captivated readers across generations.
Her poems continue to be celebrated for their depth, wit, and profound observations on the human condition. The themes she explored, such as the passage of time, the nature of existence, and the complexities of the human heart, resonate with readers even today. Dickinson’s ability to distill complex emotions and profound thoughts into concise and evocative language remains unparalleled.
Emily Dickinson’s legacy as a pioneer of American poetry is undeniable. Her contributions to literature have had a lasting impact, inspiring countless poets and writers. Her work continues to be studied, analyzed, and celebrated in classrooms, literary circles, and beyond.
Emily Dickinson, a woman ahead of her time, defied societal norms and dedicated herself to her craft. Her reclusive lifestyle may have kept her secluded from the world, but it also allowed her to develop a poetic voice that resonates with readers even today. Her poems, filled with depth, insight, and introspection, have cemented her place in literary history as one of America’s most cherished and influential poets.
Emily Dickinson’s poetry often explores themes of nature, love, death, and spirituality. She had a unique style characterized by short lines, unconventional punctuation, and the use of slant rhyme. Many of her poems were written in the form of quatrains or short lyrics.
Her poetry is known for its enigmatic quality and profound insights into the human experience. It often explores the complexities of emotions, existential questions, and the mysteries of life. Dickinson’s use of language and imagery creates a sense of intimacy and invites readers to contemplate deep philosophical ideas.
Despite her reclusive lifestyle, Dickinson maintained a rich and vibrant inner world, which she expressed through her poetry. She had a close circle of family and friends with whom she corresponded extensively through letters. These letters provide valuable insight into her thoughts, feelings, and creative process.
Emily Dickinson’s impact on American literature cannot be overstated. Her unique voice and poetic style have influenced countless poets and continue to inspire readers today. Her poems are celebrated for their depth, honesty, and timeless relevance. Emily Dickinson remains an iconic figure in the literary canon, leaving behind a legacy that continues to captivate and resonate with audiences worldwide. |
The Canadian Charter of Rights and Freedoms
The Canadian Charter of Rights and Freedoms came into force on April 17, 1982. Section 15 of the Charter (equality rights) came into effect three years after the rest of the Charter, on April 17, 1985, to give governments time to bring their laws into line with section 15.
The Charter is founded on the rule of law and entrenches in the Constitution of Canada the rights and freedoms Canadians believe are necessary in a free and democratic society. It recognizes fundamental freedoms (e.g. freedom of expression and of association), democratic rights (e.g. the right to vote), mobility rights (e.g. the right to live anywhere in Canada), legal rights (e.g. the right to life, liberty and security of the person) and equality rights, and recognizes the multicultural heritage of Canadians. It also protects official language and minority language education rights. In addition, section 25 ensures that the Charter cannot be used to take away from the rights of the Aboriginal peoples of Canada.
The Charter and Canadian society
The Charter regulates interactions between the state (federal, provincial and territorial governments) and individuals. It is, in some respects, Canada's most important law because it can render invalid or inoperative any laws that are inconsistent with its provisions. For more than 20 years, the Charter has been the driving force of change, progress and the affirmation of our society's values. Canadian courts have rendered more than 300 decisions in which they invoke the Charter to bring Canadian laws into accordance with the principles and values of Canadian society.
The Charter has had a major impact on the promotion and protection of human rights in Canada. With respect to language rights, it has reinforced the rights of official-language minorities. With regard to equality rights, it has led to the recognition and enforcement of the rights of a number of minority and disadvantaged groups. In penal matters, the Charter has clarified to a considerable extent the state's powers with respect to offender rights.
Other human rights laws
There are many other laws that protect human rights in Canada. The Canadian Bill of Rights was enacted by Parliament in 1960. It applies to legislation and policies of the federal government and guarantees rights and freedoms similar to those found in the Charter (e.g. equality rights, legal rights, and freedom of religion, of speech and of association). The Bill is not, however, part of the Constitution of Canada.
The federal and provincial and territorial governments have adopted legislation (human rights acts or codes) prohibiting discrimination on various grounds in relation to employment, the provision of goods, services and facilities customarily available to the public, and accommodation. This legislation differs in its application from the Charter's section 15 on equality rights in that it provides protection against discrimination by individuals in the private sector, as well as by governments.
What to do if your Charter rights have been denied
Anyone who believes his or her rights or freedoms under the Charter have been infringed by any level of government can go to court to ask for a remedy. The person must show that a Charter right or freedom has been violated. If the limit is one set out in the law, the Government will have an opportunity to show that the limit is reasonable under section 1 of the Charter. If the court is not convinced by the Government's argument, it can grant whatever remedy it feels is appropriate under the circumstances. For example, a court may stop proceedings against a person charged with an offence if his or her right to a trial within a reasonable time has been denied. A remedy can also be requested from a court if an official acting for the Government violates an individual's rights, for example, a police officer improperly searching for evidence on private property. Finally, if a court finds that a law violates Charter rights, for example the law is found to be discriminatory under the equality rights section, the court can declare that law has no force.
|
Nope, they have fairly good vision in fact!
Groundhogs, also known as woodchucks or whistle pigs, are large rodents native to North America. They belong to the family Sciuridae and are a type of marmot, closely related to squirrels.
Groundhogs are known for being burrowers, creating extensive tunnel systems where they sleep, rear their young, and hibernate during winter. They are also famous for their alleged weather prediction abilities on Groundhog Day, February 2nd.
Groundhogs are robust animals, measuring about 16 to 26 inches in length (including their bushy tail) and weighing between 4 to 9 pounds. They have short, powerful limbs with curved claws that are well-suited for digging. Groundhogs have dense fur that ranges from grayish-brown to reddish-brown in color.
Groundhogs have well-developed eyesight, which is essential for their survival.
They have large eyes that are positioned on the sides of their head, giving them a wide field of view. This placement helps them to detect predators, such as hawks, foxes, and coyotes, from a distance.
Their eyes are also situated high on their head so that they can see while sticking their head out of their burrow.
Groundhogs are primarily herbivores, feeding on a variety of plants, such as grasses, clover, dandelions, and alfalfa. They also eat fruits, vegetables, and occasionally insects. Their sharp incisors and strong jaw muscles allow them to efficiently bite and chew their food.
Groundhogs are diurnal animals, meaning they are active during the day and sleep at night. They are most active in the early morning and late afternoon when they forage for food. They tend to be solitary creatures, with the exception of the mating season and when raising their young. Groundhogs are known for their ability to whistle, which they use as an alarm call to warn other groundhogs of potential danger.
During the winter months, groundhogs enter a state of hibernation, which is a deep, prolonged sleep that allows them to conserve energy. They will prepare for hibernation by digging a separate burrow used solely for this purpose. Before entering hibernation, groundhogs will eat large amounts of food to build up their fat reserves. During hibernation, their body temperature, heart rate, and breathing rate all decrease significantly.
Groundhog Mating and Reproduction
Groundhogs mate in the early spring, shortly after emerging from hibernation. The female gives birth to a litter of 2 to 6 pups after a gestation period of about 32 days. The young groundhogs, called kits or chucklings, are born blind and hairless. They will develop their vision and fur within a few weeks and will stay with their mother for about two months before venturing off on their own.
Groundhog Day is a popular tradition in the United States and Canada, celebrated on February 2nd each year. According to the folklore, if a groundhog emerges from its burrow on this day and sees its shadow, it will retreat back into its burrow, signaling six more weeks of winter. If the groundhog does not see its shadow, it is said to predict an early spring.
Human Interactions with Groundhogs
Groundhogs have been known to cause damage to gardens and crops due to their burrowing and foraging habits. They can also undermine building foundations and create tripping hazards with their burrow entrances. In some cases, groundhogs are considered pests and may be trapped or removed to prevent further damage.
In conclusion, groundhogs are not blind. They have well-developed eyesight that plays a crucial role in their survival. Here are ten quick facts about groundhogs:
1. Groundhogs are rodents native to North America.
2. They are also known as woodchucks or whistle pigs.
3. Groundhogs have well-developed eyesight.
4. They are primarily herbivores, feeding on a variety of plants.
5. Groundhogs are diurnal animals, active during the day.
6. They hibernate during the winter months to conserve energy.
7. Groundhogs mate in the early spring, giving birth to 2 to 6 pups.
8. Groundhog Day is a popular tradition celebrated on February 2nd.
9. Groundhogs can cause damage to gardens, crops, and building foundations.
10. They are not considered endangered and have a stable population.
Is it OK to let a groundhog live in your yard?
It is generally safe and acceptable to allow a groundhog to live in your yard as long as it is not causing damage to your property or posing a threat to human or pet safety. However, it is important to keep in mind that groundhogs are wild animals and should be observed from a safe distance.
Can a groundhog hurt you?
Groundhogs are not typically aggressive towards humans and will usually try to flee if they feel threatened. However, they do have sharp teeth and claws and can bite or scratch if cornered or provoked. It is best to give groundhogs their space and avoid approaching or handling them.
Should I leave the groundhog in my yard?
If the groundhog is not causing any harm or damage to your property, it is generally recommended to leave it alone and let it continue to live in its natural habitat. If it is causing damage or posing a threat to your safety, it may be necessary to contact a professional for assistance in relocating the animal.
Are groundhogs aggressive to humans?
Groundhogs are generally not aggressive towards humans, but they may become defensive if they feel threatened or cornered. It is important to give them space and avoid approaching them, especially during mating season or when they have young.
Can a groundhog hurt a dog?
Yes, a groundhog can potentially hurt a dog if it feels threatened or cornered. Groundhogs have sharp teeth and claws that they can use to defend themselves. Additionally, groundhogs can carry diseases that can be transmitted to dogs through bites or scratches. It is important to keep dogs away from groundhogs and other wild animals to prevent any potential harm.
Can groundhogs see at night?
Yes, groundhogs have good night vision and can see in low light conditions. |
In general, spectroscopy is the science of studying the interaction between matter and radiated energy, while spectrometry is the method used to acquire a quantitative measurement of the spectrum. Spectroscopy (scopy means observation) does not generate any results; it is the theoretical approach of the science. Spectrometry (metry means measurement) is the practical application where the results are generated: the measurement of the intensity of the radiation using an electronic device. The terms are often used interchangeably, but not every form of spectrometry is spectroscopy (mass spectrometry, for example, is spectrometry but not spectroscopy).
X-ray spectroscopy is a general term for several spectroscopic techniques for characterization of materials by using x-ray excitation. When an electron from the inner shell of an atom is excited by the energy of a photon, it moves to a higher energy level. Since the process leaves a vacancy in the electron energy level from which the electron came, the outer electrons of the atom cascade down to fill the lower atomic levels, and one or more characteristic X-rays are usually emitted. As a result, sharp intensity peaks appear in the spectrum at wavelengths that are a characteristic of the material from which the anode target is made. The frequencies of the characteristic X-rays can be predicted from the Bohr model. Analysis of the X-ray emission spectrum produces qualitative results about the elemental composition of the specimen.
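As a rough illustration of the Bohr-model prediction mentioned above, the short sketch below applies Moseley's law (the Bohr model with a screening constant of about one for the K shell) to estimate K-alpha line energies for a few elements. The choice of elements and the simple screening assumption are illustrative; tabulated reference energies differ slightly from these estimates.

```python
# Rough illustration (not part of the article): estimating K-alpha line energies
# with Moseley's law, which follows from the Bohr model with a screening
# constant of ~1 for the K shell. Real tabulated values differ slightly.

RYDBERG_EV = 13.6  # hydrogen ground-state binding energy in electron volts

def k_alpha_energy_ev(z: int) -> float:
    """Approximate K-alpha photon energy (eV) for atomic number z."""
    # n = 2 -> n = 1 transition, effective nuclear charge (Z - 1)
    return RYDBERG_EV * (z - 1) ** 2 * (1 - 1 / 4)

for element, z in [("Fe", 26), ("Cu", 29), ("Mo", 42)]:
    print(f"{element} (Z={z}): ~{k_alpha_energy_ev(z) / 1000:.2f} keV")
# The estimates land close to the tabulated Fe, Cu and Mo K-alpha energies,
# which is why peak positions in an X-ray spectrum identify elements.
```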
|
Importance and Issues
- American pikas (Ochotona princeps) depend on mountain ecosystems and are extremely temperature-sensitive. Due to a high body temperature and low upper lethal temperature, pikas have difficulty thermoregulating and rely on crevices and cavities in rocks for cover and shade. They cannot tolerate high temperatures for more than a few hours. Pikas are considered early warning signs for warming in western North America.
- Localized extirpations of the American pika have been documented in isolated areas of their range, and some scientists believe that these extirpations may be due to increasing temperatures.
- Climate change is predicted to result in high summer temperatures and reduced snowpack in many areas, both of which are expected to negatively affect pikas and their habitat.
- Crater Lake and Lassen Volcanic national parks have typical pika habitat of high elevation talus fields. Craters of the Moon National Monument and Preserve and Lava Beds National Monument consist of lower elevation lava flows, but they also provide pikas with a unique habitat type.
- Long-term monitoring is essential for our understanding of pikas, and for their long-term viability in our parks.
- Determine the status of pika site occupancy at selected Pacific West Region parks.
- Determine the trend in pika site occupancy at selected Pacific West Region parks.
Pika Monitoring Videos
Monitoring the American Pika at Craters of the Moon National Monument and Preserve. Long-term monitoring is important to understand the pika's sensitivity to climate change, and to detect changes in pika populations over time.
Last updated: December 3, 2018 |
Distracted driving occurs when motorists allow other activities to pull their focus from the road. Distractions prevent drivers from anticipating and responding to hazards in time to prevent a collision.
Many people underestimate the risk that distractions pose to motorists, pedestrians and other people on the roadways. All drivers should understand the dangers before getting behind the wheel.
How dangerous is distracted driving?
The National Highway Traffic Safety Administration (NHTSA) reports that, in 2019, distracted driving resulted in more than 3,100 fatalities and accounted for 15% of all police-reported automobile collisions.
Of the many types of distractions, texting is the most prevalent and the most dangerous. Sending a text requires motorists to take their eyes off the road for about five seconds. At 55 mph, this is the equivalent of driving the length of a whole football field. According to the NHTSA, texting while operating a vehicle is six times more dangerous than driving under the influence of alcohol.
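As a quick check of the football-field comparison above, the sketch below works through the arithmetic using the figures quoted in the text (55 mph, a five-second glance). The 360-foot field length, which includes both end zones, is an added assumption.

```python
# Quick arithmetic check (illustration only) of the "football field" comparison:
# distance covered during a five-second glance at a phone while driving at 55 mph.

MPH_TO_FT_PER_S = 5280 / 3600  # feet per second per mile per hour

speed_mph = 55
glance_seconds = 5

distance_ft = speed_mph * MPH_TO_FT_PER_S * glance_seconds
football_field_ft = 360  # 120 yards, including both end zones (assumed here)

print(f"Distance covered: {distance_ft:.0f} ft")         # ~403 ft
print(f"Football field:   {football_field_ft} ft")
print(f"Ratio: {distance_ft / football_field_ft:.2f}x")  # a bit more than one field
```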
How can drivers prevent distractions?
One of the most important things drivers can do to reduce distractions is to place their mobile phones in Do Not Disturb mode. Motorists should also avoid other distracting behaviors, including:
- Eating and drinking
- Adjusting navigation, entertainment or climate control settings
- Listening to loud music or other audio
- Talking with passengers
Though people cannot control what other motorists do on the road, they can control the distractions within their own vehicles. If an accident still occurs, motorists may be able to collect compensation for any injuries or damage. |
Jane Austen’s “Pride and Prejudice” unfolds against the backdrop of Regency-era England, offering a witty and insightful portrayal of societal norms, love, and individual growth. At the center of this literary masterpiece is Elizabeth Bennet, a character renowned for her intelligence, wit, and independent spirit. In this comprehensive character sketch, we explore the multifaceted nature of Elizabeth Bennet and the enduring impact she has had on literature and readers alike.
Intellectual Brilliance: Elizabeth Bennet is distinguished by her intellectual brilliance. In a society that often undervalues women’s intelligence, Elizabeth’s sharp wit and discerning mind set her apart. Her ability to engage in witty repartee and express her thoughts with clarity adds depth to her character.
Independent Spirit: Elizabeth embodies an independent spirit that defies the societal expectations of her time. Unwilling to conform to conventional norms that prioritize social standing and wealth, she seeks personal fulfillment and authentic connections. Her pursuit of independence becomes a central theme in the narrative.
Keen Observational Skills: Elizabeth’s keen observational skills are a driving force in the story. Her ability to read people and situations with accuracy contributes to her wit and perceptiveness. It also plays a pivotal role in her evolving understanding of the world around her.
Moral Integrity: Moral integrity is a cornerstone of Elizabeth’s character. Despite societal pressures and the temptations of financial security through advantageous marriages, she remains committed to her principles. Her refusal to compromise her values adds a layer of strength to her character.
Courage in the Face of Adversity: Elizabeth demonstrates courage in the face of adversity. Whether confronting societal expectations, facing personal biases, or challenging her own preconceptions, she does so with a fearless determination. Her courage becomes a catalyst for personal growth and transformation.
Complex Relationships: The complexity of Elizabeth’s relationships, particularly with Mr. Darcy, contributes to the richness of her character. Her initial prejudice against Darcy evolves as she confronts her own biases and discovers the depth of his character. The nuanced exploration of love and understanding is a key element of Elizabeth’s journey.
Sense of Humor: Elizabeth’s sense of humor is a defining trait. Her ability to find amusement in the absurdities of societal norms and the foibles of those around her adds a delightful and comedic dimension to the story. Her wit becomes a means of navigating the challenges presented by her social environment.
Empathy and Compassion: Elizabeth’s empathy and compassion shine through in her interactions with others. Despite her penchant for satire and critique, she possesses a genuine concern for the well-being of those she cares about. This empathy contributes to the complexity of her character.
Resilience in Love: The resilience Elizabeth displays in matters of the heart is noteworthy. Despite initial misunderstandings and obstacles, her journey toward love is marked by emotional resilience. Her refusal to settle for a marriage devoid of genuine affection reinforces her commitment to personal happiness.
Social Critique: Elizabeth Bennet serves as a vehicle for social critique within the novel. Through her experiences and observations, Austen offers a commentary on the limitations imposed by class, gender expectations, and societal prejudices. Elizabeth’s character becomes a lens through which the reader can reflect on these societal norms.
- Intellectual Brilliance: Elizabeth Bennet is distinguished by her sharp wit and discerning mind.
- Independent Spirit: Her independent spirit defies societal expectations, prioritizing personal fulfillment over conformity.
- Keen Observational Skills: Elizabeth’s keen observational skills contribute to her wit and perceptiveness.
- Moral Integrity: Moral integrity is a cornerstone of Elizabeth’s character, and she remains committed to her principles.
- Courage in the Face of Adversity: Elizabeth demonstrates courage in confronting societal expectations, personal biases, and challenging her own preconceptions.
- Complex Relationships: The complexity of Elizabeth’s relationships, particularly with Mr. Darcy, adds richness to her character.
- Sense of Humor: Elizabeth’s sense of humor adds a delightful and comedic dimension to the story.
- Empathy and Compassion: Her empathy and compassion shine through in her interactions with others.
- Resilience in Love: Elizabeth’s resilience in matters of the heart is noteworthy, refusing to settle for a loveless marriage.
- Social Critique: Elizabeth serves as a vehicle for social critique, offering insights into class, gender expectations, and societal prejudices.
Conclusion: Elizabeth Bennet, the indomitable heroine of “Pride and Prejudice,” continues to captivate readers with her intelligence, wit, and unwavering commitment to personal principles. Her character, a beacon of independence and resilience, transcends the pages of Austen’s novel to become an enduring symbol of timeless values. Elizabeth’s journey of self-discovery, complex relationships, and social critique resonates across generations, making her a beloved and iconic figure in literature. Through Elizabeth Bennet, Austen masterfully navigates themes of love, societal expectations, and personal growth, creating a character whose impact extends far beyond the confines of the Regency era.
Rahul Kumar is a passionate educator, writer, and subject matter expert in the field of education and professional development. As an author on CoursesXpert, Rahul Kumar’s articles cover a wide range of topics, from various courses to educational and career guidance. |
Benjamin Lee is a child psychologist with a special interest in early childhood development. He has written numerous articles on child behavior and development. Benjamin believes in the importance of understanding each child's unique needs and abilities in order to provide the best learning environment.
Hey there! Circle time is a fantastic way to engage preschoolers and create a sense of community in the classroom. It's a time for learning, sharing, and having fun together. Today, I'm going to share some tips and ideas on how to conduct circle time with preschoolers.
First things first, let's talk about the structure of circle time. It typically begins with a welcoming activity or song to set the tone and get everyone excited. This can be as simple as a "Good Morning" song or a fun greeting where each child gets to say hello in their own special way.
Next, you can move on to a calendar or weather discussion. This helps children develop an understanding of time and seasons. You can use a large calendar or a weather chart to involve the children in tracking the days, months, and weather patterns. Encourage them to share their observations and ask questions.
After that, it's time for some interactive learning activities. This is where you can introduce a theme or topic for the day. For example, if you're learning about animals, you can bring in some animal puppets or pictures and have the children guess the animal based on its characteristics. You can also incorporate counting, colors, shapes, or letters into these activities.
Circle time is also a great opportunity for storytelling. Choose age-appropriate books that are engaging and interactive. Encourage the children to participate by asking questions, making predictions, or even acting out parts of the story. This helps develop their listening skills, imagination, and language abilities.
Don't forget to include some movement and music during circle time. Preschoolers love to sing and dance! Incorporate action songs or fingerplays that get them up and moving. This not only helps with their physical development but also keeps them engaged and energized.
Another important aspect of circle time is fostering social-emotional development. Use this time to talk about feelings, friendship, and problem-solving. You can introduce simple mindfulness exercises or breathing techniques to help children calm down and focus.
Lastly, end circle time with a closing activity or song. This signals that circle time is coming to an end and helps transition the children to the next activity. You can use a goodbye song or a closing circle where each child gets a chance to share something they enjoyed during circle time.
Remember, circle time should be flexible and tailored to the needs and interests of your preschoolers. Be creative, have fun, and adapt the activities as necessary. It's all about creating a positive and engaging learning environment for your little ones.
I hope these tips and ideas help you conduct circle time with preschoolers. If you're looking for more specific activities, songs, or curriculum ideas, be sure to check out our website, Preschool Playbook. We have a wealth of resources to make circle time and preschool learning a blast!
|
Orangutans are massively threatened with extinction by deforestation and shrinking habitat. This is why these serene apes with their reddish hair have been on the International Union for Conservation of Nature (IUCN) Red List for years, and it is the number one reason why their numbers continue to decline.
Land grabbing destroys the habitat of orangutans
The forest is the orangutan’s home, yet human activities are shrinking their world at an alarming rate. Between 1972 and 2015 around 10 million hectares of the Borneo orangutan’s habitat was destroyed, leaving just under half of their original distribution area. The main culprits of this are unsustainable practices such as oil palm monocultures, mining, and clear-felling logging; industries which are showing no signs of slowing down despite their irreversible impact on the planet and its wildlife.
The Indonesian government assigns protected areas of forest in which these activities are forbidden, and this serves to secure the remaining orangutan habitat. However, 75% of orangutans on Borneo live outside of these protected areas, and the population is plummeting: an estimated 100,000 Bornean orangutans were lost between 1999 and 2015 (an average of 17 a day!). Because orangutans only give birth once every 6-8 years, each loss can have devastating effects for generations to come.
As their habitat shrinks, orangutans struggle to find food, as they would typically roam long distances in search of seasonal fruit. The lack of available food means that they often ‘trespass’ on human settlements which have encroached upon their habitat. This leads to conflict with locals, and orangutans are often killed to protect crops.
It is difficult to estimate just how many orangutans die as a result of these interactions, but one study estimates that between 1,950 and 3,100 are killed every year in Kalimantan (Indonesian Borneo) alone.
The Loss of a Mother
As a result of their shrinking habitat, orangutan mothers are increasingly confronted by human beings. Many apes do not survive these encounters, and their children become orphans. This makes their prospect of survival very slim, as orangutans rely on their mothers – for protection, comfort and to show them how to thrive in the jungle.
Like humans and other great apes, most orangutan behaviours are not innate; they are learned. This means they need years of guidance before they can build a sturdy nest to sleep in the treetops; before they have the skills to identify and evade predators; before they can be sure that the fruit they choose to eat is delicious and juicy, and not poisonous; and before they know how to use tools to crack open succulent fruity pulps hidden behind rock-hard shells. Given that some 15,000 plant species exist on Borneo, it should come as no surprise that the loss of a mother leaves orangutan babies feeling terrified and helpless.
The ORANGUTAN FOREST SCHOOL
Together with our Indonesian partner organisation, 'Jejak Pulang' (meaning ‘return home’), we have built a FOREST SCHOOL for orangutan orphans where they develop the skills they will need to return to the wild once they reach adolescence.
Eight orphans (five male, three female) from the ages of just 2 to 8 years make up the first cohort of pupils at the 100-hectare FOREST SCHOOL, led by Dr Signe Preuschoft of FOUR PAWS, a primatologist with 20 years’ experience working with great apes.
With a team of ‘foster mothers’ we attempt to impart the knowledge the orphans need to survive on their own in the wild. The bond between a mother and a child can never be replaced, but the aim of the project is to give our orangutans the best possible chance of surviving, thriving, and eventually reproducing so that they can create an unshakable bond with their own child.
Read more about our FOREST SCHOOL here. |
In recent years, online gaming has become a global phenomenon, captivating millions of players across various platforms. Beyond providing entertainment, online games have also caught the attention of scientists and researchers who are delving into the intricate ways in which gaming affects the human brain. Contrary to the traditional belief that gaming is a mere pastime, studies now reveal that engaging in online games can have profound cognitive benefits, stimulating different areas of the brain.
- Cognitive Skills Enhancement:
Online games require players to navigate complex virtual worlds, make split-second decisions, and strategize in real-time. These cognitive demands contribute to the enhancement of various mental skills. Problem-solving, critical thinking, and decision-making are constantly put to the test as players tackle challenges and overcome obstacles within the game environment.
Research suggests that playing online games can significantly improve spatial awareness and hand-eye coordination. The need to coordinate visual information with physical actions in a fast-paced gaming environment exercises the brain in a way that traditional learning methods may not. This has led to the integration of gaming elements in educational settings, harnessing the power of interactive experiences to enhance learning outcomes.
- Memory Boost:
Online games often feature intricate storylines, detailed environments, and numerous characters, requiring players to remember and recall information quickly. The constant engagement with these elements can contribute to the improvement of both short-term and long-term memory.
Studies have shown that regular gaming can stimulate the hippocampus, a brain region associated with memory formation and consolidation. As players navigate through game scenarios, they are required to remember rules, strategies, and the spatial layout of virtual environments, providing a continuous workout for their memory functions.
- Social Interaction and Emotional Resilience:
Contrary to the stereotype of gamers as solitary individuals, many online games encourage social interaction and collaboration. Multiplayer games, in particular, create virtual communities where players communicate, strategize, and form alliances. This social aspect of gaming has been linked to the development of interpersonal skills and the ability to work effectively within a team.
Furthermore, online gaming provides a platform for emotional expression and resilience. Players often face challenges, setbacks, and competition, which can evoke a range of emotions. Learning to cope with both success and failure in a virtual setting can contribute to emotional resilience, a skill that extends beyond the gaming world and into real-life situations.
- Dopamine Release and Motivation:
One of the key factors contributing to the popularity of online games is their ability to trigger the release of dopamine, a neurotransmitter associated with pleasure and reward. The anticipation of rewards, achievements, and in-game progression activates the brain’s reward system, creating a sense of satisfaction and motivation.
This dopamine-driven reward mechanism is a powerful tool in keeping players engaged and motivated. Game designers strategically incorporate challenges, achievements, and progression systems to maintain a steady flow of dopamine, ensuring players remain invested in the gaming experience.
The science of gaming is gradually dismantling preconceived notions about the impact of online games on the human brain. Far from being a mindless pastime, gaming has proven to be a complex and stimulating activity that engages various cognitive functions. As researchers continue to explore the intricacies of this interaction, the potential for harnessing gaming as a tool for education, skill development, and cognitive enhancement becomes increasingly apparent. So, the next time you dive into the virtual realm, remember that you might just be giving your brain a workout that extends far beyond the confines of the digital screen. |
Magnolia Warbler: Medium-sized warbler with dark back, yellow rump, and black-streaked yellow underparts. The head has a blue-gray crown, yellow throat. Wings are dark with two white bars. Tail is dark with white patches and undertail coverts. Bill, legs and feet are black.
Range and Habitat
Magnolia Warbler: Breeds from British Columbia across central Canada to the northeastern U.S. and Appalachian mountains south to Virginia. Rare visitor to the west coast; winters in the tropics. Breeds in open stands of young spruce and fir. During migration, it can be found almost any place with shrubbery or trees.
The Magnolia Warbler was named in 1810 by Alexander Wilson, who collected a specimen from a magnolia tree in Mississippi. He used the English name "Black-and-yellow Warbler" and used "magnolia" for the Latin species name, which became the common name over time.
Unbeknownst to Wilson, the warblers he encountered were spring migrants on their way toward Canada--far north of the range of the Southern Magnolia tree in which he first saw them.
Though it has very specific habitat preferences in the breeding season, it occupies a broad range of habitats in winter: from sea level to 1,500 meters elevation, and most landscape types, except cleared fields.
A group of magnolia warblers are collectively known as a "corsage" of warblers.
The Magnolia Warbler has a large range, estimated globally at 3,600,000 square kilometers. Native to the Americas and surrounding island nations, this bird prefers forest and shrubland ecosystems, though they can live on arable farm land. The global population of this bird is estimated at 32,000,000 individuals and does not show signs of decline that would necessitate inclusion on the IUCN Red List. For this reason, the current evaluation status of the Magnolia Warbler is Least Concern. |
As cities and towns continue to grow and expand, the infrastructure beneath our feet has become increasingly complex.
From gas lines to electric cables to water mains, underground utilities are an essential part of modern living. However, with so many different types of utilities crisscrossing beneath our streets and sidewalks, it can be difficult to keep track of where everything is located. This is where the universal colour code for underground utilities comes in. By using a standardized system of colours and markings, workers can quickly and easily identify what lies beneath the surface, helping to prevent accidents and ensure the safe and efficient operation of our essential services. In this blog post, we will explore the importance of using the universal colour code for underground utilities and how it benefits us all.
The history of the universal colour code for marking underground utilities dates back several decades. It began as a response to the increasing number of accidents caused by excavation and digging activities that damaged underground utility lines. These accidents resulted in loss of life, injuries, and property damage. In the 1960s and 1970s, the damage caused by these accidents became a significant concern for utility companies and the government, prompting them to develop a standardized system for identifying and marking underground utilities.
The first system to be developed was the American Public Works Association (APWA) Uniform Colour Code, which was introduced in the 1970s. This system uses different colours to represent different types of utilities, making it easier for excavators and construction workers to identify and avoid them.
The APWA Uniform Colour Code has been widely adopted throughout the United States and Canada and has become the standard for marking underground utilities. The system consists of six colours, each representing a specific type of utility.
These colours are:
- Red - electric power lines and cables
- Yellow - gas, oil, steam and other flammable materials
- Orange - communication and signal lines
- Blue - potable water
- Green - sewers and drain lines
- Purple - reclaimed water and irrigation lines
The importance of the universal colour code for marking underground utilities cannot be overstated. It provides a standardized system for marking and identifying different types of utilities, reducing the risk of accidents and damage caused by excavation and digging activities. This not only saves lives and reduces injuries but also helps to prevent property damage and costly repairs.
In addition to the APWA Uniform Colour Code, there are other colour codes used around the world. For example, Australia uses orange for electricity, white for communications, red for fire services, and brown for oils and flammables other than natural gas. France uses green for communications, purple for heating and cooling lines, and brown for sewer lines.
Below is an example of a site that has been marked incorrectly, leading to confusion. The client is installing a new water main; however, the surveyors have marked out the proposed new water line route in blue rather than pink for temporary survey lines. This will make it harder to differentiate between the existing water line markings and the proposed new route when it comes time to excavate. This is why it is important to always use the appropriate marking colour.
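As an illustration of how such a convention can be applied in practice, the sketch below encodes the commonly cited APWA colour assignments (including the pink and white markings used for temporary survey lines and proposed excavation, which are not utilities themselves) and flags the kind of mismatch described in the example above. The mapping is a simplified paraphrase for demonstration only and should always be checked against the current standard and local regulations.

```python
# Minimal sketch (not an official reference): encoding the commonly cited APWA
# colour assignments so that planned locate markings can be sanity-checked.
# Always confirm colours against the current APWA standard and local rules.

APWA_COLOURS = {
    "red": "electric power lines",
    "yellow": "gas, oil, steam and other flammables",
    "orange": "communication and signal lines",
    "blue": "potable water",
    "green": "sewers and drain lines",
    "purple": "reclaimed water and irrigation",
    "pink": "temporary survey markings",
    "white": "proposed excavation",
}

def check_marking(colour: str, intent: str) -> str:
    """Return a warning if a planned marking colour does not match its intent."""
    expected = APWA_COLOURS.get(colour.lower(), "unknown colour")
    if intent.lower() not in expected:
        return f"WARNING: {colour} denotes '{expected}', not '{intent}'"
    return f"OK: {colour} correctly denotes '{expected}'"

# The situation described above: a proposed survey route marked in blue.
print(check_marking("blue", "temporary survey markings"))
print(check_marking("pink", "temporary survey markings"))
```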
In conclusion, the universal colour code for marking underground utilities has a rich history and is an essential tool for ensuring safety in excavation and digging activities. Its adoption has helped to reduce accidents, injuries, and damage caused by excavating and digging around underground utilities. It is vital that all stakeholders involved in excavation and digging activities familiarize themselves with the colour code and adhere to it to ensure safety and prevent damage. |
Human color vision is trichromatic and requires the normal function of three classes of cones responding to wavelengths of approximately 420 nm (blue cones), 530 nm (green cones), and 560 nm (red cones). Dichromatic color vision discussed here is based on responses of red and green cones whose pigments are generated from contiguous gene regions on the X chromosome encoding OPN1MW (green pigment) and OPN1LW (red pigment).
The degree of color deficiency is variable and some males are so mildly affected that they are unaware of any defect until tested. The human eye is capable of seeing about a million colors which is made possible in part by the wide range of comparative signal outputs from the three classes of cones. In addition, the ratio of red and green cones varies among individuals and these factors collectively influence how each individual interprets the spectrum of wavelengths that enter the eye. The phenotype of red-green color blindness is highly variable.
Four subclasses of red-green color vision defects are recognized:
Protanopia - only blue and green cones are functional (1 percent of Caucasian males)
Deuteranopia - only blue and red cones are functional (1 percent of Caucasian males)
Protanomaly - blue and some green cones are normal plus some anomalous green-like cones (1 percent of Caucasian males)
Deuteranomaly - normal blue and some red cones are normal plus some anomalous red-like cones (5 percent of Caucasian males)
Blue color blindness (tritanopia; 190900) is the result of mutations in the OPN1SW gene on chromosome 7. ERG flicker responses can be used to define the type and nature of the cone defects. |
Hypertension (High Blood Pressure)
What Is High Blood Pressure?
High blood pressure, or hypertension, is when the force of the blood pushing on the blood vessel walls is too high. When someone has high blood pressure:
- The heart has to pump harder.
- The arteries (blood vessels that carry the blood away from the heart) are under greater strain as they carry blood.
How Does Blood Pressure Work?
Blood pressure is the force against blood vessel walls as the heart pumps blood. When the heart squeezes and pushes blood into the vessels, blood pressure goes up. It comes down when the heart relaxes.
Blood pressure changes from minute to minute. It's affected by activity and rest, body temperature, diet, emotions, posture, and medicines.
What Causes High Blood Pressure?
The most common type of high blood pressure is called primary hypertension. This means that no other medical problem is found that is causing the high blood pressure. Primary hypertension is more common in people who are overweight or obese, and those who have high blood pressure in their family.
When a medical problem is found that is causing high blood pressure, it is called secondary hypertension.
Secondary hypertension often is due to:
- kidney disease
- hormone problems
- blood vessel problems
- lung problems
- heart problems
- some medicines
What Are the Signs & Symptoms of High Blood Pressure?
Most of the time high blood pressure doesn't cause symptoms. In rare cases, severe high blood pressure can cause headaches, blurry vision, dizziness, nosebleeds, a fluttering or racing heartbeat, and nausea.
If you have high blood pressure and any of these symptoms, get medical care right away.
How Is Blood Pressure Measured?
Health care providers measure blood pressure with a cuff that wraps around the upper arm. When the cuff inflates, it squeezes a large artery, stopping the blood flow for a moment. Blood pressure is measured as air is slowly let out of the cuff, which lets blood flow through the artery again.
Blood pressure is measured in two numbers:
- The pressure when the heart pumps.
- The pressure when the heart rests between beats.
You hear blood pressure reported as the first number "over" the second number, like 120 over 80 or 120/80.
How Is High Blood Pressure Diagnosed?
A single reading showing high blood pressure doesn't mean that you have hypertension. Sometimes, blood pressure needs to be checked several times over a period of days or weeks to know if someone has hypertension. Your doctor will probably weigh and measure you. He or she might do urine tests or blood tests to check for other conditions that can cause hypertension.
Some people have what's called "white coat hypertension." This means that their blood pressure goes up when they're at a doctor's office because they're nervous. When they feel more relaxed, their blood pressure usually goes down. To make sure high blood pressure readings aren't caused by anxiety, doctors will sometimes track a person's blood pressure over a whole day. This is called ambulatory blood pressure monitoring.
How Is High Blood Pressure Treated?
If high blood pressure is due to a condition like kidney disease or a hormone problem, treating the condition might be enough to get the blood pressure back to normal.
Doctors often recommend lifestyle changes. If you have hypertension, your doctor might want you to:
Eat a healthy diet:
- Eat more fruits, vegetables, and low-fat dairy.
- Limit salt.
- Avoid caffeine (found in sodas, tea, coffee, and energy drinks).
- Avoid alcohol.
Get regular exercise:
- Try to exercise for 30–60 minutes at least 3–5 times a week. Teens with severe hypertension should check with the doctor to see which sports and activities are safe. Some — like weightlifting or power-lifting, bodybuilding, or strength training — might not be allowed until their blood pressure is better controlled.
- People with high blood pressure should not smoke, and their home and car should be smoke-free.
If diet and exercise changes do not improve the blood pressure, doctors may prescribe medicine.
What Else Should I Know?
It's important to follow the advice of your care team. A healthy diet and exercise, taking medicine if needed, and getting regular blood pressure checks can help you stay healthy. |
You visit a young volcanic area and find volcanoes erupting magmas that range in composition from basalt to dacite. There are two hypotheses for how this range of magmas arose: 1. Crystal fractionation from a parent basaltic magma. 2. Mixing of basaltic magma with rhyolite magma derived by partial melting of Precambrian granite. Devise a geochemical test, using isotopic data, for evaluating these hypotheses. What does each predict about isotopic variability in the suite? |
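One way to set up the isotopic test asked for in the question above is sketched below. Crystal fractionation does not change radiogenic isotope ratios such as 87Sr/86Sr, so that hypothesis predicts an essentially uniform ratio across the suite; mixing with a melt of old Precambrian granite (which has evolved to a high 87Sr/86Sr) predicts ratios that rise with the proportion of that melt and therefore correlate with SiO2. The Sr concentrations and isotope ratios in the code are illustrative assumptions, not measured values.

```python
# Hypothetical sketch of the test: under crystal fractionation, every lava
# inherits the parent basalt's radiogenic isotope ratio, so 87Sr/86Sr stays
# flat from basalt to dacite. Under mixing with a melt of old Precambrian
# granite, 87Sr/86Sr rises with the crustal-melt fraction (and with SiO2).
# The concentrations and ratios below are illustrative assumptions only.

basalt = {"Sr_ppm": 400.0, "Sr87_86": 0.7035}         # mantle-derived basalt
crustal_melt = {"Sr_ppm": 100.0, "Sr87_86": 0.7250}   # melt of ancient granite

def mixed_ratio(f_crust: float) -> float:
    """87Sr/86Sr of a mix containing mass fraction f_crust of crustal melt."""
    sr_total = f_crust * crustal_melt["Sr_ppm"] + (1 - f_crust) * basalt["Sr_ppm"]
    weighted = (f_crust * crustal_melt["Sr_ppm"] * crustal_melt["Sr87_86"]
                + (1 - f_crust) * basalt["Sr_ppm"] * basalt["Sr87_86"])
    return weighted / sr_total

for f in (0.0, 0.2, 0.4, 0.6):
    print(f"crustal fraction {f:.1f}: 87Sr/86Sr = {mixed_ratio(f):.4f}")
# A spread in 87Sr/86Sr that correlates with SiO2 supports the mixing hypothesis;
# a uniform ratio across the whole suite supports crystal fractionation.
```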
It’s just a few minutes, geologically speaking. But it was forty centuries ago in our human past that people in some rocky canyons in Texas made some of the most remarkable rock art in North America. At twenty years a reproductive cycle, that’s only two hundred generations, which isn’t a lot. Two hundred fruitfly generations, by contrast, is a little less than eight years.
The Lower Pecos art features countless depictions of humans and animals, along with religious symbols and designs — done in varied styles with distinctive materials. By using portable X-ray fluorescence (pXRF), archeologists working at the site have discovered that different styles of art use pigments made with different ingredients — yielding new insights about the artists who created these still-evocative images:
Using a handheld pXRF device, which looks something like a state trooper’s radar gun, the archaeologists were able to scan rock paintings on site and get immediate readings on the chemical makeup of the pigments used to make them.
In Seminole Canyon’s Black Cave, for instance, the scientists analyzed giant tableaux painted in what’s thought to be the region’s earliest style, known simply as the Pecos River Style — featuring colorful, towering human-like figures sporting headdresses, holding staffs, and flanked by animals or shamanic symbols.
But the same cave also bears pictures made in a simpler, smaller-scale style known as Red Linear — portraying stick-like figures of people and animals in more quotidian scenes, like hunting parties or fertility rites.
These people sang and danced, probably in ways that still echo in the music of present-day Native Americans. We’ll never know for sure; there is nothing like a pXRF scanner for long-vanished sounds. Human history is full of musics that have vanished without a trace, leaving only faint stylistic echoes on the songs and singing that succeed them.
Some researchers have attempted a musicological version of comparative linguistics in an effort to demarcate the contours of our species’ earliest music.
In linguistics, the comparative method is a technique for studying the development of languages by performing a feature-by-feature comparison of two or more languages with common descent from a shared ancestor, as opposed to the method of internal reconstruction, which analyses the internal development of a single language over time. Ordinarily both methods are used together to reconstruct prehistoric phases of languages, to fill in gaps in the historical record of a language, to discover the development of phonological, morphological, and other linguistic systems, and to confirm or refute hypothesized relationships between languages.
So, what WAS music like in the year 000001, you ask? Astonishingly, there is more than enough evidence for us to speculate meaningfully on the nature of humankind’s first musical style. Linguists would love to reconstruct the very first — or “Ur” — language, based on what is known about the nature of all the various language families in existence today. There’s not much hope that any such effort could be successful, as there are very few (or perhaps too many) clues to work with and the whole process of reconstruction would have to be based on a long series of untestable assumptions and speculations. However, music would seem to operate in a very different manner than verbal language and as a result there may be no need to reconstruct humankind’s earliest music, because it is still being performed today — we can simply listen to it. But how can that be? If languages have changed so much over time, wouldn’t musical styles also have changed? One would think so, certainly. But the evidence would seem to tell a different story.
The work of learning about our species’ pre-history culture has the side-effect of locating us (industrial modern humanity) smack-dab in the middle, between an increasingly defined past and an increasingly uncertain future. When the remotest past was only a thousand years away, it was impossible to imagine any future more distant. Now we can see the artwork of people many thousands of years in the past, we can analyze our DNA to reveal clues about our ancestors millions of years ago — and our future stretches out beyond the scope of our imagination.
People four thousand years from now will be able to hear our music, unless we lose on climate.
Will we live to fill that future with song?
Don Cherry and Ed Blackwell perform “Terrestrial Beings.” |
By: Nick Rampe
Artificial intelligence is on the verge of turning the world on its head. Whether it’s cheating on a history paper or copying a famous musician’s voice to create a brand-new song, there are serious ethical implications to consider as this technology becomes more advanced and more accessible. While this rapidly advancing, seemingly world-changing technology can feel overwhelming, it does not have to be entirely negative.
Researchers have found an interesting use for artificial intelligence chatbots: self-disclosure. In 2018, Annabell Ho, Jeff Hancock, and Adam S. Miner, researchers at Stanford University, conducted a study to understand how people would react when disclosing personal information to what they believed was a chatbot. They had three different hypotheses, based on previous research on other topics, specifically studies about the benefits of self-disclosure.
First, the Perceived Understanding Hypothesis was based on the idea that people feel more satisfied with a self-disclosure conversation when they feel they have been understood. Applied to this study, it predicted that people would feel less emotionally satisfied if they believed they were talking to a chatbot, due to a computer’s inability to truly understand human emotion.
Second, the Disclosure Processing Hypothesis suggests that people would be more willing to disclose more information to a chatbot, because they would not be judged by a computer the same way they would be by another person. This, in turn, would make talking to a chatbot more beneficial than talking to another person, if the participant was willing to disclose more personal information.
Finally, the Computers as Social Actors (CASA) framework predicts that the act of disclosing personal information is beneficial regardless of whether the discloser believes they are talking to a person or a computer. It argues that since computers and technology are so intertwined with our daily lives, they are ‘social actors’ in the same way that people are. This was the foundation for the Equivalence Hypothesis, which predicted that the disclosure would have an equal effect on participants whether they believed they were talking to a person or a chatbot.
The study was conducted using what the researchers call the “Wizard of Oz method”: the participant’s conversation partner was always another person, but some participants were told it was a chatbot. Chatbots, even just six years ago when this study was conducted, were significantly more limited in what they could do, so using a real chatbot was not feasible. The participants were recruited from university research participation websites, and the original 128 participants were whittled down to 98 after various disqualifications. On the other end of the conversation were three undergraduate research assistants trained to validate the disclosers’ feelings without offering advice, while also encouraging the participants to go into further detail.
The study walked away with a variety of interesting findings. It found that there was a significant difference in emotional benefits, but it had more to do with the subject matter of the conversation, rather than who the participants thought they were disclosing to. When they were stating objective facts, there was less emotional satisfaction than when they were discussing their personal feelings. Similarly, the participants reported feeling emotionally closer to their conversation partner after the conversation, regardless of what they thought their partner was. The findings were mostly in line with the equivalence hypothesis. Self-disclosure is known to have significant emotional benefits, and the participants both felt more understood and disclosed more information as the conversation went on, but it was not because of their beliefs about their partner.
The implications of this study could be even greater now than when it was published. Artificial intelligence chatbots are significantly more advanced than they were six years ago, and far more accessible, most notably ChatGPT. It is even possible that if this study were conducted today, researchers could have a group of people talk to an actual chatbot. We can take this a few steps further as well. Maybe someday the average person will talk to an AI therapist or have AI friends. While those are obviously extreme examples, it is not only possible but more than likely that AI will have a significant impact on our daily lives in the near future. While many impacts of AI seem to be leading us toward a scary, dystopian future, there are certainly things to be excited about as well. |
The 5 Layers of the Epidermis: Understanding Their Functions for Healthy Skin
The skin is an incredible organ that serves as a barrier between our bodies and the outside world. It shields us from harmful UV rays, bacteria, and pollutants, while also regulating our body temperature and preventing water loss. At the heart of this amazing organ is the epidermis, the outermost layer of the skin. This layer is made up of five distinct layers, each with its own unique functions and characteristics.
In this article, we'll explore the five layers of the epidermis in detail, discussing their functions and how they work together to keep our skin healthy and vibrant.
Layer 1: Stratum Corneum
The Stratum Corneum is the outermost layer of the epidermis and is composed of dead skin cells, or corneocytes. These cells are flattened and tightly packed, forming a protective barrier that prevents water loss and shields the underlying layers from damage. The Stratum Corneum also plays a role in regulating the skin's pH balance and protecting against pathogens.
Layer 2: Stratum Lucidum
The Stratum Lucidum is a thin, translucent layer that lies beneath the Stratum Corneum. It is composed of flattened, dead skin cells that are rich in a protein called keratin. This layer is only present in certain areas of the body, such as the palms of the hands and soles of the feet. Its primary function is to provide additional protection and support to the skin in areas where there is a lot of friction and pressure.
Layer 3: Stratum Granulosum
The Stratum Granulosum is the layer of the epidermis where the skin cells begin to die off and become less active. This layer is composed of several layers of flattened cells that are filled with granules of keratin and other proteins. These granules help to reinforce the skin's barrier function and provide additional protection against environmental stressors.
Layer 4: Stratum Spinosum
The Stratum Spinosum is a thick layer of skin cells that are still alive and active. These cells are connected by protein structures called desmosomes, which provide strength and stability to the skin. The keratinocytes in the Stratum Spinosum begin producing large amounts of keratin, the protein that reinforces them as they move upward to form the outermost layers of the skin.
Layer 5: Stratum Basale
The Stratum Basale is the deepest layer of the epidermis and is composed of active, dividing cells. These cells are responsible for producing new skin cells that will eventually migrate up to the surface of the skin. The Stratum Basale also contains melanocytes, the cells that produce melanin, the pigment that gives our skin its color.
Q: How does the epidermis protect us from UV radiation?
A: The Stratum Corneum and other layers of the epidermis contain melanin, a pigment that absorbs UV radiation and protects the skin from damage.
Q: What can I do to maintain healthy skin?
A: Maintaining a healthy diet, staying hydrated, and avoiding excessive sun exposure can all help to keep your skin healthy. Using gentle cleansers and moisturizers, and avoiding harsh chemicals and abrasive exfoliants can also help to maintain the skin's natural barrier function.
Q: Why is the skin's pH balance important?
A: The skin's natural pH balance helps to protect against harmful bacteria and fungi, and also plays a role in maintaining healthy skin hydration levels.
In conclusion, the five layers of the epidermis work together to protect and nourish our skin. Understanding the functions of each layer can help us to maintain healthy, vibrant skin throughout our lives. By following a few simple skincare routines and avoiding harsh chemicals and excessive sun exposure, we can keep our skin looking and feeling its best.
Content Area Standards
L.4.3.a: Choose words and phrases to convey ideas precisely
4.L.1.3: Students know that humans can adapt their behavior in order to conserve the materials and preserve the ecological systems that they depend on for survival.
4.G.1.2: Explain the impact that human activity has on the availability of natural resources in North Carolina.
Students will be able to:
Analyze the types of non-consumable waste and develop a deep understanding of its cost in relation to natural resources, the environment (human and animal health), and finances, resulting in an educated assessment of the true cost of packaging.
Manufacturing, Natural Resources, Recycling, Waste Management, Environmental impact, cause and effect
Powerpoint presentation, Rehearsals
Earth Cycles (seasons, precipitation), Development of factual ideas based on research
Utilize formative assessment by asking guiding questions and moving around the classroom observing students as they conduct group or individual tasks to determine students' understanding of the lesson objective.
Students who are unfamiliar with recycling and environmental awareness will be grouped with students who recycle their waste and come from homes that foster environmental stewardship.
History of postcards
Tracing back the origins of the picture postcard is difficult because postcards were not simply invented — instead, they evolved. Their history is inevitably linked with the development of the postal service, but also features innovations in printing and photography, daring proposals... and even a 300-meter tower!
We try to chronicle the history of postcards through a timeline of relevant events, going back a few centuries to provide the context that culminated in postcards being officially issued and recognized by a postal operator, on October 1st 1869.
Following the popularization of printing presses, visiting cards, bill heads, writing paper and other types of paper ephemera started to have illustrations on them, often with delicate engravings and tasteful designs.
Already in 1777, French engraver Demaison published in Paris a sheet of cards with greetings on them, meant to be cut and sent through the local post, but people were wary of servants reading their messages... so the idea was not very well received.
A postal reform in the UK unified the cost of domestic mail delivery to 1 penny per envelope, to be prepaid by the sender. The proposals of Sir Rowland Hill also included that the pre-payment was to be made by issuing printed sheets of adhesive stamps. The Penny Black, the world's first adhesive postage stamp, made its debut in May 1840.
Simultaneously, decorated prepaid letter sheets (similar to today's aerograms) were also put on sale by the post office. These were designed by William Mulready and showed Britannia with a lion at her feet, sending mail messengers to all parts of the world. Though this particular design turned out to be unpopular and often ridiculed, this was the first postal stationery item issued by the post office that had decorations on the outside. They were replaced the following year by plain pink envelopes, with a printed 1 penny stamp on the corner.
Already that year, Theodore Hook Esq, a British writer, mailed himself a caricature of post office workers, shown to be writing mail in order to sell more stamps. Most likely mailed as a joke (and delivered against the post office regulations of the time), this could probably be the earliest record of a postcard being sent through the mail.
In late February, the US Congress passed an act that allowed privately printed cards, weighing one ounce or less, to be sent in the mail.
Later that year, John P. Charlton from Philadelphia patented a postal card and sold the rights to Hymen Lipman (founder of the first envelope company in the US and inventor of the lead-pencil and eraser). However, with the start of the Civil War a month later, these Lipman Cards as they became known were forgotten and not used until almost a decade later.
The earliest record of a Lipman card being used is from October 25, 1870, sent from Richmond, Indiana. It featured an illustrated advertisement for Esterbrook Steel pens, and was the first pictorial postcard to be mailed in the USA.
At the Karlsruhe postal conference, Heinrich von Stephan proposed the creation of offenes Postblatt (or, open post-sheets). The goal was to simplify the etiquette of the letter format, but also to reduce the work, paper and costs involved in the sending of a short message.
He suggested the introduction of a rigid card, roughly the size of an envelope, which could be written on and mailed without the need for an envelope, having the postage pre-printed.
The idea was not so well received in Germany: the post office feared the complexity and cost of implementing the scheme in all the different states, each emitting their own stamps.
Despite this setback, Von Stephan was a prominent figure in the history of postal services in Germany. Beginning his work as a postal clerk in 1849, he was successively promoted until he reached the post of Minister of Postal Services in 1895. He focused on the standardization and internationalization of postal services, and later helped establish the Universal Postal Union.
1st October 1869
In Austria-Hungary, Dr. Emanuel Herrmann (a professor of Economics from Vienna) wrote an article in the Neue Freie Presse pointing out that the time and effort involved in writing a letter was out of proportion to the size of the message sent. He suggested that a more practical and cheaper method should be implemented for shorter, more efficient communications.
His recommendations impressed the Austrian Post, who put them to practice on October 1st 1869, resulting in the Correspondenz-Karte, a light-brown 8.5x12cm rectangle with space for the address on the front, and room for a short message on the back. The postcard featured an imprinted 2 Kreuzer stamp on top right corner, costing half the price of a normal letter.
The postcard was born!
It is not known whether Dr. Herrmann had any knowledge of Von Stephan's earlier proposal for a very similar card.
Seeing the immense popularity of this new means of communication, Switzerland, Luxembourg, the United Kingdom and some states of Germany quickly followed suit, issuing postcards less than a year after the initial launch.
Belgium, Holland, Denmark, Finland, Sweden, Norway, and Canada issued cards in 1871, and the following year Russia, Chile, France, and Algeria added postcards to their offerings. In 1873, Serbia, Romania, Spain, Japan and the United States issued their own, and by 1874 Italy had also begun to issue theirs.
The General Postal Union (later renamed Universal Postal Union) was created in Bern, Switzerland. One of its first postal treaties fixed a standard postage for letter mail sent to the members of the Union, and determined that half that rate should be applied to postcards.
This made sending postcards abroad much cheaper, and less complicated.
Today, the UPU is a specialized United Nations agency that coordinates postal policies among its 192 members, standardizing procedures and making international mail delivery much simpler. Prior to its establishment, each country had to organize separate treaties with every single country to engage in international mail delivery with them.
In 1894, twenty years after its inception, an estimated 1.7 billion postcards were exchanged between UPU member countries.
In the 1880s, many postcards were printed with small sketches or designs (called vignettes) on the message side, initially just in black, but increasingly also in color. Slowly, Germany came to dominate the industry of chromolithography, with many postcards being printed there. A large number of these featured illustrated views of a town and the expression Gruss Aus (or, Greetings from), leaving enough space for a message.
At the end of the decade, the Eiffel Tower made its debut on the Exposition Universelle of 1889 that took place in Paris. French engraver Charles Libonis designed postcards for the occasion featuring the monument, which was the tallest tower in the world at the time. The novelty postcards, which could be mailed from the Eiffel Tower itself, were much beloved by the visitors and became known as Libonis.
The 1890s saw photography starting to be used on postcards, gradually increasing in popularity over the next few decades. All manner of subjects were photographed, with topography (urban street scenes and general views) being a recurrent topic.
At the turn of the century, Kodak launched the No. 3A Folding Pocket camera with negatives that were the same size as postcards, and could thus be printed directly onto postcard card stock without cropping, keeping it simple.
Already in 1854, French photographer Andre Disdéri had patented a version of the photographic carte de visite, which proved to be incredibly popular as visiting cards. They could be reproduced inexpensively and in large quantities, and had space on the back to write a note. Visiting or calling cards could be given out in person or when making social calls, and were incredibly popular in Europe and the United States.
The World's Columbian Exposition opened in Chicago, a world fair in which 46 nations participated with exhibitions and attractions. Over 26 million people visited the fair, and for many of them, this was a once-in-a-lifetime chance to discover what lay beyond their own country's borders.
Publisher Charles W Goldsmith seized the opportunity to produce a novelty set of official postcards, showing the pavilions and other interesting sections of the exhibition in color. These were the first commercially produced pictorial postcards to be printed as a souvenir in the United States, and they proved to be a sensational hit.
A year later, prominent London journalist James Douglas wrote:
"Like all great inventions, the Picture Postcard has wrought a silent revolution in our habits. It has secretly delivered us from the toil of letter-writing. There are men still living who can recall the days when it was considered necessary and even delightful to write letters to one's friends. Those were times of leisure. (...) Happily, the Picture Postcard has relieved the modern author from this slavery. He can now use all his ink in the sacred task of adding volumes to the noble collection in the British Museum. Formerly, when a man went abroad he was forced to tear himself from the scenery in order to write laborious descriptions of it to his friends at home. Now he merely buys a picture postcard at each station, scribbles on it a few words in pencil, and posts it. This enhances the pleasures of travel.
Many a man in the epistolary age could not face the terrors of the Grand Tour, for he knew that he would be obliged to spend most of his time in describing what he saw or ought to have seen. The Picture Postcard enables the most indolent man to explore the wilds of Switzerland or Margate without perturbation. "
In June of 1897, the World Association Kosmopolit was founded in Nuremberg, a postcard collecting club with thousands of members. They would send postcards to each other with the greeting Gutferngruß, requesting a return card to be mailed back, thus collecting postcards from all over the world.
The association was active until the First World War, and at its peak counted more than 15,000 members in Germany alone.
The turn of the century saw the golden era of postcards.
With multiple daily pickups and deliveries (up to 12 times per day in large cities!), postcards were effectively the text messages of their time. They were cheap and convenient to send, and postcard obsession reached its peak in the Edwardian era with billions of them being sent every year.
Scenic landscapes, portraits, exhibitions, royal visits, humorous scenes and even current events were printed on postcards shortly after taking place. The many surviving examples of such postcards paint a vivid picture of the time.
On August 21, 1899, an article on the British newspaper Standard read:
"The illustrated postcard craze, like the influenza, has spread to these islands from the Continent, where it has been raging with considerable severity. Sporadic cases have occurred in Britain. Young ladies who have escaped the philatelic infection or wearied of collecting Christmas cards, have been known to fill albums with missives of this kind received from friends abroad; but now the cards are being sold in this country, and it will be like the letting out of waters.(...)"
"Germany is a special sufferer from the circulation of these missives. The travelling Teuton seems to regard it as a solemn duty to distribute them from each stage of his journey, as if he were a runner in a paper chase. His first care on reaching some place of note is to lay in a stock, and alternate the sipping of beer with the addressing of postcards. Sometimes he may be seen conscientiously devoting to this task the hours of a railway journey. Would-be vendors beset the traveller on the tops of hills, and among the ruins of the lowlands, in the hotel, the café and even the railway train. They are all over the country, from one end of the Fatherland to the other, — from the beech woods of Rügen on the North, to the southernmost summit in the Saxon Switzerland. Some of these cards, by the way, are of enormous size; and anyone who is favoured with them by foreign correspondents is subjected to a heavy fine by the inland postal authorities, who are not content with delivering them in a torn and crumpled state."
In 1902, the British Post Office allowed messages to be written on one half of the side normally reserved for the address, paving the way for the divided back era of postcards. This left the reverse side of the card free to be completely filled with an image.
However, these postcards could not be sent abroad until other Universal Postal Union members agreed to do the same. An agreement on the matter was reached at the Sixth Postal Union Congress in Rome, in 1906.
An American of German descent, Curt Teich started a publishing company in Chicago in 1898 focused on newspaper and magazine printing. A few years later, in 1908, Curt Teich Co. introduced postcards to their portfolio, and over the next few decades became the world's largest printer of view and advertising postcards.
Curt Teich was an early pioneer of the offset printing process, and the first to understand the advantages of using lightly embossed paper to speed up the drying of ink, allowing the finished product to retain brighter colors. Because of their texture resembling linen, these embossed postcards became known as linen cards.
He is best known for the Greetings From postcards with large letters, having successfully adapted the idea of the earlier Gruss Aus cards to the US audience.
1908 was also the year in which E. I. Dail, a salesman from Michigan, invented the revolving postcard rack. The metal contraption could be placed on a counter and allowed customers to view and select postcards for themselves.
Starting in 1913 and well into the 1930s, postcards featuring a white border became commonplace in the US.
Typically, multiple postcards were printed in rows on a large sheet of paper, which had to be trimmed around the edges of each postcard — a job that required a great deal of precision. The white borders were introduced to give some margin of error to the process, thus making them less expensive to produce.
The expression carte-maximum (maximum card or maxicard) was first used in 1932, when a collector named Lecestre published an article in Le Libre Échange detailing the design of this philatelic item. A maxicard consists of a picture postcard with a postage stamp and a cancellation mark affixed on the picture side of the card. The themes of these three elements should match in terms of motif, time and location, so that they are in "maximum concordance".
The study, creation and collection of maximum cards is called maximaphily.
On July 14, 2005 Postcrossing was launched!
The website platform was built by Paulo Magalhães, a Portuguese software engineer who loved receiving postcards but did not know many people he could exchange them with. So he coded a website in his free time with the goal of connecting him with other people who also enjoyed sending and receiving postcards. What started as a small side project quickly became a worldwide hobby, shared by many postcard enthusiasts. To date, over 57 million postcards have been exchanged through the platform, with thousands more on the way.
On the 150th anniversary of the postcard, Postcrossing organized a worldwide campaign to celebrate the special occasion.
A postcard contest received thousands of submissions from all over the world sharing their enthusiasm for postcards, filled with kind and thoughtful messages.
A selection of some of the best postcards was showcased during October in an exhibition at the Universal Postal Union headquarters in Bern, Switzerland. More details of the exhibition can be found on Postcrossing's blog.
Many postal operators, museums, libraries and even schools joined the celebrations with postcard related events and initiatives.
Some of the events were:
- 58 meetups
- 11 postcard exhibitions
- 8 special cancellation marks
- 8 workshops
- 6 seminars
- 4 commemorative postcards issued by post offices
- 3 guided tours
- 2 postage stamps
After a successful celebration in 2019 of the 150th anniversary of the postcard, Postcrossing, with the help of Finepaper, decided to launch the World Postcard Day on every October 1st — a day to celebrate the postcard and the connections it brings.
A postcard design contest was organized among design and art students to create an official postcard for the event, which was made available for everyone to use on this date.
In the midst of a very unusual year, the special day was nonetheless commemorated all over the world, with the issue of commemorative postcards, dedicated cancellation marks, events in schools, philately fairs, libraries, museums, discounts at post offices and, above all, many many postcards.
- Willoughby, Martin, A History of Postcards (1992), Bracken Books, ISBN 1858911621
- Staff, Frank, The Picture Postcard & Its Origins (1979), Lutterworth Press, ISBN 0718806336
- Hill, C. W., Picture Postcards (1991), Shire Publications Ltd, ISBN 0747803986
- Atkins, Guy, Come Home at Once (2014), Bantam Press, ISBN 9780593074145
- Gruß aus Berlin (1987), Kohler & Amelang, ISBN 3733800087
- Daltozo, José Carlos, Cartão-Postal, Arte e Magia (2006)
- MetroPostcard History of Postcards
- Kosmopolit - Gut Fern Gruss
Conservation efforts have successfully boosted the populations of rare bird species, which are threatened by habitat loss, hunting, pollution and climate change. Organizations have restored habitats, enforced hunting and trapping restrictions, and started captive breeding programs. Climate change mitigation and pollution reduction have also helped protect bird populations. Examples of successful conservation include the Mauritius kestrel, California condors, and black stilt in New Zealand, whose populations have steadily risen due to conservation efforts. Rare and endangered bird species can thrive with the right conservation initiatives.
Conservation Efforts Boost Populations of Rare Bird Species
Birds are an essential part of our ecosystem, and their conservation is necessary for maintaining the balance. Some bird species are rare and endangered due to habitat loss, climate change, hunting, and other threats. However, conservation efforts have been successful in boosting the populations of rare bird species in recent years.
Why are some bird species rare and endangered?
Bird populations have been affected by various environmental factors over the years. Some of the reasons why bird species are rare and endangered include:
– Habitat loss: deforestation, urbanization, and agriculture have led to the loss of natural habitats, which affects the survival of some bird species.
– Hunting and trapping: some bird species are hunted and trapped for food or cultural reasons.
– Climate change: global warming affects bird populations and their habitats, leading to changes in the breeding and migration patterns of some bird species.
– Pollution: air, water, and land pollution degrade bird habitats, threatening the survival of some species.
Conservation efforts for rare bird species
Conservation efforts for rare bird species have been successful, with some species seeing significant population growth. Here are some conservation efforts for rare bird species:
– Habitat restoration: reforestation and preservation of natural habitats help restore the habitats of some bird species.
– Hunting and trapping restrictions: hunting and trapping restrictions help reduce the hunting and trapping of rare bird species, especially those hunted for food or cultural reasons.
– Captive breeding programs: capturing and breeding rare bird species in captivity, followed by reintroduction into the wild, helps boost their populations.
– Climate change mitigation: reducing carbon emissions and protecting habitats from the effects of climate change can help birds adapt and survive climate change.
– Pollution reduction: reducing pollution and waste disposal, especially in bird habitats, helps protect bird populations.
Examples of successful conservation efforts for rare bird species
– The Mauritius kestrel, a rare and endangered bird species, was brought back from the brink of extinction through captive breeding programs, habitat restoration, and hunting restrictions. The population of the Mauritius kestrel has grown from only a few birds in the 1970s to over 800 today.
– California condors, an endangered bird species, were almost extinct in the 1980s, with only 22 left in the wild. Captive breeding programs, hunting restrictions, and habitat restoration have boosted their populations, with over 400 California condors in the wild today.
– The black stilt, a rare bird species in New Zealand, is now recovering thanks to conservation efforts like habitat restoration, predator control, and captive breeding.
Conservation efforts have been successful in boosting the populations of rare bird species. Through habitat restoration, hunting restrictions, captive breeding programs, climate change mitigation, and pollution reduction, rare bird species have a chance to thrive.
For centuries, the world has been captivated by the groundbreaking art of Michelangelo. Working in multiple mediums, the Italian artist was a true Renaissance man, culminating in an impressive collection of world-famous works that includes the Sistine Chapel ceiling, an iconic interpretation of David, and the Pietà, a monumental marble sculpture of the Madonna cradling Christ.
Crafted in the late 15th century, the Pietà remains one of the most beloved sculptures in the world. Here, we take a look at this piece in order to understand how its iconography, history, and artistic characteristics have shaped such an important legacy.
What is a “Pietà”?
In Christian art, a Pietà is any portrayal (particularly a sculptural depiction) of the Virgin Mary holding the body of her son, Jesus. According to the Bible, Jesus was crucified for claiming to be the son of God. Though Mary embracing her dead son is not explicitly mentioned in the holy book, the scene has proven a popular subject among artists for centuries, ever since German sculptors introduced wooden Vesperbild (a term that translates to “image of the vespers”) figurines to Northern Europe during the Middle Ages.
By 1400, the tradition had reached Italy, where Renaissance artists adapted it as marble sculpture—and Michelangelo made his mark with his unprecedented rendition.
Toward the end of the 15th century, young Florentine artist Michelangelo di Lodovico Buonarroti Simoni was already an esteemed artist. He was particularly renowned for his ability to paint and sculpt biblical figures with realistic anatomical features, culminating in commissions from Rome's religious elite.
In late 1497, Cardinal Jean de Bilhères-Lagraulas, the French ambassador to the Holy See, asked Michelangelo to preemptively craft a large-scale Pietà for his tomb. The following year, Michelangelo began working on the sculpture, which he carved from a single block of Carrara marble, a material quarried in Tuscany. Historically used by ancient Roman builders, this medium was prized for its quality and popular among Renaissance artists.
When the piece was completed in 1499, it was overwhelmingly met with praise, with contemporary painter, architect, writer, historian, and Michelangelo biographer Giorgio Vasari among its most faithful fans. “It is certainly a miracle that a formless block of stone could ever have been reduced to a perfection that nature is scarcely able to create in the flesh,” he chronicled in The Lives of the Artists.
In fact, the piece was so celebrated that, fearing he wouldn't be given credit, Michelangelo—who is known for never signing his work—famously inscribed it with his name. According to Vasari, the artist overheard onlookers erroneously attribute the piece to Il Gobbo, a Milanese artist. In response, Michelangelo “stood silent, but thought it something strange that his labors should be attributed to another; and one night he shut himself in there, and, having brought a little light and his chisels, carved his name upon it.”
A Renaissance Masterpiece
What makes Michelangelo's Pietà so special? Like other works by the artist, the piece illustrates Renaissance ideals; in particular, it showcases an interest in naturalism.
During the High Renaissance (1490-1527), artists in Italy began to reject the unrealistic forms found in figurative Medieval art in favor of a more naturalistic approach. At the forefront of this trend, Michelangelo crafted sculptures that focused on balance, detail, and a lifelike yet idealized approach to the human form.
The Pietà perfectly reflects these Renaissance ideals. In order to suggest balance, he rendered the sculpture as a pyramid. Popular in Renaissance painting and sculpture alike, the use of pyramidal composition—an artistic technique of placing a scene or subject within an imaginary triangle—aids the viewer as they observe a work of art by leading their eye around the composition. Such a silhouette also suggests stability, which Michelangelo further implied through the use of heavy drapery covering Mary's monumental form.
While, in this sense, the Virgin's large size lends itself to the sculpture's naturalism, it paradoxically also appears unrealistic, as she appears much larger than her adult son. Why did Michelangelo opt for these proportions? While most art historians believe it was a matter of perspective (a massive figure sprawled across a smaller figure's lap would look unbalanced), there exists another, more poignant theory that can be traced back to the Vesperbild tradition.
While discussing a late 14th-century figurine, the Metropolitan Museum of Art explains that Jesus' “small scale may reflect the writings of German mystics, who believed that the Virgin, in the agony of her grief, imagined she was holding Christ as a baby once again in her arms.”
Since its 15th-century unveiling, the Pietà has had an eventful life. While, for centuries, it was housed in the cardinal's Vatican City-based funerary chapel, it eventually found a permanent and prominent place in St. Peter's Basilica, where it remains today.
Though the piece boasts a 520-year history, many highlights of its legacy have emerged only recently. In the middle of the 20th century, for example, it saw much fanfare when it was displayed at the 1964 New York World's Fair. Less than a decade later, it attracted attention when a man brandishing a hammer vandalized it. And, as recently as early 2019, the piece yet again made headlines when historians concluded that a small terra cotta statue discovered in Paris likely served as its study.
Even without these recent developments, however, the Pietà has undoubtedly solidified its role as one of the world's most significant sculptures.
Overview of Ginkgoaceae Family
The Ginkgoaceae family consists of only one extant species, Ginkgo biloba, commonly known as the maidenhair tree. The family belongs to the division Ginkgophyta and the order Ginkgoales, and Ginkgo biloba is one of the oldest living tree species in the world. The family also contains extinct species, known mainly from Mesozoic fossils, with a lineage reaching back some 270 million years.
Classification and Taxonomic Details
The Maidenhair tree, Ginkgo biloba, is classified and named as follows:
- Kingdom: Plantae
- Division: Ginkgophyta
- Class: Ginkgoopsida
- Order: Ginkgoales
- Family: Ginkgoaceae
- Genus: Ginkgo
- Species: biloba
The Ginkgoaceae family is unique in various ways, including:
- Ginkgo biloba is the only extant species in the family.
- The Ginkgo tree is dioecious, meaning that male and female reproductive organs are on separate trees.
- The leaves of the Ginkgo tree are fan-shaped and have parallel veins, similar to ferns and other primitive plants.
- It has a unique reproductive system that involves the production of motile sperm, which swim to the egg cell.
- The Ginkgo tree is tolerant of air pollution and has been used as a natural air purifier in urban areas.
- The plant is also known for its medicinal properties and has been used in traditional medicine to treat various ailments, including memory and cognitive disorders.
Distribution of Ginkgoaceae Family
The Ginkgoaceae family consists of a single extant species, Ginkgo biloba, which is commonly known as the Ginkgo tree. This family is considered as a living fossil since it is the only surviving member of the division Ginkgophyta. The Ginkgo tree is endemic to China, but it is widely cultivated in different parts of the world due to its ornamental, medicinal, and dietary values.
The Ginkgo tree has a long history, and it is believed to have existed for more than 270 million years. The tree has survived several mass extinctions and has been able to adapt to various environmental conditions. It is found in many countries around the world, including the United States, Japan, South Korea, and Germany.
Habitats of Ginkgoaceae Family
The Ginkgo tree is a hardy plant that can survive in different climatic conditions. It prefers well-drained soils and moderate sunlight. The tree has been able to adapt to adverse environmental conditions, although, as noted above, it is dioecious: individual trees are normally either male or female rather than producing both types of gametes on the same tree.
In its natural habitat, the Ginkgo tree is found in broad-leaved deciduous forests and mixed coniferous-deciduous forests. It can tolerate frost and can grow in high altitudes, up to 4,000 meters above sea level. The tree is also resistant to pests and disease, which has contributed to its survival for millions of years.
Ecological preferences or adaptations of Ginkgoaceae Family
The Ginkgo tree has several ecological preferences and adaptations, such as its resistance to pests and diseases. The tree also has a unique reproductive system, which allows it to adapt to different environmental conditions and increase its chances of survival.
The Ginkgo tree has been able to thrive in different climates and soil types due to its tolerance to harsh environmental conditions. It can tolerate air pollution, which has made it a popular plant for urban areas. The tree has also been able to survive several mass extinctions due to its ability to adapt to changing climates and environments.
Overall, the Ginkgoaceae family has unique characteristics that have enabled its survival for over 270 million years. The Ginkgo tree is a living fossil that has adapted to various environmental conditions, and it continues to provide valuable benefits to humanity.
General morphology and structure
The Ginkgoaceae family comprises only one extant species, Ginkgo biloba, commonly referred to as the maidenhair tree. The plant is deciduous and grows up to 40 meters tall. The bark is light gray, rough, and furrowed. The leaves of Ginkgo biloba are unique: fan-shaped with two lobes and reminiscent of maidenhair fern leaflets, hence the common name "maidenhair." The leaves are typically 5-15 cm long and 3-5 cm wide. The tree produces male and female reproductive structures on separate trees, and the female trees produce seeds with a distinctive, foul-smelling fleshy coat. Male trees are usually preferred for landscaping because they do not produce these malodorous seeds.
Key anatomical features and adaptations
Ginkgo biloba is a gymnosperm, meaning that its seeds are not enclosed in a fruit. The plant has several distinctive anatomical features, including its thick-walled and branched xylem cells, which help to transport water and nutrients through the plant. The leaves of Ginkgo biloba also have specialized structures known as stomata, which are small pores responsible for gas exchange. The stomata are located on the underside of the leaves, which helps to reduce water loss through transpiration. Additionally, the plant has a high tolerance for pollution and is often planted in urban areas due to its air-purifying properties.
Variations in leaf shapes, flower structures, or other distinctive characteristics
While the Ginkgoaceae family only contains one extant species, there are several fossil species known from various regions around the world. The fossil record shows that the leaves of Ginkgoaceae species have varied considerably throughout their history. Some species have long, narrow leaves with serrated edges, while others have more rounded leaves with smooth edges. The reproductive structures of fossil species have also varied, with some species producing cones and others producing fleshy fruits.
Reproductive Strategies in the Ginkgoaceae Family
Plants in the Ginkgoaceae family, such as Ginkgo biloba, use a combination of sexual and asexual reproduction to propagate. The plants have separate male and female individuals; trees bearing both types of reproductive structures are rare exceptions.
Mechanisms of Reproduction
Female and male gametophytes are produced within the ovule and pollen grain, respectively, through meiosis. Pollination occurs through wind dispersal of pollen grains, which are carried to the ovules on female trees. Once fertilized, the ovule develops into a seed whose fleshy outer layer, called the sarcotesta, gives it a fruit-like appearance.
Ginkgoaceae plants can also reproduce asexually through vegetative propagation, where new plants grow from cuttings or suckers from the parent plant's roots.
Flowering Patterns and Pollination Strategies
Ginkgoaceae plants produce their reproductive structures once a year in early spring, before the leaves emerge. On female trees, the ovules are borne in pairs at the ends of stalks on short spur shoots, while male trees carry catkin-like pollen cones, also on spur shoots. Shoots bearing both types of structures are uncommon and occur only sporadically.
The pollination strategy of Ginkgoaceae plants relies on wind dispersal. The trees produce large amounts of pollen, and after a pollen grain is captured by an ovule it eventually releases multi-flagellated, motile sperm cells that swim a short distance to the egg.
Seed Dispersal methods and Adaptations
The seeds of Ginkgoaceae plants are well adapted for dispersal. The sarcotesta, which is fleshy and contains butyric acid, becomes attractive to some animals as it decays; the animals eat the sarcotesta and spread the seeds over a larger area. Seed-caching animals provide another dispersal route, collecting the seeds and burying them for later consumption; seeds that are never retrieved can germinate away from the parent tree.
The seeds themselves are also able to withstand harsh environmental conditions. The large female gametophyte inside the seed stores food reserves, which support the developing embryo and enable it to grow in relative isolation.
Economic Importance of the Ginkgoaceae Family
The Ginkgoaceae family has considerable economic importance due to the medicinal, culinary and industrial uses of its plants. Extracts from Ginkgo biloba leaves are believed to have health benefits and are widely used in traditional Chinese medicine as a memory enhancer, anti-inflammatory, and antioxidant. Ginkgo extracts are also used in Western herbal medicine for circulation disorders and age-related memory loss, although the clinical evidence for these uses is mixed. The seeds of Ginkgo biloba are eaten in some regions of China as a delicacy. In addition, due to its hardness and resistance to decay, Ginkgo wood is used for furniture, flooring, and construction.
Ecological Role of the Ginkgoaceae Family
The Ginkgoaceae family plays an important role in many ecosystems. Ginkgo trees are shade tolerant and can grow in a wide range of soils. They are commonly used in urban landscapes due to their aesthetic appeal and the shade they provide. Ginkgo trees also support a variety of wildlife, providing habitat for birds and insects. The leaves of Ginkgo biloba are known to be allelopathic, which means they release chemicals that inhibit the growth of nearby plants, helping to reduce competition and create a unique niche in the ecosystem.
Conservation and Ongoing Efforts
Currently, the Ginkgoaceae family is represented by only one living species, Ginkgo biloba, which is considered endangered in the wild. It is estimated that fewer than 10,000 mature trees exist in its natural range. The main threats to Ginkgo biloba are habitat loss due to urbanization and agriculture, and selective logging for timber. Therefore, various ongoing efforts are being made to conserve this species and its natural habitat. Some organizations are working on ex-situ conservation, such as planting Ginkgo biloba in botanical gardens and arboretums, while others are focusing on in-situ conservation, such as protecting and restoring the remaining natural habitats. In addition, conservation efforts also aim to raise public awareness and promote sustainable management practices to preserve the economic, ecological, and cultural importance of this species.
Most optical lenses have spherical surfaces because those can be most easily fabricated with high optical quality. That surface shape, however, is not ideal for imaging; the outer parts of the lens are then too strongly curved. This is most obvious when considering a ball lens. Figure 1 demonstrates this for a ball lens with 10 mm diameter and a refractive index of 1.515 (N-BK7 glass at 633 nm), which is used to focus parallel incoming light. The outer incoming rays are crossing the optical axis substantially sooner than the paraxial ones.
When using lenses with spherical surfaces for imaging applications, the explained effect leads to so-called spherical aberrations which can seriously degrade the image quality. Similarly, the use of spherical lenses for focusing or collimating laser beams leads to beam distortions.
In many cases, the aberration effects are far less extreme than those shown above for the ball lens, since the involved curvatures are not that strong.
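To make the ball-lens example from Figure 1 concrete, here is a minimal numerical sketch (not code from the original article) that traces parallel rays through a 10 mm ball lens of index 1.515 using the vector form of Snell's law and reports where each ray crosses the optical axis. The helper names (refract, axis_crossing) and the ray heights are my own choices for illustration.

```python
import numpy as np

N_GLASS = 1.515   # refractive index quoted in the text (N-BK7 at 633 nm)
R = 5.0           # ball lens radius in mm (10 mm diameter, as in the text)

def refract(d, n, n1, n2):
    """Vector form of Snell's law. d: unit ray direction; n: unit surface
    normal pointing against the incoming ray; n1, n2: refractive indices."""
    cos_i = -np.dot(d, n)
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        raise ValueError("total internal reflection")
    return r * d + (r * cos_i - np.sqrt(k)) * n

def axis_crossing(h):
    """Trace a ray entering parallel to the axis at height h (mm) and return
    where it crosses the optical axis, measured from the back surface of the
    ball lens (sphere centered at the origin)."""
    p = np.array([-np.sqrt(R * R - h * h), h])   # entry point on the sphere
    d = np.array([1.0, 0.0])                     # incoming ray direction
    d = refract(d, p / R, 1.0, N_GLASS)          # outward normal opposes the ray here
    p = p + (-2.0 * np.dot(p, d)) * d            # re-intersect the sphere (|p| = R, |d| = 1)
    d = refract(d, -p / R, N_GLASS, 1.0)         # inward normal opposes the exiting ray
    return p[0] - p[1] / d[1] * d[0] - R         # x where y = 0, behind the back surface

for h in (0.1, 1.0, 2.0, 3.0, 4.0):
    print(f"ray height {h:3.1f} mm -> crosses the axis {axis_crossing(h):4.2f} mm behind the lens")
```

For near-axis rays the crossing lands roughly 2.35 mm behind the lens, while a ray entering at 4 mm height crosses after less than 1 mm, which is exactly the behavior described above for the marginal rays.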
Spherical Aberrations from Plane Plates
The problem of spherical aberrations can be generalized to all aberrations associated with a non-ideal radial dependence of phase changes. That can occur even for plane surfaces, e.g. of plane-parallel plates, when divergent or convergent light travels through such a plate. This is essentially because the law of refraction contains the sine rather than the tangent function, which would be required to avoid spherical aberrations.
An example case is shown in Figure 3.
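As a small numerical illustration of the sine-versus-tangent point, the sketch below evaluates the longitudinal focus shift that a plane-parallel plate introduces for rays converging at different angles, using the standard relation dz = t * (1 - cos(theta) / sqrt(n^2 - sin^2(theta))). The plate thickness (5 mm) and index (1.515) are assumed example values, not taken from the text; if the shift were identical for all angles there would be no aberration, but it grows with the ray angle.

```python
import math

n = 1.515   # assumed plate index (same glass as the ball-lens example)
t = 5.0     # assumed plate thickness in mm, for illustration only

def focus_shift(theta_deg):
    """Longitudinal shift of the focus of a converging beam caused by a
    plane-parallel plate, for a ray converging at angle theta to the axis."""
    th = math.radians(theta_deg)
    return t * (1.0 - math.cos(th) / math.sqrt(n * n - math.sin(th) ** 2))

for theta in (1, 5, 10, 20, 30):
    print(f"ray angle {theta:2d} deg: focus shift {focus_shift(theta):.3f} mm")
```

The paraxial rays are shifted by about t * (1 - 1/n), roughly 1.70 mm here, while rays at 30 degrees are shifted by almost 2 mm, so a converging beam no longer comes to a single focus.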
Quantification of Spherical Aberrations
The strength of spherical aberrations of an optical system or an optical component such as a lens is often quantified by plotting the deviation of the longitudinal position of the image focal point as a function of the transverse offset of the incident rays. Often, one exchanges the coordinate axes, so that the resulting plot corresponds more closely to a horizontal optical axis. The mentioned position error may scale with the square of the transverse beam coordinate, but in cases where the spherical aberrations are partly compensated (see below), that compensation may work well for one particular transverse offset and less well for others.
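To see this quantification in practice, the short continuation below reuses the axis_crossing helper from the ball-lens sketch earlier (it assumes both snippets live in the same script) and tabulates the focus error against the transverse ray offset. The ratio dz/h^2 stays nearly constant for small offsets, reflecting the approximately quadratic scaling mentioned above, and drifts upward for larger offsets where higher-order aberration terms appear.

```python
# Longitudinal aberration relative to an (almost) paraxial reference ray,
# reusing axis_crossing() from the ball-lens sketch above.
z_paraxial = axis_crossing(1e-3)            # reference focus for a near-axis ray
for h in (1.0, 2.0, 3.0, 4.0):
    dz = z_paraxial - axis_crossing(h)      # how much sooner the marginal ray focuses
    print(f"offset {h:.1f} mm: focus error {dz:.3f} mm   dz/h^2 = {dz / h**2:.4f} 1/mm")
```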
Reducing Spherical Aberrations
Spherical aberrations can be reduced in different ways:
- The simplest method is to restrict the area of the incoming light with an optical aperture. That way, one can prevent the outer regions, where spherical aberrations are most extreme, from contributing to the image. However, that implies a reduced light throughput.
- One can use aspheric lenses, which have modified surface shapes such that spherical aberrations are avoided.
- One can use a combination of spherical lenses designed such that spherical aberrations are well compensated. This method is frequently used in photographic objectives, for example.
To some extent, one can also reduce spherical aberrations by choosing an appropriate type of lens, depending on the required configuration (see Figure 3):
- For imaging a small spot to a spot of equal size, the symmetric biconvex lens is well suited. However, it is even better to use two plano-convex lenses in combination, with the flat surfaces on the outer sides.
- For an asymmetric application, such as focusing a collimated beam or collimating a strongly divergent beam, a plano-convex lens can be more appropriate. The best solution would actually be an asymmetric lens with optimized curvature radii on both sides, but a plano-convex lens is often close enough. It must be oriented such that the curved surface is on the side of the collimated beam. Both lens surfaces then contribute to the focusing action.
Generally, lenses should be used such that both surfaces contribute similarly to the focusing action.
The development of improved optical fabrication methods for aspheric optics has led to their increased use, allowing manufacturers to make high-performance objectives with fewer lenses – which can also result in improved light throughput. Note, however, that other kinds of optical aberrations can then still occur.
Propeller shafts, known in marine terminology as tail shafts, are found in the ship's aft segment behind the main propulsion machinery. A tail shaft is a round metal fixture that rotates, connects, and transmits force from one component to another.
The tail shaft is a part of the propulsion system not only in marine but also in automotive and aviation. In general, it serves as a connection between the engine and the drive unit, for boats the drive unit is called the propeller.
A propeller is a fan-like structure situated underwater and attached to the vessel's shafts. Propellers come in various types and forms, but the common principle is a set of twisted blades that, when rotated, generate thrust along the axis of rotation. That thrust is seen as the ship's movement either ahead or astern.
The connection built by the tail shaft between the main engine and the propeller also transmits the power (speed and torque) produced by the main engine to the ship’s propellers.
More precisely, the power of the main engine, whether diesel or electric, is taken from its flywheel and delivered either to a marine gearbox (if equipped) or directly to the propeller shafting, depending on whether an intermediate shaft is needed or only the propeller shaft is installed.
The energy carried by the driving shaft is delivered to the propeller, which translates the rotating motion into forward or reverse movement of the ship's hull.
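As a back-of-the-envelope illustration of the speed and torque that the shafting carries, shaft power, rotational speed and torque are related by P = 2*pi*n*T. The engine power (10 MW) and shaft speed (90 rpm) used below are assumed example figures, not values from this article.

```python
import math

P_watts = 10.0e6    # assumed delivered engine power: 10 MW (illustrative only)
rpm = 90.0          # assumed shaft speed in revolutions per minute (illustrative only)

n_rev_per_s = rpm / 60.0
torque = P_watts / (2.0 * math.pi * n_rev_per_s)   # P = 2*pi*n*T  ->  T = P / (2*pi*n)
print(f"Torque transmitted by the shaft: {torque / 1000.0:,.0f} kN*m")
```

For those assumed figures the shaft carries a torque on the order of 1,000 kN*m, which gives a sense of why tail shafts, couplings and keys are such heavily dimensioned components.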
Broadly speaking, the engine, shafting, and propeller acting as a single unit form a system called the propulsion system. The design varies to fit the vessel's conditions, for example its size and type.
The propulsion system adapts to the vessel's needs. Manufacturers take into account the distance between the ship's engine and the propeller: the longer the span between these components, the more additional fittings and parts are installed to ensure the rigidity and efficiency of the system.
The distance between the engine and propeller differs per vessel type. The propellers are always fitted at the ship's rearmost portion, but the engine can be positioned either amidships or toward the aft part of the ship.
The position is determined during ship construction by naval architects, who plan the vessel's general arrangement in line with its stability. The general arrangement is a layout in which the various equipment aboard (generators, main engine, boilers, steering gear, etc.) is identified and placed around the ship.
Ship stability, on the other hand, concerns the weight distribution around the ship (machinery, cargo, and structures) and how the ship will behave in light-load and full-load conditions, in calm and heavy seas.
Stability analysis determines the stresses acting upon the hull and identifies key points such as the center of gravity, the center of buoyancy, and the metacenter of the ship.
Consider, for instance, container ships versus bulk carriers. Many container ships have their superstructure amidships, with the ship's heart, the main engine, placed at the bottom of this structure, while bulk carriers have theirs installed at the aft portion.
This difference presents different arrangement problems for the system, which is why intermediate shafts, intermediate bearings, thrust blocks, and similar components are introduced. These items are discussed later in this article.
What are the types of propeller shafts?
The propeller shaft can be connected to the main engine in various ways, and the tail shaft is classified accordingly. The three types in use are flange-connected shafts, tapered shafts, and muff couplings.
Flange-connected propeller shafts have a rim around the tip that mates with another shaft having a flange of the same size and number of holes. The holes are precisely aligned and fastened with bolts; bolt sizes and material composition are chosen according to the tail shaft size and application.
Tapered shafts are shafts having tapered ends or decreasing diameters. On those reduced diameter tips, a shaft crown is installed. This shaft crown is fixed mechanically along the shaft body with a key on its center to ensure the crown won’t slip as it rotates. The crown is coupled with stud bolts to the main engine driving shaft or gearbox flange.
Muff Coupling is a type of shaft coupling where two perfectly aligned shafts are enveloped by a muff or sleeve. This sleeve consists of an outer and inner conical jacket. The outer jacket is pushed parallel and hydraulically so that it will compress the inner sleeve that is tapered and slightly larger than the shaft diameter.
Once the sleeves are compressed to the manufacturer's recommended pressure, the hydraulic pressure is released and the muff sleeves form a strong coupling between the two shafts.
Main Components Of Marine Propeller Shaft Arrangement
The shown diagram is an almost complete illustration of a ship’s propulsion system. The purpose of the items presented above will be discussed shortly below from rightmost to leftmost.
The thrust shaft is a propulsion shaft portion where it is supported by the thrust block. This shaft connects the driving engine to the intermediate shaft. The sole purpose of this shaft is to deliver the rotating motion of the engine to the intermediate shaft.
Propeller Shaft Thrust Block
A thrust block is also a form of bearing mounted steadily on the ship structure to resist the thrust produced by the propeller shaft. This resistance transmits the force to the ship’s hull and creates movement. The block holds and lubricates the thrust shaft while it rotates. These blocks are also referred to as thrust bearings or thrust boxes. The oil lubricating the thrust shaft is enclosed by the thrust block and is sealed by two labyrinth rings known as forward and aft end seals.
Journal bearings are metal rings fitted between the rotating shaft and the thrust block. These bearings center the shaft inside the thrust block. The support provided by the journal bearings maintains the shaft orientation while it spins inside the hollow portion of the block where lubricating oil is encased.
The quantity and length of intermediate shafts depend on the ship's length. For ships with a greater distance between the engine and propeller, the intermediate shafting is longer; its purpose is to link the propeller shaft to the thrust shaft.
Intermediate bearings support the intermediate shaft at its vital points to keep it aligned and allow it to rotate freely on its axis. The longer the intermediate shaft, the more bearings are placed to hold it.
Stern tube tunnel bearing
Stern tube tunnel bearing like the first two mentioned bearings supports the shaft of the propulsion system but mainly concerns the last shaft segment called the propeller shaft. These tunnel bearings are found adjacent to the rigidity tube where the propeller shaft exits the ship spaces underwater.
Inboard seals and outboard seals are closure arrangements designed to prevent seawater ingress into the ship's hull and oil leakage into the sea. These seals provide a barrier that keeps the two media (water and oil) from mixing, and they are very important to the ship's stern tube lubrication system. As the name suggests, inboard seals keep oil or water from entering the ship spaces, while outboard seals prevent the loss of oil by spillage into the sea. Sealing arrangements are elaborated on later in this article.
The stern tube is the part of the vessel's hull that houses the bearings and the propeller shaft on its way out of the ship spaces to the water. Sometimes called the rigidity tube, this housing also offers a passage for the propeller shaft cooling and lubrication arrangement.
The tail shaft, or propeller shaft, is the shaft to whose end the propeller is attached. It is largely located underwater and is the first to receive the thrust created by the spinning propeller.
A strut is a support attached to the ship's hull underwater that holds the tail shaft as it rotates. Struts are fitted with a bushing bearing which maintains the tail shaft alignment.
The propeller, as discussed at the beginning, is a wheel-like structure fitted with twisted blades and attached to the tail shaft. It provides the push or pull that moves the ship's hull as power is delivered through the propulsion system.
The Stern Tube Sealing Arrangement
The sealing arrangement of the stern tube varies with its lubrication type: the stern tube can be either hydro-lubricated or oil-lubricated.
Hydro-lubricated stern tube systems have bushing bearings installed between their tail shaft and stern tube. The bushing bearings have grooves or flutes around them where it allows a direction of seawater to flow. This flow of seawater reduces the friction between the bearing and the propeller shaft as the shaft rotates on it.
The system introduces a pump suctioning from the sea chest to deliver the seawater to the shaft bearings where it passes continuously until it escapes back to the sea. The nonstop flow of seawater provides cooling to protect bearings from premature wear as the shaft is rotated especially at high RPMs.
Oil-lubricated stern tube system uses biodegradable oils as cooling and lubricating element. This system utilizes an oil reservoir circulated by a pump to create a cycle. The oil goes in and out of the stern tube conservatively.
The oil is being conserved to prevent oil spillage to the sea. In any unfortunate event, the oil is biodegradable to prevent harm to marine life. The oil has the same function as seawater in the hydro-lube system protecting the stern tube bearing from premature wear.
The system dumps the heat collected by the oil thru a heat exchanger which is connected to the seawater system. In any case, the heat collected is transferred from the shaft to the stern tube oil to the seawater where it is released back into the vast ocean.
Generally, there are three types of stern tube sealing arrangements: mechanical face seals, packing-type seals, and lip seals. Mechanical face seals and packing-type seals are quite similar to the mechanical seals and packing glands commonly seen on pumps.
Mechanical face seals, or dripless face seals, have a rubber bellows, a carbon/graphite ring, and a rotating flange. One end of the bellows wraps around the stern tube opening while the opposite end is fitted with the graphite ring. The bellows also acts as a spring, pressing the carbon ring on its end against the rotating flange. The rotating flange is fastened securely to the tail shaft.
The amount of compression is given by the seal’s manufacturer in line with the shaft specifications. The surface contact between the carbon ring and the rotating flange creates waterproofing to prevent the entry of water into the ship space.
The carbon ring is sacrificial, lapping and grinding itself against the rotating flange. The diminishing surface of the carbon ring is compensated by the spring action of the rubber bellows, which maintains the seal at the contact surface.
Packing-type seals use a packing gland, in simple terms a special greased rope. The packing is wrapped around the shaft and compressed by a flange that is adjusted over time as the gland wears. The compression of the retaining ring against the packing generates a seal around the lantern ring, preventing the lubricating medium from entering the ship.
Lastly, lip seals consist of rubber lip rings and a flange mounted on the ends of the stern tube. The rings control water ingress because the rubber is pressed against the stern tube opening by a flange fixed behind it and securely tightened with stud bolts.
The seals explained above allow the propeller shaft to rotate underwater without letting water enter the ship. |
BREAKING: Astrophysicists Say We Could Spot “Wormholes” Created by Something Unknown
It’s possible that an extremely advanced alien civilization has created a transportation network of wormholes around the universe — and we might even be able to spot them.
While it's certainly a far-fetched theory, according to a new piece by BBC Science Focus, it has some scientists intrigued. Take Nagoya University astrophysicist Fumio Abe, who told the publication that we may have already captured evidence of such a network in existing observations, but lost it in the sea of data, leading to the intriguing prospect that reanalyzing old observations could lead to a breakthrough in SETI.
“If the wormholes have throat radii between 100 and ten million kilometers, are bound to our Galaxy, and are as common as ordinary stars, detection might be achieved by reanalyzing past data,” Abe told Science Focus.
It’s an alluring theory, in other words, that suggests one more pathway to figure out once and for all whether humans are alone in the universe.
In simple terms, wormholes are theoretical tunnels with two ends at separate points in time and space. While they don't violate Einstein's general theory of relativity, we still have no idea whether they could actually exist, let alone whether a sufficiently advanced civilization would be capable of producing them.
For a wormhole to exist, though, it would take astronomical amounts of energy.
“Intrinsically unstable, a wormhole would need ‘stuff’ with repulsive gravity to hold open each mouth, and the energy equivalent to that emitted by an appreciable fraction of the stars in a galaxy,” reads Science Focus‘ story. The idea would be that “if ETs have created a network of wormholes, it might be detectable by gravitational microlensing.”
That technique has been used in the past to detect thousands of distant exoplanets and stars by detecting how they bend light. Whether it could be used to detect wormholes, to be clear, is an open question.
Fortunately, spotting wormholes isn’t our only shot at detecting life elsewhere in the universe. Science Focus also pointed to the search for theoretical megastructures that harness the energy of a star by fully enclosing it, or atmospheric chemicals linked to human pollution, or extremely thin reflective spacecraft called light sails, any of which could theoretically lead us to discover an extraterrestrial civilization.
The concept of wormholes is a tantalizing prospect, especially considering the fact that they could give an alien civilization — or even us — the ability to travel over vast stretches of space and time.
But for now, unfortunately, they’re not much more than a fun thought experiment. |
Answer each question in 100 words.
1. How can you measure arousal and anxiety?
2. Identify three personal sources of stress.
3. Discuss three personal factors and three situational factors that mediate one's interpretation of anxiety.
1. Measuring arousal and anxiety can be challenging because these psychological states are subjective experiences. Nevertheless, several methods have been used to assess arousal and anxiety levels. Physiological measures, such as heart rate, skin conductance, and brainwave activity, can provide objective indications of arousal. Self-report measures, such as questionnaires and rating scales, can also be used to gauge subjective feelings of anxiety. Additionally, behavioral observations of individuals’ actions, such as fidgeting or avoidance behaviors, can offer insights into their anxiety levels. Combining these different measures can provide a comprehensive assessment of arousal and anxiety.
2. Personal sources of stress can vary greatly among individuals, as factors that cause stress are highly individualized. However, three common sources of personal stress include work-related stress, interpersonal conflicts, and financial pressures. Work-related stress can stem from excessive workloads, lack of job security, or conflicts with colleagues. Interpersonal conflicts, such as strained relationships or conflicts within families, can generate significant stress. Financial pressures, such as debt, unemployment, or struggles to make ends meet, can also contribute to personal stress.
3. The interpretation of anxiety is influenced by both personal and situational factors. Three personal factors that mediate one’s interpretation of anxiety are coping style, self-esteem, and past experiences. Coping style refers to the individual’s habitual way of dealing with stress and can influence how one appraises and responds to anxiety. People with high self-esteem may interpret anxiety as a temporary state and feel more confident in managing it. Past experiences with anxiety, such as previous successful coping strategies or traumatic events, can shape one’s interpretation of current anxiety.
Three situational factors that mediate the interpretation of anxiety include social support, the nature of the anxiety-provoking situation, and cultural factors. Social support, such as having friends or family who provide emotional support, can buffer the impact of anxiety and influence one’s interpretation of it. The nature of the anxiety-provoking situation, such as its controllability or familiarity, can also shape how anxiety is interpreted. For example, an unfamiliar and uncontrollable situation may lead to higher anxiety interpretations. Cultural factors, such as societal norms and values, can shape individuals’ understanding and interpretation of anxiety. Different cultures may have different beliefs about what causes anxiety and how it should be managed.
Overall, the interpretation of anxiety is a complex process influenced by a variety of personal and situational factors. Understanding these factors can provide valuable insights into how individuals perceive and respond to anxiety, allowing for tailored interventions and support to alleviate distress. |
What do we want to know?
Small-group discussions have been strongly advocated as an important teaching approach in school science for a number of years. The principal aim of the review is to explore the nature of small-group discussions aimed at improving students' understanding of evidence in science.
Who wants to know?
Policy-makers; practitioners; students
What did we find?
- In general, students often struggle to formulate and express coherent arguments during small-group discussions, and demonstrate a relatively low level of engagement with tasks. There is very strong evidence that teachers and students need to be given explicit teaching in the skills associated with the development of arguments and the characteristics associated with effective group discussions.
- There is good evidence that the stimulus used to promote discussion should involve both internal and external conflict, i.e. where a diversity of views and/or understanding are represented within a group (internal conflict) and where an external stimulus presents a group with conflicting views (external conflict).
- There is good evidence on group structure; it tends to indicate that groups should be specifically constituted so that differing views are represented. Assigning managerial roles to students is likely to be counterproductive. Group leadership which promotes inclusion and reflection can be effective.
- There is some evidence that small-group discussion work does improve students' understanding and use of evidence.
What are the implications?
Small-group discussion work needs to be supported by the provision of guidance to teachers and students on the development of the skills necessary to make such work effective.
Small-group discussion work can assist students in the development of ideas about using evidence and constructing well-supported arguments. Teachers should be encouraged to incorporate such discussions into their teaching, provided that appropriate support is offered to help them develop the necessary skills.
How did we get these results?
Nineteen studies were included in the in-depth review.
This summary was prepared by the EPPI Centre
This report should be cited as: Bennett J, Lubben F, Hogarth S, Campbell B and Robinson A (2005) A systematic review of the nature of small-group discussions aimed at improving students’ understanding of evidence in science. In: Research Evidence in Education Library. London: EPPI Centre, Social Science Research Unit, Institute of Education, University of London. |
Yawning is contagious for animals as well as humans, but researchers can’t quite figure out why. Now, new research on lions suggests a potential function for the contagious yawn for at least one creature. The study, published last month in the journal Animal Behaviour, finds that after a yawn sweeps through a group of lions, the animals tend to coordinate their subsequent movements, reports Mary Bates for National Geographic.
For New Scientist, Christa Leste-Lasserre reports the results are the first to show that communal yawning can orchestrate synchronized behavior in animals.
“Lions share a lot of things, like highly organized hunts and caring for [cubs],” Elisabetta Palagi, an ethologist at the University of Pisa in Italy, tells New Scientist. “So obviously they need to synchronize movement, and they need to communicate and anticipate the actions of their companions.”
The study came about after Palagi saw videos recorded by her master’s students in South Africa. Time and time again, after a yawn ricocheted through a group of lions, she observed the animals standing up and moving in near unison just a few moments later, according to New Scientist.
Inspired to look into the phenomenon formally, Palagi directed her team to spend five months filming 19 lions from two prides living in the Makalali Game Reserve in northeastern South Africa.
After analyzing the results, the team found lions that had just seen another pride member yawn were 139 times more likely to yawn themselves within three minutes compared to lions that hadn’t seen the behavior. The big cats were also 11 times more likely to mirror the movements of the lion that initiated the bout of contagious yawning, which the researchers call the “trigger,” according to New Scientist.
“After they yawned together, if the trigger stood up, then within seconds the second lion did the same,” Palagi tells New Scientist.
Palagi tells National Geographic that the findings show a clear correlation between contagious yawning and coordinated action, which suggests the behavior may be important for lions and other highly social species that rely on each other to find food and defend the group from danger.
Andrew Gallup, a biopsychologist at the State University of New York Polytechnic Institute who was not involved in the research, tells National Geographic that the study’s findings support the notion that the synchrony that follows contagious yawning may give animals that live in groups “advantages for collective awareness and threat detection.” |
“Commentary” is a news genre that presents the writer’s views and opinions on a specific topic or issue. Like opinion pieces, commentary is distinguished from “objective” news, which aims to present information and facts in a neutral and unbiased way. However, commentary is typically more focused on a specific topic or issue, and may provide more in-depth analysis and context.
Guidelines for journalists and users of a citizen journalist website to follow when writing and consuming commentary include:
- Clearly label commentary: Commentary pieces should be clearly labeled so readers know that they are reading the writer’s personal views and opinions on a specific topic.
- Disclose any conflicts of interest: Conflicts of interest may affect a writer's views or opinions and should be disclosed in the piece.
- Respect the views and opinions of others: Writers and readers should respect the views and opinions of others, even if they disagree. This means avoiding personal attacks and harassment, and instead engaging in respectful and constructive debate.
- Present factual information accurately: While commentary pieces are meant to present the writer’s personal views, they should still be based on accurate and factual information.
- Provide context and analysis: Commentary should provide context and analysis on the topic or issue being discussed, rather than simply expressing the writer’s personal opinions.
By following these guidelines, journalists can help ensure that commentary is presented in a responsible and transparent way and that readers are able to understand and evaluate the writer’s perspective. |
met - a - lan - guage
There are many reasons to learn how to divide metalanguage into syllables. Separating a word like metalanguage into syllables is mainly to make it easier to read and pronounce. The syllable is the smallest sound unit in a word, and the separation of metalanguage into syllables allows speakers to better segment and emphasize each sound unit.
Knowing how to separate metalanguage into syllables can be especially useful for those learning to read and write, because it helps them understand and pronounce metalanguage more accurately. Furthermore, separating metalanguage into syllables can also be useful in teaching grammar and spelling, as it allows students to more easily understand and apply the rules of accentuation and syllable division.
In the case of the word metalanguage, we find that when separating it into syllables the resulting number of syllables is 4. With this in mind, it's much easier to learn how to pronounce metalanguage, as we can focus on perfecting the syllabic pronunciation before trying to pronounce metalanguage in full or within a sentence. Likewise, this breakdown of metalanguage into syllables makes it easier for us to remember how to write it. |
What causes surface flooding?
Surface flooding happens when rain is so substantial that the ground cannot cope and drain water away quickly enough. In the instance of flooding, roads can become like rivers, buildings can become flooded and cars can be carried away.
Severe flooding is caused by atmospheric conditions that lead to heavy rain or the rapid melting of snow and ice. Anywhere can flood, but areas located near rivers are more at risk.
Monday's IPCC report on climate change signalled "a code red for humanity". The report stated that "heavy rains fueled by warmer air will increase the number of deadly floods across the planet".
Does climate change cause flooding?
The UN Scientific Report stated:
“Human activity is changing the climate in unprecedented and sometimes irreversible ways.”
The study warns of increasingly extreme heatwaves, droughts and flooding, and a key temperature limit being broken in just over a decade.
Researchers have said “flooding is the environmental disaster that impacts people more than any other.”
By 2030, millions will experience increased flooding due to climate and demographic change.
The IPCC report stated: "The recent flooding disasters in Germany, China and Afghanistan share a characteristic that is on the rise: flash flooding. It isn't clear whether such individual events can be blamed on climate change, even though generally higher temperatures can cause more moisture to gather in the atmosphere. And other factors like the age of infrastructure and river management can contribute to death tolls from flooding.
"But as intense rains have become more common across the globe, these types of rapidly moving and often deadly flood events have increased in number over the past decade," according to Robert Brakenridge of the Dartmouth Flood Observatory at the University of Colorado.
Urbanisation and flooding
Flooding is increasingly affecting areas not pinpointed as "at risk" areas.
When Hurricane Harvey hit Texas in 2017, around 80,000 homes were flooded that were not on government risk maps.
Between 2000 and 2018 around 290 million people were directly affected by flooding. These areas have seen population growth of up to 86 million over the last 20 years.
This represents an increase of 20-24% in the proportion of the world’s population that is exposed to flood risk – ten times higher than previous estimates.
The countries with increased flood exposure risk were mainly located in Asia and sub-Saharan Africa.
Around 90% of the flood events observed by scientists were in South and Southeast Asia.
Dr Beth Tellman from the University of Arizona and Chief Science Officer at Cloud to Street, a global flood-tracking platform, said:
“We were able to capture a lot of floods in Southeast Asia more than other places, because they’re so slow-moving and so the clouds move and we’re able to get a really clear image of the flood.
“But there was also just a lot of flooding, really high impactful flooding in southern Asia and Southeast Asia. There’s also a large human population that settled near rivers for really important reasons [such as] agriculture. This, unfortunately, exposes people to a lot of flooding events.”
The global population grew by over 18% between 2000 and 2015, while in areas of observed flooding, the population increased by 34%.
While climate change has significant consequences for flooding, economics – and urbanisation – also plays an important role.
“Places that have flooded tend to be really cheap land for informal development, so in Guwahati, India and Dhaka in Bangladesh, we see people moving in, and so flooded areas then become settled.
“It may not be people’s choice to live in those areas because they might not have a lot of agency. If there were really good public housing programmes or other options, I think people probably wouldn’t choose to settle in a hazardous area.”
How many people are at risk of excess flooding?
It is estimated that by 2030 there will be an extra 25 countries experiencing increased flooding (in addition to the 32 being impacted at present).
Dr Tellman said: "We estimate that an additional 179.2 million people will be exposed to floods by 2030 in 100-year flood zones, and most of that is due to demographic change.
“Around 50 million extra people will be exposed to inundation, we think, due directly to climate change explicitly.”
Surface flooding in the UK and Europe
Flooding has taken a toll all over Europe. In Germany, 93 people died after extreme flooding devastated the country.
The western states of Rhineland-Palatinate and North Rhine-Westphalia were worst hit by the extreme weather conditions, with "buildings and cars washed away."
The Netherlands, Luxembourg and Switzerland have also been badly affected.
BBC Correspondent Jenny Hill reported: “All along the River Ahr there are flooded homes, broken bridges, the twisted remains of campsites and caravan parks.”
In the district of Ahrweiler “up to 1,300 people are unaccounted for” and villages were “almost entirely destroyed.”
London also experienced flash flooding this summer, when “nearly three inches of rain hit the capital in 90 minutes.” This resulted in flooded streets, basement flats, Tube stations and high streets.
Meteorologists said the floods in Germany were caused by "a low-pressure vortex circling over Europe" that is "hemmed in by other weather fronts."
The “near-stationary” low-pressure weather system, dubbed “Bernd” has concentrated freak rainfall over European nations.
European Commission President Ursula von der Leyen said these floods “really shows the urgency to act” on climate change, as extreme weather events are “hitting Europe more frequently as climate change warms the continent.”
2020 was Europe’s joint hottest since records began. Eight of the ten hottest ever years occurred in the past decade.
“Warmer air holds more water which, in turn, can lead to extreme downpours.”
Euronews states: "Changes in the geography of the land can also contribute to flooding, with important vegetation and other land barriers broken down as part of changing temperatures and freak weather patterns. This means that many of the natural preventative measures against flooding are no longer there."
The IPCC predicts that flooding will be concentrated in northern and central Europe, including the UK and Ireland, while “the south roasts”.
Sea levels will also rise causing “extreme and permanent flooding along Europe’s coasts”, especially in low-lying cities in Germany, Belgium and the Netherlands.
How can flood risk in the UK be checked?
Several resources are available:
- A map showing the potential extent of flooding to properties from rivers, surface water or reservoirs across the UK, including the long-term flood risk for a property.
- A flood map used in development planning to find out the probability of flooding for a location, by flood zone.
- A 5-day flood risk forecast highlighting the risk level of areas from very low to high.
Who is responsible for managing flood risk in the UK?
Different authorities and individuals are responsible for different types of flooding.
Householders are responsible for managing flood risk from:
- Internal flooding
- Any damage caused by storm events
- All private drainage inside and up to the boundary of the property
Watercourse owners are responsible for maintaining watercourses that run through, beneath, or adjacent to their land, including culverts, ditches, brooks, dykes and streams.
River flooding is the responsibility of the Environment Agency. River flooding occurs as a result of intense or sustained rainfall across a catchment that exceeds the capacity of a river's channel. The Environment Agency has oversight of all sources of flooding and coastal erosion.
Surface water flooding is when the volume of rainfall exceeds the capacity of drains and surface water sewers and is unable to drain away through drainage systems or soak into the land. The intensity of this flooding can be increased by blocked road gullies, drains and sewers and by waterlogged land, and it is made worse by hard surfaces. The Lead Local Flood Authority (LLFA) has responsibility for this type of flooding.
Groundwater flooding occurs when the water table rises up above the surface, usually during a prolonged wet period. Low lying areas, cellars and basements are more likely to experience groundwater flooding.
Sewer flooding can occur as a result of blockages caused by misuse of the sewerage system. It is the responsibility of water and sewerage companies.
Coastal flooding occurs due to high storm winds and low pressure. It is overseen by the Environment Agency.
Road flooding is most commonly caused by blocked gullies and drains.
Flash flooding occurs as the land is unable to cope with heavy rainfall. It is most common on land which has experienced a prolonged dry spell and urban areas with large amounts of hard surfaces. The source of the flash flood will determine who is responsible.
Reservoir flooding can cause major damage. Most large reservoirs are operated by water companies or the Environment Agency, and are regularly monitored and inspected to ensure they are safe.
Canal flooding is overseen by The Canal and River Trust.
The Environment Agency issues flood warnings to give properties at risk advance notice. These include:
Flood Alert: Flooding is possible. Stay vigilant and make early preparations for a potential flood.
Flood Warning: Flooding is expected. Immediate action is needed to protect yourself and your property.
Severe Flood Warning: Severe flooding is expected. There is a significant risk to life and property. You should prepare to evacuate and cooperate with emergency services.
What can be done to prevent flooding?
If flooding is expected:
- Put property flood resilience measures such as flood barriers in place
- If you have a flood kit or flood plan get them out and ready to use
- Move items upstairs or to safety if you can
- Move your car to safety if you can
- Stay up to date with local weather and travel on the TV, radio or social media
If you need to evacuate:
- Turn off the gas, electricity and water supplies
- Follow advice from the emergency services
- Move family and pets to safety
- Call 999 if in danger
Flooding can also be prevented before it occurs using permeable solutions.
Permeable paving allows stormwater runoff to infiltrate the ground. It can include pervious concrete, porous asphalt, paving stones and interlocking pavers. It is used in a variety of applications such as roads, car parks and pedestrian walkways.
Block paving can be used to create a contemporary or traditional patio. Decorative patio stones mean gardens can be enjoyed all year round. Permeable block paving prevents puddles as the porous materials allow water to soak into driveway blocks.
Porous tarmac is a fast-draining porous asphalt solution. It is designed for the long term while it possesses drainage characteristics.
Finally, the permeable paving grid is lightweight, strong and durable. It is highly effective at reducing surface water, while maintaining an attractive surface. X-Grid is an example of a permeable paving grid.
X-Grid: X-Grid is a versatile, SUDS compliant ground reinforcement grid which is suitable for a number of applications including driveways, patios and footpaths. The permeable nature of the grid allows water to pass through the structure into the sub-base below and helps to reduce surface water run-off. This helps to mitigate the effects of severe or sudden rainfall by locking groundwater into the sub-base for it to gradually soak away, diverting water from the sewer system.
Another way of reducing surface water is by installing an effective drainage system. Drainage channels are commonly found around driveways, patios, garages and conservatories.
RecoDrain Channel Drain: Plastic channel drainage system creates an effective drainage route that is easy to install and helps to reduce surface water. It is designed to redirect water away to mains or an off-grid underground drainage system. It can withstand 1.5 tonnes and can be connected to other RecoDrain to create a drainage channel of any size.
Soakaway crates are constructed using modular water storage cells. The crates loosely resemble old-style plastic milk crates and collect water from sandy or loamy soil from 40-70cm below the surface.
RecoCrate Soakaway Crates: RecoCrate is a range of recycled plastic crates for soakaways, attenuation and storm water management. It is used to create permeable infiltration schemes, underground water storage and attenuation systems. They are versatile products that can be used under car parks, landscaped areas and heavy-duty projects within retail, commercial and industrial areas. It can be used to collect as much water as possible using a non-permeable membrane to prevent liquid from escaping or can be used with a permeable membrane to slowly release water into the ground.
Resin Bound Gravel: This is also a permeable solution that prevents the build-up of surface water. Resin bound gravel is a SUDS compliant solution used in conjunction with a porous base allowing rainwater to drain away naturally. A wide range of colours and combinations are available, it is non-slip and will not fade over time. Ideal for car parks, footpaths, bus lanes and road markings.
What can be done to reduce climate change?
Speak Up: The biggest way to make a difference on global climate change is by influencing friends and family, and community leaders into making good environmental decisions.
Use renewable energy: Choose a utility company that generates power from green sources, such as wind or solar.
Weatherise: Heating and air conditioning account for almost half of home energy use. Improving insulation in the home can reduce these costs.
Search for energy-efficient appliances: Since 1987, rising efficiency standards have kept 2.3 billion tons of carbon dioxide out of the air. By searching for energy efficient refrigerators, washing machines etc this trend can continue.
Reduce water waste: To reduce carbon pollution, reduce water waste. It takes a lot of energy to pump, heat and treat water, so use water-efficient fixtures and avoid excess running water.
Don’t waste food: A lot of energy goes into growing, processing, packaging and shipping food – with a large percentage ending up at landfill. By reducing this waste, you can cut down on energy consumption. Livestock products are among the most resource-intensive so eating meat-free meals can make a difference.
Buy better bulbs: LED lightbulbs use 80% less energy than conventional incandescents and will save money.
Unplug devices: Plug points in the home are likely powering around 65 different devices, however many of these sit idle. Set monitors to power down after use.
Drive fuel-efficient vehicles: Gas-smart cars, such as electric vehicles, save fuel and money.
Choose trains over planes: Air transport is a major source of climate pollution. Walking and train travel are preferable, ensuring money is saved and pollution kept to a minimum.
Antonio Guterres said: “If we combine forces now, we can avert climate catastrophe. But, as today’s report makes clear, there is no time for delay and no room for excuses. I count on government leaders and all stakeholders to ensure COP26 (climate summit) is a success.”
Surface flooding is on the rise in the UK, Europe and around the world.
The increase in global temperatures caused by climate change is resulting in melting ice caps, sea levels rising and more extreme weather conditions.
The IPCC (UN body on climate change) has said that the latest report into climate change is a “code red for humanity”.
Flooding in the UK is everyone’s responsibility, from homeowners and individuals, to businesses and authorities. The Environment Agency issues several warnings – from flood alerts to severe flood warnings – to raise awareness of flood risk and encourage people to take action which can save lives and livelihoods.
Long term, it is everyone's responsibility to reduce the risk of climate change. Steps range from reducing air travel to unplugging devices. Ultimately, governments can act as influencers of change, and are being lobbied to do so alongside the general public.
|
After completing this section, you should be able to explain the “unusual” products formed in certain reactions in terms of the rearrangement of an intermediate carbocation.
Make certain that you can define, and use in context, the key terms below.
- alkyl shift
- hydride shift
Whenever possible, carbocations will rearrange from a less stable isomer to a more stable isomer. This rearrangement can be achieved by either a hydride shift, where a hydrogen atom migrates from one carbon atom to the next, taking a pair of electrons with it; or an alkyl shift, in which an alkyl group undergoes a similar migration, again taking a bonding pair of electrons with it. These migrations usually occur between neighbouring carbon atoms, and hence are termed 1,2-hydride shifts or 1,2-alkyl shifts.
[A hydride ion consists of a proton and two electrons, that is, [H:]−. Hydride ions exist in compounds such as sodium hydride, NaH, and calcium hydride, CaH2.]
An electrophilic addition reaction, such as that of HX to an alkene, will often yield more than one product. This is strong evidence that the mechanism includes rearrangement steps involving the intermediate cation.
Throughout this textbook many reaction mechanisms will be presented. It is impossible to know with absolute certainty that a mechanism is correct. At best a proposed mechanism can be shown to be consistent with existing experimental data. Virtually all of the mechanisms in this textbook have been carefully studied by experiments designed to test their validity although the details are not usually discussed. An excellent example of experimental evidence which supports the carbocation based mechanism for electrophilic addition, is that structural rearrangements often occur during the reaction.
A 1,2-hydride shift is a carbocation rearrangement in which a hydrogen atom in a carbocation migrates to the carbon atom bearing the formal charge of +1 (carbon 2 in the example below) from an adjacent carbon (carbon 1).
An example of this structural rearrangement occurs during the reaction of 3-methyl-1-butene with HBr. Markovnikov's rule predicts that the preferred product would be 2-bromo-3-methylbutane; however, very little of this product forms. The predominant product is actually 2-bromo-2-methylbutane.
Mechanism of Hydride Shift
This result comes from a hydride shift during the reaction mechanism. The mechanism begins with protonation of the alkene, which places a positive charge on the more alkyl-substituted double-bond carbon, resulting in a secondary carbocation. In step 2, the electrons in the C-H bond on carbon #3 are attracted by the positive charge on carbon #2, and they simply shift over to fill the carbocation's empty p orbital, pulling the proton over with them. This process is called a carbocation rearrangement, and more specifically, a hydride shift. A hydride ion (H:-) is a proton plus two electrons; it is not to be confused with H+, which is just a proton without any electrons. Notice that the hydride, in shifting, is not acting as an actual leaving group - a hydride ion is a very strong base and a very poor leaving group.
As the hydride shift proceeds, a new \(C-H\) \(\sigma \) bond is formed at carbon #2, and carbon #3 is left with an empty \(p\) orbital and a positive charge.
What is the thermodynamic driving force for this process? Notice that the hydride shift results in the conversion of a secondary carbocation (on carbon 2) to a (more stable) tertiary carbocation (on carbon 3) - a thermodynamically downhill step. As it turns out, the shift occurs so quickly that it is accomplished before the bromide nucleophile has time to attack at carbon #2. Rather, the bromide will attack after the hydride shift (step 3) at carbon #3 to complete the addition.
A 1,2-alkyl shift is a carbocation rearrangement in which an alkyl group migrates to the carbon atom bearing the formal charge of +1 (carbon 2) from an adjacent carbon atom (carbon 1), e.g.
Consider another example. When HBr is added to 3,3-dimethyl-1-butene the preferred product is 2-bromo-2,3-dimethylbutane and not 3-bromo-2,2-dimethylbutane as predicted by Markovnikov's rule.
Notice that in the observed product, the carbon framework has been rearranged: a methyl carbon has been shifted. This is an example of another type of carbocation rearrangement, called an alkyl shift or more specifically a methyl shift.
Mechanism of Alkyl Shift
Below is the mechanism for the reaction. Once again a secondary carbocation intermediate is formed in step 1. In this case, there is no hydrogen on carbon #3 available to shift over and create a more stable tertiary carbocation. Instead, it is a methyl group that does the shifting, as the electrons in the carbon-carbon \(\sigma \) bond shift over to fill the empty orbital on carbon #2 (step 2 below). The methyl shift results in the conversion of a secondary carbocation to a more stable tertiary carbocation. It is this tertiary carbocation that is attacked by the bromide nucleophile to give the rearranged end product. The end result is a rearrangement of the carbon framework of the molecule.
Electrophilic addition with methyl shift:
Predicting the Product of a Carbocation Rearrangement
Carbocation shifts occur in many more reactions than just electrophilic additions, some of which will be discussed in subsequent chapters of this textbook. Whenever a carbocation is produced in a reaction's mechanism, the possibility of rearrangement should be considered. As discussed in Section 7.9, there are multiple ways to stabilize a carbocation, any of which could induce a rearrangement.
The most common situation for a rearrangement to occur during electrophilic addition is:
A 2° Carbocation with a 3° or 4° Alkyl Substituent
When considering the possibility of a carbocation rearrangement, the most important factors are the classification of the carbocation formed and the classification of the alkyl groups attached to it. When a 2° carbocation has a 3° alkyl substituent, a hydride shift will occur to create a more stable 3° carbocation. When a 2° carbocation has a 4° (quaternary) alkyl substituent, an alkyl shift will occur to create a more stable 3° carbocation.
Drawing the Rearranged Product
First, draw the unrearranged product by adding HX to the double bond following Markovnikov's rule if necessary. Then determine whether a hydride or an alkyl shift is occurring by observing the classification of the alkyl substituent. Finally, switch X ⇔ H for a hydride shift, or X ⇔ CH3 for an alkyl shift; this will produce the rearranged product.
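This rule of thumb can be written out as a short decision function. The sketch below is only an illustration of the rule stated above, not part of the original text; the function name and its numeric inputs (the classification of the carbocation and of its neighbouring carbon) are invented for this example.

```python
def predict_rearrangement(cation_class: int, neighbor_class: int) -> str:
    """Apply the rule of thumb for electrophilic additions:
    a 2° carbocation next to a 3° carbon undergoes a 1,2-hydride shift,
    while a 2° carbocation next to a 4° (quaternary) carbon undergoes a
    1,2-alkyl (methyl) shift. Anything else is left alone by this simple rule."""
    if cation_class == 2 and neighbor_class == 3:
        return "1,2-hydride shift: swap X <-> H; X ends up on the carbon that was 3°"
    if cation_class == 2 and neighbor_class == 4:
        return "1,2-alkyl (methyl) shift: swap X <-> CH3; X ends up on the carbon that was 4°"
    return "no rearrangement predicted by this simple rule"

# The two worked examples from this section:
print(predict_rearrangement(2, 3))  # 3-methyl-1-butene + HBr -> 2-bromo-2-methylbutane
print(predict_rearrangement(2, 4))  # 3,3-dimethyl-1-butene + HBr -> 2-bromo-2,3-dimethylbutane
```

A full structure-aware prediction would of course require drawing the mechanism, but the decision logic mirrors the bookkeeping described above.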
Draw the expected products of the following reaction.
Biological Carbocation Rearrangement
Carbocation rearrangements are involved in many known biochemical reactions. Rearrangements are particularly important in carbocation-intermediate reactions in which isoprenoid molecules cyclize to form complex multi-ring structures. For example, one of the key steps in the biosynthesis of cholesterol is the electrophilic cyclization of oxidosqualene to form a steroid called lanosterol.
This complex but fascinating reaction has two phases. The first phase is where the actual cyclization takes place, with the formation of four new carbon-carbon bonds and a carbocation intermediate. The second phase involves a series of hydride and methyl shifts culminating in a deprotonation. In the exercise below, you will have the opportunity to work through the second phase of the cyclase reaction mechanism.
The second phase of the cyclase reaction mechanism involves multiple rearrangement steps and a deprotonation. Please supply the missing mechanistic arrows.
1) The following reaction shows a rearrangement within the mechanism. Propose a mechanism that shows this.
2) Propose a mechanism for the following reaction. It involves an electrophilic addition and the shift of a C-C and a C-H bond.
2) In most examples of carbocation rearrangements that you are likely to encounter, the shifting species is a hydride or methyl group. However, pretty much any alkyl group is capable of shifting. Sometimes, the entire side of a ring will shift over in a ring-expanding rearrangement.
The first 1,2-alkyl shift is driven by the expansion of a five-membered ring to a six-membered ring, which has slightly less ring strain. A hydride shift then converts a secondary carbocation to a tertiary carbocation, which is the electrophile ultimately attacked by the bromide nucleophile.
Once again, the driving force for this process is an increase in stability of the carbocation. Initially, there is a primary carbocation at C2, and this becomes a tertiary carbocation at C1 as a result of the (1,2)-methyl shift. |
In the early 1140s, the Bavarian princess Bertha von Sulzbach arrived in Constantinople to marry the Byzantine emperor Manuel Komnenos. Wanting to learn more about her new homeland, the future empress Eirene commissioned the grammarian Ioannes Tzetzes to compose a version of the Iliad as an introduction to Greek literature and culture. He drafted a lengthy dodecasyllable poem in twenty-four books, reflecting the divisions of the Iliad, that combined summaries of the events of the siege of Troy with allegorical interpretations. To make the Iliad relevant to his Christian audience, Tzetzes reinterpreted the pagan gods from various allegorical perspectives. As historical allegory (or euhemerism), the gods are simply ancient kings erroneously deified by the pagan poet; as astrological allegory, they become planets whose position and movement affect human life; as moral allegory Athena represents wisdom, Aphrodite desire.
As a didactic explanation of pagan ancient Greek culture to Orthodox Christians, the work is deeply rooted in the mid-twelfth-century circumstances of the cosmopolitan Comnenian court. As a critical reworking of the Iliad, it must also be seen as part of the millennia-long and increasingly global tradition of Homeric adaptation. |
Neuromorphic Learning: How Supercomputers Mimic Human Learning Processes
Neuromorphic Learning: The Future of Supercomputing
Neuromorphic learning is a new field of computer science that is inspired by the human brain. Neuromorphic computers, also known as brain-inspired computers, are designed to mimic the way that the human brain learns and processes information.
Neuromorphic learning is still in its early stages, but it has the potential to revolutionize the way that we build computers. Neuromorphic computers could be more energy-efficient than traditional computers, and they could also be faster and more powerful.
How Supercomputers Mimic Human Learning Processes
Recent hardware efforts illustrate the trend. DeepSouth, developed by researchers at the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University and scheduled to go online in April 2024, is designed to rival the estimated rate of operations in the human brain (roughly an exaflop, or a billion billion operations per second, a 1 followed by 18 zeros) and to mimic brain networks on the scale of an actual brain. SynSense has demonstrated the capabilities of its neuromorphic system-on-chip (SoC) in a robot that can see, learn and imitate human movements, and Northwestern's latest synaptic transistor points toward a new era of brain-like intelligent computing. Because they are engineered to mimic actual brains, neuromorphic computers may also help researchers study the mechanisms underlying serious neurological and neurodegenerative diseases.
Neuromorphic computers use a variety of techniques to mimic the way that the human brain learns. These techniques include:
- Spiking neural networks: Spiking neural networks are artificial neural networks that are inspired by the way that neurons in the human brain communicate with each other. Spiking neural networks can learn to perform a variety of tasks, such as image recognition and natural language processing (a minimal code sketch of a spiking neuron follows this list).
- Memristive computing: Memristive computing is a new type of computing that is based on memristors, which are electronic devices that can both store and process information. Memristive computing could be used to build neuromorphic computers that are more energy-efficient than traditional computers.
- Optogenetics: Optogenetics is a technique that uses light to control the activity of neurons in the brain. Optogenetics could be used to train neuromorphic computers to perform specific tasks.
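To make the spiking-neural-network idea above concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the simplest spiking unit used in neuromorphic research. It is illustrative only: the parameter values and the constant input current are assumptions chosen for the example, not figures from the article.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=-65.0, v_reset=-65.0,
               v_threshold=-50.0, resistance=10.0):
    """Simulate a leaky integrate-and-fire neuron.

    input_current: one injected-current value per time step (arbitrary units).
    Returns the membrane-potential trace and the indices of the emitted spikes.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leak toward the resting potential, driven by the injected current.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:      # threshold crossed: the neuron fires a spike
            spikes.append(t)
            v = v_reset           # and its potential is reset
        trace.append(v)
    return np.array(trace), spikes

# Drive the neuron with a constant current for 200 time steps (200 ms here).
trace, spikes = lif_neuron(np.full(200, 2.0))
print(f"{len(spikes)} spikes in {len(trace)} steps")
```

Information in such a network is carried by the timing of discrete spikes rather than by continuous activations, which is one reason neuromorphic hardware can be so energy-efficient.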
Neuromorphic learning is a promising new field of computer science that has the potential to revolutionize the way that we build computers. Neuromorphic computers could be more energy-efficient, faster, and more powerful than traditional computers. They could also be used to solve problems that are currently beyond the reach of traditional computers. |
Coal accounts for over 37% of the world’s electricity supply. It is fundamental in powering homes and industry, providing energy for transport and producing steel and concrete.
Coal is an essential resource for tackling the challenges facing the modern world - specifically the rapid increase in energy consumption. Coal is significantly cheaper and more accessible than other fossil fuels and its reserves are distributed much more equally around the planet.
Power generation is the primary use for coal worldwide. Thermal coal is burnt to create steam that drives turbines and generators for the production of electricity.
Metallurgical (coking) coal is a key ingredient in steelmaking. Coal converted to coke is used to produce around 70% of the world’s steel. Coal is also widely used in the production of other metals including aluminium and copper.
Coal is used as a key energy source in cement production. By-products of coal combustion such as fly ash also play an important role in cement manufacture and the wider construction industry.
Coal is heated and pressurised with steam to produce ‘town’ gas for domestic lighting, heating and cooking. It is liquefied to make synthetic fuels similar to petroleum or diesel. The majority of coal-to-gas projects are located in the USA and China, with a few in Indonesia, India, Australia, Canada and South Africa.
Syngas — from gasification — can be further processed to produce chemical building blocks such as methanol, ammonia and urea.
Other major users of coal include the paper, textile and glass industries. Coal is also used in the manufacture of carbon fibre and specialist ingredients such as silicon metals, which are used to produce ingredients for the household and personal care sectors.
- Coal does not require high-pressure pipelines, expensive protection during transport or costly processing. It is easier to store and handle than alternative, highly flammable fossil fuels or nuclear materials.
- Coal only needs to be mined before it can be used. Other fossil fuels must be refined, using lengthy and costly processes. Compared to gas, coal is significantly cheaper and more accessible and its reserves are distributed much more equally around the planet.
- Coal is hugely versatile. As well as generating electricity, it is a core component in iron and steel making and is integral to a range of processes, including aluminium refining, paper manufacture and chemical production.
- The abundance of coal, its accessibility, straight-from-the-mine usability and lower transport costs make it an affordable form of energy. Electricity produced from coal is less expensive than other sources.
- Coal is easier and safer to transport, store and handle than alternative, highly flammable fossil fuels or nuclear materials. |
Aristotle was a philosopher from ancient Greece who was born about 384 BC in Stagira on the northern border of Greece. At the age of 17, he enrolled in Plato’s Academy where he studied a wide variety of different subjects. His writings include treatises on physics, biology, zoology, metaphysics, logic, ethics, aesthetics, poetry, theater, music, rhetoric, linguistics, politics, government, and writing. In 343 BC, shortly after the death of Plato, Aristotle went off to tutor Alexander the Great. He went on to found his own school, the Lyceum, where he taught on many subjects, studied widely and wrote. Aristotle died in 322 BC. He had quite a bit to say about the theory of writing. Here are some of his quotes:
To write well, express yourself like the common people, but think like a wise man.
It is amazing how many times this tenet is broken by modern novelists. Literary novels can sometimes be guilty of flouting this rule and, I guess, that's part of their mystique – but it's also why they usually have a much smaller readership than the blockbusters and bestsellers that are read by millions. Often, literary novelists express themselves like wise men, as well as thinking like wise men. Obviously, a good number of readers relish the challenge of keeping up with the intellectual gymnastics of the literary novelist or none of their books would ever be sold. So perhaps there is room to bend or break this rule if you are writing in that particular genre. |
Healthy eating decisions require efficient dietary self-control in children: A mouse-tracking food decision study.
Learning how to make healthy eating decisions, (i.e., resisting unhealthy foods and consuming healthy foods), enhances physical development and reduces health risks in children. Although healthy eating decisions are known to be challenging for children, the mechanisms of children's food choice processes are not fully understood. The present study recorded mouse movement trajectories while eighteen children aged 8-13 years were choosing between eating and rejecting foods. Children were inclined to choose to eat rather than to reject foods, and preferred unhealthy foods over healthy foods, implying that rejecting unhealthy foods could be a demanding choice. When children rejected unhealthy foods, mouse trajectories were characterized by large curvature toward an eating choice in the beginning, late decision shifting time toward a rejecting choice, and slowed response times. These results suggested that children exercised greater cognitive efforts with longer decision times to resist unhealthy foods, providing evidence that children require dietary self-control to make healthy eating-decisions by resisting the temptation of unhealthy foods. Developmentally, older children attempted to exercise greater cognitive efforts for consuming healthy foods than younger children, suggesting that development of dietary self-control contributes to healthy eating-decisions. The study also documents that healthy weight children with higher BMIs were more likely to choose to reject healthy foods. Overall, findings have important implications for how children make healthy eating choices and the role of dietary self-control in eating decisions.
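As an aside on method: trajectory-curvature indices like the ones reported here are commonly computed as the maximum deviation of the cursor path from the straight line joining its start and end points. The sketch below shows one such computation; it is a generic illustration of that idea, not code or parameters from this study, and the toy trajectory is invented.

```python
import numpy as np

def max_deviation(x, y):
    """Maximum perpendicular deviation of a mouse trajectory from the straight
    line joining its start and end points, a common index of how strongly the
    cursor was initially pulled toward the competing response option."""
    pts = np.column_stack([np.asarray(x, float), np.asarray(y, float)])
    start, end = pts[0], pts[-1]
    line = end - start
    # Perpendicular distance of each sample from the start-end line (2-D cross product).
    dists = np.abs((pts[:, 0] - start[0]) * line[1] - (pts[:, 1] - start[1]) * line[0])
    return dists.max() / np.linalg.norm(line)

# Toy trajectory: the cursor first drifts toward one option before ending on the other.
x = [0.0, -0.2, -0.3, 0.1, 0.6, 1.0]
y = [0.0, 0.3, 0.6, 0.8, 0.9, 1.0]
print(round(max_deviation(x, y), 3))  # larger values indicate a more curved, more conflicted choice
```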
Adolescent; Body Mass Index; Body Weight; Child; Choice Behavior; Computers; Diet; Eating; Female; Food Preferences; Health Behavior; Healthy Diet; Humans; Male; Motivation; Reproducibility of Results; Self-Control; Software
Decision-making; Dietary self-control; Food choices; Mouse-tracking; Obesity; Youth
Ha OR, Bruce AS, Pruitt SW, et al. Healthy eating decisions require efficient dietary self-control in children: A mouse-tracking food decision study. Appetite. 2016;105:575-581. doi:10.1016/j.appet.2016.06.027 |
Some people find frogs cute; others wouldn’t even look at them, let alone touch these creatures. In whichever group you’re in, one thing is for sure: frogs are essential animals in the animal kingdom. They are fascinating creatures which scientists are continuously studying.
However, sometimes it may be challenging to classify frogs in terms of amphibians or reptile features. Today, we will clear up any possible misconceptions.
First of all, you should know that reptiles and amphibians are vertebrates; in simple words, these animals belong to families characterized by a backbone. However, reptiles and amphibians are quite different in many aspects contrary to general belief. Today we will follow some rules of Herpetology – the science of studying reptiles and amphibians, and learn some critical information about both these groups.
What Are Frogs?
Frogs are part of the amphibian family. They are tailless creatures that belong to the order Anura. When we think about the animals which are strictly classed as frogs, we can say that they are limited only to the Ranidae (true frogs) family. But if we think about frogs more broadly, the name is usually used to distinguish those smooth-skinned, hopping, anurans from squat, cute-faced creatures called toads.
Here are some of the main characteristics of frogs we know about:
- Frogs have protruding eyes that are always focused on their prey;
- Frogs have no tail;
- These creatures feature solid, webbed hind feet;
- Their feet have developed through years, so they are capable of both leaping and swimming;
- Frogs possess very smooth and moist skin;
- While some frogs spend their lives predominantly in the aquatic environment, some species live on land, even on trees and in caves;
For instance, the Hyperolius genus of sedge frogs can climb using special adhesive toe disks. On the other hand, there are the flying frogs of the Rhacophorus genus, which are tree-dwelling.
And finally, one of the most impressive types of frogs are the Old World Rhacophoridae, which can glide somewhere around 40 to 50 feet; how do they do it? Well, they expand the webbing between their toes and fingers. Impressive, right?
Most frogs have poisonous skin glands; however, these toxins are generally not harmful to humans and do not provide enough protection from predators such as snakes, birds, or mammals.
What frogs do for their protection is called camouflage; no wonder frogs have those “military” colors, so they can perfectly blend with the backgrounds. Other frog species can change color.
Some frog species come with bright colors on their underparts, making the animal flash as they start moving. This is another tactic for confusing their enemies or simply warning them of the frogs’ poison.
Most of the frogs will feed themselves with insects; others would prefer small arthropods, even worms. However, as odd as it might sound, many frogs will eat rodents, other frogs, and even species of reptiles.
Are Frogs Reptiles?
Well, the answer is straightforward: no, they are not reptiles. Frogs are amphibians. We want to make sure that everyone will understand why these creatures are not reptiles, mainly by explaining some of the most important differences between the two species.
Why Aren’t Frogs Reptiles?
While there are a lot of similarities between frogs and small reptiles, such as turtles or lizards, once you dig deeper into science, you will get to know the opposite. The similarities between frogs and lizards, for instance, led some people to think both of these groups belong to the reptiles family.
Let’s start with the similarities. Both reptiles and frogs are vertebrates, so these animals have backbones.
Adult amphibians and most reptiles have four legs or modified flippers that help them swim and jump. Many frog and reptile species spend half of their life in water and the other half on land.
Most reptiles and frogs are carnivorous. As such, they will eat other animals to survive rather than plants.
Both reptiles and frogs are popular as “cold-blooded” creatures. This means they cannot change their body temperature internally like mammals, but are dependent on external factors.
So, to survive and have the right body temperature, they will lay under the sun when it is cold outside or hide in the shadows during hot summer days. If you want the scientific definition of cold-blooded animals, you can call them “ectothermic.”
Some cute reptiles, such as tiny turtles and terrapins, look very similar to frogs, so most people could say that these animal groups are related. Not to mention amphibians like newts and salamanders that look so similar to reptiles like lizards.
Amphibians Vs. Reptiles
The most crucial difference between amphibians and reptiles is the process of growing up and multiplying- starting with their eggs.
Most people do not know that the eggs of amphibians and reptiles are quite different than you would expect. Amphibians will need water to lay their eggs in. A jelly-looking substance protects these eggs.
However, there is no solid defensive barrier that can protect the eggs of amphibians from the surrounding predators. This group of species, of which frogs are part, is called Anamniotes.
Reptiles, on the other hand, are Amniotes. This means reptiles are protecting their embryos from the possible outside dangers using membranes. These membranes are tough and look like skin, protecting the babies from the outside world.
Because amniotes are not restricted to depositing their eggs in water, they can develop in many habitats. No wonder reptiles and mammals are everywhere, and they are so successful. However, when it comes to tropical frogs, the eggs may sometimes be laid on land, albeit in a moist location.
Frogs Breeding Process
So, the regular breeding of frogs will usually take place in freshwater, like most amphibians. Depending on the species, the eggs (which are in a significant number, somewhere between a few hundred to several thousand) will start floating off in clusters, sheets, or strings.
These eggs may become attached to various water plants and float with them on the water’s surface; unfortunately, some eggs might sink.
Tadpoles hatch within a few days to a week or more. They then metamorphose into froglets over a period ranging from a couple of months to three years. The froglets develop their limbs and lungs, the tail is absorbed, and finally the mouth becomes typically froglike.
Metamorphosis During Frog’s Life
Although their main organs will develop in the first months, amphibians’ bodies will completely change during their lives. This whole process is called metamorphosis. When you hear this word, you might start thinking about the spiritual meaning.
But no, this process begins with the larval stage, which usually takes place in water, and ends with an entirely different body once the frog is a full adult.
Frog embryos are surrounded by the jelly-like substance mentioned earlier, and the larvae, also called tadpoles, develop inside it.
As tadpoles, frogs have only gills, the organs that let them breathe underwater. As you can imagine, frogs do not have legs or arms at the beginning of their lives, only tails to help them swim.
As they grow, the frogs' bodies start changing. Slowly, their tails and gills disappear while their lungs develop. Eventually, the larvae turn into adult frogs that can jump and breathe air.
Frogs, as amphibians, need to keep their skin damp, which limits the habitats they can live in; if the weather is too hot, their skin quickly dries out. Reptiles, on the other hand, retain moisture efficiently, which helps them live in a wide range of environments, including arid ones such as deserts.
In A Nutshell
Now that you have gone through all the information above, if anyone asks you about frogs, you will know to tell them that these creatures are not reptiles. Think mainly of the eggs and the metamorphosis process, and you will have a quick, fact-based answer ready to convince the people around you.
Thomas Morley (1557 or 1558 – October 1602) was an English composer, theorist, editor and organist of the Renaissance, and the foremost member of the English Madrigal School. He was the most famous writer of secular music in Elizabethan England, and the composer of the only surviving contemporary settings of verse by Shakespeare. Morley's madrigals, which were loosely based on the Italian madrigal form, became an important form of secular vocal music in England thanks to his ease of melodic writing, which he generously taught to others in a treatise on singing and composing.
Morley was one of the first composers of madrigals to utilize the practice of musical "imitation" to express emotionally the poetic narrative of a given text. This practice would eventually influence composers of the Baroque era in their use of musical components such as melody and harmony to express specific meanings or affectations in their works.
Morley was born in Norwich, in East Anglia, the son of a brewer. Most likely he was a singer in the local cathedral from his boyhood, and he became master of choristers there in 1583. However, Morley evidently spent some time away from East Anglia, for he later referred to the great Elizabethan composer of sacred music, William Byrd, as his teacher. While the dates he studied with Byrd are not known, they were most likely in the early 1570s. In 1588 he received his bachelor's degree from Oxford, and shortly thereafter was employed as organist at St. Paul's church in London.
It has been speculated that in his youth Morley converted to Roman Catholicism while under the tutelage of his mentor, Byrd. However, by 1591 he had defected from the church, and acted as an espionage agent among English Roman Catholics in the Netherlands.
Chronologically, Morley's compositions can be divided into two distinct styles. While he was still a pupil of William Byrd, his early works reflect the English style of polyphonic writing. From the 1590s his music began to exhibit a mastery of the Italian madrigal style, characterized by more direct expressiveness, lighter, jaunty rhythms, and textural clarity.
In 1588 Nicholas Yonge published his Musica transalpina, a collection of Italian madrigals fitted with English texts, which touched off the explosive and colorful vogue for madrigal composition in England. Morley evidently found his compositional direction at this time, and shortly afterwards began publishing his own collections of madrigals (11 in all).
Morley lived for a time in the same parish as Shakespeare, and a connection between the two has been long speculated, though never proven. His famous setting of "It was a lover and his lass" from As You Like It has never been established as having been used in a contemporary performance of a Shakespeare play, though the possibility, even the probability, that it was is obvious. Morley was highly placed by the mid-1590s and would have had easy access to the theatrical community. At this time there was, as there is now, a close connection between prominent actors and musicians; and the artistic community was much smaller in those days than it is today.
A defining characteristic of music of the Baroque era was that composers became increasingly concerned with human emotions ("passions and affections"), and created music to "imitate" these emotions through tonal organization. Looking to the writings of Descartes and Sauveur who, in their investigation of man's psychological makeup, began to "objectify" certain emotions, Baroque composers developed the practice of consciously expressing specific emotions through musical means.
The practice of emotional "imitation" can be found in the early madrigals of the Renaissance. As music historian Richard Taruskin observes, the madrigals of the middle part of the sixteenth century "were hotbeds of musical radicalism and experimentation," as musical devices such as dissonance and chromaticism were often utilized to express the poetics of a particular text. Composers of madrigals would justify the use of unconventional harmonic or melodic ideas to support the imitative aspect of their musical settings of texts. The Italian theorist and scholar Gioseffo Zarlino (1517-1590) was at first an enthusiastic supporter of the so-called "madrigalisms," but later in his life came to reject the practice, feeling that composers had become too literal and far too indiscriminate in their use of this particular technique.
Nevertheless, composers became increasingly inventive in their use of these "madrigalisms," in which melodic and harmonic devices were tailored to a particular word in order to express its meaning. Setting riso (smile) to a passage of quick, running notes that imitate laughter, or sospiro (sigh) to a note that falls to the note below, are two examples of this invention.
Known as "word-painting," this invention can be found not only in madrigals but in other vocal music of the Renaissance. Among the most important of the late madrigalists were Luca Marenzio, Carlo Gesualdo, and Claudio Monteverdi, who integrated the basso continuo into the form in 1605 and later composed the book Madrigali guerrieri et amorosi (1638) (Madrigals of War and Love), an example of the early Baroque madrigal. Some of the compositions in this book bear little relation to the a cappella madrigals of the previous century.
Morley formally dealt with such questions in his treatise, Plaine and Easie Introduction to Practicall Musicke, published in 1597. Here, Morley put forth the following assertion regarding assigning a musical imitation to a text or libretto:
- "It now followeth to show how to dispose your music according to the nature of the words which you are therein to express, as whatsoever matter it be which you have in hand such a kind of music must you frame to it. You must therefore, if you have any grave matter, apply a grave kind of music to it, if a merry subject you must make your music also merry, for it will be a great absurdity to use a sad harmony to a merry matter or a merry harmony to a sad, lamentable, or tragic (text)."
This attitude would lead to the predominant trend of the Baroque era, in which music was increasingly becoming a mode of emotional expression.
Morley's own madrigals are predominately light, quick-moving and easily singable, like his well-known "Now is the Month of Maying." He took the aspects of Italian style that suited his personality and anglicised them. Other composers of the English Madrigal School, for instance Thomas Weelkes and John Wilbye, were to write madrigals in a more serious or somber vein.
Instrumental and keyboard works
In addition to his madrigals, Morley wrote instrumental music, including keyboard music, some of which has been preserved in the Fitzwilliam Virginal Book. He also composed music for the uniquely English ensemble of two viols, flute, lute, cittern and bandora, notably as published in 1599 in The First Booke of Consort Lessons, made by diuers exquisite Authors, for six Instruments to play together, the Treble Lute, the Pandora, the Cittern, the Base-Violl, the Flute & Treble-Violl.
While Morley attempted to imitate the spirit of Byrd in some of his early sacred works, it was in the form of the madrigal that he made his principal contribution to music history. His work in the genre has remained in the repertory to the present day, and shows a wider variety of emotional color, form and technique than anything by other composers of the period.
Morley's Plaine and Easie Introduction to Practicall Musicke remained popular for almost 200 years after its author's death, and remains an important reference for information about sixteenth century composition and performance.
References
- Ledger, Philip. The Oxford Book of English Madrigals. London: Oxford University Press, Music Dept, 1978. ISBN 9780193436640
- Morley, Thomas, and John Morehen. Thomas Morley. Early English church music, 38, 41. London: Published for the British Academy by Stainer and Bell, 1991. ISBN 9780852498422
- Reese, Gustave. Music in the Renaissance. New York: W.W. Norton & Co., 1954. ISBN 0393095304
- Sadie, Stanley (ed.). "Thomas Morley" in The New Grove Dictionary of Music and Musicians. London: Macmillan Publishers Ltd., 1980. ISBN 1561591742
- Slaughter, James. Music of Thomas Morley. Norman, OK: University of Oklahoma Foundation, 1987. OCLC 18203538
- Taruskin, Richard, and Piero Weiss. Music in the Western World: A History in Documents. Belmont, CA: Wadsworth Group, 1984. ISBN 0028729005
All links retrieved April 30, 2023.
- Compositions by Thomas Morley www.icking-music-archive.org.
On the Arctic sea floor lie hungry predators that can eat dead polar bears.
The voracious carnivores are seastars, better known as starfish, and a new study by a national research group says they tie with polar bears as the top predators of the Arctic marine ecosystem.
Co-author Remi Amiraux, a post-doctoral fellow at the University of Manitoba, said sea floor, or benthic, organisms are not commonly studied because they are often assumed to be lower on the food chain.
But the study published last month in the Proceedings of the National Academy of Sciences found that the ocean floor includes organisms across the whole range of the food chain.
Seastars within the Pterasteridae family sat at the top, with the study dubbing them “the benthic equivalent to polar bears.”
“It’s a shift in our view of how the coastal Arctic marine food web works,” Amiraux said in an interview.
He said that invertebrates, or creatures without backbones, living in sediment on the Arctic sea floor did not just consist of plant-eating herbivores.
“You have a whole food web, including primary predators, herbivores and many carnivores. So it’s way more complex than what we thought,” Amiraux said.
The study’s authors say “megafaunal-predatory” Pterasteridae seastars thrive in this realm “because of their evolved defence mechanism associated with a diet of other predators, including marine mammal carcasses that settle onto the ocean floor.”
Amiraux said that while polar bears do not consume starfish, “the opposite is quite true.”
“Actually, when a polar bear dies, it can be eaten by carnivore seastars,” Amiraux said.
The researchers examined 1,580 samples from wildlife around Nunavut’s Southampton Island in Hudson Bay to understand how the ecosystem functions and help governing bodies protect and conserve marine life in the area.
The Southampton Island region has been identified as an area of interest for Marine Protected Area designation by Fisheries and Oceans Canada.
Amiraux said food webs provide insight into ecosystem functioning.
He noted that though the study focused on an area in the Arctic, starfish are found worldwide, so it is likely that “there is the same structure or the same food web everywhere on the sea floor.”
“I don’t think it’s a special feature of the coastal environment,” he said. “We pretty much will be able to see that in all environments.”
Brieanna Charlebois, The Canadian Press |
Human history may be traced in the move from field to city, from local farm to industrial agriculture, and with that – from indentured field serf to urban worker. Some sociologists say that human history is the history of cities. Cities may also be the birthplace of human, and worker, rights. An example: Wolfsburg, Germany, began as the Duchy of Magdeburg, then became the Stadt des KdF-Wagens bei Fallersleben ("City of the Strength Through Joy Car at Fallersleben"), a planned town built to house workers for a factory producing the Volkswagen Beetle. Volkswagen workers organized labor unions, and collective agreements now ensure the rights of more than 120,000 workers through the Volkswagen Group Global Works Council (GWC).
From the days of Charlemagne and into medieval times, as workers began to move into cities, they organized crafts and trades into guilds. The word “guild” comes from the Anglo-Saxon word “gild” and is related to “geld” meaning money. We still have an echo in today’s word for money in German: Das Geld. In medieval times, each guild member paid a set amount of money into a common fund to support worker training (apprentice, journeyworker, mastercraftsperson) and family benefits for the wellbeing of workers’ health and family support in the case of injury or death. Guilds morphed into trade unions when the owners of businesses changed to outside investors who were not craftpersons themselves. Labor rights were born in the city and have continued to find their growth in urban environments.
Workers and Rights. Some credit early labor rights activist Robert Owen, a manufacturer from Wales, with the concept of the eight-hour workday. In 1817, Owen advocated "8/8/8" (eight hours labor, eight hours recreation, eight hours rest). Fifty years later, workers in Chicago demanded that the Illinois Legislature pass a law limiting work to eight hours per day. Although the law passed, a loophole remained and many factory laborers were still overworked and underpaid. On May 1, 1867, they went on strike. The movement shut down Chicago, and soon other cities across the United States and Europe joined the strike. That event in 1867 led to what is now known as May Day or International Workers' Day.
Labor. Peter McGuire, general secretary of the Brotherhood of Carpenters and Joiners, first voiced in 1882 the call for a holiday for "the laboring classes who from rude nature have delved and carved all the grandeur we behold." McGuire's message echoed that of the medieval guilds: labor and work are forms of art and should be treasured and honored by a holiday. A leader with a similar surname, Matthew Maguire, secretary of Local 344 of the International Machinists, proposed the same holiday. Their messages were heard.
In the shadow of the Brooklyn Bridge, 10,000 city workers gathered in New York City on 5 September 1882 to rally for improvement in labor conditions. When the American government began tracking work hours in 1890, the average factory worker clocked about 100 hours per week. Ensuing years strengthened the movement for better working hours and recognition of the major role workers play in business and economics. Oregon was the first state to recognize Labor Day, and Colorado, Massachusetts, New Jersey, and New York soon followed. In 1894, the Pullman strike in Chicago, Illinois jammed rail traffic throughout the country. During that strike and crisis, Congress passed an act declaring a national holiday to honor labor on the first Monday in September, and President Grover Cleveland signed Labor Day into law, making it an official national holiday in 1894. Canada also celebrates Labor Day, but most of the world honors workers on May 1.
Worker rights continue to be an important issue around the world. In some places, children labor. In other places, women cannot work outside the home. Factory workers are often subject to unhealthy and even lethal conditions: some 1,500 workers died in preventable garment-industry factory disasters, including fires, roughly a decade ago. The 2013 Accord on Fire and Building Safety in Bangladesh made progress in setting new standards; a 2018 Transition Accord strengthened the standards into legally binding agreements between trade unions and brands, with signatories including an oversight chair from the International Labour Organization (ILO).
Women’s working rights are a special issue. Women make up 70% of the labor force in some export processing zones (EPZs) in Asia, the Americas, and Sub-Saharan Africa where some bans on unionization still exist. The ILO Equal Remuneration Convention (No. 100), Discrimination in Respect of Employment and Occupation (No. 111), and Maternity Protection Convention (No. 183) have helped protect some rights but more is needed. In 1969, the International Labour Organization (ILO) received the Nobel Peace Prize; fifty years later, the ILO issued a new vision when convening the Global Commission on the Future of Work.
Every era brings new challenges for labor, work, and rights. In 2023, the union of Screen Actors Guild and American Federation of Television and Radio Artists (SAG-AFTRA) declared a strike approved by 98% of its members. One concern of the striking union members is the implications of artificial intelligence (AI) and the expansion of streaming services. These artists joined the 11,000 members of the Writers Guild of America who are also on strike. Again, the theme of the guild – and its blend of artistry and rights – finds a place in history.
If you are reading this post in Canada or in the United States, you may be enjoying a day of rest or even a traditional cook-out. But there is more to Labor Day than a long weekend. How will you celebrate and honor worker equality, justice, rights, and the fruits of our individual, and collective, labors?
Bangladesh Accord Foundation. “Accord on Fire and Building Safety,” https://bangladeshaccord.org/
International Labor Rights Forum. "Women's Rights and Global Labor Justice." https://laborrights.org/issues/women’s-rights
International Trade Union Confederation (ITUC)
International Labour Organization (ILO). “Global Commission on the Future of Work,” https://www.ilo.org/global/topics/future-of-work/WCMS_569528/lang–en/index.htm
Kaunonen, Gary and Aaron Goings. Community in Conflict. Michigan State Press, 2013.
Langley, Winston E. and Vivian C. Fox. Women’s Rights in the United States: A Documentary History. Praeger, 1994. ISBN: 978-0313287558.
Loomis, Erik. A History of America in Ten Strikes. The New Press, 2018.
Smith, Toulmin, Editor, with essay on history and development of the gilds by Lujo Brentano. “English Gilds: The Original Ordinances of more than One Hundred Early English Guilds,” Oxford University Press. Digital facsimile by University of Michigan. https://quod.lib.umich.edu/c/cme/EGilds?rgn=main;view=fulltext
Seabrook, Jeremy, “The language of labouring reveals its tortured roots.” The Guardian. https://www.theguardian.com/commentisfree/2013/jan/14/language-labouring-reveals-tortured-roots1
Terkel, Studs. Working. Pantheon Books, 1974.
Toynbee, Arnold. Editor. Cities of Destiny. London: Thames & Hudson, 1967.
Zraick, Karen. “What is Labor Day? A History of the Workers’ Holiday.” 4 September 2023. The New York Times. https://www.nytimes.com/article/what-is-labor-day.html |
What is bronchitis?
Bronchitis means that the tubes that carry air to the lungs (the bronchial tubes) are inflamed and irritated. When this happens, the bronchial tubes swell and produce mucus. This makes you cough.
There are two types of bronchitis:
- Acute bronchitis usually comes on quickly and gets better after 2 to 3 weeks. Most healthy people who get acute bronchitis get better without any problems. But it can be more serious in older adults and children and in people who have other health problems such as asthma or COPD. Complications can include pneumonia and repeated episodes of severe bronchitis.
- Chronic bronchitis keeps coming back and can last a long time, especially in people who smoke. Chronic bronchitis means that you have a cough with mucus most days of the month for 3 months of the year and for at least 2 years in a row.
What causes acute bronchitis?
Acute bronchitis is usually caused by a virus. Often a person gets it a few days after having an upper respiratory tract infection such as a cold or the flu. Sometimes it is caused by bacteria. It also can be caused by breathing in things that irritate the bronchial tubes, such as smoke.
What are the symptoms?
The main symptom of acute bronchitis is a cough that usually is dry and hacking. After a few days, the cough may bring up mucus. You may have a low fever and feel tired. Most people get better in 2 to 3 weeks. But some people have a cough for more than 4 weeks.
How is it diagnosed?
Your doctor will ask you about your symptoms and examine you. This usually gives the doctor enough information to find out if you have acute bronchitis.
In some cases, you may need a chest X-ray or other tests. These tests are to make sure that you don't have pneumonia, whooping cough, or another lung problem. This is especially true if you've had bronchitis for a few weeks and aren't getting better. More testing also may be needed for babies, older adults, and people who have lung disease (such as asthma or COPD) or other health problems.
How is acute bronchitis treated?
Most people can treat symptoms of acute bronchitis at home. They don't need antibiotics or other prescription medicines. Antibiotics don't help with viral bronchitis. And even bronchitis caused by bacteria will usually go away on its own.
If you have signs of bronchitis and have heart or lung disease (such as heart failure, asthma, or COPD) or another serious health problem, talk to your doctor right away. You may need treatment with antibiotics or medicines to help with your breathing. Early treatment may prevent problems, such as pneumonia or repeated cases of acute bronchitis caused by bacteria.
How can you care for yourself at home?
When you have acute bronchitis, there are things you can do to feel better.
- Don't smoke.
If you need help quitting, talk to your doctor about stop-smoking programs and medicines. These can increase your chances of quitting for good.
- Suck on cough drops or hard candies to soothe a dry or sore throat.
Cough drops won't stop your cough, but they may make your throat feel better.
- Breathe moist air from a humidifier, a hot shower, or a sink filled with hot water. Follow the directions for cleaning the humidifier.
The heat and moisture can help keep mucus in your airways moist so you can cough it out easily.
- Ask your doctor if you can take nonprescription medicine.
This may include acetaminophen, ibuprofen, or aspirin to relieve fever and body aches.
Don't give aspirin to anyone younger than age 20. It has been linked to Reye syndrome, a serious illness. Be safe with medicines. Read and follow all instructions on the label.
- Rest more than usual.
- Drink plenty of fluids so you don't get dehydrated.
- Use an over-the-counter cough medicine if your doctor recommends it.
Cough suppressants may help you to stop coughing. Expectorants can help you bring up mucus when you cough.
Cough medicines may not be safe for young children or for people who have certain health problems. |
The famous debates between Abraham Lincoln and Stephen Douglas were surely a matter of importance to the residents of Fullersburg, particularly in the strained pre-Civil War era. These debates were held at seven locations throughout Illinois between August and October of 1858. Lincoln and Douglas both sought a Senate seat from this state, and although Douglas prevailed in the election, a bright light was shone on Lincoln as he stated a clear position for the nation against the institution of slavery. While Douglas advocated for "popular sovereignty" in regard to territories permitting or prohibiting slavery, Lincoln maintained that "a house divided against itself cannot stand."
How were the residents of Fullersburg impacted by Lincoln and Douglas? George Ruchty writes in The Fullers of Fullersburg that Lincoln stopped at the village on his way to Ottawa for the first debate. "It is known that he stopped at one of the inns for dinner during his travels west to Ottawa in August of 1858, at the time of the Lincoln-Douglas debates. My grandfather, Amenzo Coffin, heard him speak to a small group of people from the porch of the Grand Pacific Hotel." Surely the topic of slavery would have been included in Lincoln's speech, and the residents of Fullersburg would have supported his position on this subject; Fullersburg citizens actively participated in the Underground Railroad, harboring and assisting fugitive slaves on their way to freedom. Tunnels beneath the Fuller Inn and the Fullersburg Tavern as well as the cellar of Graue Mill were "stations" in this secret transportation system, which carried serious potential legal penalties for anyone involved. (See "For You" section of this website to read The Fullers of Fullersburg.)
The citizens of Fullersburg also would have been intrigued by Lincoln's skills in public speaking and debate. Numerous residents were enthusiastic members of the Brush Hill Debate Club, which was formed by the local settlers in 1857 to enhance their education and understanding of intellectual and ethical topics of the day. (The settlement of Brush Hill became Fullersburg around 1852; as stated by George Ruchty of the village, "it was said the inhabitants were either Fullers or married to the Fullers.") A journal of this club was recovered in 2020 by Fullersburg Historic Foundation, and it indicates that in 1857, the club's members discussed complaints against the white man by both enslaved and indigenous people. It is likely that some Fullerburg residents attended the Lincoln-Douglas debate in Ottawa due to their moral convictions as well as their appreciation for oratory skills and debate. (See "Journal" section of this website for further information.)
As predicted by Lincoln, a house divided against itself could not stand, and when the Civil War broke out in 1861, several men from Fullersburg enlisted as Union soldiers. Ruchty writes that Morell Fuller fought in the battles of Resaca, Kennesaw Mountain, Peachtree Creek, and Atlanta, as well as traveling with Sherman on his march to the sea; Morell also played the drum and the fife during his military service. He returned to Fullersburg after the war, started a family, and paid tribute to his country every Fourth of July wearing his military uniform and playing his drum. Many of the local Civil War veterans are laid to rest at Historic Fullersburg Cemetery in Hinsdale, including Morell, and they are honored each Memorial Day during a flag-changing ceremony. (See "Events" section of this website.)
The Grand Pacific Hotel subsequently became famous due to its connection to Abraham Lincoln, and as Ruchty notes, "Back in the horse and buggy days, 1885-1905, it was a common occurrence to have a carriage full of passengers pull up to the Grand Pacific Hotel asking to see the room in which Lincoln had slept." If they were told that it was unsure whether or not Lincoln had slept there, "the answer always provoked an argument. Such continued persistence on the part of the public not to accept the honest answer given, the Ruchty family finally decided to set up a room with a bed, commode, dresser, and chair at the top of the stairs and call it the Lincoln room. From that time on the visitors were satisfied and left feeling that their trip to Fullersburg was complete." Ruchty further notes that the bed in that small room was only five and a half feet long, but "no one ever questioned how Mr. Lincoln could sleep in so short a bed." When this hotel was torn down in 1909, "the Lincoln story was transferred to the Castle Inn across the street," ensuring that the arguments regarding Lincoln's stay in Fullersburg would not resume.
While Lincoln probably did not spend the night in Fullersburg in August of 1858, it is logical to assume that he felt kinship with the residents of Fullersburg, who not only shared his empathy toward enslaved people, but actively assisted them. For example, blacksmith John Coe (who married Harriet Fuller, sister of Benjamin) was a "conductor" in the Underground Railroad, and his obituary notice alludes to the dangers he faced. "Mr. Coe was quite prominent in the days of the Old Plank road, and during slavery days experienced some exciting adventures, his home being one of the stations on the famous underground railway." (Courtesy of The Hinsdale Doings, February 17, 1906.) Mr. Coe and the other citizens of historic Fullersburg, while not as famous as Abraham Lincoln, are also heroes who contributed to the democratic foundation of our country.
Sue Devick, M.A. |
What is the brain? What are the functions and parts of the brain? Information about the cerebrum, cerebellum, and medulla.
Your brain is the major control center of your body. It is the largest and most complex part of the nervous system. Its average mass is about 1360 g in a man and 1250 g in a woman. The brain is surrounded by tough, protective membranes called meninges and is protected by the skull. The brain has three parts, each of which has a special job:
- The cerebrum
- The cerebellum
- The medulla
1. The Cerebrum: This is the largest part of your brain. It is located in the upper part of the head and is divided into two halves from front to back, called the right and left hemispheres. The left hemisphere controls the right side of the body and the right hemisphere controls the left side of the body. Therefore, an injury to one side will affect the opposite side of the body. The cerebrum makes you aware of what is happening around you. It controls all voluntary actions (the movement of different parts of your body). It also controls your thinking and decision-making processes as well as your senses of seeing, hearing, touching, tasting, and smelling. The cerebrum has special areas for each of these senses. If any one of these centers is destroyed, the activity or sense it controls is lost.
2. The Cerebellum : This is smaller than the cerebrum and lies behind and below it. It is divided into two distinct hemispheres (a right part and a left part). The cerebellum’s major function is to maintain a sense of balance (using information from receptors in the inner ear). Also it coordinates muscular activity such as walking, running, jumping and swimming.
The cerebrum and cerebellum work closely together. You use your cerebrum to decide to do a physical activity and your cerebellum to carry it out. Suppose you kick a ball: the motion of your foot is controlled by the cerebellum, while the decision to kick the ball and the choice of target are controlled by the cerebrum.
3. The Medulla: This is located at the base of the skull below the cerebellum. It is the enlarged upper end of the spinal cord. It receives nerve impulses from the spinal cord and also transfers messages from the brain to other parts of the body through the spinal cord. It also controls automatic activities such as heart rate, blood pressure, breathing rate, and the muscular action of the digestive tract. All these activities occur in an involuntary manner.
In 1951, Christopher Strachey created one of the first artificial intelligence programs, a checkers-playing program. Artificial intelligence is already changing the way we live and work. From self-driving cars to voice assistants such as Google Assistant, Siri, and Alexa, AI is already a big part of our lives. Artificial intelligence involves creating intelligent machines that can not only perform tasks but also outperform humans at certain tasks. Let's have a look at what AI is.
Evolution of Artificial Intelligence
Artificial intelligence is not a new concept; its development started in the early 1950s and has come a long way since. Following is a timeline of how artificial intelligence has evolved over the past few decades:
- 1956 – The term artificial intelligence was coined by John McCarthy at the Dartmouth conference.
- 1969 – The first successful expert systems, DENDRAL and MYCIN, were developed at Stanford.
- 1980 – To enable computers to learn from data, machine learning techniques like neural networks and genetic algorithms were created.
- 1997 – IBM created its supercomputer Deep Blue, which defeated then world chess champion Garry Kasparov.
- 2000s – Google makes breakthroughs in speech recognition, due to advances in machine learning, natural language processing, and computer vision.
- 2010 – 2019 – Computers were now capable of doing tasks like speech and image recognition with previously unheard-of levels of accuracy owing to deep learning techniques like convolutional neural networks and recurrent neural networks.
- 2020 – Present – Self-driving cars, virtual assistants, medical diagnostics, and drug development are just a few of the many present uses of AI. Many companies have unveiled conversational AIs such as Google Bard, Microsoft Bing, and ChatGPT by OpenAI.
Types of Artificial Intelligence
AI may be grouped into four main categories, based on the kinds and complexity of tasks a system is capable of performing. They are as follows:
- Reactive Machines: The most basic kind of AI is reactive technology, which is built just to respond to inputs and has no memory or capacity for learning from previous events. It works only on present data.
- Limited Memory: Limited memory AI has the capacity to learn from past data and make decisions based on such data. Speech recognition software and recommendation engines are two examples.
- Theory of Mind: This sort of AI would be able to interact socially and comprehend ideas and emotions. Theory of mind describes a machine's capacity to perceive and anticipate the thoughts and feelings of other people. This kind of artificial intelligence is still in its early days and has not yet been developed.
- Self Awareness: AI machines that have the capacity to acquire a feeling of self and consciousness are referred to as self-aware AI. With the present technology, this is a purely theoretical idea that is not yet feasible.
Subfields of Artificial Intelligence
The study of artificial intelligence is a vast area with several subfields, each of which focuses on a particular use or aspect of AI. Some of the major subfields are:
- Machine Learning: Machine learning uses statistical models and algorithms to teach computers to learn from data and make predictions or decisions without explicit programming (see the minimal sketch after this list).
- Natural Language Processing: Speech recognition, language translation, and sentiment analysis are all part of natural language processing (NLP), which uses computer algorithms to interpret and analyze human language.
- Robotics: Robotics is the study of designing, creating, and programming machines that can carry out tasks in a range of situations.
- Neural Networks: Neural networks are a form of machine learning modeled after the structure and function of the human brain, allowing computers to learn and make choices in a human-like way.
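To make the machine-learning idea concrete, here is a minimal sketch of supervised learning in Python. It assumes the scikit-learn library and uses its bundled iris flower dataset purely for illustration; the dataset and model choice are not tied to any product mentioned above.

```python
# Minimal supervised-learning sketch (assumes scikit-learn is installed).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small labelled dataset: flower measurements and their species.
X, y = load_iris(return_X_y=True)

# Hold back a quarter of the data to test on examples the model has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Learning from data": the model infers decision rules from the training
# examples instead of being explicitly programmed with them.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# Predict the species of the held-back flowers and measure accuracy.
predictions = model.predict(X_test)
print("Accuracy on unseen data:", accuracy_score(y_test, predictions))
```

Running a script like this typically reports accuracy well above 90%, which is the sense in which the program has "learned" a task it was never explicitly coded to perform.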
Advantages of Artificial Intelligence
There are several advantages to using artificial intelligence which has become a valuable asset to many industries. Some of the advantages are:
- Less Human Error: AI can perform tasks quickly and with greater accuracy than human beings, leading to improved performance and productivity in many areas. There will always be a possibility of inaccuracy in jobs requiring precision when people are involved.
- Increased Safety: Using AI-powered machines can help in saving human lives and it can be used at high-risk places such as nuclear plants, coal mines, etc.
- Help in Repetitive Tasks: There are several repetitive tasks in our daily lives that take a lot of our time. These tasks can be assigned to machines, which helps reduce costs as well as time. For example, industries with production lines have automated many such tasks.
- Available 24X7: AI-programmed machines can run 24X7 without getting tired. They can run non-stop and provide continuous service and support.
Disadvantages of Artificial Intelligence
There are several disadvantages as well as concerns of using artificial intelligence. Some of the disadvantages are:
- Expensive: AI system development and implementation can be costly and complex, needing a lot of resources and knowledge.
- Can't Think: AI still has a key limitation: it can only perform tasks and operations that have already been defined. It cannot think for itself or take on additional tasks.
- No Emotions: Machines can perform tasks better and faster than human beings, and without error. However, emotional connection is important in the workplace, and that is something machines still lack.
- Can Lead to Unemployment: AI has definitely made life easier, but it has also taken over tasks that were previously done by humans, which can lead to large-scale unemployment problems.
Applications of Artificial Intelligence
There are several industries that use artificial intelligence. Some of the examples are:
- AI in Healthcare: With the help of AI, doctors can identify illnesses in patients at an early stage and help in correct diagnosis which can help save patients’ lives.
- AI in Manufacturing and Retail: Many industries have started automating a lot of repetitive tasks that help in saving time and human labor and increase production efficiency.
- AI in Transportation: AI has made a lot of progress in the transportation industry, from self-driving cars to flights running on autopilot.
- Recommending Music and Movies: With the help of AI, online streaming companies are able to provide suggestions to their customers based on their previous interests.
- Fraud Detection: Banks feed their systems with data on transactions known to be fraudulent or legitimate. This helps predict whether a new transaction is likely to be fraudulent.
The development of AI has been significant in the past few years, and learning about what AI is can be beneficial. With the ultimate objective of building machines that can really think and reason like humans, researchers are always looking for new ways to expand the capabilities of AI.
Understanding the space use and habitat needs of animals is essential for effective species conservation. Small animals use small structures that are difficult to monitor. LIB researchers have now used drones in a study to depict these small structures in high-resolution habitat maps. The research team was able to show how important low blackberry bushes are for sand lizards in the Dellbrücker Heide in Cologne. The drone method can find application in nature conservation and landscape planning.
“What is their neighborhood or quarter for people is their home range for wild animals,” explains Dr. Dennis Rödder, curator of Herpetology section at the Leibniz Institute for the Analysis of Biodiversity Change (LIB) in Bonn. This area is familiar to them, it’s where they move around and it fulfills their ecological needs in their daily lives, from food to shelter. After exploring the surrounding area, the animals usually return to this area. Therefore, mapping the habitat in the home range can provide valuable insights into the spatial and structural needs of wildlife. Understanding these requirements is becoming increasingly important as human impacts alter landscapes. “We hope that our work will not only remain theoretical but will also find application in conservation and landscape planning,” explains Vic Clement, PhD student at the LIB.
Sand lizards and their home range are small, as are the structures in their habitat. High-resolution maps depicting individual bushes, grass, sand or trees are therefore required for monitoring. Drones provide a remedy here: from a low altitude, they take high-resolution images of the area so that individual structures can be easily distinguished. The LIB researchers now merged the observed home ranges of the animals studied with the detailed map and were thus able to examine the structure of the habitat within the boundaries of the home range and compare it with the surrounding area. Clement, Schluckebier, and Rödder demonstrated that sand lizards in the Dellbrücker Heide favor low brambles, while avoiding open sandy areas and high vegetation. Preferences for grass and other low bushes, on the other hand, vary from animal to animal.
“The sand lizard as a cultural successor is often a victim of disturbance, destruction, or fragmentation of its habitats by human activities. Compensatory and protective management could now be better formulated with our data,” also hopes Rieke Schluckebier, Master’s candidate in the Herpetology section of the LIB. In recent years, drones have increasingly proven to be a useful tool for answering ecological questions. This time-efficient method of surveying habitat structures can be of great benefit in the management of protected areas.
Clement, V.F., R. Schluckebier, & D. Rödder (2022). About lizards and unmanned aerial vehicles: assessing home range and habitat selection in Lacerta agilis. Salamandra, 58: 24–42. |
The Triumph of Labour 1891
A May Day cartoon designed by Walter Crane
The Second International, an organisation of socialist and labour parties, declared in 1889 that May 1st would be International Workers' Day. The date arose from earlier activities by the American Federation of Labor in its campaign for an eight-hour day. In particular it became significant because of the deaths of workers at the hands of the police in 1886 and the executions of the Haymarket Anarchists of Chicago in 1887. Walter Crane produced a cartoon to draw public attention to their plight.
Walter Crane (1845 – 1915)
Walter Crane was a highly regarded and successful wood engraver and designer, especially known for his colour illustrations of children’s books. He was an active member of the Arts and Crafts Movement.
He joined William Morris in the Social Democratic Federation and subsequently in the Socialist League but he lent his design skills to the wider socialist and labour movement.
The original black and white cartoon would have made reproduction possible using the affordable printing methods available to the workers’ movement at that time. We have exploited modern digital tools to create this coloured version. We call this the sepia version.
Colourist - Adrian Hayes. Photography - Alex Mitchell. Designed and printed by unionised labour. www.kavitagraphics.co.uk and www.rapspiderweb.co.uk |
As climate change drives up temperatures and creates longer, more expansive droughts, the typically cool, moist high-elevation forests of the central Rockies are burning with greater frequency than any time in the past 2,000 years, according to a study in the upcoming issue of the Proceedings of the National Academy of Sciences.
Lead author Philip Higuera, a fire ecologist at the University of Montana, examined paleofire records — data from tree rings and lake sediment — across a broad swath of the central Rockies of Colorado and Wyoming to understand how frequently subalpine forests burned over the past two millennia.
During that period, the average fire rotation period was 230 years, meaning a high-elevation forest composed of subalpine fir, Engelmann spruce or lodgepole pine would burn about once every 2.5 centuries. Largely due to the 2020 fire season, which was remarkable for both its duration and the expanse of the area burned, that interval has been cut nearly in half in the 21st century. We’re now looking at subalpine forests in the central Rockies burning every 117 years.
Higuera said the compression of the time between wildfire events in subalpine forests is something the research community has been predicting for decades, but they didn’t expect to see such pronounced changes so quickly.
“It was a little surprising that it was by 2020 and not by 2030 or 2050,” he said.
The research describes high-elevation forests like the ones Higuera studied as being "useful sentinels of climate change impacts" because the typically cool, moist conditions they grow in have historically limited fire frequency and because they've generally experienced less land-use change and fire suppression than lower-elevation forests. But as rising temperatures dry out those forests, they've become more flammable. Study authors say planning across all scales — from individuals to utility companies, municipalities to the federal government — "can no longer be reasonably based on expectations of the past."
The research also highlights the shrinking gap between extreme fire years. At the start of the 21st century, there was a 10-year interval between extreme fire years, but over the past six years the central Rockies have seen high-intensity fire years every other year.
Temperature-spurred droughts are notable not only for their impact on forest flammability but also for how they can inhibit forest regeneration following a wildfire.
“The combination of increased burning and more stressful postfire climate conditions for tree regeneration in upcoming decades foreshadows the potential for widespread loss of subalpine forest resilience to wildfire,” the study says.
Higuera said he’s at work on another study that will use a similar process to look at how subalpine forests in the Northern Rockies are responding to climate change. He’s also interested in learning more about how new fire regimes will shape ecosystems in the future.
Many ecosystems are fire-adapted, meaning fire plays an important role in the healthy functioning of that ecosystem by clearing out accumulated fuels, helping trees replace themselves, or providing habitat for animals, for example. But in other landscapes, fire has a more destabilizing influence. If a wildfire burns too hot or too frequently on those landscapes, the flora and fauna might not come back the way they were before.
Between the two, there’s a middle ground where land managers will have an opportunity to direct a response after a wildfire, Higuera said. In those areas, land managers have a critical post-wildfire window where they can support forest resilience by leading a tree-replanting effort, for example.
“Being able to distinguish between those three scenarios is important for how land managers respond,” Higuera said.
This article was originally posted elsewhere under the title "High-elevation forests now burning more frequently than any time in the past 2,000 years."
Bottlenose dolphins’ electric sense could help them navigate the globe.
Born tail first, bottlenose dolphin calves emerge equipped with two slender rows of whiskers along their beak-like snouts – much like the touch-sensitive whiskers of seals. But the whiskers fall out soon after birth, leaving the youngster with a series of dimples, known as vibrissal pits.
Recently Tim Hüttner and Guido Dehnhardt, from the University of Rostock, Germany, began to suspect that the dimples may be more than just a relic. Could they allow adult bottlenose dolphins to sense weak electric fields?
Taking an initial close look, they realized that the remnant pits resemble the structures that allow sharks to detect electric fields, and when they checked whether captive bottlenose dolphins could sense an electric field in water, all of the animals felt the field.
Electric Sensing in Dolphins: A Breakthrough Discovery
“It was very impressive to see,” says Dehnhardt, who published the extraordinary discovery and how the animals could use their electric sense on November 30, 2023, in the Journal of Experimental Biology.
To find out how sensitive bottlenose dolphins are to the electric fields produced by lifeforms in water Dehnhardt and Hüttner teamed up with Lorenzo von Fersen at Nuremberg Zoo and Lars Miersch at the University of Rostock. First, they tested the sensitivity of two bottlenose dolphins, Donna and Dolly, to different electric fields to find out whether the dolphins could detect a fish buried in the sandy sea floor.
After training each animal to rest its jaw on a submerged metal bar, Hüttner, Armin Fritz (Nuremberg Zoo) and an army of colleagues taught the dolphins to swim away within 5 seconds of feeling an electric field produced by electrodes immediately above the dolphin’s snout. Gradually decreasing the electric field from 500 to 2μV/cm, the team kept track of how many times the dolphins departed on cue and were impressed; Donna and Dolly were equally sensitive to the strongest fields, exiting correctly almost every time. It was only when the electric fields became weaker that it became evident that Donna was slightly more sensitive, sensing fields that were 2.4μV/cm, while Dolly became aware of fields of 5.5μV/cm.
Further Research: Pulsing Electric Fields
However, the electric fields produced by living animals aren’t just static. The pulsing movements of fish gills cause their electric fields to fluctuate, so could Donna and Dolly sense pulsing fields as well? This time the team pulsed the electric fields 1, 5, and 25 times per second while reducing the field strength, and sure enough, the dolphins could sense the fields. However, neither of the animals was as sensitive to the alternating fields as they were to the unvarying electric fields. Dolly could only pick up the slowest field at 28.9μV/cm, while Donna picked up all three of the oscillating fields, sensing the slowest at 11.7μV/cm.
Practical Implications of Dolphin Electrosensitivity
So what does this new super sense mean for dolphins in practice? Dehnhardt says, “The sensitivity to weak electric fields helps a dolphin search for fish hidden in sediment over the last few centimeters before snapping them up,” in contrast to sharks, the electrosensitive superstars, which are capable of sensing the electric fields of fish within 30–70cm. Hüttner and Dehnhardt also suspect that the dolphin’s ability to feel electricity could help them on a larger scale.
“This sensory ability can also be used to explain the orientation of toothed whales to the earth’s magnetic field,” says Dehnhardt, explaining that dolphins swimming through weak areas of the earth’s magnetic field at a normal speed of 10m/s could generate a detectable electric field of 2.5μV/cm across their body. And, if the animals swim faster, they are even more likely to sense the planet’s magnetic field, allowing them to use their electric sense to navigate the globe by magnetic map.
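As a rough back-of-the-envelope check (an illustration added here, not a calculation reported from the study), the figure follows from the motional-EMF relation for a body moving through a magnetic field, taking roughly 25 μT as the weaker end of the earth's field:

$$E = vB \approx 10\ \mathrm{m/s} \times 25\ \mu\mathrm{T} = 2.5 \times 10^{-4}\ \mathrm{V/m} = 2.5\ \mu\mathrm{V/cm}$$

Faster swimming, or a stronger local field, raises the induced field proportionally, which is why the researchers argue that swimming faster makes the planet's magnetic field easier for the dolphins to sense.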
Reference: “Passive electroreception in bottlenose dolphins (Tursiops truncatus): implication for micro- and large-scale orientation” by Tim Hüttner, Lorenzo von Fersen, Lars Miersch and Guido Dehnhardt, 30 November 2023, Journal of Experimental Biology. |
Japanese (日本語, Nihongo, [nihõŋɡo] or [nihõŋŋo]) is an East Asian language spoken by about 125 million speakers, primarily in Japan, where it is the national language. It is a member of the Japonic (or Japanese-Ryukyuan) language family, whose relation to other language groups, particularly to Korean and the suggested Altaic language family, is debated.
Little is known of the language's prehistory, or when it first appeared in Japan. Chinese documents from the 3rd century recorded a few Japanese words, but substantial texts did not appear until the 8th century. During the Heian period (794–1185), Chinese had considerable influence on the vocabulary and phonology of Old Japanese. Late Middle Japanese (1185–1600) saw changes in features that brought it closer to the modern language, as well as the first appearance of European loanwords. The standard dialect moved from the Kansai region to the Edo (modern Tokyo) region in the Early Modern Japanese period (early 17th century–mid-19th century). Following the end in 1853 of Japan's self-imposed isolation, the flow of loanwords from European languages increased significantly. English loanwords in particular have become frequent, and Japanese words from English roots have proliferated.
Japanese is an agglutinative, mora-timed language with simple phonotactics, a pure vowel system, phonemic vowel and consonant length, and a lexically significant pitch-accent. Word order is normally subject–object–verb with particles marking the grammatical function of words, and sentence structure is topic–comment. Sentence-final particles are used to add emotional or emphatic impact, or make questions. Nouns have no grammatical number or gender, and there are no articles. Verbs are conjugated, primarily for tense and voice, but not person. Japanese equivalents of adjectives are also conjugated. Japanese has a complex system of honorifics with verb forms and vocabulary to indicate the relative status of the speaker, the listener, and persons mentioned.
Japanese has no genetic relationship with Chinese, but it makes extensive use of Chinese characters, or kanji (漢字), in its writing system, and a large portion of its vocabulary is borrowed from Chinese. Along with kanji, the Japanese writing system primarily uses two syllabic (or moraic) scripts, hiragana (ひらがな or 平仮名) and katakana (カタカナ or 片仮名). Latin script is used in a limited fashion, such as for imported acronyms, and the numeral system uses mostly Arabic numerals alongside traditional Chinese numerals.
A common ancestor of Japanese and Ryukyuan languages or dialects is thought to have been brought to Japan by settlers coming from either continental Asia or nearby Pacific islands (or both) sometime in the early- to mid-2nd century BC (the Yayoi period), replacing the languages of the original Jōmon inhabitants, including the ancestor of the modern Ainu language. Very little is known about the Japanese of this period. Because writing had yet to be introduced from China, there is no direct evidence, and anything that can be discerned about this period of Japanese must be based on the reconstructions of Old Japanese.
Old Japanese is the oldest attested stage of the Japanese language. Through the spread of Buddhism, the Chinese writing system was imported to Japan. The earliest texts found in Japan are written in Classical Chinese, but they may have been meant to be read as Japanese by the kanbun method. Some of these Chinese texts show the influences of Japanese grammar, such as the word order (for example, placing the verb after the object). In these hybrid texts, Chinese characters are also occasionally used phonetically to represent Japanese particles. The earliest text, the Kojiki, dates to the early 8th century, and was written entirely in Chinese characters. The end of Old Japanese coincides with the end of the Nara period in 794. Old Japanese uses the Man'yōgana system of writing, which uses kanji for their phonetic as well as semantic values. Based on the Man'yōgana system, Old Japanese can be reconstructed as having 88 distinct syllables. Texts written with Man'yōgana use two different kanji for each of the syllables now pronounced き ki, ひ hi, み mi, け ke, へ he, め me, こ ko, そ so, と to, の no, も mo, よ yo and ろ ro. (The Kojiki has 88, but all later texts have 87. The distinction between mo1 and mo2 apparently was lost immediately following its composition.) This set of syllables shrank to 67 in Early Middle Japanese, though some were added through Chinese influence.
Due to these extra syllables, it has been hypothesized that Old Japanese's vowel system was larger than that of Modern Japanese – it perhaps contained up to eight vowels. According to Shinkichi Hashimoto, the extra syllables in Man'yōgana derive from differences between the vowels of the syllables in question. These differences would indicate that Old Japanese had an eight-vowel system, in contrast to the five vowels of later Japanese. The vowel system would have to have shrunk some time between these texts and the invention of the kana (hiragana and katakana) in the early 9th century. According to this view, the eight-vowel system of ancient Japanese would resemble that of the Uralic and Altaic language families. However, it is not fully certain that the alternation between syllables necessarily reflects a difference in the vowels rather than the consonants – at the moment, the only undisputed fact is that they are different syllables.
Old Japanese does not have /h/, but rather /ɸ/ (preserved in modern fu, /ɸɯ/), which has been reconstructed to an earlier */p/. Man'yōgana also has a symbol for /je/, which merges with /e/ before the end of the period.
Several fossilizations of Old Japanese grammatical elements remain in the modern language – the genitive particle tsu (superseded by modern no) is preserved in words such as matsuge ("eyelash", lit. "hair of the eye"); modern mieru ("to be visible") and kikoeru ("to be audible") retain what may have been a mediopassive suffix -yu(ru) (kikoyu → kikoyuru (the attributive form, which slowly replaced the plain form starting in the late Heian period) > kikoeru (as all shimo-nidan verbs in modern Japanese did)); and the genitive particle ga remains in intentionally archaic speech.
Early Middle Japanese
Early Middle Japanese is the Japanese of the Heian period, from 794 to 1185. Early Middle Japanese sees a significant amount of Chinese influence on the language's phonology – length distinctions become phonemic for both consonants and vowels, and series of both labialised (e.g. kwa) and palatalised (kya) consonants are added. Intervocalic /ɸ/ merges with /w/ by the 11th century. The end of Early Middle Japanese sees the beginning of a shift where the attributive form (Japanese rentaikei) slowly replaces the uninflected form (shūshikei) for those verb classes where the two were distinct.
Late Middle Japanese
Late Middle Japanese covers the years from 1185 to 1600, and is normally divided into two sections, roughly equivalent to the Kamakura period and the Muromachi period, respectively. The later forms of Late Middle Japanese are the first to be described by non-native sources, in this case the Jesuit and Franciscan missionaries; and thus there is better documentation of Late Middle Japanese phonology than for previous forms (for instance, the Arte da Lingoa de Iapam). Among other sound changes, the sequence /au/ merges to /ɔː/, in contrast with /oː/; /p/ is reintroduced from Chinese; and /we/ merges with /je/. Some forms rather more familiar to Modern Japanese speakers begin to appear – the continuative ending -te begins to reduce onto the verb (e.g. yonde for earlier yomite), the -k- in the final syllable of adjectives drops out (shiroi for earlier shiroki); and some forms exist where modern standard Japanese has retained the earlier form (e.g. hayaku > hayau > hayɔɔ, where modern Japanese just has hayaku, though the alternative form is preserved in the standard greeting o-hayō gozaimasu "good morning"; this ending is also seen in o-medetō "congratulations", from medetaku).
Late Middle Japanese has the first loanwords from European languages – now-common words borrowed into Japanese in this period include pan ("bread") and tabako ("tobacco", now "cigarette"), both from Portuguese.
Modern Japanese is considered to begin with the Edo period, which lasted between 1603 and 1868. Since Old Japanese, the de facto standard Japanese had been the Kansai dialect, especially that of Kyoto. However, during the Edo period, Edo (now Tokyo) developed into the largest city in Japan, and the Edo-area dialect became standard Japanese. Since the end of Japan's self-imposed isolation in 1853, the flow of loanwords from European languages has increased significantly. The period since 1945 has seen a large number of words borrowed from English, especially relating to technology—for example, pasokon (short for "personal computer"); intānetto ("internet"), and kamera ("camera"). Due to the large quantity of English loanwords, modern Japanese has developed a distinction between /tɕi/ and /ti/, and /dʑi/ and /di/, with the latter in each pair only found in loanwords.
Although Japanese is spoken almost exclusively in Japan, it has also been spoken outside the country. Before and during World War II, through Japanese annexation of Taiwan and Korea, as well as partial occupation of China, the Philippines, and various Pacific islands, locals in those countries learned Japanese as the language of the empire. As a result, many elderly people in these countries can still speak Japanese.
Japanese emigrant communities (the largest of which are to be found in Brazil, with 1.4 million to 1.5 million Japanese immigrants and descendants, according to Brazilian IBGE data, more than the 1.2 million of the United States) sometimes employ Japanese as their primary language. Approximately 12% of Hawaii residents speak Japanese, with an estimated 12.6% of the population of Japanese ancestry in 2008. Japanese emigrants can also be found in Peru, Argentina, Australia (especially in the eastern states), Canada (especially in Vancouver where 1.4% of the population has Japanese ancestry), the United States (notably California, where 1.2% of the population has Japanese ancestry, and Hawaii), and the Philippines (particularly in Davao and Laguna).
Japanese has no official status, but it is the de facto national language of Japan. There is a form of the language considered standard: hyōjungo (標準語), meaning "standard Japanese", or kyōtsūgo (共通語), "common language". The meanings of the two terms are almost the same. Hyōjungo or kyōtsūgo is a concept that forms the counterpart of dialect. This normative language emerged after the Meiji Restoration (明治維新, meiji ishin, 1868) from the language spoken in the higher-class areas of Tokyo (see Yamanote). Hyōjungo is taught in schools and used on television and in official communications. It is the version of Japanese discussed in this article.
Formerly, standard Japanese in writing (文語, bungo, "literary language") was different from colloquial language (口語, kōgo). The two systems have different rules of grammar and some variance in vocabulary. Bungo was the main method of writing Japanese until about 1900; since then kōgo gradually extended its influence and the two methods were both used in writing until the 1940s. Bungo still has some relevance for historians, literary scholars, and lawyers (many Japanese laws that survived World War II are still written in bungo, although there are ongoing efforts to modernize their language). Kōgo is the dominant method of both speaking and writing Japanese today, although bungo grammar and vocabulary are occasionally used in modern Japanese for effect.
Dozens of dialects are spoken in Japan. The profusion is due to many factors, including the length of time the Japanese Archipelago has been inhabited, its mountainous island terrain, and Japan's long history of both external and internal isolation. Dialects typically differ in terms of pitch accent, inflectional morphology, vocabulary, and particle usage. Some even differ in vowel and consonant inventories, although this is uncommon.
The main distinction in Japanese accents is between Tokyo-type (東京式, Tōkyō-shiki) and Kyoto-Osaka-type (京阪式, Keihan-shiki). Within each type are several subdivisions. Kyoto-Osaka-type dialects are in the central region, roughly formed by Kansai, Shikoku, and western Hokuriku regions.
Dialects from peripheral regions, such as Tōhoku or Kagoshima, may be unintelligible to speakers from the other parts of the country. There are some language islands in mountain villages or isolated islands such as Hachijō-jima island whose dialects are descended from the Eastern dialect of Old Japanese. Dialects of the Kansai region are spoken or known by many Japanese, and Osaka dialect in particular is associated with comedy (see Kansai dialect). Dialects of Tōhoku and North Kantō are associated with typical farmers.
The Ryūkyūan languages, spoken in Okinawa and the Amami Islands (politically part of Kagoshima), are distinct enough to be considered a separate branch of the Japonic family; not only is each language unintelligible to Japanese speakers, but most are unintelligible to those who speak other Ryūkyūan languages. However, in contrast to linguists, many ordinary Japanese people tend to consider the Ryūkyūan languages as dialects of Japanese. This is the result of the official language policy of the Japanese government, which has declared these languages to be dialects and prohibited their use in schools.
The imperial court also seems to have spoken an unusual variant of the Japanese of the time.
Japanese is a member of the Japonic language family, which also includes the languages spoken throughout the Ryūkyū Islands. Because these closely related languages are commonly treated as dialects of Japanese rather than as separate languages, Japanese itself is often regarded as a language isolate.
According to Martine Irma Robbeets, Japanese has been subject to more attempts to show its relation to other languages than any other language in the world. Since Japanese first gained the consideration of linguists in the late 19th century, attempts have been made to show its genealogical relation to languages or language families such as Ainu, Korean, Chinese, Tibeto-Burman, Ural-Altaic, Altaic, Uralic, Mon–Khmer, Malayo-Polynesian and Ryukyuan. At the fringe, some linguists have suggested a link to Indo-European languages, including Greek, and to Lepcha. As it stands, only the link to Ryukyuan has wide support, though linguist Kurakichi Shiratori maintained that Japanese was a language isolate.
Similarities between Korean and Japanese were noted by Arai Hakuseki in 1717, and the idea that the two might be related was first proposed in 1781 by Japanese scholar Teikan Fujii. The idea received little attention until William George Aston proposed it again in 1879. Japanese scholar Shōsaburō Kanazawa took it up in 1910, as did Shinpei Ogura in 1934. Shirō Hattori was nearly alone when he criticised these theories in 1959. Samuel Martin furthered the idea in 1966 with his "Lexical evidence relating Korean to Japanese", as did John Whitman with his dissertation on the subject in 1985. Despite this, definitive proof of the relation has yet to be provided. Historical linguists studying Japanese and Korean tend to accept the genealogical relation, while general linguists and historical linguists in Japan and Korea have remained skeptical. Alexander Vovin suggests that, while typologically modern Korean and Japanese share similarities that sometimes allow word-to-word translations, studies of the pre-modern languages show greater differences. According to Vovin, this suggests linguistic convergence rather than divergence, which he believes is amongst the evidence of the languages not having a genealogical connection.
The proposed Altaic family, which would include languages from far eastern Europe to northeastern Asia, has had its supporters and detractors over its history. The most controversial aspect of the hypothesis is the proposed inclusion of Korean and Japanese, which even some proponents of Altaic have rejected. Philipp Franz von Siebold suggested the connection in 1832, but the inclusion first attracted significant attention in the early 1970s. Roy Andrew Miller published Japanese and the Other Altaic Languages, and dedicated much of his later career to the subject. Sergei Starostin published a 1991 monograph which was another significant stepping stone in Japanese–Altaic research. A team of scholars made a database of Altaic etymologies available over the internet, from which the three-volume Etymological Dictionary of the Altaic Languages was published in 2003. Scholars such as Yevgeny Polivanov and Yoshizo Itabashi, on the other hand, have proposed a hybrid origin of Japanese, in which Austronesian and Altaic elements became mixed.
Skepticism over the Japanese relation to Altaic is widespread among professionals, in part because of the large number of unsuccessful attempts to establish genealogical relationships with Japanese and other languages. Opinions are polarized, with many strongly convinced of the Altaic relation, and others strongly convinced of the lack of one. While some sources are undecided, often strong proponents of either view will not even acknowledge the claims of the other side.
All Japanese vowels are pure—that is, there are no diphthongs, only monophthongs. The only unusual vowel is the high back vowel /ɯ/, which is like /u/, but compressed instead of rounded. Japanese has five vowels, and vowel length is phonemic, with each having both a short and a long version. Elongated vowels are usually denoted with a line over the vowel (a macron) in rōmaji, a repeated vowel character in hiragana, or a chōonpu succeeding the vowel in katakana.
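To make the three notations concrete, here is a minimal sketch showing the same long vowel written each way; the example word okāsan ("mother") is chosen purely for illustration, and in ordinary writing it would normally include a kanji rather than appear in pure kana.

```python
# Minimal sketch: three ways of marking the long vowel in okāsan ("mother").
# The word choice is illustrative; in everyday text it is usually written
# with a kanji (お母さん) rather than in pure kana.

long_vowel_notations = {
    "romaji (macron)": "okāsan",               # a line over the lengthened vowel
    "hiragana (repeated vowel)": "おかあさん",   # the あ repeats the a-vowel
    "katakana (chōonpu)": "オカーサン",          # ー marks the lengthened vowel
}

for notation, form in long_vowel_notations.items():
    print(f"{notation}: {form}")
```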
Some Japanese consonants have several allophones, which may give the impression of a larger inventory of sounds. However, some of these allophones have since become phonemic. For example, in the Japanese language up to and including the first half of the 20th century, the phonemic sequence /ti/ was palatalized and realized phonetically as [tɕi], approximately chi; however, now /ti/ and /tɕi/ are distinct, as evidenced by words like tī [tiː] "Western style tea" and chii [tɕii] "social status".
The "r" of the Japanese language (technically a lateral apical postalveolar flap), is of particular interest, sounding to most English speakers to be something between an "l" and a retroflex "r" depending on its position in a word. The "g" is also notable; unless it starts a sentence, it is pronounced /ŋ/, like the ng in "sing," in the Kanto prestige dialect and in other eastern dialects.
The syllabic structure and the phonotactics are very simple: the only consonant clusters allowed within a syllable consist of one of a subset of the consonants plus /j/. This type of cluster only occurs in onsets. However, consonant clusters across syllables are allowed as long as the two consonants are a nasal followed by a homorganic consonant. Consonant length (gemination) is also phonemic.
The phonology of Japanese also includes a pitch accent system.
Japanese word order is classified as subject–object–verb. Unlike many Indo-European languages, the only strict rule of word order is that the verb must be placed at the end of a sentence (possibly followed by sentence-end particles). This is because Japanese sentence elements are marked with particles that identify their grammatical functions.
The basic sentence structure is topic–comment. For example, Kochira wa Tanaka-san desu (こちらは田中さんです). Here, kochira ("this") is the topic of the sentence, indicated by the particle wa. The verb de aru (desu is a contraction of its polite form de arimasu) is a copula, commonly translated as "to be" or "it is" (though there are other verbs that can be translated as "to be"), though technically it holds no meaning and is used to give a sentence 'politeness'. As a phrase, Tanaka-san desu is the comment. This sentence literally translates to "As for this person, (it) is Mr./Ms. Tanaka." Thus Japanese, like many other Asian languages, is often called a topic-prominent language, which means it has a strong tendency to indicate the topic separately from the subject, and that the two do not always coincide. The sentence Zō wa hana ga nagai (象は鼻が長い) literally means, "As for elephant(s), (the) nose(s) (is/are) long". The topic is zō "elephant", and the subject is hana "nose".
In Japanese, the subject or object of a sentence need not be stated if it is obvious from context. As a result of this grammatical permissiveness, there is a tendency to gravitate towards brevity; Japanese speakers tend to omit pronouns on the theory they are inferred from the previous sentence, and are therefore understood. In the context of the above example, hana-ga nagai would mean "[their] noses are long," while nagai by itself would mean "[they] are long." A single verb can be a complete sentence: Yatta! (やった!)"[I / we / they / etc] did [it]!". In addition, since adjectives can form the predicate in a Japanese sentence (below), a single adjective can be a complete sentence: Urayamashii! (羨ましい!)"[I'm] jealous [of it]!".
While the language has some words that are typically translated as pronouns, these are not used as frequently as pronouns in some Indo-European languages, and function differently. In some cases Japanese relies on special verb forms and auxiliary verbs to indicate the direction of benefit of an action: "down" to indicate the out-group gives a benefit to the in-group; and "up" to indicate the in-group gives a benefit to the out-group. Here, the in-group includes the speaker and the out-group does not, and their boundary depends on context. For example, oshiete moratta (教えてもらった) (literally, "explained" with a benefit from the out-group to the in-group) means "[he/she/they] explained [it] to [me/us]". Similarly, oshiete ageta (教えてあげた) (literally, "explained" with a benefit from the in-group to the out-group) means "[I/we] explained [it] to [him/her/them]". Such beneficiary auxiliary verbs thus serve a function comparable to that of pronouns and prepositions in Indo-European languages to indicate the actor and the recipient of an action.
Japanese "pronouns" also function differently from most modern Indo-European pronouns (and more like nouns) in that they can take modifiers as any other noun may. For instance, one does not say in English:
*The amazed he ran down the street. (grammatically incorrect insertion of a pronoun)
But one can grammatically say essentially the same thing in Japanese:
驚いた彼は道を走っていった。 Odoroita kare wa michi o hashitte itta. (grammatically correct)
This is partly because these words evolved from regular nouns, such as kimi "you" (君 "lord"), anata "you" (あなた "that side, yonder"), and boku "I" (僕 "servant"). This is why some linguists do not classify Japanese "pronouns" as pronouns, but rather as referential nouns, much like Spanish usted (contracted from vuestra merced, "your [(flattering majestic) plural] grace") or Portuguese o senhor. Japanese personal pronouns are generally used only in situations requiring special emphasis as to who is doing what to whom.
The choice of words used as pronouns is correlated with the sex of the speaker and the social situation in which they are spoken: men and women alike in a formal situation generally refer to themselves as watashi (私 "private") or watakushi (also 私), while men in rougher or intimate conversation are much more likely to use the word ore (俺 "oneself", "myself") or boku. Similarly, different words such as anata, kimi, and omae (お前, more formally 御前 "the one before me") may be used to refer to a listener depending on the listener's relative social position and the degree of familiarity between the speaker and the listener. When used in different social relationships, the same word may have positive (intimate or respectful) or negative (distant or disrespectful) connotations.
Japanese often use titles of the person referred to where pronouns would be used in English. For example, when speaking to one's teacher, it is appropriate to use sensei (先生, teacher), but inappropriate to use anata. This is because anata is used to refer to people of equal or lower status, and one's teacher has higher status.
Inflection and conjugation
Japanese nouns have no grammatical number, gender, or articles. The noun hon (本) may refer to a single book or several books; hito (人) can mean "person" or "people"; and ki (木) can be "tree" or "trees". Where number is important, it can be indicated by providing a quantity (often with a counter word) or (rarely) by adding a suffix, or sometimes by duplication (e.g. 人人, hitobito, usually written with an iteration mark as 人々). Words for people are usually understood as singular. Thus Tanaka-san usually means Mr./Ms. Tanaka. Words that refer to people and animals can be made to indicate a group of individuals through the addition of a collective suffix (a noun suffix that indicates a group), such as -tachi, but this is not a true plural: the meaning is closer to the English phrase "and company". A group described as Tanaka-san-tachi may include people not named Tanaka. Some Japanese nouns are effectively plural, such as hitobito "people" and wareware "we/us", while the word tomodachi "friend" is considered singular, although plural in form.
Verbs are conjugated to show tenses, of which there are two: past and present (or non-past) which is used for the present and the future. For verbs that represent an ongoing process, the -te iru form indicates a continuous (or progressive) aspect, similar to the suffix ing in English. For others that represent a change of state, the -te iru form indicates a perfect aspect. For example, kite iru means "He has come (and is still here)", but tabete iru means "He is eating".
Questions (both with an interrogative pronoun and yes/no questions) have the same structure as affirmative sentences, but with intonation rising at the end. In the formal register, the question particle -ka is added. For example, ii desu (いいです) "It is OK" becomes ii desu-ka (いいですか。) "Is it OK?". In a more informal tone sometimes the particle -no (の) is added instead to show a personal interest of the speaker: Dōshite konai-no? "Why aren't (you) coming?". Some simple queries are formed simply by mentioning the topic with an interrogative intonation to call for the hearer's attention: Kore wa? "(What about) this?"; O-namae wa? (お名前は?) "(What's your) name?".
Negatives are formed by inflecting the verb. For example, Pan o taberu (パンを食べる。) "I will eat bread" or "I eat bread" becomes Pan o tabenai (パンを食べない。) "I will not eat bread" or "I do not eat bread". Plain negative forms are actually i-adjectives (see below) and inflect as such, e.g. Pan o tabenakatta (パンを食べなかった。) "I did not eat bread".
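As a minimal illustration of this inflection chain, the sketch below derives the plain negative and past-negative forms of a few ichidan ("-ru dropping") verbs such as taberu; the verb list is an assumption for the example, and godan verbs, which negate differently, are deliberately left out.

```python
# Minimal sketch: plain negative and past-negative forms for ichidan verbs only.
# Ichidan verbs (e.g. taberu "eat", miru "see") drop the final -ru before adding
# -nai (negative) or -nakatta (past negative); godan verbs are NOT handled here.

ICHIDAN_VERBS = ["taberu", "miru", "okiru"]  # assumed example verbs

def negative(verb: str) -> str:
    """taberu -> tabenai ("do/will not eat")."""
    return verb[:-2] + "nai"

def past_negative(verb: str) -> str:
    """taberu -> tabenakatta ("did not eat")."""
    return verb[:-2] + "nakatta"

for v in ICHIDAN_VERBS:
    print(f"{v} -> {negative(v)} -> {past_negative(v)}")
# taberu -> tabenai -> tabenakatta, matching the example in the text.
```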
The so-called -te verb form is used for a variety of purposes: either progressive or perfect aspect (see above); combining verbs in a temporal sequence (Asagohan o tabete sugu dekakeru "I'll eat breakfast and leave at once"), simple commands, conditional statements and permissions (Dekakete-mo ii? "May I go out?"), etc.
The word da (plain), desu (polite) is the copula verb. It corresponds approximately to the English be, but often takes on other roles, including a marker for tense, when the verb is conjugated into its past form datta (plain), deshita (polite). This comes into use because only i-adjectives and verbs can carry tense in Japanese. Two additional common verbs are used to indicate existence ("there is") or, in some contexts, property: aru (negative nai) and iru (negative inai), for inanimate and animate things, respectively. For example, Neko ga iru "There's a cat", Ii kangae-ga nai "[I] haven't got a good idea".
The verb "to do" (suru, polite form shimasu) is often used to make verbs from nouns (ryōri suru "to cook", benkyō suru "to study", etc.) and has been productive in creating modern slang words. Japanese also has a huge number of compound verbs to express concepts that are described in English using a verb and an adverbial particle (e.g. tobidasu "to fly out, to flee," from tobu "to fly, to jump" + dasu "to put out, to emit").
There are three types of adjective (see Japanese adjectives):
- 形容詞 keiyōshi, or i adjectives, which have a conjugating ending i (い) (such as 暑い atsui "to be hot") which can become past (暑かった atsukatta "it was hot"), or negative (暑くない atsuku nai "it is not hot"). Note that nai is also an i adjective, which can become past (暑くなかった atsuku nakatta "it was not hot"). Example: 暑い日 atsui hi "a hot day".
- 形容動詞 keiyōdōshi, or na adjectives, which are followed by a form of the copula, usually na. For example, hen ("strange"): 変なひと hen na hito "a strange person".
- 連体詞 rentaishi, also called true adjectives, such as ano "that": あの山 ano yama "that mountain".
Both keiyōshi and keiyōdōshi may predicate sentences. For example:
ご飯が熱い。 Gohan ga atsui. "The rice is hot."
彼は変だ。 Kare wa hen da. "He's strange."
Both inflect, though they do not show the full range of conjugation found in true verbs. The rentaishi in Modern Japanese are few in number, and unlike the other words, are limited to directly modifying nouns. They never predicate sentences. Examples include ookina "big", kono "this", iwayuru "so-called" and taishita "amazing".
Both keiyōdōshi and keiyōshi form adverbs, by following with ni in the case of keiyōdōshi:
変になる hen ni naru "become strange",
and by changing i to ku in the case of keiyōshi:
熱くなる atsuku naru "become hot".
The grammatical function of nouns is indicated by postpositions, also called particles. These include, for example, ga (marking the subject), o (marking the direct object), no (genitive) and wa (marking the topic). The particle ni marks the indirect object and is also used for the lative case, indicating motion to a location:
日本に行きたい。 Nihon ni ikitai "I want to go to Japan."
Note: The subtle difference between wa and ga in Japanese cannot be derived from the English language as such, because the distinction between sentence topic and subject is not made there. While wa indicates the topic, which the rest of the sentence describes or acts upon, it carries the implication that the subject indicated by wa is not unique, or may be part of a larger group.
Ikeda-san wa yonjū-ni sai da. "As for Mr. Ikeda, he is forty-two years old." Others in the group may also be of that age.
Absence of wa often means the subject is the focus of the sentence.
Ikeda-san ga yonjū-ni sai da. "It is Mr. Ikeda who is forty-two years old." This is a reply to an implicit or explicit question, such as "who in this group is forty-two years old?"
Japanese has an extensive grammatical system to express politeness and formality.
The Japanese language can express differing levels in social status. The differences in social position are determined by a variety of factors including job, age, experience, or even psychological state (e.g., a person asking a favour tends to do so politely). The person in the lower position is expected to use a polite form of speech, whereas the other person might use a plainer form. Strangers will also speak to each other politely. Japanese children rarely use polite speech until they are teens, at which point they are expected to begin speaking in a more adult manner. See uchi-soto.
Whereas teineigo (丁寧語) (polite language) is commonly an inflectional system, sonkeigo (尊敬語) (respectful language) and kenjōgo (謙譲語) (humble language) often employ many special honorific and humble alternate verbs: iku "go" becomes ikimasu in polite form, but is replaced by irassharu in honorific speech and ukagau or mairu in humble speech.
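The register substitutions can be pictured as a lookup table, as in the sketch below; the forms for iku are those given above, while the forms for taberu ("eat") are common textbook equivalents added here as an assumption, and actual usage depends on context in ways a table cannot capture.

```python
# Minimal sketch: selecting a verb form by politeness register.
# The forms for iku ("go") come from the text; those for taberu ("eat") are
# common textbook equivalents added here as an assumption.

REGISTERS = ("plain", "polite", "honorific", "humble")

VERB_FORMS = {
    "iku": {
        "plain": "iku",
        "polite": "ikimasu",
        "honorific": "irassharu",   # the subject is the respected party
        "humble": "ukagau",         # or mairu; the speaker lowers their own action
    },
    "taberu": {
        "plain": "taberu",
        "polite": "tabemasu",
        "honorific": "meshiagaru",
        "humble": "itadaku",
    },
}

def choose_form(verb: str, register: str) -> str:
    if register not in REGISTERS:
        raise ValueError(f"unknown register: {register}")
    return VERB_FORMS[verb][register]

print(choose_form("iku", "humble"))        # ukagau
print(choose_form("taberu", "honorific"))  # meshiagaru
```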
The difference between honorific and humble speech is particularly pronounced in the Japanese language. Humble language is used to talk about oneself or one's own group (company, family) whilst honorific language is mostly used when describing the interlocutor and their group. For example, the -san suffix ("Mr" "Mrs." or "Miss") is an example of honorific language. It is not used to talk about oneself or when talking about someone from one's company to an external person, since the company is the speaker's in-group. When speaking directly to one's superior in one's company or when speaking with other employees within one's company about a superior, a Japanese person will use vocabulary and inflections of the honorific register to refer to the in-group superior and their speech and actions. When speaking to a person from another company (i.e., a member of an out-group), however, a Japanese person will use the plain or the humble register to refer to the speech and actions of their own in-group superiors. In short, the register used in Japanese to refer to the person, speech, or actions of any particular individual varies depending on the relationship (either in-group or out-group) between the speaker and listener, as well as depending on the relative status of the speaker, listener, and third-person referents.
Most nouns in the Japanese language may be made polite by the addition of o- or go- as a prefix. o- is generally used for words of native Japanese origin, whereas go- is affixed to words of Chinese derivation. In some cases, the prefix has become a fixed part of the word, and is included even in regular speech, such as gohan 'cooked rice; meal.' Such a construction often indicates deference to either the item's owner or to the object itself. For example, the word tomodachi 'friend,' would become o-tomodachi when referring to the friend of someone of higher status (though mothers often use this form to refer to their children's friends). On the other hand, a polite speaker may sometimes refer to mizu 'water' as o-mizu in order to show politeness.
Most Japanese people employ politeness to indicate a lack of familiarity. That is, they use polite forms for new acquaintances, but if a relationship becomes more intimate, they no longer use them. This occurs regardless of age, social class, or gender.
There are three main sources of words in the Japanese language, the yamato kotoba (大和言葉) or wago (和語), kango (漢語), and gairaigo (外来語).
The original language of Japan, or at least the original language of a certain population that was ancestral to a significant portion of the historical and present Japanese nation, was the so-called yamato kotoba (大和言葉 or infrequently 大和詞, i.e. "Yamato words"), which in scholarly contexts is sometimes referred to as wago (和語 or rarely 倭語, i.e. the "Wa language"). In addition to words from this original language, present-day Japanese includes a number of words that were either borrowed from Chinese or constructed from Chinese roots following Chinese patterns. These words, known as kango (漢語), entered the language from the 5th century onwards via contact with Chinese culture. According to the Shinsen Kokugo Jiten (新選国語辞典) Japanese dictionary, kango comprise 49.1% of the total vocabulary, wago make up 33.8%, other foreign words or gairaigo (外来語) account for 8.8%, and the remaining 8.3% constitute hybridized words or konshugo (混種語) that draw elements from more than one language.
There are also a great number of words of mimetic origin in Japanese, with Japanese having a rich collection of sound symbolism, both onomatopoeia for physical sounds, and more abstract words. A small number of words have come into Japanese from the Ainu language. Tonakai (reindeer), rakko (sea otter) and shishamo (smelt, a type of fish) are well-known examples of words of Ainu origin.
Words of different origins occupy different registers in Japanese. Like Latin-derived words in English, kango words are typically perceived as somewhat formal or academic compared to equivalent Yamato words. Indeed, it is generally fair to say that an English word derived from Latin/French roots typically corresponds to a Sino-Japanese word in Japanese, whereas a simpler Anglo-Saxon word would best be translated by a Yamato equivalent.
Incorporating vocabulary from European languages, gairaigo, began with borrowings from Portuguese in the 16th century, followed by words from Dutch during Japan's long isolation of the Edo period. With the Meiji Restoration and the reopening of Japan in the 19th century, borrowing occurred from German, French, and English. Today most borrowings are from English.
In the Meiji era, the Japanese also coined many neologisms using Chinese roots and morphology to translate European concepts; these are known as wasei kango (Japanese-made Chinese words). Many of these were then imported into Chinese, Korean, and Vietnamese via their kanji in the late 19th and early 20th centuries. For example, seiji 政治 ("politics"), and kagaku 化学 ("chemistry") are words derived from Chinese roots that were first created and used by the Japanese, and only later borrowed into Chinese and other East Asian languages. As a result, Japanese, Chinese, Korean, and Vietnamese share a large common corpus of vocabulary in the same way a large number of Greek- and Latin-derived words – both inherited or borrowed into European languages, or modern coinages from Greek or Latin roots – are shared among modern European languages – see classical compound.
In the past few decades, wasei-eigo ("made-in-Japan English") has become a prominent phenomenon. Words such as wanpatān ワンパターン (< one + pattern, "to be in a rut", "to have a one-track mind") and sukinshippu スキンシップ (< skin + -ship, "physical contact"), although coined by compounding English roots, are nonsensical in most non-Japanese contexts; exceptions exist in nearby languages such as Korean however, which often use words such as skinship and rimokon (remote control) in the same way as in Japanese.
The popularity of many Japanese cultural exports has made some native Japanese words familiar in English, including futon, haiku, judo, kamikaze, karaoke, karate, ninja, origami, rickshaw (from 人力車 jinrikisha), samurai, sayonara, Sudoku, sumo, sushi, tsunami, tycoon. See list of English words of Japanese origin for more.
Literacy was introduced to Japan in the form of the Chinese writing system, by way of Baekje before the 5th century. Using this language, the Japanese king Bu presented a petition to Emperor Shun of Liu Song in AD 478. After the ruin of Baekje, Japan invited scholars from China to learn more of the Chinese writing system. Japanese emperors gave an official rank to Chinese scholars (続守言/薩弘格/ 袁晋卿) and spread the use of Chinese characters from the 7th century to the 8th century.
At first, the Japanese wrote in Classical Chinese, with Japanese names represented by characters used for their meanings and not their sounds. Later, during the 7th century AD, the Chinese-sounding phoneme principle was used to write pure Japanese poetry and prose, but some Japanese words were still written with characters for their meaning and not the original Chinese sound. This is when the history of Japanese as a written language begins in its own right. By this time, the Japanese language was already very distinct from the Ryukyuan languages.
An example of this mixed style is the Kojiki, which was written in AD 712. The Japanese then started to use Chinese characters to write Japanese in a style known as man'yōgana, a syllabic script which used Chinese characters for their sounds in order to transcribe the words of Japanese speech syllable by syllable.
Over time, a writing system evolved. Chinese characters (kanji) were used to write either words borrowed from Chinese, or Japanese words with the same or similar meanings. Chinese characters were also used to write grammatical elements, were simplified, and eventually became two syllabic scripts: hiragana and katakana, which developed from man'yōgana. The hypothesis that man'yōgana itself originated in Baekje is disputed by other scholars.
Modern Japanese is written in a mixture of three main systems: kanji, characters of Chinese origin used to represent both Chinese loanwords into Japanese and a number of native Japanese morphemes; and two syllabaries: hiragana and katakana. The Latin script (or romaji in Japanese) is used to a certain extent, such as for imported acronyms and to transcribe Japanese names and in other instances where non-Japanese speakers need to know how to pronounce a word (such as "ramen" at a restaurant). Arabic numerals are much more common than the kanji when used in counting, but kanji numerals are still used in compounds, such as 統一 tōitsu ("unification").
Hiragana are used for words without kanji representation, for words no longer written in kanji, and also following kanji to show conjugational endings. Because of the way verbs (and adjectives) in Japanese are conjugated, kanji alone cannot fully convey Japanese tense and mood, as kanji cannot be subject to variation when written without losing its meaning. For this reason, hiragana are suffixed to the ends of kanji to show verb and adjective conjugations. Hiragana used in this way are called okurigana. Hiragana can also be written in a superscript called furigana above or beside a kanji to show the proper reading. This is done to facilitate learning, as well as to clarify particularly old or obscure (or sometimes invented) readings.
Katakana, like hiragana, are a syllabary; katakana are primarily used to write foreign words, plant and animal names, and for emphasis. For example, "Australia" has been adapted as Ōsutoraria (オーストラリア), and "supermarket" has been adapted and shortened into sūpā (スーパー).
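Because hiragana, katakana and the common kanji occupy separate Unicode blocks, a short sketch can classify the characters of a mixed sentence; the block ranges are standard, but the sample sentence and the coarse "other" bucket are simplifications for illustration.

```python
# Minimal sketch: classifying characters of mixed Japanese text by script,
# using standard Unicode block ranges (punctuation and rarer blocks fall
# into "other" for simplicity).

def script_of(ch: str) -> str:
    code = ord(ch)
    if 0x3040 <= code <= 0x309F:
        return "hiragana"
    if 0x30A0 <= code <= 0x30FF:
        return "katakana"
    if 0x4E00 <= code <= 0x9FFF:      # CJK Unified Ideographs (basic block)
        return "kanji"
    if ch.isascii() and ch.isalpha():
        return "latin"
    if ch.isdigit():
        return "arabic numeral"
    return "other"

sample = "私はスーパーで pan を3つ買った"   # assumed example sentence
for ch in sample:
    if not ch.isspace():
        print(ch, script_of(ch))
```

In practice, finer-grained information is available from the standard unicodedata module, but the coarse ranges above are enough to show how the scripts interleave within a single sentence.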
Historically, attempts to limit the number of kanji in use commenced in the mid-19th century, but did not become a matter of government intervention until after Japan's defeat in the Second World War. During the period of post-war occupation (and influenced by the views of some U.S. officials), various schemes including the complete abolition of kanji and exclusive use of rōmaji were considered. The jōyō kanji ("common use kanji", originally called tōyō kanji [kanji for general use]) scheme arose as a compromise solution.
Japanese students begin to learn kanji from their first year at elementary school. A guideline created by the Japanese Ministry of Education, the list of kyōiku kanji ("education kanji", a subset of jōyō kanji), specifies the 1,006 simple characters a child is to learn by the end of sixth grade. Children continue to study another 1,130 characters in junior high school, covering in total 2,136 jōyō kanji. The official list of jōyō kanji was revised several times, but the total number of officially sanctioned characters remained largely unchanged.
As for kanji for personal names, the circumstances are somewhat complicated. Jōyō kanji and jinmeiyō kanji (an appendix of additional characters for names) are approved for registering personal names. Names containing unapproved characters are denied registration. However, as with the list of jōyō kanji, criteria for inclusion were often arbitrary and led to many common and popular characters being disapproved for use. Under popular pressure and following a court decision holding the exclusion of common characters unlawful, the list of jinmeiyō kanji was substantially extended from 92 in 1951 (the year it was first decreed) to 983 in 2004. Furthermore, families whose names are not on these lists were permitted to continue using the older forms.
Study by non-native speakers
Many major universities throughout the world provide Japanese language courses, and a number of secondary and even primary schools worldwide offer courses in the language. This is much changed from before World War II; in 1940, only 65 Americans not of Japanese descent were able to read, write and understand the language.
International interest in the Japanese language dates from the 19th century but has become more prevalent following Japan's economic bubble of the 1980s and the global popularity of Japanese popular culture (such as anime and video games) since the 1990s. Close to 4 million people studied the language worldwide in 2012: more than 1 million Chinese, 872,000 Indonesians, and 840,000 South Koreans studied Japanese in lower and higher educational institutions. Between 2009 and 2012 the number of students studying Japanese increased by 26.5 percent in China and by 21.8 percent in Indonesia, but dropped by 12.8 percent in South Korea.
In Japan, more than 90,000 foreign students studied at Japanese universities and Japanese language schools, including 77,000 Chinese and 15,000 South Koreans in 2003. In addition, local governments and some NPO groups provide free Japanese language classes for foreign residents, including Japanese Brazilians and foreigners married to Japanese nationals. In the United Kingdom, study of the Japanese language is supported by the British Association for Japanese Studies. In Ireland, Japanese is offered as a language in the Leaving Certificate in some schools.
The Japanese government provides standardized tests to measure spoken and written comprehension of Japanese for second language learners; the most prominent is the Japanese Language Proficiency Test (JLPT), which features five levels of exams (changed from four levels in 2010), ranging from elementary (N5) to advanced (N1). The JLPT is offered twice a year. The Japan External Trade Organization (JETRO) originally organized the Business Japanese Proficiency Test (BJT), which tests the learner's ability to understand Japanese in a business setting. The Japan Kanji Aptitude Testing Foundation, which took over the BJT from JETRO in 2009, announced in August 2010 that the test would be discontinued in 2011 due to financial pressures on the Foundation. However, it has since issued a statement to the effect that the test will continue to be available as a result of support from the Japanese government.
The mid-IR band of electromagnetic radiation is a particularly useful part of the spectrum; it can provide imaging in the dark, trace heat signatures, and provide sensitive detection of many biomolecular and chemical signals. But optical systems for this band of frequencies have been hard to make, and devices using them are highly specialised and expensive. Now, the researchers say they have found a highly efficient and mass-manufacturable approach to controlling and detecting these waves.
The approach – developed by teams from MIT, the University of Massachusetts at Lowell, University of Electronic Science and Technology of China, and the East China Normal University – uses a flat, artificial material composed of nanostructured optical elements, instead of the usual thick, curved-glass lenses used in conventional optics. These elements are said to provide on-demand electromagnetic responses and are made using techniques like those used for computer chips, meaning that manufacturing is scalable.
“There have been remarkable demonstrations of metasurface optics in visible light and near-infrared, but in the mid-infrared it’s moving slowly,” said MIT’s Tian Gu. As they began this research, he added, the question was, since they could make these devices extremely thin, could they also make them efficient and low-cost? The team members now say they have achieved this.
The device uses an array of precisely shaped thin-film optical elements called ‘meta-atoms’ made of chalcogenide alloy, which has a high refractive index. These meta-atoms have thicknesses that are a fraction of the wavelengths of the light being observed, and collectively they can perform like a lens. They provide nearly arbitrary wavefront manipulation that isn’t possible with natural materials at larger scales, but they have a tiny fraction of the thickness, and thus only a tiny amount of material is needed.
The devices are said to transmit 80% of the mid-IR light with optical efficiencies up to 75%, representing a significant improvement over existing mid-IR metaoptics. They can also be made lighter and thinner than conventional IR optics. By varying the pattern of the array, the researchers can use the same method to produce different types of optical devices, including a simple beam deflector, a cylindrical or spherical lens, and complex aspheric lenses. The lenses have been demonstrated to focus mid-IR light with the maximum theoretically possible sharpness, known as the diffraction limit.
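For a rough sense of what diffraction-limited focusing means at these wavelengths, the sketch below estimates the Airy-disk spot radius of an ideal lens; the 5.2 µm wavelength and 0.45 numerical aperture are illustrative assumptions, not parameters reported for the devices described here.

```python
# Minimal sketch: diffraction-limited spot size (Airy-disk radius) for a
# mid-IR lens. Wavelength and numerical aperture are illustrative values,
# not figures reported for the metalenses discussed above.

def airy_radius_um(wavelength_um: float, numerical_aperture: float) -> float:
    """First-minimum radius of the Airy pattern: r = 0.61 * wavelength / NA."""
    return 0.61 * wavelength_um / numerical_aperture

wavelength_um = 5.2   # assumed mid-IR wavelength
na = 0.45             # assumed numerical aperture of the lens

r = airy_radius_um(wavelength_um, na)
print(f"Diffraction-limited spot radius: {r:.2f} um")   # ~7.05 um
```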
The team says these techniques allow the creation of metaoptical devices, which can manipulate light in more complex ways than can be achieved using conventional bulk transparent materials. The devices can also control polarisation and other properties. |
After sunset, a cool breeze flows from the land toward the water. Because the land cools more quickly than the water, the air above the land becomes cooler than the air above the sea. The warmer air over the water continues to rise and is replaced by cooler air flowing out from over the land, producing a breeze at the surface.
Close to the coast, both of these breezes might be expected.
How does a land breeze compare to a sea breeze?
- Temperature: A sea breeze usually lowers the air temperature over the land, whereas a land breeze causes little change in air temperature.
- Speed: A land breeze is slower, typically 5–8 knots, while a sea breeze is faster, typically 10–20 knots.
- Season: The land breeze is predominant throughout the winter months, whereas the sea breeze is predominant during the spring and summer months.
Does a land breeze flow from land to sea?
A land breeze is a local evening and early-morning wind that occurs along coastal areas and blows out over open water (from the land out to sea). It develops around sunset, when the sea surface is warmer than the nearby land surface because the land has a smaller heat capacity and cools off more quickly than the sea.
What is Cash Flow?
Cash flow is a term that refers to the net amount of cash and cash equivalents being transferred into and out of a business. It is a reliable measure of a company's financial health, as it indicates the company's ability to pay its bills in the short term. Cash flow is often considered the lifeblood of a business and is a critical factor in its financial stability.
Types of Cash Flow
There are three main types of cash flow:
- Operating Cash Flow: This represents the cash generated from a company’s core business operations. It shows how much cash is generated from a company’s products or services.
- Investing Cash Flow: This is the cash used for investing in the company’s future. It includes cash spent on assets such as buildings and equipment, as well as investments in securities.
- Financing Cash Flow: This is the cash a company receives from or uses to repay its investors, including shareholders and lenders. It includes dividends paid, stock repurchases, and repayment of debt capital.
Why is Cash Flow Important?
Cash flow is a key indicator of a company’s financial health. It provides a clear picture of a company’s ability to cover its operating costs and to reinvest in its growth. A positive cash flow indicates that a company’s liquid assets are increasing, enabling it to settle debts, reinvest in its business, return money to shareholders, pay expenses, and provide a buffer against future financial challenges.
Understanding Cash Flow Statements
A cash flow statement, one of the main financial statements, provides data about a company’s cash inflow and outflow over a period of time. It is divided into three sections: cash flow from operating activities, cash flow from investing activities, and cash flow from financing activities. This statement is used by investors, creditors, and others to assess the following:
- The company’s ability to generate future cash flows.
- The company’s ability to pay dividends and meet obligations.
- The reasons for differences between net income and net cash provided (or used) by operating activities.
- Investing and financing cash flows during a period.
How Cash Flow Works
In simple terms, if the cash coming into the business exceeds the cash going out, it is said to have a positive cash flow. Conversely, if more cash is going out than coming in, the business has a negative cash flow.
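As a minimal sketch of this bookkeeping, the example below sums the three cash-flow categories described earlier, using made-up figures, and reports whether the net result is positive or negative.

```python
# Minimal sketch: net cash flow from the three categories described above.
# All figures are made up for illustration.

cash_flows = {
    "operating": 120_000,   # cash generated by core operations
    "investing": -45_000,   # cash spent on equipment and securities
    "financing": -30_000,   # dividends paid and debt repaid
}

net = sum(cash_flows.values())
print(f"Net cash flow: {net:,}")

if net > 0:
    print("Positive cash flow: more cash came in than went out.")
elif net < 0:
    print("Negative cash flow: more cash went out than came in.")
else:
    print("Cash flow is flat for the period.")
```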
Positive Cash Flow
A positive cash flow is a good sign of financial health. It means that after all expenses and investments, the business still has money left over. This surplus can be used to invest back into the business, pay dividends to shareholders, or save for future use.
Negative Cash Flow
A negative cash flow, on the other hand, means the business is spending more than it’s earning. This could be due to high expenses, poor sales, or a combination of both. While a negative cash flow is not ideal, it’s not always a sign of trouble. For instance, a business might have a negative cash flow because it’s investing heavily in its growth.
In conclusion, understanding cash flow is crucial for both business owners and investors. It provides a clear picture of a company’s financial health and its ability to sustain itself in the long run. |
The climate has always been changing, but the pace is now faster than humans have ever seen.
Climate change threatens to make parts of the planet uninhabitable or inhospitable for life as we know it while worsening poverty, swamping coastlines and destroying infrastructure. In short, it is the most pressing global challenge we have ever faced.
Conservation International (CI) protects perhaps humanity’s biggest ally in the fight against climate change: nature.
Nature can provide up to 30 percent of the mitigation action needed to limit global warming to 1.5 degrees Celsius on average (2.7 F).
Deforestation accounts for about 11 percent of global greenhouse gas emissions caused by humans.
In Amazonian forests, 1 percent of the tree species sequester 50 percent of the carbon.
Current greenhouse gas emission trends put the world on course for a 3.7–4.8°C temperature increase by 2100, which would cause catastrophic effects. Even current international commitments fall short of the cuts required to limit warming to a relatively safer 2°C. Even if all emissions stopped immediately, effects would continue for centuries due to the cumulative impact of emissions already in the atmosphere. Meanwhile, nearly 800 million people globally, including in the Caribbean region, are currently considered especially vulnerable to the effects of climate change.
Conservation International envisions a world where nature’s contribution to addressing climate change is fully maximized. This means that nature not only lives up to its potential to mitigate climate change — tropical forests alone can deliver 30% of mitigation action needed to prevent catastrophic climate change — but also is fully deployed in places where ecosystems can help vulnerable populations adapt to the already-present and future effects of climate change.
Conservation International addresses climate change on two fronts:
- Helping communities adapt to the effects of climate change that are already happening and that are expected to accelerate, such as sea-level rise.
- Working to prevent further climate change by reducing emissions and enhancing carbon storage.
Coastlines are the front lines of climate change: By storing large amounts of carbon and protecting vulnerable coastal communities from rising seas, coastal ecosystems help us both mitigate and adapt to the effects of climate change. Mangrove coverage is broadly estimated to be in the order of 250,000 – 300,000 ha across the Northern coasts of Guyana, Suriname, French Guiana and Amapá State, Brazil, a region known as the North Brazil Shelf Large Marine Ecosystem (NBS-LME). The NBS-LME has one of the most contiguous and dynamic mangrove forests in the world. Covering 80-90% of this coastline, these mangroves stabilize ~1600 km of silt enriched sediments against erosion, mediate in-shore flooding, sustain fisheries and ensure coastal water quality. We are working towards increasing the number of protected areas in Guyana as well as supporting management systems for protected areas, including the Protected Area Trust. Through this new expansion, we will be helping to preserve Guyana’s coast and freshwater sources by providing support to regional coastal planning and development agencies.
Incorporating natural capital in national development strategies
The fertile soil, fresh air, clean water, lush rainforest and diverse animal life all belong to what we call our “natural capital.” CI helps governments and companies to understand the value of nature — in some cases, the value of stored carbon — creating powerful incentives to protect ecosystems. CI works with the government of Guyana towards integrating the value of natural capital in national development strategies and programs, including promoting participation from all sectors, improving natural resource management knowledge and skills and mainstreaming natural capital into the national accounts.
Bolstering Guyana’s National Protected Areas System (NPAS)
Guyana’s relatively new NPAS is central to the development of a green economy. The expansion of the system is a key portion of the Emission Reduction Programme included in Guyana’s Nationally Determined Contributions under the Paris Agreement on Climate Change as well as part of the commitments under the United Nations Convention on Biological Diversity. CI supported the Protected Areas Commission (PAC) in developing a strategy for the expansion of NPAS in keeping with Guyana’s international commitments. The strategy is built from the identification of gaps in the coverage of the current system. CI also helped bolster the capacity of the PAC and the Protected Areas Trust to more effectively manage the current and expanded systems. |
In the traditional textbook model of banking, the reserve requirement plays a vitally important role in determining the money supply. Since the reserve requirement dictates the amount of cash that banks must hold against customer deposits, it determines the extent to which deposits can be ‘multiplied’ up into new money. A higher reserve requirement forces banks to hold more cash, while a lower requirement permits more loan-making activity. Under this model, a reserve requirement of zero would be a calamitous event, leading to a potentially infinite creation of money and hyperinflation. The fact that disaster did not occur, then, when the American reserve requirement was set to zero in March 2020, is an indication of the difference between banking in theory and banking in practice.
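A back-of-the-envelope sketch of that textbook model shows why a zero requirement looks alarming on paper: the maximum deposit expansion is the initial deposit divided by the reserve ratio, which blows up as the ratio approaches zero. The deposit and ratios below are illustrative only.

```python
# Minimal sketch of the textbook money-multiplier model: an initial deposit
# can, in theory, support total deposits of (initial / reserve_ratio).
# The figures are illustrative only; as the text explains, real-world lending
# has never actually worked this way.

def max_deposit_expansion(initial_deposit: float, reserve_ratio: float) -> float:
    if reserve_ratio <= 0:
        return float("inf")   # the textbook "infinite money creation" case
    return initial_deposit / reserve_ratio

initial = 1_000.0
for ratio in (0.10, 0.03, 0.0):
    total = max_deposit_expansion(initial, ratio)
    print(f"reserve ratio {ratio:.0%}: max deposits from ${initial:,.0f} = {total:,.0f}")
```

With a 10 percent requirement the model caps deposit creation at ten times the initial deposit; at zero the cap disappears entirely, which is exactly the "calamity" the textbook predicts and reality did not deliver.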
Although the textbook model is still taught to burgeoning economists, it was never an accurate description of the banking system, and has become even more antiquated since the Global Financial Crisis of 2008. Since the formation of the federal funds market in the 1920s, banks have never been restricted by their level of reserves when making lending decisions. If a bank did not have enough required reserves to allow them to make a new loan, they could simply borrow the reserves from another bank. Banks were therefore limited only by the interest cost of borrowing reserves.
This interest rate was determined by the supply of reserves in the system, which was controlled by the Fed in pursuit of a target rate. The Fed had to be responsive, and supply the system with new reserves as demand for reserves rose. If they did not, the interest rate would rise higher than the Fed’s target. This mechanism explains why the Fed always supplied new reserves to the system as needed, and why the lending decisions of banks depended only on the profitability of new loans.
The original intent of reserve requirements was to ensure that banks had sufficient cash to avoid the panic of bank runs. Since reserves can be converted into currency, the early Fed hoped that making banks maintain a healthy amount of liquidity would head off the initial doubts that ultimately spiral into mass withdrawals. The collapse of many banks during the Great Depression, though, proved reserve requirements ineffectual at this task. The reserve requirement was repurposed as a tool to help the Fed estimate the demand for reserves in the banking system, and thus make maintaining the target federal funds rate easier.
Since 2008, however, central banking has entered a new era that no longer has a need for reserve requirements. The Global Financial Crisis saw the Fed, and many other central banks, purchase huge amounts of financial assets in an attempt to support markets. The Fed bought these assets with reserves, leaving commercial banks with reserve balances far in excess of the required level. Prior to 2008, the total level of reserves in the banking system stood at a little over $40 billion. As of August 2022, total reserves stand at $3.3 trillion. As the banking system has shifted from a reserve-scarce regime to a reserve-abundant one, the reserve requirement has become obsolete.
Still, the reserve requirement could have been maintained as a benign feature of the banking system, existing to placate politicians who fret about bank discipline. Due to a quirk in regulations, however, the reserve requirement can actually be harmful during market dislocations. Banks are required to maintain a stock of high-quality assets to ensure they have sufficient liquidity in a crisis. Failure to keep a certain amount of these assets can result in harsh regulatory action. Required reserves, however, are exempt from counting as high-quality assets, since they are ‘required’ to be maintained by banks even during a crisis.
As a regulatory position, this is illogical and foolish, since one of the purposes of required reserves has always been to serve as a source of liquidity during a crisis. During the 2020 market panic, when bank liquidity became a real concern, the Fed recognized that required reserves were a stock of high-quality assets being excluded from liquidity coverage requirements for no good reason. In March of that year, the Fed finally dropped the reserve requirement to zero, allowing all bank reserves to count as high-quality assets. This significantly expanded the market-making ability of banks and helped restore market confidence.
Although it took a financial crisis for the Fed to finally bury the reserve requirement, the tool has been dead since 2008. In a reserve-abundant regime, the Fed controls interest rates through its reverse repurchase facility (as a floor) and by paying interest on bank reserves (as a ceiling). Therefore, tools used to estimate reserve demand, such as the reserve requirement, are no longer useful. In fact, due to a strange regulatory rule, the reserve requirement turned out to be quite harmful. The Fed putting the reserve requirement to rest is indicative of the sea change in the practice of central banking over the past two decades.
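A minimal sketch of the floor-and-ceiling idea described above, using purely hypothetical rate settings (these are not actual Fed figures): arbitrage keeps the traded overnight rate inside the band set by the administered rates.

# Hypothetical illustration of a rate corridor: the reverse repo rate acts as a
# floor and interest on reserves as a ceiling for the traded overnight rate.

def corridor_rate(market_rate: float, rrp_floor: float, ior_ceiling: float) -> float:
    """Clamp the quoted market rate into the administered corridor."""
    return min(max(market_rate, rrp_floor), ior_ceiling)

for quoted in (0.001, 0.012, 0.030):          # hypothetical overnight quotes
    print(corridor_rate(quoted, rrp_floor=0.005, ior_ceiling=0.015))
# Output: 0.005, 0.012, 0.015 -> quotes below the floor or above the ceiling are pulled back inside.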
Genre is a category of artistic composition, as in music or literature, characterized by similarities in form, style, or subject matter. Genre is the very basis of all literature. One broad category of genre is non-fiction. Non-fiction is a type of writing that employs the literary techniques usually associated with fiction or poetry to report on persons, places, and events in the real world. Non-fiction is used in many ways. It is based on cold, hard facts: truth. Non-fictional writers have used this style to create amazing stories of wars, countries once great and powerful, and people who inspired the world. From Alexander the Great to JFK, everyone has a story. Non-fiction is used in movies, books, songs, poetry, and pictures.
Next is Persuasive Writing. With persuasive writing, the writer takes a position on an issue and argues for his or her side or against an opposing side. The writer uses facts and information to support his or her own argument while trying to influence readers' opinions.
Lastly, there is Descriptive Writing. Descriptive nonfiction employs all five senses to help the reader get a visual of what the writer is trying to describe.
A sub-genre is a more specific category within a genre; it focuses on a particular style of non-fictional writing. There are many sub-genres of non-fictional writing. Some main examples are almanacs, autobiographies, biographies, blueprints, letters, diagrams, textbooks, speeches, user manuals, diaries, encyclopedias, news articles, book reports, documentaries, maps, travelogues, blogs, reports, and essays. All deal with real accounts and/or something that is true.
Biographies are an important sub-genre of non-fiction writing. Some of the most influential knowledge we have comes from biographies. A biography is an account of someone's life written by someone else. Biographies have led to some of the most significant historical discoveries to this day. The ideas of Plato, Aristotle, and other great philosophers who could not record everything themselves have changed the world because of the biographies and accounts written by others.
Charles Bazerman, in his work Speech Acts, Genres, and Activity Systems: How Texts Organize Activity and People, explains that in genre systems, “text is embedded within structured social activities and depends on previous texts that influence the social activity and organization” (Bazerman).
A writer can influence a reader in many ways using many different strategies. Readers may be influenced by emotion, logic, and experience. A main way of influencing your reader is to support your claims thoroughly. At the beginning of the semester, and through part of the semester, I would come up with a good main point, but I would lack good supporting detail for my claims. After being in this class, I realize that I need to come up with good supporting evidence to back up my claims.
GENRE: The genre of Distant Waves is science fiction because it describes something that didn't happen and has futuristic things in it, such as earthquake machines, time travel, and other advanced devices that we don't have today. It also gives scientific explanations for everything.
Science fiction is a genre that has characteristics such as a futuristic setting and a human element. It is based on controversial areas of science or specific theories that have not yet been proven to be true. Science fiction works depict what may happen in the future as an effect of what technology and events exist presently. The genre of the short story There Will Come Soft Rains by Ray Bradbury is identifiable as science fiction through the setting, character and plot.
When I first learned that the second unit would be about writing in different genres, I was totally confused. What did this mean? Would I be doing different writings of mystery, romance, or science fiction? That was my small, narrow view of what a genre is. I have now come to learn that a genre is not simply whether something is fiction or nonfiction; a genre can be any type of writing, from a recipe to a resume. I have come to learn that there are so many different types of genres, all unique in their own respect and all written from a different perspective.
“Genres are types of texts that are recognizable to the reader and writers, and meet the needs of the rhetorical situation in which they function” (Swales 467). I asked Hilgenbrink, "What texts or books do police officers use every day?" He said, "I think the text or book that we use every day is the laws that we enforce." This is pretty simple, because the laws are stated in the state's constitution, which the police officers enforce. Genre is crucial to the development of a discourse community.
Have you ever wondered why you were forced to take an English course in college, where you had to learn about genre and different forms of writing? Have you also wondered when you are ever going to use this in everyday life? Well, I am here to tell you there is an answer to both of those questions, and it should make you think about what you are learning on a whole different level. Starting with genre, you should know that pretty much everything has a genre.
What exactly do the terms that make up this navigation chart mean, and how are they interconnected? Let's explore that now: genre is an identification and classification of writing.
Genre is the French word for 'type'. Type is the kind of text it is.
Genre Theory is used in the categorization of films. Genres depend on various factors such as the story line, who the director is, and what the audience expectations are. In using genre theory we create a shortcut in how we describe films. Genres are categorized and then sub-categorized depending on the story and plot. Fantasy is a genre described as, “Any film with obviously unreal, magical, or impossible situations, characters, or settings, often overlapping with various other genres, especially science fiction, but sometimes historical dramas.” (Goodykoontz, 2014) Fantasy is a genre that typically includes a crossover genre, such as sci-fi.
The comparison could start with the writers of books and music: the writer of a novel is called an author, and the writer of music is called an artist. Both try to set the emotion and tell the story, and both illustrate what the story is going to be about. They use different techniques and styles to communicate ideas with the reader and the listener. An author can produce material across different genres. For a book, the genres are fictional or nonfictional, while the genres of music can be rock, classical, country, gospel, pop, and jazz.
When someone asks you, “What’s your book about?” you probably sum up the plot of your story with two things: the main character, and the external conflict.
It’s about a nerdy teenager who stumbles on an illegal drug ring in his high school.
It’s about a woman trying to advance her career in a misogynistic, male-dominated industry.
It’s about a warrior who comes out of retirement to face a monster everyone thought was a myth.
A story’s external conflict is often the biggest thing readers look for when they’re choosing what to read next, and it’s the thing they’ll remember when they’re telling their friends how great your book was later. We’ll take you through everything you need to know about the different types of external conflict a story can have, how to find the right one for your plot, and some examples from successful novels.
What is external conflict in a story?
External conflict is the struggle that occurs between a character, usually the protagonist, and an outside force. The outside force might be another character, a group of people, a force of nature, or even a societal or cultural belief. External conflict forces the character to make choices that ultimately drive the events of the plot.
Sources of external conflict usually do one of two things: they either drive the plot by forcing the character to change, or they drive the plot by being resistant to change. For example, if your protagonist’s parent is faced with a sudden health crisis, the protagonist might have to change their routine and start taking on more work to support them. On the other hand, if the protagonist’s parent is refusing to give them permission to try something new, they’re creating an external conflict that is resistant to change.
In both cases, the character needs to think creatively and find ways to overcome this external problem.
What’s the difference between external and internal conflict?
Truly great stories are able to balance both internal and external conflict. The difference between the two is that external conflict comes from a character’s struggle against something in the outside world, while internal conflict is tension that originates from within the character itself—in other words, the classic story conflict of character vs. self.
An example of internal conflict might be if a character has to choose between two different paths in life, or if they’re faced with a choice that goes against their personal morals. Meanwhile, the external conflict might be the force that’s creating those choices in the first place—for instance, debt collectors, challenging family relationships, or professional rivalry.
The most engaging and effective stories will have layer upon layer of conflict that your protagonist needs to overcome in order to learn something about themselves and earn their happy ending.
The 5 types of external conflict that characters encounter
There’s no end to the challenges you can throw at your main character. External conflict comes in many forms, depending on what form these challenges take and where they’re coming from. Let’s look at the different types of external conflicts you can work with in your story.
1. Character vs. character
The character vs. character conflict happens when two characters in a story want things that are mutually exclusive to each other. You’ll probably recognize this type of conflict from the superhero genre, in which the main character is constantly battling other characters who are looking to cause trouble. Whether it’s murder, armed robbery, or just generally inciting chaos, the villain has an end goal which is in direct opposition to the hero’s goal: keeping their city safe.
This type of external conflict is one of the oldest and most enduring narratives, probably because it lends itself so well to the oral tradition. If you're sitting around a bonfire with your fellow tribe members, the story of how one warrior overcame an enemy warrior from the other side of the river is more engaging than the story of how one warrior battled with himself over what to have for breakfast.
However, character vs. character conflicts aren't always a battle between good and evil. Two good characters (or two evil ones) can want different things that put them in conflict with one another. For example, two colleagues might be competing for the same promotion, or a high schooler might be aiming to apply for a college overseas while their parents want them to apply for one closer to home.
2. Character vs. nature
The character vs. nature conflict occurs when the protagonist is faced with an impersonal force of the natural world like a tornado, hurricane, or flood. It could also be an inhospitable landscape, like being lost in the wilderness or stranded on a deserted island. In these instances, the hero of the story needs to adapt, overcome, or escape in order to protect themselves and their loved ones.
An important thing to note about nature conflicts is that the antagonistic force isn’t acting out of any kind of intentional malice. If a storm tears down your main character’s house, it’s nothing personal; it’s just going about its day, doing what storms do. This means your hero is facing something with a fundamental lack of agency and humanity… which can be even scarier than facing a murderous supervillain.
This type of external conflict is a great vehicle for character-driven stories, because the way people react in times of unstoppable crisis will reveal a lot about who they truly are.
3. Character vs. the supernatural
The character vs. the supernatural conflict became popular in the Victorian era, and still fascinates readers today. This type of conflict happens when the main character is faced with something more than the world we know—for instance, ghosts, vampires, cursed objects, and so forth (although not generally considered “supernatural,” alien races would also fall under this type of conflict).
Like the nature conflict, supernatural conflicts force the protagonist to adapt very quickly to a new way of looking at the world. Suddenly, none of the old rules apply.
This type of conflict is also very flexible. You can use it to create a fast-paced, action-packed story, or a deep, emotional exploration of what it really means to be human.
4. Character vs. society
The character vs. society conflict involves the hero coming up against a political system, societal convention, or cultural belief. The antagonistic force in these stories is a large, bodiless problem with the world that is much bigger than your main character. This makes the society external conflict similar to the nature conflict. If your story’s antagonist isn’t a government system but a single political leader, that would be a character vs. character conflict instead.
Society conflicts are very popular in dystopian literature. For example, The Handmaid’s Tale shows the protagonist fighting against a broken system designed to oppress women. However, this type of conflict can be effective in any genre to draw attention to cultural issues like racism, homophobia, gender disparity, or class divides. Stories with this conflict explore what is wrong in the world and how it might be overcome.
5. Character vs. fate
The character vs. fate conflict happens when a character struggles against their own destiny. This might be a literal destiny as laid out by a prophecy, oracle, or divination; or, it might be a social destiny, such as inheriting bad habits or bad luck from one’s family. In these types of stories, the main character wants to make their own choices, but fears there was never any real choice at all.
This conflict type lends itself well to tragedy, because the act of trying to outrun one’s fate is often the hero’s undoing. However, this can also be a good conflict to use when exploring the values of tradition vs. innovation, or heritage vs. independence.
Examples of external conflict in literature
To see how other writers have made external conflict work in their stories, let’s look at a few examples from classic and contemporary literature.
External conflict in Pride and Prejudice
A study in both external and internal conflicts, Jane Austen’s classic satirical romance pits its characters against both the social norms of the day and each other. The heavily matriarchal family is limited by society’s expectations of them and its laws: if their father dies, their home gets shuffled off to a distant male relative, instead of the five daughters who grew up there! Unable to change the rules, the main characters need to find the best possible path through them.
The novel also features several layers of character conflict, in which the main characters work towards goals that conflict with and undermine each other's. The clearest example of this is Mrs. Bennet, the family matriarch, who desperately wants to see her daughters married off. Trifles like love and happiness take a back seat.
External conflict in Macbeth
One of the most famous examples of the character vs. fate conflict, this Shakespearean play follows the titular protagonist who tries to outwit his destiny. At the beginning of the story, Macbeth meets a trio of witches who tell him one day he’ll be king. Pretty cool, he thinks, so he kills poor King Duncan in his sleep and takes the Scottish crown.
Unfortunately, the witches have a couple caveats about Macbeth’s ultimate fate. He spends the rest of the play trying to outmaneuver the prophecy about his death, which, in classic tragedy form, only leads to his undoing.
External conflict in The Ocean at the End of the Lane
In one of the most powerful examples of character vs. character conflicts, Neil Gaiman’s novel sets its young hero against a villain who is more powerful than him in every way. In typical “David and Goliath” style, the protagonist struggles to overcome an adversary that seems unstoppable. The character’s journey is all about discovering his own inner strength and finding creative ways to outwit his enemy.
How to develop external conflict in your story
If you’re looking to add external conflict to your story, there are a few things you can keep in mind. A well-developed central conflict can create tension, enhance character development, and keep readers engaged every step of the way.
Consider what’s at stake
For your story’s external conflict to be effective, your character needs something to lose. They need a good reason to stand and face the conflict, rather than turning tail and making a fresh go of it somewhere else. What’s really at the heart of the problem?
If we look at Pride and Prejudice, above, Mrs. Bennet and her daughters are at loggerheads over Mrs. B's need to get her daughters safely shacked up. But, it's hard to deny that the dame has a point: if her daughters don't find a husband—any husband—they're going to end up living in a cardboard box.
Often in life, the easiest way to avoid a conflict is to simply walk away. But, if your character walks away, there’s no story. Ask yourself what’s keeping them there and what they stand to lose.
Consider your character’s strengths and weaknesses
The best external conflicts occur when they play off the protagonist’s inherent strengths and weaknesses. The core conflict will force the character to face the weaknesses that have been holding them back, and discover new strengths they didn’t know they had.
For example, if your protagonist is very shy and introverted, the story’s conflict might be something that forces them to speak up against oppression. Or, if your character is someone who is physically very strong, they might come up against someone even stronger and have to look for creative new ways to defeat their enemy.
A good story is all about growth and discovery, so the story’s conflict should make the main character challenge themselves in ways they never expected.
Consider your story’s internal conflict
A story’s internal conflict is just as important, if not more so, than its external conflict. Truly compelling stories balance both and use them to inform each other. Your story’s external conflict might create an internal struggle for your character, or their internal struggle might lead to an unexpected external conflict.
For example, maybe your character is struggling with the inherent ethics of their workplace. That’s an internal conflict. But their uncertainty could lead to secondary conflicts like a falling out with a coworker, a financial crisis if they lose or leave their job, or a societal conflict as the protagonist begins a movement against the company. This can lead to even more internal conflict as they consider whether or not they’ve done the right thing or deal with the guilt of letting down others.
When developing the external conflict of your story, think about how it affects the main character’s mental well-being and their relationships with those around them.
External conflicts move the plot forward
External conflict occurs when characters struggle for or against change. They either want something to be different, or external forces are making things different and your protagonist doesn’t like it. In either case, the threat of change or stagnation pushes your characters to make choices—and choices are what carry your story. |
The importance of conserving the wildlife of the Bukit Lawang River
The Bukit Lawang River Wildlife Conservation, located in the heart of Sumatra, Indonesia, is a critical area for the protection of endangered species and the preservation of their natural habitats. With its lush rainforest, diverse wildlife and vital river ecosystem, Bukit Lawang is a bastion of biodiversity in Southeast Asia.
Threats to Bukit Lawang
Despite its ecological importance, Bukit Lawang faces threats from several sources. Deforestation, illegal logging and poaching pose significant risks to wildlife and the environment. Additionally, the expansion of agricultural and urban areas has further encroached on the natural habitats of this region, exacerbating the challenges faced by local flora and fauna.
Impacts of deforestation
Deforestation has led to significant habitat loss for many species, including the critically endangered Sumatran orangutan. With the disappearance of their natural habitats, these magnificent animals are threatened with extinction.
Illegal exploitation and poaching
Illegal logging not only harms the environment, but also allows poachers easier access to the forest and endangers the lives of many vulnerable species. Poaching has caused a serious decline in the population of Sumatran tigers, rhinos and elephants, pushing them to the brink of extinction.
The conversion of land for agricultural and urban purposes contributes to the fragmentation of wildlife habitats and disrupts the balance of the ecosystem. As a result, the survival of various species is threatened and the overall biodiversity of the region is compromised.
Conservation efforts in Bukit Lawang
In response to these challenges, many organizations, local communities and government agencies have come together to implement conservation efforts in Bukit Lawang. These initiatives focus on protecting endangered species, restoring their habitats, and promoting sustainable practices to ensure the long-term survival of wildlife and the environment.
Local communities play a crucial role in the conservation of Bukit Lawang. Through community initiatives, such as ecotourism and sustainable agriculture, residents are empowered to become stewards of the forest and active participants in preserving the region’s natural wealth.
Ranger programs have been established to patrol forests, monitor wildlife activity, and combat illegal logging and poaching. These dedicated rangers act as frontline defenders of Bukit Lawang’s biodiversity, safeguarding habitats and ensuring the safety of threatened species in the conservation area.
Education and awareness programs are essential to foster a thorough understanding of the importance of conservation among local people and visitors. By fostering a sense of environmental stewardship, these initiatives inspire people to become advocates for the protection of Bukit Lawang’s unique ecosystems.
The impact of conservation efforts
Concerted conservation efforts in Bukit Lawang have yielded positive results in safeguarding the area’s wildlife and natural habitats. These initiatives have not only helped protect endangered species but also contributed to the sustainable development of local communities and the preservation of the ecosystem.
Thanks to dedicated conservation programs, populations of endangered species such as the Sumatran orangutan and tiger have shown signs of recovery. These encouraging results give hope for the long-term survival of these emblematic animals of Bukit Lawang.
The promotion of ecotourism and sustainable agriculture has provided economic opportunities for local communities, providing alternative livelihoods consistent with conservation principles. By diversifying their sources of income, communities are less dependent on activities that harm the environment, thus promoting a harmonious relationship between humans and nature.
Conservation efforts have contributed to the restoration and preservation of Bukit Lawang’s natural habitats, thereby improving the overall ecological integrity of the region. The conservation area serves as a model of sustainable environmental management, demonstrating the positive outcomes of conservation for the benefit of wildlife and human well-being.
Bukit Lawang River Wildlife Conservation is a shining example of the importance of preserving our natural heritage. Through collaborative efforts, we can address the threats to this biodiversity hotspot and ensure a future in which both wildlife and ecosystems thrive. By supporting and promoting the conservation of Bukit Lawang, we can have a profound impact on the longevity of the region’s unique biodiversity and contribute to the well-being of future generations.
Questions and answers
Q: How can individuals contribute to the conservation of Bukit Lawang?
A: Individuals can support conservation efforts by practicing responsible tourism, supporting sustainable products and raising awareness about the plight of Bukit Lawang’s wildlife and habitats. Additionally, donating to reputable conservation organizations and participating in volunteer programs are effective ways to make a positive difference. |
The Hebrew language has a rich and fascinating history that spans over thousands of years. From its ancient roots to its modern-day usage, Hebrew has undergone many changes and transformations, evolving into the language we know today.
The earliest form of Hebrew can be traced back to the 10th century BCE, when it was used as the spoken language of the Israelites in the region that is now Israel. The earliest written records of Hebrew are found in the Hebrew Bible, also known as the Old Testament, which was written in a script known as Paleo-Hebrew. This script was used until the Babylonian exile in the 6th century BCE, after which the script was changed to Aramaic.
After the Babylonian exile, Hebrew continued to be used as the spoken language of the Jewish people, but it gradually fell out of use as a written language. During this period, known as the Second Temple period, Hebrew was replaced by Aramaic as the dominant written language. However, Hebrew remained the language of Jewish religious texts and prayer (to this very day!).
In the 2nd century CE, Hebrew began to be revived as a spoken language, thanks to the efforts of the Jewish scholars known as the Tannaim and the Amoraim. These scholars created a new system of grammar and vocabulary, known as Mishnaic Hebrew, which was used to comment on and interpret the Hebrew Bible.
In the 6th century CE, Hebrew underwent yet another transformation, with the development of the square script, which is still in use today. This script, known as the Ashuri script, was used in the writing of the Talmud and other Jewish texts.
During the Middle Ages, Hebrew continued to be used as the language of Jewish religious texts and prayer, but it gradually fell out of use as a spoken language. However, in the 19th century, Hebrew began to be revived as a spoken language once again, thanks to the efforts of Jewish scholars and activists who sought to revive the language as a way of preserving Jewish culture and identity.
Today, Hebrew is the official language of the State of Israel and is spoken by over 9 million people. It is taught in schools, used in the media, and is the language of commerce and government. Hebrew has undergone many changes throughout its history, but it remains a vibrant and living language that continues to evolve and adapt to the needs of its speakers.
In conclusion, the Hebrew language has a rich and fascinating history, shaped by the political and cultural events that defined the Jewish people, and it has undergone many changes throughout the centuries. From ancient times to the modern day, Hebrew has always been the language of Jewish identity, culture, and religion. It is amazing to see how it has evolved and how it is spoken by millions of people today.
If you are looking to learn this wonderful language as it is used today, you are more than welcome to check out our bestselling "Hebrew For Beginners" online course on Udemy.
As adults get older, their body functions decline. This can cause a build up of harmful substances, called reactive oxygen species, which can damage the cells: the process is called oxidative stress. Luckily, the body uses superhero chemicals called antioxidants to fight against oxidative stress, with the most common being a chemical called glutathione. We were curious to know whether glutathione levels change with age, and how. In previous studies, some researchers measured glutathione levels in the brains of healthy individuals and in the preserved brains of people that had passed away. Other researchers measured glutathione levels in the blood. We analyzed all the results to see how they fit together. Compared to young adults, glutathione levels in older people were either higher, lower, or unchanged depending on the brain region scientists looked at. In blood, glutathione levels were usually lower with increasing age. This means that oxidative stress contributes to aging by damaging the cells in different parts of the brain and in the body, and that the superhero chemical provides protection by fighting oxidative stress.
Aging Changes Our Bodies
As we go from young adulthood to middle age and then to older adulthood, our bodies change physically. At older ages, some of our bodily processes start to break down. One such bodily process, known as metabolism, refers to all the chemical reactions that take place continuously in the body and make life possible. Problems with metabolism that naturally happen as people age can result in the overproduction of molecules called reactive oxygen species. At high levels, reactive oxygen species can trigger a harmful process known as oxidative stress, which damages cells and can result in increased risk of disease with age. Luckily, the body has a system in place to protect us from this harm. Superhero chemicals called antioxidants are produced to protect the cells of the body and brain from oxidative stress. The most common antioxidant is called glutathione.
Glutathione levels in the brain can be measured using a brain imaging technique called magnetic resonance spectroscopy (Figure 1A). This technique measures the different chemical concentrations inside the brain, and is painless and extremely safe. In addition, laboratory tests can be used to measure glutathione levels in various tissues of the body, such as the blood, or in the preserved brains of people who have passed away (Figure 1B). Researchers have previously found that aging causes changes in glutathione levels, but no one had ever looked to see whether all the completed studies were consistent with each other. Therefore, we searched previously published scientific papers to understand how glutathione levels change in the brain and blood in adulthood, which could tell us about changes in oxidative stress as we age. Adulthood is composed of young adults (18–39 years of age), middle-aged adults (40–59 years of age), and older adults (60+ years of age).
Understanding Glutathione Changes in Brain and Blood
We systematically searched through all scientific articles about glutathione published to date in the PubMed scientific database. We used several keyword combinations in our search and found 32 studies that investigated how glutathione levels in the brain and blood vary in healthy aging. Note that a person is considered a healthy ager when his or her physical and mental health, independence, and quality of life are maintained throughout life (World Health Organization, 2020).
Glutathione Can be Accurately Measured in the Brain
The human brain has four sections, called lobes. They are the occipital, temporal, parietal, and frontal lobes. Each lobe is made up of several sub-regions, which we will not describe in detail here. Some well-known functions of the four lobes are as follows: the occipital lobe is the master of vision; the temporal lobe is involved in listening and memory; the parietal lobe combines information from all our senses to understand what is happening around us; and the frontal lobe is the boss of planning, calculating, and controlling our emotions. The cerebellum, located in the back bottom part of the brain, is in charge of the coordination of movements (Figure 2).
Reproducibility is very important to scientists. It means that a scientific finding can be repeated or reproduced across multiple studies. To be reliable and accurate, scientific results must be reproducible. Twelve studies of healthy adults (18+ years of age) investigated the reproducibility of glutathione measurements in 12 sub-regions of the brain, using magnetic resonance spectroscopy (Figure 3A). The studies scanned the same participants at least twice, and evaluated how similar the results were to each other. If there was little to no difference between measurements from each scan, the results were considered to have good reproducibility. Overall, measures of glutathione levels were found to have good to excellent reproducibility across all brain areas studied. The reproducibility of glutathione measurements throughout the brain means that we can trust the results and use them to answer complex questions, such as how brain glutathione levels change with age.
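To get a feel for what "little to no difference between measurements" means in practice, here is a toy example with made-up numbers (the published studies may have used more formal statistics than this simple percentage comparison).

# Toy reproducibility check: scan the same three people twice and compare readings.
scan_1 = [1.10, 1.25, 0.98]   # hypothetical glutathione levels from the first scan
scan_2 = [1.12, 1.22, 1.01]   # hypothetical levels from the second scan

for first, second in zip(scan_1, scan_2):
    mean = (first + second) / 2
    percent_difference = abs(first - second) / mean * 100
    print(f"{percent_difference:.1f}% difference between the two scans")
# Small percentage differences for every person suggest good reproducibility.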
Changes in Brain Glutathione Levels With Age
From studying published papers, we found that changes in brain glutathione levels depended on which brain region was examined. Specifically, we found that glutathione levels were decreased in 4 out of 10 of the brain sub-regions evaluated in older (60+ years of age) adults compared to young (18–39 years of age) adults. Glutathione levels increased in 3 out of 10 brain regions and did not change in 3 out of 10 regions (Figure 3B). Our hypothesis is that glutathione may increase with age in some brain regions as the brain’s way to fight the increasing production of reactive oxygen species. Reduced glutathione in other areas might mean that the brain’s glutathione production cannot keep up, probably leading to oxidative stress. This hypothesis still needs to be tested experimentally to see if it is correct. It is important to note that glutathione levels might also vary in other brain regions that have not yet been investigated, so we recommend additional work on this topic.
What About the Blood?
Since blood travels around the body to reach every organ, glutathione levels in the blood tell us about the amount of oxidative stress experienced over the entire body. The scientific papers we examined looked at changes in glutathione levels in two parts of blood: plasma and serum. Plasma is a yellow fluid in which molecules and chemical compounds such as nutrients and proteins are suspended (Figure 3C). Serum is the pale-yellow liquid that remains after all cells and the clotting proteins that help stop bleeding are removed from the plasma (Figure 3C). In both plasma and serum, glutathione level changes in the blood were more consistent than the changes seen in the brain. A majority of the papers we examined reported lower levels of glutathione in older adults compared to young adults (Figure 3C). Since glutathione is produced throughout the body, the findings in blood tell us about what is happening in the entire body, not just the brain.
Why is This Work Important?
So now you know that glutathione is the most abundant of the superhero chemicals called antioxidants, and it helps to fight and prevent damage to cells caused by oxidative stress. Our review of scientific papers showed that glutathione measurements in the brain are reproducible, and that glutathione levels can be reliably measured in both brain and blood. Glutathione levels are altered in the brain as people age, and glutathione levels in the blood tend to decrease with increasing age, which might reflect the presence of oxidative stress.
Since everyone gets older, people might think that there is nothing they can do about these changes. Luckily, some studies have shown that it is possible to increase the levels of glutathione in the brain and blood, for example by taking dietary supplements [2, 3] or by exercising. These studies give us hope for the future, as they demonstrate that we might not have to suffer from decreasing glutathione levels as we age. In addition, new techniques and new knowledge are emerging, such as more powerful brain scanners that provide doctors and scientists with high-quality images of glutathione levels in the living brain. Finally, many fruits and vegetables naturally contain antioxidants, so eating a healthy diet can help maintain good health. So remember to eat your grapes, berries, and broccoli, because after all, who would not want to be a superhero and fight oxidative stress?
Metabolism: All the chemical changes happening in the cells that convert food into energy, to support the life of an organism.
Reactive Oxygen Species: A family of unstable oxygen-containing molecules that are continuously created and can damage other molecules present in cells.
Oxidative Stress: Stress and damage that cells experience when reactive oxygen species build up faster than antioxidants can control them.
Antioxidants: Molecules that protect cells against oxidative stress by neutralizing reactive oxygen species before they can damage cells.
Glutathione: The most common of the antioxidants that fights oxidative stress.
Magnetic Resonance Spectroscopy: A brain imaging technique that safely and painlessly measures the concentrations of chemicals in the brain.
Reproducibility: A measure of how closely two (or more) measurements of the same thing agree. Something reproducible can be repeated multiple times.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This work was supported by the Fonds de Recherche Québec - Santé (FRQS) [Bourse de formation de maîtrise, 2022-2023] (FD), FRQS [Chercheur boursiers Junior 1, 2020-2024], Fonds de soutien à la recherche pour les neurosciences du vieillissement from the Fondation Courtois and the Quebec Bio-Imaging Network [#PP 19.20] (AB), and the Canadian Institutes of Health Research grant [#153005] [SN (grantee), SS].
[1] Detcheverry, F., Senthil, S., Narayanan, S., and Badhwar, A. 2023. Changes in levels of the antioxidant glutathione in brain and blood across the age span of healthy adults: a systematic review. NeuroImage. 2023:103503. doi: 10.1016/j.nicl.2023.103503
[2] Xue, Y., Shamp, T., Nagana Gowda, G. A., Crabtree, M., Bagchi, D., and Raftery, D. 2022. A combination of nicotinamide and D-Ribose (RiaGev) is safe and effective to increase NAD+ metabolome in healthy middle-aged adults: a randomized, triple-blind, placebo-controlled, cross-over pilot clinical trial. Nutrients 14:2219. doi: 10.3390/nu14112219
[3] Choi, I.-Y., Lee, P., Denney, D. R., Spaeth, K., Nast, O., Ptomey, L., et al. 2015. Dairy intake is associated with brain glutathione concentration in older adults. Am. J. Clin. Nutr. 101:287–93. doi: 10.3945/ajcn.114.096701
The National Aeronautics and Space Administration has released an animation on their website that illustrates how average global surface temperatures have risen since record-keeping started in the late 19th century. The video helps to visually illustrate the rise of Earth’s temperatures, spanning a 135-year period from when temperature records were first recorded, in 1880, through 2015, the hottest year on record.
The baseline average used in the video was derived from temperature averages from 1951 through 1980, with blue colors representing below-average trends, and orange representing above average temperatures.
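As a minimal sketch of how such an anomaly map is built (all temperatures below are invented for illustration, not NASA data), each year's value is simply compared against the 1951–1980 mean.

# Hypothetical global mean temperatures in degrees Celsius (not real data).
baseline_years = {1951: 14.0, 1965: 14.1, 1980: 13.9}
baseline = sum(baseline_years.values()) / len(baseline_years)   # 1951-1980 average

observations = {1880: 13.8, 1997: 14.3, 2015: 14.9}
for year, temperature in observations.items():
    anomaly = temperature - baseline
    colour = "orange (above average)" if anomaly > 0 else "blue (below average)"
    print(year, round(anomaly, 2), colour)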
The National Oceanic and Atmospheric Administration also released a similar video, showing the same warming trend over the past 135 years.
After studying the assigned reading The Handbook of Communication Science, Second Edition, Chapter 15: Mass Media Effects and considering one of the five categories of media effect theories mentioned in the article (learning, socialization, selective exposure, selective presentation, and perceived effects—remember, these are the categories, not the names of the theories themselves), identify one of the theories listed in the reading and answer the following questions or prompts.
A) Does media directly influence individuals? Explain your answer
B) Which of the mass communication theories do you feel most accurately portrays your media experiences? Why? Be sure to provide an example that supports your opinion.
C) How involved should the government be in protecting us from media effects? Where do you draw the line between free speech and indecency? Is censorship ever warranted?
Support your responses with research from the Learning Resources. Use APA in-text citations where necessary and cite any outside sources. Create an APA reference list at the end of the document. |
“Any emerging disease in the last 30 or 40 years has come about as a result of encroachment into wild lands and changes in demography.”
THERE’S a term biologists and economists use these days — ecosystem services — which refers to the many ways nature supports the human endeavor. Forests filter the water we drink, for example, and birds and bees pollinate crops, both of which have substantial economic as well as biological value.
If we fail to understand and take care of the natural world, it can cause a breakdown of these systems and come back to haunt us in ways we know little about. A critical example is a developing model of infectious disease that shows that most epidemics — AIDS, Ebola, West Nile, SARS, Lyme disease and hundreds more that have occurred over the last several decades — don’t just happen. They are a result of things people do to nature.
Disease, it turns out, is largely an environmental issue. Sixty percent of emerging infectious diseases that affect humans are zoonotic — they originate in animals. And more than two-thirds of those originate in wildlife.
Teams of veterinarians and conservation biologists are in the midst of a global effort with medical doctors and epidemiologists to understand the “ecology of disease.” It is part of a project called Predict, which is financed by the United States Agency for International Development. Experts are trying to figure out, based on how people alter the landscape — with a new farm or road, for example — where the next diseases are likely to spill over into humans and how to spot them when they do emerge, before they can spread. They are gathering blood, saliva and other samples from high-risk wildlife species to create a library of viruses so that if one does infect humans, it can be more quickly identified. And they are studying ways of managing forests, wildlife and livestock to prevent diseases from leaving the woods and becoming the next pandemic.
It isn’t only a public health issue, but an economic one. The World Bank has estimated that a severe influenza pandemic, for example, could cost the world economy $3 trillion.
The problem is exacerbated by how livestock are kept in poor countries, which can magnify diseases borne by wild animals. A study released earlier this month by the International Livestock Research Institute found that more than two million people a year are killed by diseases that spread to humans from wild and domestic animals.
via New York Times – Jim Robbins
Small bugs of the rain forest have many things to worry about, assuming they are capable of anxiety. But surely some of their more feared predators are velvet worms, a group of ancient animals that spit an immobilizing, gluelike material onto prey before injecting them with saliva and chomping down.
It turns out the velvet worm family is more diverse than thought: A new species has been found in the jungles of Vietnam. Unlike related velvet worms, this species has uniquely shaped hairs covering its body. It reaches a length of 2.5 inches (6 centimeters), said Ivo de Sena Oliveira, a researcher at the University of Leipzig, Germany, who along with colleagues describes the species in Zoologischer Anzeiger (A Journal of Comparative Zoology).
The paper and related work by Oliveira suggest thousands of unknown species of these creatures are waiting to be found throughout the world's tropical rain forests, he said. Research by Oliveira in the Amazon rain forest alone suggests there may be one new species of velvet worm about every 15 miles (25 kilometers), he told LiveScience.
The animals are extremely difficult to find and little known, because they spend most of life hidden in moist areas in the soil, in rotting logs or under rocks, due in part to the fact that their permeable skin allows them to quickly dry out, Oliveira said. In some areas, "if you're not there at the right moment of the year, during the rainy season, you won't find them," he added. The rainy season is the one time of year this Vietnamese species exits the soil, he said.
Unlike arthropods (a huge group of animals that includes ants and spiders), velvet worms lack hard exoskeletons. Instead their bodies are fluid-filled, covered in a thin skin and kept rigid by pressurized liquid. This hydrostatic pressure allows them to walk, albeit very slowly, on fluid-filled, stubby legs that lack joints.
Their slowness works to their advantage. To hunt, they sneak up on other insects or invertebrates. And that's when the sliming begins — velvet worms like the newfound species hunt by spraying a "net of glue" onto their prey from two appendages on their backs, Oliveira said. This nasty material consists of a mix of proteins that impedes movement. "The more the prey moves, the more it gets entangled," he said.
Oftentimes the velvet worms will eat any excess "glue," which is energetically costly to make. Although the animals have been shown to take down prey larger than themselves, they often choose smaller creatures, likely to ensure they don't waste their precious bodily fluids, Oliveira said.
Fossils show that velvet worms haven't changed much since they diverged from their relatives (such as the ancestors of arthropods and water bears) about 540 million years ago, Oliveira said. Studies of velvet worms could help shed light on the evolution of arthropods, he added.
There are two families of velvet worm, one spread around the tropics, and another found in Australia and New Zealand. Members of the former group generally tend to be loners. But the other family may be more social. One 2006 study found that members of the species Euperipatoides rowelli can hunt in groups of up to 15, and that the dominant female eats first.
While it's not a surprise to find a new species of velvet worm, this is "great work by [these researchers] to actually characterize and name a new species from this region," said Nick Jeffery, a doctoral student at the University of Guelph who wasn't involved in the study.
The new species, Eoperipatus totoros, is the first velvet worm to be described from Vietnam, said Georg Mayer, a co-author and researcher at the University of Leipzig.
This species was first discovered and listed in a brief 2010 report by Vietnamese researcher Thai Dran Bai, but the present study is the first to describe the Vietnamese animal in detail, Oliveira said.
What is sludge?
Sludge is a semi-solid residual byproduct of industrial or refining processes. It consists of solids suspended in a liquid, with a large amount of water held between the solid particles. Drying this material reduces its volume and the moisture content of the biosolids in the sludge.
The water content of sludge produced during waste water treatment ranges between 97 and 99.5 percent. Sludge thickening increases the dry and solid content of sludge by reducing the water content with a low energy input. This task is typically carried out in a tank known as a gravity thickener.
What is gravity thickening?
Gravity thickening is one of the simplest ways to reduce the water content of sludge while using little energy. The process has the potential to reduce the total volume of sludge to less than half of its original volume.
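The reason a small rise in solids content cuts the volume so sharply is a simple solids balance: the solids mass stays the same, so the volume scales inversely with the solids concentration. A short sketch with illustrative figures (not plant data):

# Gravity thickening volume estimate, assuming all solids are retained in the thickened sludge.

def thickened_volume(initial_volume_m3: float, solids_in_pct: float, solids_out_pct: float) -> float:
    """V_out = V_in * (TS_in / TS_out), from conservation of solids mass."""
    return initial_volume_m3 * solids_in_pct / solids_out_pct

# Sludge at 98% water (2% solids) thickened to 96% water (4% solids):
print(thickened_volume(100.0, 2.0, 4.0))  # 50.0 m3 -> the volume is cut in half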
How is sludge dried using filter presses?
This is essentially a thermal drying process in which thermal energy is applied to the sludge in order for it to evaporate water. The drying process reduces the volume of the product, making it easier to store, transport, package, and sell.
There are two methods for drying sludge: direct and indirect. Indirect thermal drying is gaining popularity as a method of reducing sludge volume by removing moisture and achieving a dry solids content of 90%. The process has a low impact on the environment and produces a stabilized dry granular product that is simple to store, deliver, and use in agriculture.
Applications of Sludge Drying
Dried or treated sludge has a wide range of applications, the most common of which is agricultural land application. When the drying process to produce biosolids is completed, the biosolids act as a fertilizer for crop harvesting. Crops efficiently use the organic nitrogen and phosphorous found in biosolids because these plant nutrients are released gradually throughout the growing season. These nutrients will be absorbed by the crop as it grows.
How do filter presses function?
Sludge is dewatered by pressing it between a series of porous plates in a filter press (FP). The process extracts water from sludge by applying high pressures to sludge layers held between a series of 20–80 rectangular plates. The plates are recessed to allow for sludge filling, and each has a filter cloth with an effective pore size of less than 0.1 mm.
It is also the only dewatering technology capable of consistently achieving high concentrations of the dewatered solids cake product – between 35 and 45 percent DS (dry solids), depending on the origin of the feed sludge and the chemical conditioning used.
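A rough mass balance shows what that dry-solids figure means in terms of cake quantity; the feed figures below are illustrative, not taken from any particular plant.

# Filter press cake estimate, assuming essentially all solids report to the cake.

def cake_mass_kg(feed_mass_kg: float, feed_ds_pct: float, cake_ds_pct: float) -> float:
    """Cake mass = solids mass / cake dry-solids fraction."""
    solids_kg = feed_mass_kg * feed_ds_pct / 100.0
    return solids_kg / (cake_ds_pct / 100.0)

feed = 1_000.0                                   # 1 tonne of conditioned feed sludge at 4% DS
cake = cake_mass_kg(feed, feed_ds_pct=4.0, cake_ds_pct=40.0)
print(cake)                                      # 100.0 kg of cake at 40% DS
print(feed - cake)                               # 900.0 kg of water removed as filtrate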
How does a filter press work?
The filter press works by slowly filling the recesses between the plates with sludge before applying pressures of 7–20 bar over a 1–2 hour period. Under the applied pressures, water is forced out of the sludge, and the filtrate is returned to the wastewater treatment works influent.
The system is then flushed with air for 5–15 minutes to remove the majority of the residual water from the cake formed in the recesses. At this point, the filter cake can be washed to remove contaminants. After that, the plates are separated and the cake solids, which are 25–40 mm thick, are allowed to fall out.
The diaphragm/membrane press is a variation on the standard FP. A flexible diaphragm is sandwiched between the filter cloth and the supporting plate in this technology. Dewatering is added by pressurizing the diaphragm (up to 20 bar) at the end of the pressing cycle to expand the diaphragm and apply additional pressure to the cake trapped in the recesses.
The filter press has also been modified to operate as a hybrid filter press/dryer, with the filter plates heated by a hot water system. Pilot trials using this technology under appropriate operating conditions have reported DS concentrations above 95%, with cake dryness increasing with increased operational cycle time, as with conventional FP.
These results are applicable to an un-thickened thermophilic anaerobically digested sludge starting material, implying that raw sludge dewatering and drying can be accomplished in a single stage.
For the design and installation of STPs, ETPs, and other treatment systems, contact Netsol Water.
|Delivery length / details
|11 x 1 Hour Lectures
|11 x 1 Hour Workshops
|11 x 2 Hour Lectures
|Assessment length / details
|Example Sheets To be completed during the teaching semester
|3 Hours Supplementary Examination
On successful completion of this module students should be able to:
1. Describe the basic principles of gravitational, electrostatic and magnetic fields and apply these to numerical examples of simple systems.
2. Describe the basic properties of dielectric, magnetic, and electrically conducting materials.
3. Calculate the force on a charged particle in electric and magnetic fields and describe the motion of a charged particle in a uniform electric field.
4. Calculate the potential of a system of charged particles.
5. Describe the structure and function of resistors, and capacitors and employ phasor diagrams, vector methods and complex numbers to analyse AC circuits.
6. Describe basic principles underlying atomic, nuclear and particle physics and apply these to examples of simple systems.
The module considers the principles underlying gravitational and electrostatic fields and introduces magnetism, electricity and electric circuits. It also introduces basic concepts in atomic, nuclear and particle physics. Emphasis is placed on the solution of problems, and examples sheets include numerical exercises. This module prepares students for more advanced modules in Part 2.
The module discusses the forces arising from gravitational and electrostatic fields and describes these in terms of the inverse square law with illustrative examples. Associated fields and potentials are also described. Electric charge and current, magnetic fields and electromagnetic induction are used to describe the operation of electric circuits and the properties of dielectric and magnetic materials. The module also covers the basic physics underpinning atomic, nuclear and particles physics.
1. Kepler's laws.
2. Newton's law of Gravity.
3. Gravitational potential energy.
1. Electric fields and the laws of Coulomb and Gauss applied to different geometries of electrical charge distribution.
2. Electric potential versus electric field, equipotential surfaces.
3. Electrically conducting, semiconducting and dielectric materials.
4. Capacitors and electrical energy density.
1. Magnetic fields, current loops and magnetic materials.
2. The laws of Biot-Savart and Ampere applied to electric currents in wires and solenoids.
3. Electromagnetic induction (Faraday's Law and Lenz' Law), self inductance and magnetic energy density.
1. Current and resistance, Ohm's Law, resistivity.
2. DC circuits - resistors in series and parallel, internal resistance, energy and power.
2. Potential divider circuits and Kirchhoff's rules.
1. AC currents in resistive, capacitive and inductive circuits.
2. Reactance and impedance, transients. Analysis of AC circuits using phasor diagrams, vector methods and complex numbers.
3. Power and phase angle. RCL circuits in series and parallel and conditions for resonance.
ATOMIC, NUCLEAR AND PARTICLE PHYSICS
1. Nuclear masses and binding energies.
2. Radioactive decay.
3. Elementary particles, fundamental forces and the standard model.
Application of Number: All questions set in example sheets and formal exams have numerical problems.
Students will be expected to research topics within the module via the internet.
Personal Development and Career planning: The module will highlight the latest technological developments in these fields and will contribute to career development.
Problem solving skills are developed throughout this module and tested in assignments and in the written examination.
Directed reading will allow students to explore the background to the lecture modules. This will be addressed by weekly exercises that will also entail research in the library and over the internet.
Students work in groups in the workshops.
This module is at CQFW Level 4 |
Uncovering the Reality of American History
During the American Civil War, from 1861 to 1865, some 270,000 Union soldiers were held in Confederate POW camps. Of these, 22,576 died, about 8.4 percent of the total. More than half of them, 12,919, died in just one of the three major Confederate POW camps: Andersonville, about 59 miles east of Columbus, Georgia. This was portrayed as a war crime by the Northern press and many Northern politicians, but the causes of these deaths were neither mistreatment nor deliberate negligence. Nevertheless, the camp's commandant, the Swiss immigrant Captain Henry Wirz, was hanged after the war on November 10, 1865. According to a University of Missouri Law School study, he was convicted on the basis of bribed false testimony and under rules of evidence, testimony, and fairness that were blatantly and shamefully biased and distorted. His trial was probably one of the greatest miscarriages of justice in American history.
About 220,000 Confederate POWs were held in nine major Union POW camps during the War. Of these, at least 26,246, about 11.9 percent, died. This is a death rate nearly 42 percent higher than that of Union soldiers in Confederate camps. The highest absolute number of Confederate deaths occurred at Camp Douglas, near Chicago, Illinois, where deaths totaled at least 4,454. This number is incomplete. There are about 6,000 graves at Camp Douglas, and at least 5,600 are believed to be the graves of Confederate POWs. The camp was built for just 6,000 prisoners, but eventually over 12,000 were crowded into its confines. Camp Douglas was the largest Union POW camp, and over 26,000 Confederate POWs passed through Camp Douglas from 1861 to 1865.
Typical causes of death at both Union and Confederate POW camps were diarrhea, pneumonia, influenza, extreme upper respiratory infections, typhoid, smallpox, tuberculosis, measles, scurvy, mumps, malaria, cholera, yellow fever, and hospital gangrene. Poor nutrition, exposure to wet, cold, and heat, and lack of medical supplies and treatment were frequent aggravating factors. Mental despair was, according to many prisoner accounts, a major factor. However, deliberate cruelty and starvation were also factors in several Union camps, among them Camp Douglas and the even more notorious Elmira, New York, where 2,933 POWs died. In addition, Northern policies against prisoner exchange and against supplying badly needed medical supplies to Union soldiers in Confederate prison camps were a major factor in the Andersonville deaths. In general, Confederate guards were as starved for food and medical supplies as their Union prisoners, who got the same rations and died at the same rate. Toward the end of the war especially, Confederate forces, Southern civilians, and Confederate POW camps were experiencing severe shortages of everything.
I know the most about Camp Douglas because my great grandfather, John Berry Scruggs, and his brother, James, who served in John Hunt Morgan's Second Confederate Kentucky Cavalry regiment, were captured by Union forces on a raid into Indiana and Ohio in July 1863. They had been trapped on the Ohio side of the Ohio River at Buffington Island, between Union gunboats in the river and advancing Union infantry and cavalry, while trying to cross back over the Ohio into Kentucky. A Union gunboat shell landed near my great grandfather, and his horse reared up, threw him and then fell on him, breaking his leg. He and his brother were taken prisoner by advancing Union infantry and taken to Camp Morton in Indiana for wounded POWs. They were then taken to Camp Douglas, where they managed to survive until released at the end of the war.
My grandfather, Greene B. Scruggs, often spoke to me about what his father and uncle had endured at Camp Douglas. The thing that most impressed me as a boy was that they had to supplement their diet by catching, cooking, and eating the numerous rats attracted to the camp. Two other brothers survived many battles and hardships in two Alabama infantry regiments. All were born near Fountain Inn, South Carolina, but volunteered near Blountsville, Alabama, where the family had moved in the 1850s.
Confederate Prisons with one thousand or more Union deaths were:
Andersonville, GA 12,919
Salisbury NC 3,700
Danville VA 1,297
Union Prisons with one thousand or more Confederate deaths were:
Camp Douglas IL 4,454+
Point Lookout MD 3,587
Elmira NY 2,933
Fort Delaware DE 2,460
Camp Chase, OH 2,260
Rock Island IL 1,960
Camp Morton, IN 1,763
Alton, IL 1,508
Gratiot St., St. Louis, MO 1,140
Historian Thomas Cartwright has described Camp Douglas as "a testimony to cruelty and barbarism." Because of its miserable living conditions and increasing degrees of deliberate cruelty toward Confederate prisoners of war, the camp gained the title "Eighty Acres of Hell." Prisoners were intentionally deprived of adequate rations, clothing, and heating as punitive measures. From September 1863 to the end of the war, many were subjected to brutal tortures that often resulted in permanent maiming and death.
Unfortunately, Camp Douglas was situated on low ground, and it flooded with every rain. During the winter months, whenever temperatures stayed above freezing for long, the compound became a sea of mud. Fewer than a handful of the 60 barracks had stoves. Overcrowding and inadequate sanitation soon made the camp a stinking morass of human and animal sewage. Henry Morton Stanley, of the 6th Arkansas, who later in his illustrious career as an African explorer and journalist uttered the famous words, "Doctor Livingstone, I presume," had this to say of Camp Douglas: "Our prison pen was like a cattle-yard. We were soon in a fair state of rotting while yet alive." He later remarked that some of his comrades "looked worse than exhumed corpses."
Steadily, sickness and disease began to increase. By early 1863, the mortality rate at Camp Douglas had climbed to over 10 percent per month, more than would be reached in any other prison, Union or Confederate. The U.S. Sanitary Commission (a forerunner of the American Red Cross) pointed out that at that rate, the prison would be emptied within 320 days. One official called it an "extermination camp." The fall and winter of 1862-63 were very wet, cold, and windy. The majority of deaths were from typhoid fever and pneumonia as a result of filth, bad weather, poor diet, lack of heat, and inadequate clothing. Other diseases included measles, mumps, catarrh (severe sinus and throat infection), and chronic diarrhea.
Somewhere in excess of 317 Confederate soldiers escaped from Camp Douglas, over 100 of them being men of Morgan’s 2nd Kentucky Cavalry. Hundreds of Morgan’s men had been sent to Camp Douglas after being captured on their famous raid through Indiana and Ohio in July 1863. However, the daring escape of these Morgan cavalrymen in September 1863 resulted in retaliatory action. A reduction of rations and removal of the few barracks’ stoves were ordered from Washington. Eventually all vegetables were cut off. This resulted in an epidemic of scurvy described by R. T. Bean of the 8th Kentucky Cavalry. “Lips were eaten away, jaws became diseased, and teeth fell out.” Before authorities could correct the situation, many succumbed to the disease. In addition, an epidemic of smallpox raged through the camp. Lice were everywhere. Many prisoners had to supplement their diet by catching, cooking, and eating the all too abundant rats.
Being commandant of a prisoner of war camp was not considered a desirable position by most Union officers. During its four-year history, the camp had eight commanders. Most of these were honorable men, who later proved their worth in battle and in peace. The punitive policies and directives to reduce prisoner rations and impose other deprivations had come from the War Department. Early in the war, President Lincoln and Secretary of War Edwin Stanton termed all captured Confederates "traitors" and refused to recognize them as prisoners of war.
The first commander of Camp Douglas as a POW camp was Col. James Mulligan of the 23rd (Irish) Illinois Infantry. The prisoners respected Mulligan, even though an enemy, because of his heroic war record and honesty. He was a strict disciplinarian but always fair. With more prisoners pouring into Camp Douglas than could possibly be handled with efficiency and mounting administrative and sanitary problems, he was glad to take his regiment back to the field in June 1862. His valor and leadership soon won him a promotion to Brigadier General. Sadly, he was killed in action at Winchester, Virginia, in July 1864.
On August 18, 1863, Col. Charles DeLand was made commander at Camp Douglas, bringing with him the 1st Michigan Sharpshooters. In reprisal for escape attempts and other infractions and as a method of interrogation, he introduced several forms of torture, including hanging men by their thumbs for hours. Several died from this ordeal. He also introduced a torture called “riding the horse” or “riding Morgan’s mule.” Prisoners were forced to sit for many hours on the narrow and sharpened edge of a horizontal two by four and suspended by supports four to twelve feet high. Guards often hung weighty buckets of dirt and rock on their feet to increase the pain. This often caused permanent disabilities.
In March 1864, after a tour of duty at Camp Douglas distinguished by corruption and mismanagement as well as cruelty, DeLand and his regiment returned to the field. In May, during the Wilderness Campaign in Virginia, he was badly wounded and captured. Ironically, he was given every courtesy as a prisoner of war by his Confederate captors.
In May 1864, Colonel Benjamin Sweet took command, but the cruelties continued unabated, and rations were reduced even more. However, the appearance of the camp improved. During the 1864 election campaign, Sweet also managed to persuade Lincoln and the War Department to put Chicago, then a town of 110,000, under martial law to prevent a prison uprising supported by Southern sympathizers in Chicago. More than 100 civilians were arrested and jailed for criticizing Lincoln policies or on the mere suspicion of Southern sympathies, without the benefit of hearing or trial. Twelve died in prison before the end of the war. The uprising threat was vastly exaggerated and largely fabricated, but Sweet was promoted to Brigadier General in December for saving Chicago. At the end of the war, he received a commendation for a job well done.
Many people of Chicago and many Christian churches in the area offered relief to the prisoners at Camp Douglas. Until the Union government put a stop to the practice, many prominent people and local churches gave time, financial aid, and medicines to assist the post surgeon in the care of sick and destitute prisoners. The famous evangelist D. L. Moody was brought in to preach on several occasions. Some Confederate prisoners, however, complained of the high propaganda content of sermons by other preachers.
At the end of the war, the Confederate prisoners were offered transportation home by train, if they signed the Union loyalty oath. Otherwise, they would have to walk home. Most of the prisoners at Camp Douglas elected to walk home. By July 1865, the last POW had left Camp Douglas. The disgraceful history of Camp Douglas has been largely forgotten. Nothing remains of the camp but a monument and 6,000 graves at nearby Oak Woods Cemetery. |
The British Empire built colonies, overseas territories, and crown dependencies all over the world between the late 16th and the 20th centuries, claiming to be the biggest empire in history at its height. Although British colonial rule provided considerable modernization to the countries it conquered, it also obstructed democracy, equality under the law, and self-governance.
Several countries and campaigners are increasingly mounting pressure for the return of valuable items which they claim were pillaged by the British Empire. Repatriation campaigners claim that many of the cultural objects on exhibit in British museums were stolen from the colonized population.
According to human rights attorney Geoffrey Robertson, the British Museum, which is home to more than 8 million antiquities including the Rosetta stone and Benin Bronzes, has the highest concentration of stolen property.
Chika Okeke-Agulu, an art historian and professor at Princeton University, remarked that the Empire itself was a very paradoxical phenomenon that purported to provide so-called civilization to the colonial people, but at the same time developed institutions that were hostile to modernity.
“All the institutions associated with the emergence of the European middle class, like museums, depended on the extraction of cultural heritage and artifacts from all corners of the empire,” Okeke-Agulu told Insider.
“These museums were established in the age of the Empire as bragging spaces where they
showed off their collections from their imperial holdings.”
Below are some cultural artifacts which the British Empire looted from the African continent:
The British Museum’s Rosetta Stone is recognized as a monumental artifact that made it possible for scholars to decipher and comprehend Ancient Egyptian cultures and history.
The Rosetta Stone is a granodiorite stele that bears three copies of an edict that King Ptolemy V Epiphanes of the Ptolemaic dynasty issued in Memphis, Egypt, in 196 BC.
The texts at the top and in the middle are written in hieroglyphic and demotic characters, respectively, while the text at the bottom is written in Ancient Greek.
According to Okeke-Agulu, the stone was initially stolen from Egypt by Napoleon Bonaparte, who is well known for opening up the nation to the rest of Europe and igniting "Egyptomania" in the 19th century.
After the British defeated the French in Egypt in 1801, they seized the Rosetta Stone in turn.
The stone has since been the focus of repeated calls for restitution, but some experts think the British Museum is unwilling to give up one of its most well-known acquisitions.
After the Battle of Maqdala, the British took religious manuscripts from Ethiopians that they called the Maqdala Manuscripts.
According to Atlas Obscura, a British expeditionary army besieged the mountaintop citadel of
Maqdala in 1868, leading to the capture of more than 1,000 primarily religious manuscripts that were transported to Britain on the backs of 15 elephants and hundreds of mules. Some 350 of those manuscripts were eventually acquired by the British Library.
With the goal of restoring stolen goods to Ethiopia, the Association for the Return of the Maqdala Ethiopian Treasures (AFROMET) was established in 1999.
The organization has been successful in retrieving some objects, though its campaign continues.
The royal palace of the Kingdom of Benin, which is now Nigeria, was beautifully decorated with thousands of bronze statues that date back to the 13th century.
The British Empire, however, dispatched troops on a punitive expedition in 1897 to punish Benin rebels who had reacted against imperial authority. The Kingdom of Benin came to an end when the soldiers of the Empire ransacked and ravaged the city.
The British Museum now has more than 900 historical items from the former monarchy in its collection of “contested artefacts,” including more than 200 bronze plaques. Nigeria has repeatedly demanded the restoration of the bronzes since becoming independent in 1960.
The British Museum will loan the Benin Bronzes to Nigeria, but it has not yet committed to fully repatriating the objects.
The skull of Koitalel Arap Samoei, the Nandi leader, is a particularly terrible case. He fought against Britain’s railroad project across his territory, and British colonel Richard Meinertzhagen shot him dead in 1905.
Samoei's head was severed from his body and sent to London, and the skull is still on display in a museum in Britain. The plunder of thousands of works of African art has taken place throughout periods of conflict, but mostly through colonization by Western countries.
This account described the looting of African artifacts that took place during Britain’s anti-slavery expedition as well as the subsequent fight to have them returned. |
RAM stands for Random Access Memory. It is memory in which data can be stored and from which the user can access data directly, in any order. There are various types of RAM in a computer.
In modern computers, RAM is built from integrated circuits.
The different types of RAM are discussed below:
SRAM (Static Random Access Memory) stores each bit of data using a flip-flop. It is comparatively expensive to produce, but it is much faster than other types of RAM and, because it needs no refreshing, less power is required for its operation. It is volatile memory: the stored information is lost when the power is cut or removed. Each cell requires multiple transistors, and SRAM is primarily used as cache memory.
DRAM (Dynamic Random Access Memory) has memory cells built from a transistor paired with a capacitor. These cells need constant refreshing to retain their contents. DRAM is also volatile memory.
FPM DRAM (Fast Page Mode Dynamic Random Access Memory) is another of the types of RAM found in a computer; it is essentially the original form of DRAM. A bit of data is located by row and column, and that bit is read out completely before the next access begins. The peak transfer rate is approximately 176 MBps.
EDO DRAM (Extended Data-Out Dynamic Random Access Memory) does not wait for the previous access to complete; it begins locating the next bit as soon as the first one is found. Its peak transfer rate is about 264 MBps, and its overall performance is roughly five percent better than that of FPM DRAM.
SDRAM (Synchronous Dynamic Random Access Memory) is a notable type of RAM because it can boost a computer's performance: it stays on the row containing a requested bit and reads the subsequent bits in quick succession. Its peak transfer rate of about 528 MBps makes it faster than EDO DRAM, with overall performance roughly five percent better.
DDR SDRAM (Double Data Rate Synchronous Dynamic RAM) is a more modern type of RAM. It is very similar to SDRAM, except that it transfers data on both edges of the clock, doubling the available bandwidth. Its peak transfer rate is approximately 1,064 MBps.
Credit Card Memory:
It is a type of DRAM module specialized for notebook computers.
These types of memory need a battery to work; the battery is also required to maintain the memory contents.
Video RAM (VRAM) is RAM specialized for video adapters. Its functionality is somewhat similar to SGRAM, but it is more expensive.
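As a small practical aside (not something drawn from the article itself), the Python sketch below shows one way to read the total and available RAM on a Linux machine from /proc/meminfo. It reports installed capacity only; it cannot tell you which of the RAM types described above is fitted.

```python
# Minimal sketch: read total and available system memory on Linux.
# Works only where /proc/meminfo exists; values in the file are given in kB.
def read_meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.strip().split()[0])  # first field is the kB value
    return info

if __name__ == "__main__":
    mem = read_meminfo()
    print(f"Total RAM:     {mem['MemTotal'] / 1024:.0f} MB")
    print(f"Available RAM: {mem['MemAvailable'] / 1024:.0f} MB")
```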
These are the main types of RAM found in a computer, and new types continue to be developed for better performance. |
What is Quantum Computing?
Quantum computing is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is a device that performs such computation; it can be studied theoretically or implemented physically. The field of quantum computing is a sub-field of quantum information science, which also includes quantum cryptography and quantum communication. The idea of quantum computing took shape in the early 1980s, when Richard Feynman and Yuri Manin suggested that a quantum computer had the potential to simulate things that a classical computer could not.
The field developed further in 1994, when Peter Shor published an algorithm able to efficiently solve a problem underpinning asymmetric cryptography that is considered very hard for a classical computer. There are currently two main approaches to physically implementing a quantum computer: analog and digital. Analog methods are further divided into quantum simulation, quantum annealing, and adiabatic quantum computation.
Basic Fundamentals of Quantum Computing
Digital quantum computers use quantum logic gates to do computation. Both approaches use quantum bits, or qubits. These qubits are fundamental to quantum computing and are somewhat analogous to bits in a classical computer. Like a regular bit, a qubit can reside in the 0 or the 1 state; what makes it special is that it can also be in a superposition of the 0 and 1 states. However, when qubits are measured, the result is always either a 0 or a 1, and the probabilities of the two outcomes depend on the quantum state they were in.
Principle of Operation of Quantum Computing
A quantum computer with a given number of quantum bits is fundamentally very different from a classical computer composed of the same number of bits. For example, representing the state of an n-qubit system on a traditional computer requires the storage of 2^n complex coefficients, while to characterize the state of a classical n-bit system it is sufficient to provide the values of the n bits, that is, only n numbers.
A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer, on the other hand, maintains a sequence of qubits, which can represent a one, a zero, or any quantum superposition of those two qubit states; a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. In general, a quantum computer with n qubits can be in any superposition of up to 2^n different states. Quantum algorithms are often probabilistic, as they provide the correct solution only with a certain known probability.
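To make the 2^n scaling concrete, here is a minimal NumPy sketch (an illustration of the bookkeeping involved, not a model of any real quantum hardware) that stores the 2^n complex amplitudes of an n-qubit register in an equal superposition and simulates a single measurement.

```python
# Minimal sketch: an n-qubit state needs 2**n complex amplitudes, and
# measurement probabilities are the squared magnitudes of those amplitudes.
import numpy as np

n = 3
dim = 2 ** n                                            # basis states |000>, |001>, ..., |111>
state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # equal superposition of all states

probabilities = np.abs(state) ** 2                      # Born rule: p(x) = |amplitude(x)|^2
assert np.isclose(probabilities.sum(), 1.0)

# Simulate one measurement: the register collapses to a single n-bit outcome.
outcome = int(np.random.choice(dim, p=probabilities))
print(f"{dim} amplitudes stored; measured outcome: {outcome:0{n}b}")
```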
What is the Potential that Quantum Computing offers?
Quantum computing is such a specialised field that relatively few people have entered it, so there is a lot of room for development and a great deal of scope. Some of the areas into which it is making inroads today are:
- Cryptography – A quantum computer could efficiently solve the integer factorization problem that underlies much of today's asymmetric cryptography, for example by using Shor's algorithm. This ability would allow a quantum computer to break many of the cryptographic systems in use today
- Quantum Search – Quantum computers offer polynomial speedup for some problems. The most well-known example of this is quantum database search, which can be solved by Grover’s algorithm using quadratically fewer queries to the database than are required by classical algorithms (a back-of-envelope comparison follows this list).
- Quantum Simulation – Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate efficiently classically, many believe quantum simulation will be one of the most important applications of quantum computing.
- Quantum Annealing and Adiabatic Optimization
- Solving Linear Equations – The Quantum algorithm for linear systems of equations or “HHL Algorithm,” named after its discoverers Harrow, Hassidim, and Lloyd, is expected to provide speedup over classical counterparts.
- Quantum Supremacy
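As promised in the quantum search item above, here is a back-of-envelope comparison of query counts. The figures are idealized: an average classical search over N unsorted items needs on the order of N/2 queries, while Grover's algorithm needs roughly (pi/4) * sqrt(N) iterations.

```python
# Back-of-envelope illustration of Grover's quadratic speedup (idealized counts).
import math

for N in (1_000, 1_000_000, 1_000_000_000):
    classical = N / 2                      # average queries for unstructured classical search
    grover = (math.pi / 4) * math.sqrt(N)  # approximate optimal number of Grover iterations
    print(f"N = {N:>13,}: ~{classical:>13,.0f} classical queries "
          f"vs ~{grover:>8,.0f} Grover iterations")
```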
In conclusion, Quantum computers could spur the development of breakthroughs in science, medication to save lives, machine learning methods to diagnose illnesses sooner, materials to make more efficient devices and structures, financial strategies to live well in retirement, and algorithms to direct resources such as ambulances quickly. The scope of Quantum Computing is beyond imagination. Further developments in this field will have a significant impact on the world. |
“Rely on renewable energy flows that are always there whether we use them or not, such as, sun, wind and vegetation: on energy income, not depletable energy capital.”
Earth is an unusual and relatively small blue planet within the scale of the cosmos. It has been home to life for millions of years, but in the blink of a planetary eye, humankind has expanded almost exponentially, to 7,720,211,220 people at the time of starting to write this article.
The Earth’s forests and ocean ecology can theoretically absorb 7,000,000,000 tonnes of CO2 over a year, which means that every person alive on this planet has a personal budget of around 1 tonne of CO2.
In 2018, we emitted 38,297,000,000 tonnes of CO2, which is about 5.5 times the carrying capacity of the Earth. This is the root cause of climate change.
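The overshoot factor and per-capita budget quoted above follow from simple arithmetic; the short sketch below merely reproduces the article's own figures rather than introducing any new data.

```python
# Reproducing the article's arithmetic with its own figures.
population = 7_720_211_220        # people, as quoted above
absorption = 7_000_000_000        # tonnes of CO2 absorbed per year, as quoted above
emissions_2018 = 38_297_000_000   # tonnes of CO2 emitted in 2018, as quoted above

print(f"Per-capita budget: ~{absorption / population:.2f} tonnes of CO2 per year")
print(f"2018 emissions were ~{emissions_2018 / absorption:.1f} times the absorption capacity")
```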
“Imagine a world where everyone, everywhere lives happy, healthy lives within the limits of the planet, leaving space for wildlife and wilderness. We call this One Planet Living, and we believe it’s achievable.”
One Planet Living
The concept of “one planet living” crystallises the aim of a sustainable future and is the “ultimate net zero carbon target.” We only have one planet and therefore have to live within its renewable income, rather than its capital.
In 2002, Bioregional, together with the Peabody Foundation and architect Bill Dunster, completed BedZED, a prototype one planet living development. It used a fabric-first approach and the latest low-carbon biomass CHP technology of its time. Now, 17 years later, with a new biomass boiler, extensive use of PV and a green electricity tariff, it still achieves its net zero operational carbon target.
Apart from several notable low carbon examples, there has not been the impetus within successive UK governments and the construction industry to consistently deliver one planet living developments like BedZED.
We have wasted 20 years in which we could have been addressing climate change.
This appears about to change.
Net Zero Whole Life Carbon
In 2019, the UK Green Building Council (UKGBC) defined a process for achieving “whole life” net zero carbon.
There are two categories of carbon emissions within this whole life model:
– Operational carbon emissions, which are the emissions from the energy used to operate buildings: heating, lighting, cooling, ventilation, small power, computers, and the processes contained within the building.
– Embodied carbon emissions, which are the emissions associated with the resources and materials used in the construction of the building and its associated infrastructure.
Whole life carbon is the combination of the two.
We cannot, however, wait until 2050 to start this work, and in June 2019, the RIBA joined the worldwide movement and declared a climate emergency.
The key question is how fast can we achieve net zero realistically?
Net Zero Carbon Timeline
WilkinsonEyre is advising the RIBA on a net zero timeline, and we are in the process of redefining our internal design process to deliver this new reality in architectural practice.
Now: Anyone reading this article should contact their clients and offer to help identify opportunities to reduce energy use immediately. Post-occupancy evaluation (POE) research has indicated that immediate savings of 25% are not uncommon.
2020: In 18 months we need all governments to commit to a radical international effort to upgrade existing buildings to reduce energy demand by 50%, and to decarbonise the electricity grid. If we don’t start next year then we will not achieve the 2050 backstop.
2020: Building regulations in the UK should be changed to measure in-use operational energy performance and set a DEC C rating as the minimum target for all buildings. Embodied energy analysis and POE should be made mandatory.
2025: All new and refurbished buildings should reach 50% reduction in energy compared to benchmarks or equivalent of a DEC B rating, and better with the use of onsite renewables.
2030: All new and refurbished buildings should reduce energy use by 75% compared to benchmarks or equivalent DEC A rating, and achieve zero operational carbon emissions by the use of onsite and offsite renewables.
The Trillion Tree Challenge
Net zero whole life carbon can only be achieved in the short term by offsetting at planetary level. The trillion tree challenge is to offset 25% of the planet’s emissions by a network of projects which will be visible from space, including the sub-Saharan desert tree belt.
Net Zero Carbon Exemplars
The 2030 challenge is already being achieved, with notable examples below, and therefore we urge the construction industry to rise to the challenge to deliver a sustainable future now.
Lark Rise: This house designed by Justin Bere Architects has achieved net positive operational carbon and exports 35 kWh/m2 annually to the grid.
Lot 1 Eddington Village: The University of Cambridge, by WilkinsonEyre, is predicted to achieve net zero operational carbon.
Keynsham Civic Centre: This complex by AHR Architects and Max Fordham Engineers has achieved DEC A rating and a 52% reduction in operational carbon emissions.
Wembley Campus: The United College Group by WilkinsonEyre is a refurbished building that is seeking to achieve our 2030 target of DEC A rating now.
In the hour spent writing this article, the world population has risen by 30,000 and we have collectively emitted 4,371,803 tonnes of CO2.
The ultimate net zero carbon target for humankind is to live within the means of the Earth and to reduce CO2 emissions by 31 billion tonnes as soon as possible, or risk a planetary extinction event.
The examples above illustrate that we know what needs to be done and by when.
We must act now.
MAIN IMAGE: Conceptual sketch, section, College of North West London, Wembley. Image courtesy of WilkinsonEyre |
Complex ions are a fundamental aspect of chemistry, playing a crucial role in understanding advanced chemical reactions.
These intricate structures consist of a central ion surrounded by ligands, forming stable complexes.
Chemists study complex ions to comprehend the behavior of transition metals and their involvement in various fields like medicine and environmental science. By examining common examples such as chloride complexes, chemists can apply complex ion theory to selective precipitation cases.
Understanding the individual steps involved, including the use of prefixes to denote the number of ligands, is essential for accurate analysis.
Definition of Complex Ions:
Complex ions are a fascinating aspect of chemistry that involves the formation of unique structures and bonding.
These ions consist of a central metal ion surrounded by ligands, which are molecules or ions that donate electron pairs to form coordinate bonds with the central metal ion.
The formation follows principles from coordination chemistry. This branch of chemistry focuses on the study of compounds in which a central metal atom or ion is bonded to one or more ligands.
Ligands can be simple molecules like water or ammonia, or they can be more complex organic molecules.
The coordination between the ligands and the central metal ion results in the formation of a complex ion.
The ligands donate their electron pairs to form coordinate covalent bonds with the metal ion, creating an overall structure known as a coordination complex.
One key characteristic of complex ions is their unique properties, which arise from their structure and bonding. These properties can include different colors, magnetic behavior, and reactivity compared to their components.
Complex ions play crucial roles in various fields such as medicine, materials science, and environmental studies. For example, certain chemotherapy drugs use platinum-based complex ions to target cancer cells specifically.
Importance of Complex Ions in Chemistry
Complex ions play a crucial role in various aspects of chemistry. Let’s delve into the significance of complex ions and how they contribute to different fields.
Accelerating Chemical Reactions
Complex ions are essential for catalysis, which speeds up chemical reactions. They act as catalysts by providing an alternative reaction pathway with lower activation energy.
This allows reactions to occur more rapidly, making complex ions vital in many industrial processes.
Complex ions find applications in numerous industrial processes.
One such example is dye production, where complex ions are used to enhance color intensity and stability. They also play a significant role in pharmaceutical synthesis, aiding the formation of specific compounds with desired properties.
Designing New Materials
Understanding the formation of complex ions is crucial for designing new materials with specific properties. By manipulating the composition and structure of complex ions, scientists can create materials that exhibit unique characteristics such as magnetism or conductivity.
This knowledge enables advancements in fields like materials science and nanotechnology.
Insights into Biological Systems
Studying complex ions provides valuable insights into biological systems and their interactions. Many biological processes rely on the coordination of metal ions within proteins and enzymes. Understanding how these complexes form and function helps researchers comprehend essential biochemical reactions within living organisms.
Bonding in Complex Ions
The bonding in complex ions involves the formation of coordinate covalent bonds between a central metal ion and ligands.
These bonds are also known as dative covalent bonds, where the shared pair of electrons is donated by the ligand to the metal ion. The coordination number refers to the number of ligands bonded to the central metal ion in a complex ion.
Different types of ligands can result in varying bond strengths within a complex ion.
Some ligands form stronger bonds with metal ions compared to others due to their ability to donate electron pairs more effectively. This affects the stability and reactivity of complex ions, as stronger bonds are less likely to dissociate or react with other species.
The nature of bonding interactions influences the overall stability and reactivity of complex ions. The formation of coordinate bonds between the central metal ion and ligands leads to the creation of coordination complexes, also known as coordination compounds. These complexes play crucial roles in various chemical reactions and biological processes.
In coordination chemistry, understanding how different factors influence bond strength and stability is essential. Factors such as the charge on the metal cation, size and shape of both ligands and metal ions, and electronic configuration all contribute to determining the properties of complex ions.
Role of Ligands in Complex Ion Formation
Ligands play a crucial role in determining the geometry, color, magnetic properties, and reactivity of complex ions formed with transition metals. The choice of ligand influences various characteristics of these complexes.
Ligands determine the arrangement of atoms around the central metal ion within a complex ion. Different ligands can lead to different geometric structures, such as octahedral, tetrahedral, or square planar. This geometry is essential for understanding the properties and behavior of complex ions.
Influence on Physical Properties
The choice of ligand affects the color and magnetic properties exhibited by transition metal complexes.
For example, certain ligands can cause a complex ion to absorb light in specific regions of the electromagnetic spectrum, resulting in a distinctive color. The presence or absence of unpaired electrons due to ligand interactions can influence magnetic behavior.
Ligand Exchange Reactions
Complex ions are not static entities; they can undergo ligand exchange reactions where one or more ligands are replaced by others. These exchanges can lead to changes in the overall composition and properties of the complex ion.
The rate at which these exchanges occur depends on factors such as temperature and concentration.
Binding Strength and Stability
The strength at which ligands bind to the central metal ion affects both stability and solubility. Strongly bound ligands tend to form more stable complexes that are less likely to dissociate into separate components.
Conversely, weakly bound ligands may result in less stable complexes that readily undergo decomposition reactions.
Nomenclature and Terminology of Complex Ions
Naming complex ions follows specific rules to ensure clear communication and identification. Let’s explore the nomenclature and terminology conventions used for complex ions.
Prefixes Indicate the Number of Ligands
When naming complex ions, prefixes are used to indicate the number of ligands attached to the central metal ion. These prefixes include “mono-” for one ligand, “di-” for two ligands, “tri-” for three ligands, and so on.
Modification of Ligand Names
The name of the ligand is modified based on its charge or oxidation state.
For example, anionic ligands (those carrying a negative charge) take modified endings such as “-o”, as in chloro for chloride or cyano for cyanide, while neutral ligands are generally named without modification; when the complex ion as a whole is an anion, the name of the central metal takes an “-ate” ending.
In formulas, the complex ion is written within square brackets, with the ligands listed after the symbol of the central metal ion. This bracketed notation helps distinguish the central metal ion and its surrounding ligands from the rest of the compound.
Nomenclature Conventions Aid Identification
The use of nomenclature conventions in naming complex ions helps in identifying and communicating important information about these compounds.
By following these conventions, scientists can quickly understand details such as coordination numbers (the number of bonds formed by a central metal ion) and formal charges associated with different parts of the complex ion.
Understanding nomenclature and terminology allows scientists to effectively communicate about complex ions while providing crucial information about their composition and structure.
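As a toy illustration of the prefix convention described above (a deliberately simplified sketch, not a general IUPAC naming tool), the snippet below assembles the name of the well-known anion [Fe(CN)6]4-, hexacyanoferrate(II).

```python
# Toy sketch of the ligand-count prefixes: six cyanide ligands around an
# iron(II) centre give [Fe(CN)6]4- the name hexacyanoferrate(II).
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta", 6: "hexa"}
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV"}

def name_cyano_complex(ligand_count, metal_anion_stem, oxidation_state):
    """Very simplified: handles only cyano ligands around an anionic complex."""
    return f"{PREFIXES[ligand_count]}cyano{metal_anion_stem}({ROMAN[oxidation_state]})"

# Oxidation state check for [Fe(CN)6]4-: x + 6*(-1) = -4, so x = +2.
print(name_cyano_complex(6, "ferrate", 2))   # hexacyanoferrate(II)
```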
Color of Transition Metal Complexes
The color of transition metal complexes is a fascinating characteristic that arises due to electronic transitions within the d-orbitals.
Different ligands can cause variations in color by affecting the energy gap between these d-orbitals.
Electronic Transitions and Color Variations
Transition metal complexes exhibit vibrant colors because of the way their electrons absorb and emit light.
When light interacts with the complex, it promotes an electron from a lower-energy d-orbital to a higher-energy d-orbital through absorption. The absorbed light corresponds to specific wavelengths, resulting in the observed color.
Ligand Effects on Color
Ligands are molecules or ions that bind to a central metal atom in a complex ion. They play a crucial role in determining the color of transition metal complexes.
Different ligands have varying effects on the energy gap between d-orbitals, causing shifts in the absorbed wavelengths and therefore altering the perceived color.
Scientists use absorption spectroscopy techniques to study and determine the colors and properties of transition metal complexes.
This method involves shining light of various wavelengths onto a sample containing the complex ion and measuring which wavelengths are absorbed. By analyzing this data, researchers can identify the specific colors exhibited by different transition metal complexes.
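As a small numerical aside, the energy of an absorbed photon is given by E = hc/λ. The sketch below evaluates this for an assumed absorption wavelength of 500 nm, which is an illustrative value rather than data for any particular complex.

```python
# Photon energy for an example absorption wavelength (illustrative value only).
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

wavelength_nm = 500.0
energy_eV = h * c / (wavelength_nm * 1e-9) / eV
print(f"A {wavelength_nm:.0f} nm photon carries ~{energy_eV:.2f} eV,")
print("comparable to the d-orbital energy gap of a complex that absorbs that light.")
```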
Indicators of Chemical Reactions and Environmental Conditions
Color changes can provide valuable insights into chemical reactions or environmental conditions.
For example, when silver chloride (AgCl) reacts with ammonia (NH3), it dissolves to form a clear solution owing to the formation of [Ag(NH3)2]+ complex ions.
Certain environmental factors such as pH levels or temperature can influence color changes in transition metal complexes.
In conclusion, complex ions play a crucial role in chemistry. They are formed when a central metal ion binds with surrounding ligands through coordinate covalent bonds.
These ions have unique properties and characteristics that make them significant in various chemical processes.
Understanding the definition of complex ions provides a foundation for comprehending their importance in chemistry.
The bonding between the central metal ion and ligands determines the stability and reactivity of complex ions. The nomenclature and terminology associated with complex ions help scientists communicate effectively about these compounds.
Furthermore, the color exhibited by transition metal complexes is a result of electronic transitions within the d orbitals of the metal ion.
To delve deeper into this fascinating topic, continue reading to explore additional sections such as the factors affecting stability, spectroscopic techniques used to study complex ions, and applications of complex ions in catalysis and medicine.
What are some common examples of complex ions?
Some common examples of complex ions include [Fe(CN)6]4-, [Cu(NH3)4]2+, [Ag(NH3)2]+, and [Co(en)3]3+.
How do ligands bind to central metal ions?
Ligands bind to central metal ions through coordinate covalent bonds, where they donate electron pairs to form a stable coordination sphere around the metal ion.
Are all transition metals capable of forming complex ions?
Yes, all transition metals have d orbitals that allow them to form coordinate covalent bonds with ligands and create complex ions.
Can you explain why transition metal complexes exhibit different colors?
The color exhibited by transition metal complexes arises from electronic transitions within the d orbitals of the central metal ion when it absorbs certain wavelengths of light.
What are some applications of complex ions in everyday life?
Complex ions find applications in various fields, including catalysis, medicine (such as chemotherapy drugs), and environmental remediation processes. |
"Communications" means more than sending a message; it includes by extension the routes over which messages, supplies, and reinforcements can travel. To have open communications means that the route is free from enemy interception. The line of communication consisted of a major supply base outside of the theater of operations, equipped with warehouses and other facilities to serve a constant stream of horse-drawn wagons or, preferably, river-bound barges. Supplies moved to forward depots, from which militarized transport battalions delivered them to the troops. During battle, the baggage wagons were kept well to the rear, near the field hospitals and vehicle parks.
Unlike the armies of the 20th century, Napoleonic armies operated without the security afforded by a continuous front. In World Wars I and II, the numerous armies each had their own line of communication. In the Napoleonic Wars, except for 1813 and 1814, there was but one army on each side operating at a given time, upon a single line. Maxim XII: "An army ought to have only one line of Communication. This should be preserved with care, and never abandoned but in the last extremity;" and in Maxim XX, Napoleon discusses changing the line of Communication. "The line of communication should not be abandoned; but it is one of the most skillful maneuvers in war, to know how to change it, when circumstances authorize or render this necessary. An army which skillfully changes its line of communication deceives the enemy, who becomes ignorant where to look for its rear, or upon what weak points it is assailable."
The advent of the railroad and industrial production changed the nature of supply in war. In World War II, there were several instances where armies lost their line of communications. On 19 November 1942, for example, the Red Army launched a two-pronged attack upon Romanian and Hungarian troops on the flanks of the 6th Army, cutting off and surrounding the Stalingrad pocket. Hitler banned all attempts to break out, and both supplying the army by air and relief attacks from the outside proved fruitless. After less than 12 weeks, Axis forces in Stalingrad had exhausted their ammunition and food. Napoleon's Army carried enough supply for only 10-14 days. It is often stated that unlike their predecessors and enemies, Napoleon's troops were able to subsist by foraging. This was true only as long as the army kept on moving to unspoiled territory. A brigade would exhaust the resources of its neighborhood within 3 days or less. The loss of the LOC was a morale disaster. Once the troops realized that their retreat route home had been lost, their will to fight suffered. As they continued to operate without an LOC, the lack of food, forage and firewood further eroded their health and ability to resist. As the wars dragged on, generals discovered that they could continue to operate without a line of communications, as long as the countryside through which they marched was not exhausted and a knock-out blow could still be delivered.
In 1805, at Ulm, General Mack surrendered when his communications were cut; but in 1814, when Napoleon cut the line of communications of the Silesian Army (during the Laon operation), and the Bohemian Army (at the very end of the campaign), neither army fell back, to Napoleon’s surprise. |
At Summerseat Methodist Primary School, we pride ourselves on offering all of our pupils a safe, calm, happy and nurturing learning environment so children can learn effectively, enabling them to access the full breadth of our geography curriculum offer and ultimately reach their full potential. We have designed our geography curriculum to be sequential, logical and cumulative and meet the ambition of the National Curriculum. Key knowledge, facts, skills and concepts are identified through our ‘Steps in Learning’ and children have regular opportunities to revisit, recall and apply key knowledge and skills in order to deepen their understanding. We hold high aspirations for all our pupils and want them to grow into successful and responsible adults of the future with a rich ‘cultural capital’ formed through their experience of a high quality geography curriculum that has at its heart: key skills, knowledge arranged through concepts and broad and engaging experiences.
Geography develops pupils’ understanding of the world in which they live through the study of place, space and environment.
Whilst geography provides a basis for pupils to understand their role within the world, by exploring locality and how people fit into a global structure, the subject also encourages children to learn through experience, particularly through practical and fieldwork activities.
Our Summerseat Methodist Primary curriculum for geography aims to ensure that all pupils:
• develop contextual knowledge of the location of globally significant places – both terrestrial and marine – including their defining physical and human characteristics and how these provide a geographical context for understanding the actions of processes
• understand the processes that give rise to key physical and human geographical features of the world, how these are interdependent and how they bring about spatial variation and change over time
• are competent in the geographical skills needed to:
o collect, analyse and communicate with a range of data gathered through experiences of fieldwork that deepen their understanding of geographical processes
o interpret a range of sources of geographical information, including maps, diagrams, globes, aerial photographs and Geographical Information Systems (GIS)
o communicate geographical information in a variety of ways, including through maps, numerical and quantitative skills and writing at length.
Please see below for our Summerseat Geography steps in learning and our Geography long term overview. You will also see examples of our Knowledge Organisers that help us know more and remember more throughout our Geography topics. Knowledge Organisers will be sent home at the beginning of every new topic. |
Introduction to Green Maps
In an era of heightened environmental awareness and concern, green maps have emerged as powerful tools for charting environmental sustainability initiatives around the world. These maps showcase a diverse array of eco-friendly practices, initiatives, and resources aimed at promoting environmental stewardship, conservation, and sustainable living.
The Purpose of Green Maps
Green maps serve multiple purposes, ranging from raising awareness about local environmental issues to providing practical information and resources for individuals and communities striving to adopt more sustainable lifestyles.
Highlighting Environmental Assets
One of the primary functions of green maps is to highlight environmental assets within a community. These assets may include parks, nature reserves, community gardens, and green spaces that contribute to biodiversity conservation, recreation, and overall well-being. By showcasing these assets on maps, communities can foster appreciation for local ecosystems and encourage residents to engage in outdoor activities and environmental stewardship.
Promoting Sustainable Practices
Green maps also serve as platforms for promoting sustainable practices and initiatives. From recycling centers and farmers’ markets to renewable energy installations and eco-friendly businesses, these maps identify resources and opportunities for individuals and organizations to reduce their environmental footprint and support sustainable development. By providing information about sustainable alternatives and best practices, green maps empower individuals to make informed choices that benefit both the environment and the community.
Mapping Environmental Challenges
In addition to highlighting environmental assets and initiatives, green maps also provide a platform for mapping environmental challenges and threats facing communities.
Identifying Pollution Hotspots
Green maps can identify pollution hotspots, such as industrial facilities, waste disposal sites, and contaminated waterways, that pose risks to human health and the environment. By visualizing these areas on maps, communities can advocate for improved environmental regulations, monitoring, and remediation efforts to address pollution and protect public health.
Mapping Climate Vulnerabilities
Climate change poses significant challenges for communities worldwide, including rising temperatures, extreme weather events, and sea-level rise. Green maps can help communities identify climate vulnerabilities and develop adaptation strategies to mitigate risks and build resilience. Mapping flood-prone areas, heat islands, and vulnerable populations allows communities to prioritize investments in infrastructure, emergency preparedness, and climate resilience planning.
Technological Innovations in Green Mapping
Advancements in technology have expanded the capabilities of green mapping, enabling more sophisticated data collection, analysis, and visualization techniques.
Geographic Information Systems (GIS)
Geographic Information Systems (GIS) are powerful tools for creating and analyzing spatial data related to environmental sustainability. GIS technology allows users to overlay multiple layers of information, such as land use, vegetation, and air quality, to identify patterns, trends, and correlations that inform decision-making and policy development. GIS-based green maps provide valuable insights into environmental dynamics and support evidence-based planning and management efforts.
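As a minimal illustration of the overlay idea, the sketch below uses the open-source Shapely library to intersect two hypothetical layers, a park polygon and a flood-zone polygon, with made-up coordinates rather than real map data.

```python
# Minimal overlay sketch with made-up geometries (not real map data).
from shapely.geometry import Polygon

park = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])        # hypothetical "park" layer
flood_zone = Polygon([(2, 1), (6, 1), (6, 5), (2, 5)])   # hypothetical "flood-prone" layer

at_risk = park.intersection(flood_zone)                  # where the two layers overlap
print(f"Park area: {park.area}")
print(f"Park area inside the flood zone: {at_risk.area}")
```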
Crowdsourced Mapping Platforms
Crowdsourced mapping platforms, such as OpenStreetMap and Mapillary, engage citizens in collecting and updating geographic data related to environmental sustainability. By harnessing the collective knowledge and expertise of volunteers, crowdsourced mapping platforms create dynamic, up-to-date maps that reflect local environmental conditions and community priorities. These platforms promote transparency, collaboration, and citizen engagement in environmental stewardship initiatives.
Conclusion: Empowering Environmental Action
Green maps play a vital role in empowering individuals, communities, and policymakers to take meaningful action to address environmental challenges and promote sustainability. By highlighting environmental assets, initiatives, and challenges, green maps raise awareness, inspire action, and foster collaboration among stakeholders. As communities continue to grapple with pressing environmental issues, green maps serve as invaluable tools for charting a path towards a more sustainable and resilient future. |
Scotland’s linguistic landscape is as rugged and enduring as its famous highlands, marked by the historical interplay of Scots and Scottish Gaelic, languages that have shaped the nation’s cultural contours for centuries. The Germanic inflections of Scots, echoing through the lowland streets, bear witness to a complex past intertwining with English yet maintaining its distinct identity. Meanwhile, the melodious cadences of Scottish Gaelic, a vestige of Celtic heritage, whisper tales of the mist-clad western isles, resonating with a resilience that defies its minority status. This exploration invites an intellectual inquiry into the origins, evolution, and current state of these linguistic threads, unraveling the complexities of language policies, educational frameworks, and community initiatives that aim to preserve and nurture this invaluable aspect of Scotland’s patrimony. As we consider the impact of globalization and technological advances on these ancient tongues, one must ponder the future of such cultural keystones in an ever-evolving world, thus setting the stage for a comprehensive discourse on their role in the modern Scottish narrative.
- Scotland has a diverse linguistic heritage, with two native languages, Scots and Scottish Gaelic, playing significant roles in Scottish identity.
- Scots language has evolved from its Germanic roots and has a distinct identity that has evolved over centuries.
- Scottish Gaelic has ancient roots and has transitioned from a prominent language to a minority status.
- Both Scots and Scottish Gaelic have unique linguistic features that define and distinguish them, and they have a profound impact on Scotland’s cultural expressions and national consciousness.
The Linguistic Heritage of Scotland
Scotland’s linguistic heritage is a tapestry woven with the rich threads of Scots and Scottish Gaelic, languages that are integral to the nation’s cultural identity and historical narrative. These tongues, steeped in centuries of history, are more than mere means of communication; they are repositories of Scotland’s social, political, and artistic chronicles. The Scots language, with its Germanic origins, shares a kinship with English yet retains a robust individuality through its unique phonology, vocabulary, and literary tradition. It has played a pivotal role in shaping the Scottish psyche and is often used to express a sense of national pride and community.
On the other hand, Scottish Gaelic, a Celtic language, whispers the tales of the ancient Gaels, resonating with the echoes of Scotland’s mythic past. It encompasses a world of traditional ballads, folklore, and a connection to the natural landscape of the Highlands and Islands. Although its speakers constitute a smaller fraction of the population today, Gaelic’s influence on place names and cultural practices remains pervasive throughout Scotland.
Both Scots and Scottish Gaelic have faced challenges over the centuries, from political pressures to social changes, yet they persist as symbols of Scotland’s enduring spirit. Efforts to revive and promote these languages are testament to their significance in the collective consciousness of the Scottish people. Understanding the linguistic heritage of Scotland is not merely an academic pursuit; it is an embrace of the nation’s soul, a recognition of the diversity that forms the bedrock of Scottish identity. For a nation famed for its poets, warriors, and thinkers, language is the heart from which its cultural lifeblood flows.
Scots Language: Origins and Evolution
Emerging from its Germanic roots, the Scots language has undergone significant evolution to become a distinctive linguistic system within Scotland. Its origins can be traced back to the Old English spoken by settlers from the Anglo-Saxon kingdoms, who arrived in the region that is now southeastern Scotland around the early Middle Ages. Over time, this language, influenced by Norse and Gaelic, began to diverge from the English spoken further south, developing its unique characteristics.
Throughout the Middle Ages, Scots established itself as the dominant language of the Scottish court and administration. It enjoyed a golden age during the Renaissance, where it flourished in literature, culture, and governance. However, the political union with England in 1707 led to a decline in formal use as English became increasingly prestigious. Despite this, Scots continued to be spoken widely by the Scottish people in various regional dialects.
The evolution of Scots has been marked by its resilience and adaptability. It has absorbed elements from Latin, French, Dutch, and other languages, adding to its rich lexicon and distinctive grammar. In the modern era, Scots has experienced a revival, with growing recognition of its cultural significance and value as a medium of artistic expression.
As a living language, Scots remains vibrant in communities across Scotland and is characterized by a diversity of dialects, each with unique phonetic and lexical traits. Efforts to preserve and promote the Scots language are essential in maintaining the linguistic heritage of Scotland, ensuring that this integral part of Scottish identity continues to thrive in the future.
Scottish Gaelic: A Celtic Tongue
While the Scots language boasts Germanic origins, Scottish Gaelic offers a distinct narrative, rooted in the Celtic branch of languages and woven deeply into the tapestry of Scotland’s past. This ancient tongue, which arose from the same roots as Irish and Manx Gaelic, has played a crucial role in shaping the cultural and historical identity of the Scottish people.
Scottish Gaelic’s origins trace back to the early Celtic inhabitants of the region, with its earliest forms brought to Scotland around the 4th century by settlers from Ireland. Over time, the language flourished and became the dominant language of the Scottish Highlands and the Western Isles. By the Middle Ages, Gaelic had extended its reach, finding its place in Scottish courts and cultural life.
However, the subsequent centuries saw a dramatic shift as political, social, and economic changes led to a decline in Scottish Gaelic’s prominence. The language was increasingly marginalized, particularly after the Acts of Union in 1707, and the tragic consequences of events like the Highland Clearances further diminished its everyday use. By the 20th century, Scottish Gaelic had become a minority language, with its speakers primarily concentrated in the Highlands and Islands.
Despite this decline, Scottish Gaelic has experienced a revival in recent years, driven by a growing recognition of its cultural significance and the efforts of language enthusiasts and policy makers. Educational initiatives, media in Gaelic, and community programs have all contributed to its resurgence. Today, Scottish Gaelic stands as a symbol of Scotland’s rich heritage, a living language with ancient roots that continue to contribute to the nation’s diverse linguistic landscape.
Examining Language Features: From Scots Dialects to Gaelic Syntax
Delving into the linguistic intricacies of Scots and Scottish Gaelic reveals a rich tapestry of dialectal variation and structural complexity that is key to understanding these languages’ identities. Scots, hailing from its Germanic roots, features a variety of dialects that vary significantly across regions. These dialects exhibit distinctive phonetic qualities; for example, the pronunciation of the “r” sound can be markedly different in the Northeast compared to the Borders. Scots vocabulary is replete with words derived from Old English, Norse, and French, reflecting Scotland’s historical interactions and cultural exchanges.
Grammar in Scots is somewhat akin to English but demonstrates unique modalities in verb forms and usage. Pronouns in Scots can also differ, with forms like “youse” often used for the second-person plural. Syntax in this language is generally straightforward, though sentence structure may be influenced by emphasis or poetic style.
Turning to Scottish Gaelic, a language of the Celtic family, we encounter a syntax that is distinct from that of Scots or English. Verb-subject-object (VSO) is the typical sentence structure, diverging from English’s subject-verb-object (SVO) pattern. This can present a learning curve for English speakers, as the verb often appears at the beginning of the sentence in Gaelic. Pronunciation in Scottish Gaelic is regulated by a system of broad and slender vowels, which can affect consonant sounds and can be challenging to master without practice.
In vocabulary, Scottish Gaelic possesses a wealth of terms connected to the natural environment, indicative of the language’s roots in the landscapes of Scotland. As a result, both Scots and Scottish Gaelic not only serve as means of communication but also as cultural repositories, encapsulating the history and identity of the Scottish people.
The Cultural Impact of Scottish Languages
The profound influence of Scots and Scottish Gaelic extends well beyond mere communication, shaping Scotland’s cultural expressions in literature, music, and media. These languages have historically been the lifeblood of a distinct Scottish identity, infusing the arts with a unique perspective that is inextricably linked to the nation’s heritage.
In literature, Scots has been celebrated in the works of poets and authors like Robert Burns, whose compositions remain a cornerstone of Scotland’s literary canon. His Scots-language poetry, such as “Auld Lang Syne” and “Tam o’ Shanter,” has achieved global recognition, symbolizing Scottish cultural pride. Similarly, contemporary writers continue to utilize Scots to convey authentic Scottish voices and experiences, thus enriching the literary landscape.
Scottish Gaelic has also left its mark on literature, with a long-standing tradition of oral storytelling and song that has been recorded and preserved for posterity. This has not only provided a conduit for Gaelic culture but has also served to inspire modern writers and musicians who draw upon this heritage to create new works that resonate with historical depth.
Music is another vibrant arena where both Scots and Gaelic thrive. The oral tradition of Gaelic psalm singing, the haunting melodies of the fiddle, and the stirring sounds of the bagpipes are all manifestations of these languages in Scotland’s sonic identity. Furthermore, Scottish media increasingly showcases these languages, with television and radio programming dedicated to Gaelic speakers, and Scots utilized in various forms of broadcasting and digital content.
Together, Scots and Scottish Gaelic continue to contribute to a dynamic cultural scene, one that both honors tradition and embraces innovation. Their impact is a testament to the enduring power of language as a vehicle for cultural expression and national consciousness.
Education and Revitalization: Safeguarding Scotland’s Linguistic Future
Efforts to revitalize and promote Scots and Scottish Gaelic are gaining momentum, with educational programs and media initiatives playing a pivotal role in safeguarding these languages for future generations. In Scotland, the government and various organizations have recognized the critical need to preserve the nation’s linguistic heritage and have taken substantial steps to ensure its survival.
Educational policies have been instrumental in this revival. The teaching of Scottish Gaelic is now supported in some schools, with resources allocated to train teachers and develop curricula that incorporate the language. Scots, too, is receiving attention, with its inclusion in the Curriculum for Excellence, enabling young Scots to learn about their linguistic heritage. These measures are crucial in fostering a new generation of speakers who are comfortable with their linguistic traditions.
Media representation has also seen a positive shift. Television and radio broadcasts in Scottish Gaelic, such as those provided by BBC Alba and Radio nan Gàidheal, not only offer entertainment but also serve to normalize the language’s use in public life. Similarly, Scots is gradually gaining visibility through literature, music, and online platforms, which helps to spread its usage and appreciation.
Despite these advances, challenges remain. The number of fluent Gaelic speakers is still relatively low, and Scots often lacks formal recognition, affecting its status and the resources devoted to its promotion. Ongoing efforts are required to not only maintain current programs but also to expand them, ensuring that the vitality of these languages is not only preserved but nurtured, allowing them to thrive in contemporary Scotland.
Recognizing and Preserving Linguistic Diversity
In the realm of heritage conservation, recognizing and preserving linguistic diversity, particularly for Scots and Scottish Gaelic, is essential for the continuation of Scotland’s rich cultural identity. The fabric of a nation’s heritage is woven with the threads of its language, and the vibrant tapestry of Scotland’s cultural identity is no exception. As such, efforts to maintain and promote these languages are of utmost importance.
To create a vivid image of these efforts:
- Language Learning Initiatives: Schools across Scotland integrate Scots and Gaelic into their curricula, not only as a subject of study but also as a medium of instruction. This immersion fosters a new generation of speakers who carry the torch of their linguistic heritage.
- Cultural Celebrations: Festivals like the Royal National Mòd and Burns Night serve as grand stages where the lyrical beauty of Gaelic and the robust charm of Scots are showcased through poetry recitations, traditional music, and storytelling.
- Technological Integration: Digital platforms have embraced these languages, with social media accounts and mobile applications emerging in Scots and Gaelic, connecting speakers worldwide and ensuring these languages thrive in the digital era.
Each of these elements plays a critical role in the intricate dance of language preservation, allowing Scots and Scottish Gaelic to continue to resonate through the hearts and minds of the Scottish people and the global community. By implementing such strategies, Scotland not only honors its past but also paves the way for a linguistically diverse and culturally rich future.
Scottish Languages Today: Usage and Perspectives
Contemporary Scotland presents a linguistic tapestry where attitudes towards Scots and Scottish Gaelic are as varied as the speakers themselves, reflecting a complex interplay of history, identity, and modernity. Scots, a language with Germanic roots, is widely recognized and used across many parts of Scotland, often in a blended form with English, known colloquially as “Scots-English.” Its use extends to informal contexts, literature, and even some media. Yet, despite its prevalence, Scots often faces issues of recognition and is sometimes stigmatized, with debates surrounding its distinction from Scottish English continuing.
Scottish Gaelic, once the predominant language of the Scottish Highlands and Western Isles, has seen a decline over the centuries but is now experiencing a revival. This Celtic language is spoken by a smaller proportion of the population and is the subject of numerous revitalization efforts, including immersion education and media broadcasting in Gaelic. Attitudes towards Scottish Gaelic tend to be more romanticized, valuing it as a core element of Scottish heritage. However, the language still confronts challenges such as limited resources and varying degrees of community support.
The perspectives on these languages reflect broader societal trends. Younger generations are often more open to embracing linguistic diversity, recognizing the cultural significance of both Scots and Scottish Gaelic. Government and educational policies have also shifted to encourage the teaching and use of these languages, recognizing them as integral parts of Scotland’s cultural fabric. Consequently, contemporary usage of Scots and Scottish Gaelic is not just a matter of communication but a statement of cultural identity and a testament to Scotland’s linguistic resilience.
Scots and Scottish Gaelic in the Digital Age
While the cultural significance of Scots and Scottish Gaelic is acknowledged in educational and policy shifts, the digital realm presents new opportunities and challenges for these languages’ proliferation and modern relevance. The advent of the internet and digital communication has transformed the way languages are used and spread, providing a platform for minority languages like Scots and Scottish Gaelic to reach a wider audience.
The digital age has ushered in several key developments for these languages:
- Online Resources and Education: Various websites, apps, and online courses have emerged, offering learners around the globe the chance to study Scots and Scottish Gaelic from anywhere. This has democratized language learning, moving beyond traditional classroom settings.
- Social Media and Networking: Social media platforms have become spaces for speakers of Scots and Scottish Gaelic to connect, share, and create content in their native tongues, fostering a sense of community and normalizing the use of these languages in everyday digital interactions.
- Digital Media and Entertainment: Streaming services and online radio stations now offer content in Scots and Scottish Gaelic, from music to television shows, increasing the languages’ visibility and appeal, especially among younger audiences.
However, the digital landscape is not without its challenges. Ensuring the presence of Scots and Scottish Gaelic in technology, such as in predictive text or voice recognition software, remains a hurdle. Additionally, the dominance of English online can overshadow smaller languages, making it imperative for digital initiatives to be robust and engaging.
The Role of the Scottish Diaspora in Promoting the Languages Abroad
The Scottish diaspora plays a pivotal role in preserving and promoting Scots and Scottish Gaelic languages across the globe, fostering cultural connections that transcend geographical boundaries. Through various cultural societies, educational initiatives, and social media, the diaspora maintains the vibrancy of these languages, ensuring they remain a living part of Scotland’s heritage.
Organizations such as An Comunn Gàidhealach Ameireaganach (The American Scottish Gaelic Society) and the Scots Language Society operate internationally, providing resources and forums for language learners and speakers. They organize events like ceilidhs and poetry readings, which not only keep the languages in active use but also celebrate Scottish culture in the diaspora communities.
Moreover, the internet has played a crucial role by connecting learners with native speakers through online courses, discussion groups, and language apps. This digital engagement supports language proficiency and cultural literacy, promoting a global community united by a shared linguistic heritage.
The following table outlines key aspects of the Scottish diaspora’s efforts in promoting Scots and Scottish Gaelic abroad:
|Scots|Scottish Gaelic|
|---|---|
|Burns Suppers, Literary Festivals|Ceilidhs, Mod Festivals|
|Online Dictionaries, Tutorials|Gaelic Medium Education, Online Courses|
|Social Media Groups, Language Cafés|Highland Games, Gaelic Language Societies|
|Scots Radio, Podcasts|BBC Alba, Gaelic Films|
These initiatives highlight the diaspora’s commitment to the survival and growth of Scots and Scottish Gaelic. As guardians of this linguistic heritage, the Scottish expatriate community ensures that the languages thrive, not just within Scotland, but as a cherished cultural beacon worldwide.
Frequently Asked Questions
How Have the Scots and Scottish Gaelic Languages Influenced Modern English Vernacular in Scotland?
Scots and Scottish Gaelic have uniquely influenced the English vernacular in Scotland, infusing it with distinct vocabulary and phonetic nuances. This linguistic blend is evident in everyday speech, where certain words, phrases, and accents reflect the historical presence of these native tongues. Such influence underscores the deep-seated connection between language and cultural identity, demonstrating the ongoing legacy of Scotland’s linguistic heritage in shaping contemporary Scottish English.
Are There Specific Legal Protections for Scots and Scottish Gaelic Speakers Against Discrimination in the Workplace or in Education?
Yes, there are legal protections in place for Scots and Scottish Gaelic speakers. The European Charter for Regional or Minority Languages, which the UK has ratified, provides a framework for protecting and promoting languages like Scots and Gaelic. In Scotland, the Gaelic Language (Scotland) Act 2005 promotes the use of Scottish Gaelic and requires the creation of a national Gaelic language plan to ensure its preservation and promotion in education and public life.
What Are Some Common Misconceptions or Stereotypes About Scots and Scottish Gaelic Speakers Within Scotland and Abroad?
Common misconceptions about Scots and Scottish Gaelic speakers include the belief that these languages are simply dialects of English or are no longer in use. Internationally, there’s often a lack of awareness of their distinctiveness and cultural significance. Within Scotland, stereotypes may portray speakers as less educated or rural, overlooking the languages’ rich literary history and modern relevance. Addressing these stereotypes is crucial for the appreciation and preservation of Scotland’s linguistic heritage.
How Have Recent Political Events, Like Brexit or Discussions About Scottish Independence, Affected the Discourse Around Scots and Scottish Gaelic Languages?
Recent political events, such as Brexit and the debate over Scottish independence, have significantly influenced the discourse on Scots and Scottish Gaelic. These developments have heightened awareness of Scotland’s distinct cultural identity, prompting discussions around linguistic preservation as a facet of national heritage. Consequently, there is increased interest in how political changes may affect language policy, funding, and the role of these languages in asserting Scotland’s cultural sovereignty.
Can Learning Scots or Scottish Gaelic Provide Any Cognitive or Career Advantages, and Is There Research to Support This?
Learning Scots or Scottish Gaelic can offer cognitive benefits similar to those gained from acquiring any second language, such as enhanced memory and problem-solving skills. Some research indicates bilingualism can delay the onset of dementia. Career advantages may arise, particularly within Scotland, in fields like education, tourism, and cultural preservation. However, specific studies linking these languages to career benefits are limited and warrant further investigation for conclusive evidence.
In conclusion, Scotland’s linguistic heritage, represented by Scots and Scottish Gaelic, is an integral part of its cultural identity. Despite facing challenges, these languages persist through dedicated revitalization efforts and the embrace of digital platforms. The Scottish diaspora plays a pivotal role in promoting linguistic diversity globally, ensuring that Scots and Gaelic not only survive but thrive, reflecting the resilience and dynamism of Scottish culture in the context of an interconnected world. |
Damodar Swarup Seth, a member of the Constituent Assembly from the United Provinces (present-day Uttar Pradesh), famously characterized the Indian democratic system as “a Unitary Constitution in the name of a Federation”. Polity in India is unique in the sense that it is not solely a unitary government or a federal government. It is a hybrid system, displaying characteristics of both, and is often characterized by jurists and scholars as “a quasi-federation, an administrative federation, organic federalism, and a territorial federation.” This, however, does not mean that it is accommodative of all the concerns that governing such a large, populated, and diverse country puts forth. In fact, it is argued here, that the trait of federalism that exists in this so-called “quasi-federation” does nothing but hamper and slow down the governance of this country.
Federalism is essentially "a model of political organization that divides sovereignty between national and regional governments." In a federation, the states enjoy territorial sovereignty and are free from the intervention of the central government in their internal affairs. This is opposed to a unitary system, in which "governments exercise only those powers granted to them by the central government." In a unitary system, while the primary decision-making power lies in the hands of the central or national government, the states or regional governments play a more passive role as mere administrative units of the central government.
Nehru initially envisaged a federal setup for India. However, a series of events – from the partition of the country to the dispute in Kashmir, to the internal rebellion by the Nagas, changed the perception of the members of the Constituent Assembly. Thereafter, the decision-makers preferred a more centralized federal system. In fact, Dr. B. R. Ambedkar was never in favor of federalism in the first place and refused to insert the word ‘Federal’ into the Constitution. Instead he, along with a considerable number of other members of the Constituent Assembly, emphasized on the necessity for India to be a ‘Union’: “what is important is that the use of the word ‘Union’ is deliberate … The Drafting Committee wanted to make it clear that though India was to be a federation, the Federation was not the result of an agreement by the States to join in a Federation and that the Federation not being the result of an agreement no State has the right to secede from it. The Federation is a Union because it is indestructible.” This is how India, as we know it today, a ‘Union of States’, came about. However, a more in-depth analysis shows that the demerits of the federal traits of our democracy far outweigh the merits, thereby doing nothing but handicap our country when it comes to efficiency in governance. The argument presented here is two-fold: firstly, our democracy and its Constitution is already biased towards its unitary elements. Secondly, the demerits of the federal features make governance a slow, expensive, and less efficient process.
As opined by various scholars, India’s democracy can be compared to that of prefectorial federalism, where the central government has preponderant powers as compared to the state government. These overriding and extensive powers not only help them keep a check on the state governments but also “stultify their autonomy and dismiss their governments.” There are various instances where we can see this preponderant power in exercise. For starters, States in India do not have any territorial autonomy when it comes to their size, boundaries, and names. Article 3 of the Indian Constitution gives the Parliament the autonomy to completely change the identity of the state, or even extirpate it. In fact, in Babulal Parate v. the State of Bombay, it was held that the Parliament in no way is obliged or bound to accept the views of state legislatures in this matter, even if the Parliament receives these views in time. Secondly, even though matters of legislation are clearly segregated into the Union List, State List and the Concurrent List, Article 200, 248, 249, and 368 of the Constitution of India clearly display the legislative ascendancy of the central government over the states. According to Article 248, residual legislative powers, that is, the power to legislate on matters not listed in any of the lists lies solely in the hands of the central government. Article 249, on the other hand, allows for the center to intrude into the legislation making powers of the state under the State List, and legislate on such a matter in the name of national interest, if a resolution for the same has been passed in the Rajya Sabha. Further, Article 368 of the Constitution gives the Parliament alone the power to amend the Constitution. Article 200 of the Constitution of India empowers the Governor to reserve certain legislations passed in the state legislature to be considered by the President of India, and this is the non-justiciable authority of the Governor. Article 201 further empowers the President to either give his/her assent to the Bill or veto it, without any time limit. Not only is legislative intervention by the central government possible, but the Constitution also dictates that the legislative and executive actions of the states must comply with the legislative and executive actions of the central government under the Union or Concurrent List. The central government may exercise its pre-emptive powers against the states in the case that they do not comply. Therefore, Dr. Ambedkar’s reasoning that “The States under our Constitution are in no way dependent upon the Centre for their legislative or executive authority” is nothing but a parable that does not hold true in practice today.
Apart from the above mentioned legislative bias, there are other operational features of the government that give it a primarily unitary characteristic. The existence of the position of the Governor in every state is the foremost example. As an executive officer of the Union (President) and the nominal head of the state, the Governor has the authority to appoint the state government, dismiss state governments that do not hold a majority in the state legislature, and exercise their legislative powers as discussed above. The presence of the civil services and an integrated judicial system further buttress this claim. Even though each state has a High Court with territorial jurisdiction over the State, the power to create High Courts, decide on their composition, and appoint judges lies in the hands of the central government. Those recruited by the Union Public Service Commission (UPSC) are officers of the central government. Furthermore, when it comes to the powers of the President in case of emergency under Article 352 (National Emergency), Article 356 (State Emergency), and Article 360 (Financial Emergency) of the Indian Constitution, the entire system shifts from a quasi-federal government to a unitary government. The central government also enjoys a more affluent position when it comes to matters of finance listed in List I and II of the Indian Constitution. The central government’s share in the tax collected is higher than that of the state government, and the state governments may not borrow money from outside the country, or borrow from public funds without the consent of the central government.
Apart from the existing unitary-biased structure, the second fold of this argument is that the federal elements of our system make it an institutionally inefficient, expensive and weak one. It goes without saying that operating 29 state governments, along with the central government, is a costly affair. Federalism requires more number of elections to be conducted and more number of elected office-bearers. This not only increases the cost on the election front but also increases other administrative expenses related to these additional elected members. Overlapping roles between elected officers can also lead to redundancy and increase the possibility of corruption within the institution. When it comes to disaster management and other emergency response, overlapping jurisdictions between the center and state governments may lead to confusion and utter chaos. This point of chaos caused by the federal elements is essential; in fact, a study shows that when it comes to federalism, the chaos it causes actually “tends to substantially dampen public responsiveness and representation.” Accountability on policy becomes another issue, with citizens unable to properly assign responsibility for policy, and therefore unable to make informed political choices and put forth informed opinions. Decentralization as a result of these federal elements also leads to unhealthy competition among the various state governments as well: not only do the states compete among themselves when it comes to development, resources, education, et cetera but when policy changes by one state government are felt by the surrounding states, it may lead to disputes that can side-track the governments from their primary purpose. Furthermore, disagreements between state governments and between a state government and the central government may pose a substantial challenge to our country’s integrity. Unequal distribution of resources geographically across the country would mean that certain states may prosper more than the others, and provide better opportunities. This leads to a further increase in income inequality.
Supporters of a federal structure may argue that a unitary system does not take local opinions into account, bloats the government, and would increase response time by the government. These concerns can be accommodated in a unitary structure with the help of proper planning, vigilance within the institution, and solidifying the structure within the central government. A well-planned unitary system would enable decisive legislation and executive action, efficient use of taxes, better management of the economy, and would focus on one, central agenda: to develop and protect the country as a whole.
M. Rajashekara, The Nature of Indian Federalism: A Critique, 37 Asian Survey 245-253 (1997), https://www.jstor.org/stable/2645661 (last visited Jun 29, 2020).
Craig Calhoun, federalism Oxford Reference (2020), https://www.oxfordreference.com/view/10.1093/acref/9780195123715.001.0001/acref-9780195123715-e-604?rskey=C4rhxS&result=604 (last visited Jun 29, 2020).
Craig Calhoun, unitary state Oxford Reference (2002), https://www.oxfordreference.com/view/10.1093/acref/9780195123715.001.0001/acref-9780195123715-e-1733?rskey=QMi0KO&result=1732 (last visited Jun 29, 2020).
Rajashekara, supra note 1 at 246.
Constituent Assembly Debates (CAD), vol. 7, p. 43.
Rajashekara, supra note 1 at 246.
AIR 1960 SC 51
The Constitution of India, 1950, Schedule VII
The Constitution of India, 1950, Article 254(1)
The Constitution of India, 1950, Article 256
The Constitution of India, 1950, Article 254(1)
Constituent Assembly Debates (CAD), vol. 11, p. 976.
Christopher Wlezien & Stuart N. Soroka, Federalism and Public Responsiveness to Policy, 41 Publius 33 (2011), https://www.jstor.org/stable/23015052 (last visited Jun 30, 2020). |
What physical features made travel on the Nile difficult?
The Nile River has a marshy delta. As a result, Egyptians could not build a port at the mouth of the Nile. This made it difficult for invaders to reach Egyptian settlements along the river. In addition, the rough waters, or cataracts, in the southern part of the river made travel and invasion difficult.
How did the Nile river affect transportation in ancient Egypt?
The majestic River Nile allowed people and goods to move across distances long and short. Historical Egyptian watercraft had a high stern and bow, equipped with cabins at both ends. The ships were used to transport the massive blocks of stone that were used to build the pyramids, temples and cities along the river.
What was the main disadvantage of the Nile?
The water from the Nile was used for drinking water, bathing, and watering crops. The only disadvantage of being near the Nile was that it was hard to travel by ship along it, due to cataracts (fast-moving waters).
How did Egyptians move goods up the Nile?
Ships and boats were the main means of transporting people and goods around the country. Egyptian watercraft had a high stern and bow, and by the New Kingdom, they were equipped with cabins at both ends. Ships could travel with ease up and down the Nile from the delta region to the First Cataract at Aswan.
What were Egypt’s natural barriers?
Mountains, swamps, deserts, icefields, and bodies of water such as rivers, large lakes, and seas are examples of natural barriers. To Egypt’s north lies the Mediterranean Sea. To the east of the Nile are the Eastern Desert and the Red Sea.
What kind of Transportation did the ancient Egyptians use?
There were many means of transportation in ancient Egypt, including boats, ships, chariots, sleds, donkeys, camels and carrying chairs. It is interesting to note that one of the most common means of Egyptian transport was by foot.
How did ancient Egyptians travel on the Nile River?
Egyptians moved their boats with oars. Ferry boats too prevailed. The speed of traveling on the river depended on the direction of the journey, the strength of the wind and the current, the boat and its crew. Generally, one did not travel on the Nile in the dark.
Why did the ancient Egyptians use camels for transport?
The Egyptians wanted camels because they could go a long way without water, food, and rest. They could also carry heavy loads on their backs. It was so important that camels could go a long way without water because there was not a lot of water or food in the deserts where the Egyptians lived.
What did the ancient Egyptians use donkeys for?
Water was poured on the soil to facilitate easy movement of sleds. Donkeys, the classic “beasts of burden,” have always been used for carrying loads, and so they were in ancient Egypt. They were kept in large numbers throughout Egypt in spite of their not very docile character. In Ramesside times the temple of Amen alone had 11 million donkeys on its lands.
Huge pterosaurs may have pole-vaulted to get off the ground
Giraffe-sized pterosaurs could have pole-vaulted with their arms to launch themselves into the sky, scientists say.
Giraffe-sized pterosaurs may have pole-vaulted with their arms to launch themselves, just as vampire bats do, scientists now suggest.
Once airborne, these giant reptiles could have flown vast distances, even crossing continents, they added.
Pterosaurs were prehistoric winged reptiles that dominated the skies during the age of dinosaurs, and went extinct at the same time their brethren did 65 million years ago. The largest pterosaur reached the height of a giraffe, raising controversy as to whether such giant beasts could ever actually fly.
"People had assumed for many years that all pterosaurs could take off and fly, although there were disagreements about the specifics of it," said researcher Mark Witton, a paleontologist at the University of Portsmouth in England. "It's only recently that people started claiming that giant pterosaurs were flightless."
Recent assertions that pterosaurs were flightless were based on assumptions that they would have taken off like birds.
"Most birds take off either by running to pick up speed and jumping into the air before flapping wildly — or, if they're small enough, they may simply launch themselves into the air from a standstill," Witton told LiveScience. "Previous theories suggested that giant pterosaurs were too big and heavy to perform either of these maneuvers and therefore they would have remained on the ground."
Witton's colleague Michael Habib, a biomechanicist at Chatham University in Pittsburgh, published work last year suggesting these creatures had used a pole-vaulting maneuver. The new study involves models detailing exactly how this happened and how it compared with living birds today.
"These creatures were not birds — they were flying reptiles with a distinctly different skeletal structure, wing proportion and muscle mass. They would have achieved flight in a completely different way to birds," Witton said.
Meaty flight muscles
The researchers suggest that with their huge wing muscles, pterosaurs could easily have launched themselves into the air despite their massive size and weight. They would have essentially pole-vaulted over their wings using their leg muscles and pushed off from the ground using their powerful arm muscles.
A pterosaur's flight muscles alone would have weighed about 110 pounds (50 kilograms), accounting for 20 percent of the animal's total mass. As a result, the muscles would have provided tremendous power and lift, according to Witton.
"By using their arms as the main engines for launching instead of their legs, they use the flight muscles, the strongest in their bodies, to take off, and that gives them potential to launch much greater weight into the air," Habib said. "This may explain how pterosaurs became so much larger than any other flying animals known."
Their extraordinarily strong bones also could have helped with pole-vaulting and flight. For instance, the team compared the largest bone in the wings of the biggest living birds — the griffon vulture, mute swan and royal albatross — with that of the giant pterosaur Quetzalcoatlus. The extinct pterosaur's wing bone had more than twice the strength relative to weight of the mute swan's and royal albatross's, and nearly twice that of the griffon vulture's.
"Pterosaurs had incredibly strong skeletons — for their weight, they're probably amongst the strongest ever evolved," Witton said. "And unlike birds, where the wings become relatively weak as they grow in size, those of pterosaurs do the opposite — they become stronger. As pterosaurs became larger, they reinforced their wings and expanded their flight muscles to ensure they could keep flying."
Weight estimates refined
Using fossilized remains of the flying reptiles, the researchers estimated size and weight and calculated bone strength and mechanics, as well as potential flight performance.
"One of the reasons why pterosaur research is so tricky is that there is very little in the way of fossilized remains," Witton said. "We're working with [an] extremely small number of fossil specimens. You could take all the giant pterosaur fossils in the world and fit them on to a coffee table."
These animals might have been a bit smaller and lighter than previously thought, which helps change the premise as to whether they could fly. Researchers had suggested the giant pterosaurs could have been roughly 19.5 feet (6 meters) tall with wingspans of up to 39 feet (12 m), weighing up to 1,200 pounds (544 kg). But Witton and Habib argue that more realistic measurements for a pterosaur would be roughly 16.5 feet (5 m) high with a wingspan of nearly 33 feet (10 m) and a weight of 440 to 550 pounds (200 to 250 kg).
"Weight estimates based on a 12-meter wingspan will be almost twice that based on 10 meters, so an accurate assessment is vital," Witton said. "They're still really big, just not as big as we thought they were."
They concluded that pterosaurs could not only fly, but they could do so extremely well, potentially traveling huge distances and even crossing continents. They probably did not need to flap continuously to remain aloft, but flapped powerfully in short bursts, with their large size enabling them to achieve rapid cruising speeds.
"All the direct data we have on pterosaurs, even the largest, suggests they were capable of flying," Witton said. "And after almost a century in the doldrums, we're starting to see far more progressive research on pterosaurs. It's not quite a revolution but we're certainly going through something of a renaissance."
Witton and Habib detailed their findings online Nov. 15 in the journal PLoS ONE.
Optical Coherence Tomography
Optical coherence tomography, also known as OCT, is an imaging system that uses light waves to produce a high-resolution view of the cross-section of the retina and other structures in the interior of the eye.
Conditions Detected With an OCT
The images can help with the detection and treatment of serious eye conditions such as:
- Macular hole
- Macular swelling
- Optic nerve damage
- Age-related macular degeneration
- Macular pucker
- Diabetic eye disease
- Vitreous hemorrhage
OCT uses technology that is similar to that of a CT scan of internal organs. By measuring light reflected back from the tissue, it can rapidly scan the eye to create an accurate cross-section. Each layer of the retina can be evaluated and measured and compared to normal, healthy images of the retina.
The OCT exam takes about 10 to 20 minutes to perform in your doctor's office, and usually requires dilation of the pupils for the best results. |
Wetlands are crucial to our environment. They form a boundary between land and water, filter out sediment and nutrients, and support a greater concentration of wildlife than any other habitat in New Zealand.
The Government’s Essential Freshwater package aims to stop the ongoing loss of wetlands and protect their value by regulating the types of activities that are allowed in and around wetlands.
If you have a wetland on your property, you now have responsibilities to protect it under the new regulations.
‘Wetland’ is the collective term for the wet margins of streams, rivers, ponds, lakes, estuaries, bogs, swamps and lagoons. Wetlands aren’t always 'wet'. They provide a habitat for wildlife and support an indigenous ecosystem of plants and animals that have adapted to living in wet conditions.
The new Essential Freshwater regulations apply to natural wetlands as defined in the National Policy Statement for Freshwater Management (NPS-FM). Artificially made wetlands, dams and drainage canals are not classed as wetlands under the new regulations.
If an area doesn’t meet the definition of a wetland under the NPS-FM, it may meet the wetland definition under either the Hawke’s Bay Regional Resource Management Plan (RRMP) or Regional Coastal Environment Plan (RCEP). If so, these rules apply to the wetland.
New regulations on wetland management came into effect:
- All stock must be excluded from natural wetlands identified in Council plans.
- All stock must be excluded from wetlands that support threatened species.
- All stock must be excluded from wetlands over 0.05 ha and on low slope land.
Activities that are allowed in or around wetlands are detailed in the National Environmental Standards for Freshwater 2020 (NES-F).
Any activity which disturbs wetlands can only be carried out for certain reasons, such as restoration, clearing debris or scientific research, and may require consent.
There are limited exemptions to these activities, for example, the customary harvest of food or resources undertaken in accordance with tikanga Māori. Any other activity that may be exempt is subject to the Effects Management Hierarchy.
You must alert the Council in writing at least 10 working days before the activity takes place.
Any activity in and around wetlands must comply with the Hawke’s Bay RRMP, the RCEP, and the NES-F.
The amended NPS-FM refers to a "natural wetland" as meaning a wetland that is not:
a) in the coastal marine area; or
b) a deliberately constructed wetland, other than a wetland constructed to offset impacts on, or to restore, an existing or former natural inland wetland; or
c) a wetland that has developed in or around a deliberately constructed water body, since the construction of the water body; or
d) a geothermal wetland; or
e) a wetland that:
(i) is within an area of pasture used for grazing; and
(ii) has vegetation cover comprising more than 50% exotic pasture species (as identified in the National List of Exotic Pasture Species using the Pasture Exclusion Assessment Methodology (see clause 1.8)); unless
(iii) the wetland is a location of a habitat of a threatened species identified under clause 3.8 of this National Policy Statement, in which case the exclusion in (e) does not apply
(a) permanently or intermittently wet areas, shallow water, and land water margins that support a natural ecosystem of plants and animals that are adapted to wet conditions; and
(b) those areas mapped in Schedule 24 (a to d) and commonly known as:
i) Lake Whatuma (previously known as Hatuma);
ii) Atua Road north swamp;
iii) Wanstead Swamp;
iv) Lake Poukawa
See the National Policy Statement for Freshwater management 2020 for more information on defining wetlands under the Essential Freshwater package.
permanently or intermittently wet areas, shallow water, and land water margins that support a natural ecosystem of plants and animals that are adapted to wet conditions, except for:
(a) wet pasture or cropping land;
(b) artificial wetlands specifically designed, installed and maintained for any of the following purposes:
i) wastewater or stormwater treatment;
ii) farm stock water dams, irrigation dams, and flood detention dams;
iii) reservoirs, dams and other areas specifically designed and established for the construction and/or operation of a hydro-electric power scheme;
iv) land drainage canals and drains;
v) reservoirs for fire fighting, domestic or municipal supply;
vi) beautification or recreation purposes.
permanently or intermittently wet areas, shallow water, and land water margins that support a natural ecosystem of plants and animals that are adapted to wet conditions. It does not include wet pasture; artificial wetlands used for wastewater or stormwater treatment; farm dams and detention dams; land drainage canals and drains; reservoirs for firefighting, domestic or municipal water supply; temporary ponded rainfall; or artificial wetlands created for beautification purposes.
Indigenous biodiversity in New Zealand is in decline with around 4000 species currently threatened, or at risk of extinction. In Hawke’s Bay, only 34% of the indigenous ecosystems covering the region before human occupation remain.
One of these ecosystem types, wetlands, plays an important role in keeping our environment healthy. They regulate water flow by storing water and slowly releasing it, they take up nutrients and capture sediment and so are important for water quality, they store carbon and are home to many species of indigenous plants and animals that aren’t found in other systems.
Only 4% of original wetland extent remains in Hawke’s Bay, largely driven by drainage and modification of these habitats. Wetlands are one of the rarest and most threatened ecosystem types in the region.
To halt any further decline in wetland extent, the National Policy Statement for Freshwater Management (NPS-FM 2020) has direction for Regional Councils. This includes ‘the loss in extent of natural wetlands is avoided, their values are protected, and their restoration is promoted’.
As part of this direction, Hawke’s Bay Regional Council is required to ‘identify and map’ all natural inland wetlands in the region that are:
- 0.05 hectares or greater in extent; or
- of a type of wetland that is naturally less than 0.05 hectares and known to contain threatened species.
Hawke’s Bay Regional Council has undertaken wetland mapping throughout the region, and while it is not possible to visit every wetland in the region, Regional Council has taken every care to provide the best, most accurate information. As part of the process, an external expert was engaged to provide a level of rigour over areas identified as wetlands, and associated levels of confidence have been obtained.
The following map shows areas of wetland that are either known to exist (have been visited and ground-truthed) or probable wetlands, those that are at least 90% likely to be a wetland.
Wetland areas will exist outside what has been delineated in this map. Boundaries of identified wetlands are subject to change. The information shown on these maps is compiled from numerous sources, with limited associated ground-truthing. This information is made available in good faith using the best information available to the Council, with the understanding that wetland areas subject to all relevant regulations will exist outside of what is delineated in this map. Its accuracy or completeness is not guaranteed, and it should not be used as a substitute for legal or other professional advice. This map should not be relied upon as the sole basis for making any decision and cannot be substituted for a site-specific investigation by a suitably qualified and experienced practitioner. Hawke’s Bay Regional Council reserves the right to change the content and/or presentation of any of the data at its sole discretion, including this disclaimer and attached notes, and does not accept responsibility or liability for any loss or damage incurred by a user in reliance on the information.
Or was it always at about 5.14 degrees inclination or has the inclination changed over time?
James K's answer to this question got me thinking about this, and I don't mean to call him out, but this part of his answer doesn't seem quite right when he wrote:
Our moon is unique in being close to the plane of the ecliptic, and not in the plane of the equator, which suggests its formation was not like that of other moons in the solar system.
For starters, I imagine the physics of the initial orbit of an impact-formed moon would be somewhat complicated, but it seems likely, given an angled impact like the one that's believed to have formed our moon (and hitting at an angle is statistically more common than a direct impact anyway), that the collision would have strongly spun up the proto-Earth and thrown debris into orbit around it.
So, in this scenario, the planet gets significant angular momentum and this angular momentum should dictate the planet's new equator and axial tilt relative to it's orbit. I would think the moon should form roughly along that same equatorial plane, but I'm just guessing. Perhaps it could form several degrees off - not sure.
The 2nd point is this: for a rapidly rotating planet with a large equatorial bulge and a close moon, if the Moon formed off the equator, would its orbit migrate over the equator where the gravitation was greatest, and would that happen relatively quickly or not at all?
That's basically the question. Was the Moon always at roughly 6 degrees off the Earth's equator or has it only moved off an orbit over Earth's equator more recently, perhaps due to the gravitational effect of the Sun?
Or are there other factors? Mars' axis is thought to have changed rather significantly due to Jupiter's gravitational effects perhaps 100,000 years ago, yet its moons orbit over Mars' equator, which suggests that Mars' equatorial bulge dragged the moons with it - or is my thinking way off on that?
My thinking is that a planet's equatorial bulge would drive moons towards a 0-degree inclination over its equator, and our Moon is different because of proximity to the Sun, which also has a strong gravitational effect. The 5.14 degrees of inclination is a balance between the pull of Earth's equatorial bulge and solar gravity.
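For reference, this balance is often expressed through the "Laplace radius": well inside it a satellite's orbit precesses about the planet's equator, and well outside it, about the planet's orbital plane (the ecliptic, for Earth). The sketch below is a simplified, order-of-magnitude version of that idea using standard values for Earth; ignoring order-unity factors and eccentricity terms is my own shortcut, so treat the output as rough.

```python
# Back-of-envelope Laplace radius for Earth: the distance where the influence of
# the equatorial bulge (J2) and the Sun's tidal pull roughly balance.
J2      = 1.0826e-3   # Earth's oblateness coefficient
R_p     = 6.378e6     # Earth's equatorial radius, m
a_p     = 1.496e11    # Earth-Sun distance, m
M_ratio = 3.003e-6    # Earth mass / Sun mass

r_L = (J2 * R_p**2 * a_p**3 * M_ratio) ** 0.2
print(f"Laplace radius ~ {r_L / R_p:.1f} Earth radii")     # ~8 with this simplified form

moon_a = 3.844e8      # Moon's semi-major axis, m
print(f"Moon orbits at ~ {moon_a / R_p:.0f} Earth radii")  # ~60, far outside r_L
```

Since the Moon sits far beyond this radius, its orbit is governed mainly by the Sun and precesses about the ecliptic rather than Earth's equator, which is at least consistent with the intuition in the question.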
What happens when a rightward-moving car skids to a stop with locked wheels?
For instance, if a car is moving rightward and skidding to a stop (with wheels locked), then there is no rightward force upon the car. The only horizontal force is a leftward force of friction which serves to slow the car down.
What is free-body diagram class 11?
A free-body diagram is a graphic, dematerialized, symbolic representation of the body. In a free-body diagram, the size of the arrow denotes the magnitude of the force, while the direction of the arrow denotes the direction in which the force acts.
What is the meaning of FBD with suitable sketch?
Free Body Diagrams (FBD) are useful aids for representing the relative magnitude and direction of all forces acting upon an object in a given situation. The first step in analysing and describing most physical phenomena involves the careful drawing of a free body diagram.
How does a free-body diagram represent the various forces acting upon an object?
A free-body diagram is a diagram showing the forces acting on an object. The object is represented by a dot, and the forces are drawn as arrows pointing away from the dot. Free-body diagrams are sometimes called force diagrams.
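To make that concrete, here is a minimal sketch that treats the forces on a free-body diagram as vectors and sums them to get the net force and acceleration. The mass and force values are made-up illustrations, not taken from the text above.

```python
# The forces drawn on a free-body diagram are just vectors; the net force is
# their sum, and Newton's second law (a = F_net / m) gives the acceleration.
import numpy as np

mass = 1200.0  # kg, hypothetical car
forces = {
    "gravity":  np.array([0.0, -mass * 9.81]),  # N, straight down
    "normal":   np.array([0.0,  mass * 9.81]),  # N, road pushing straight up
    "friction": np.array([-8000.0, 0.0]),       # N, leftward on a rightward-moving car
}

net = sum(forces.values())
accel = net / mass
print("Net force (N):", net)           # [-8000.     0.]
print("Acceleration (m/s^2):", accel)  # [-6.67  0.]  -> the car slows down
```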
What is skidding while braking?
A rear-wheel skid occurs when you apply the brakes so hard that one or more wheels lock or if you press hard on the accelerator and spin the drive wheels. Skids can also occur when you are traveling too fast on a curve or encounter a slippery surface.
What if the rear of your vehicle skids to the right?
In rear-wheel driving automobiles, you should stay off the brakes and gradually ease off the accelerator. Turn your wheels in the direction the rear end of your vehicle is skidding. If the rear end of the vehicle skids right, steer right. If the rear end of the vehicle skids left, steer left.
When a car skids to a stop with wheels locked, which friction force is at work, and how do you know?
When stopping, if the wheels lock up, kinetic friction is what slows the car down. However, since static friction is larger, what anti-lock brakes do is pump the brakes so that the tires experience static rather than kinetic friction, which in turn slows the car faster and over a shorter distance.
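As a rough illustration of why that matters for stopping distance, the short calculation below uses the constant-deceleration result d = v² / (2μg). The friction coefficients are typical dry-asphalt values assumed for the sketch, not figures from the text.

```python
# Stopping distance d = v^2 / (2 * mu * g) under constant friction deceleration.
g = 9.81          # m/s^2
v = 27.8          # m/s, about 100 km/h
mu_static  = 0.9  # tire gripping the road (roughly the ABS case) - assumed value
mu_kinetic = 0.7  # tire sliding on the road (locked wheels) - assumed value

def stopping_distance(speed, mu):
    return speed**2 / (2 * mu * g)

print(f"Locked wheels (kinetic friction): {stopping_distance(v, mu_kinetic):.1f} m")  # ~56 m
print(f"Near the static limit (ABS-like): {stopping_distance(v, mu_static):.1f} m")   # ~44 m
```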
What direction should you turn your wheels when your vehicle goes into a skid?
Steer in the direction the rear of the vehicle is sliding - in other words, turn the wheels toward the direction you want the front of the car to go, as described above.
When a moving car is brought to a stop with the brakes, what stops it?
Friction braking is the most commonly used braking method in modern vehicles. It involves the conversion of kinetic energy to thermal energy by applying friction to the moving parts of a system. The friction force resists motion and in turn generates heat, eventually bringing the velocity to zero.
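To put a number on that energy conversion, here is a tiny example with assumed figures for mass and speed:

```python
# Kinetic energy the brakes must convert to heat to stop the car.
mass = 1500.0  # kg, assumed vehicle mass
v = 27.8       # m/s, about 100 km/h
kinetic_energy = 0.5 * mass * v**2
print(f"Energy dissipated as heat: {kinetic_energy / 1000:.0f} kJ")  # ~580 kJ
```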
What action should you take if your car suddenly goes into a skid?
Keep your foot off the brake and accelerator so the car can slow down. Once speed reduces, gently tap the brakes. It’s important not to slam the brakes or suddenly turn the wheel. Slowly steer the car in the direction you want to travel, but do so with very light pressure. |
Does a life creating maps and working with computers help you feel alive? Following a career as a Computer Cartographer could be just what you are looking for!
Computer Cartographers integrate and create maps with global positioning system applications.
They use specialised software to plot data by retrieving information from a database to form a map showing information such as geographic concentration, demographic information, weather and climate forecasts, geology or earthquake faults.
The data they use will have co-ordinates that are obtained from surveying instruments. They can work for manufacturers of GPS systems to create stored maps and databases for uses such as advising pilots, truck drivers or motorists what their current location is.
Cartographers also create original maps and mapping databases from data gathered by surveyors, aerial photographers and satellites. Photogrammetry is used to measure areas that cannot be accessed physically.
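As a purely illustrative sketch of that workflow, the snippet below builds a tiny, hypothetical database of surveyed points and plots them as a simple point map. The table, column names, coordinates and choice of libraries (sqlite3 and matplotlib) are assumptions for the example, not any particular cartography package.

```python
# Minimal sketch: pull surveyed coordinates from a database and plot them.
import sqlite3
import matplotlib.pyplot as plt

# Hypothetical in-memory database so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey_points (label TEXT, longitude REAL, latitude REAL)")
conn.executemany(
    "INSERT INTO survey_points VALUES (?, ?, ?)",
    [("Benchmark A", 174.77, -41.29),
     ("Benchmark B", 174.81, -41.31),
     ("Benchmark C", 174.74, -41.33)],
)

rows = conn.execute("SELECT label, longitude, latitude FROM survey_points").fetchall()
conn.close()

lons = [r[1] for r in rows]
lats = [r[2] for r in rows]

plt.scatter(lons, lats, s=15)
for label, lon, lat in rows:
    plt.annotate(label, (lon, lat), fontsize=7)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Surveyed points")
plt.savefig("survey_map.png", dpi=150)
```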
One of the aspects of our study of the universe that fascinates me is the hunt for dark matter. That elusive material interacts with very little, which makes it difficult, but not impossible, to detect. Gravitational lensing is one phenomenon that points to its existence; indeed, it allows us to estimate how much dark matter there is in galaxy clusters. A new paper suggests that Cassini’s observations of Jupiter in 2000 may let us detect it using planets too.
Dark matter is as its name suggests, mysterious and elusive. It is believed to account for about 27% of the universe’s mass and energy. However unlike ordinary matter – of the like that makes up you and me; the stars and planets, dark matter doesn’t emit, absorb or reflect light making it invisible and difficult to detect. Its very existence is only inferred from the effect its gravity has on visible matter and the large scale structure of the universe.
The foundations for an interesting twist in the search for dark matter were laid in 1997 with the launch of the Cassini spacecraft from Cape Canaveral in the US. A seven year journey began that would take the probe from Earth to Saturn utilising gravitational slingshots from Venus, Earth and Jupiter. On board was a plethora of instruments to record data from radio waves through to extreme ultraviolet. En-route to Saturn, Cassini would be used to observe the planets using multiple wavelengths.
Of particular interest to the mission was using the Visible and Infrared Mapping Spectrometer (VIMS) to measure levels of a hydrogen ion known as the trihydrogen cation. It is a common ion found across the universe and is produced when molecular hydrogen interacts with cosmic rays, extreme ultraviolet radiation, planetary lightning, or electrons accelerated in planetary magnetic fields.
The team explore how dark matter can also produce trihydrogen cations in the atmospheres of planets. Any dark matter that is captured by a planetary atmosphere – in particular the ionosphere – and subsequently annihilates can produce detectable ionising radiation.
Using data from Cassini’s VIMS instrument, the team searched for dark matter ionisation in the ionosphere of Jupiter. Thanks to its relatively cool core, Jupiter was identified as the most efficient dark matter captor in the Solar System, able to retain dark matter particles. The challenge was to pick out the signal from the background noise of other radiation sources, so the team restricted themselves to data taken within three hours either side of Jovian midnight, when solar extreme ultraviolet irradiation is at a minimum. They also focussed on lower latitudes, keeping away from the strong magnetic fields around the polar regions.
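As a rough sketch of that kind of selection (not the authors' actual pipeline), one might filter observation records down to the night-side, low-latitude window like this; the record fields and the 30-degree latitude cut-off are assumptions, not values from the Cassini VIMS archive:

```python
# Illustrative sketch only: keep observations within 3 hours of local (Jovian)
# midnight and away from the poles, to minimise solar EUV and auroral contamination.
# Field names ("local_time_h", "latitude_deg", "h3plus_intensity") are hypothetical.

def night_side_low_latitude(records, max_hours_from_midnight=3.0, max_abs_latitude=30.0):
    """Select records near local midnight (local time in hours, 0-24) at low latitude."""
    selected = []
    for rec in records:
        # distance from midnight, wrapping around the 24-hour clock
        dt = min(rec["local_time_h"], 24.0 - rec["local_time_h"])
        if dt <= max_hours_from_midnight and abs(rec["latitude_deg"]) <= max_abs_latitude:
            selected.append(rec)
    return selected

sample = [
    {"local_time_h": 23.5, "latitude_deg": 12.0, "h3plus_intensity": 0.8},  # kept
    {"local_time_h": 11.0, "latitude_deg": 5.0, "h3plus_intensity": 1.9},   # day side: rejected
    {"local_time_h": 1.2, "latitude_deg": 65.0, "h3plus_intensity": 2.4},   # polar: rejected
]
print(night_side_low_latitude(sample))  # keeps only the first record
```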
Searching for dark matter ionisation in the Jovian atmosphere offers a whole new method for understanding this strange and mysterious cousin of normal matter. And it is not just planets in our Solar System: exoplanets are another possible target, especially those in dark-matter-rich regions of the Galaxy. |
Where Has All the Soil Gone?
Hello, plant parents! It’s no secret that we love our green (and pink, and purple, and yellow, and really the entire rainbow) plant friends and want to make sure they have a happy and healthy place to call home. Sun and water are not the only things plants need to thrive. They also need nutrients, which they get from soil.
The Dirt-y Truth
As plant parents, we know that soil is the foundation for healthy and flourishing plants. It provides essential nutrients and a stable environment for our leafy friends to grow. However, have you ever wondered why all the soil seems to disappear and your plants need to be topped off with new soil? Where has it all gone?
In order to answer that question, we need to understand a little about soil and its importance to plant health. Soil is a mixture of organic and inorganic materials, including a variety of minerals, water, air, bacteria and fungi. It’s the primary medium that plants rely on for strength, water, and food.
Soil nutrient loss is a gradual process that occurs when essential minerals are removed from the soil. This happens due to natural processes like erosion or leaching, or through human activities like monocropping or continuous cropping. The problem with nutrient loss is that it results in nutrient-deficient soil, which hurts plant growth and leads to soil degradation.
How Does Soil Lose Its Nutrients?
By understanding how soil loses its nutrients, we can take steps to prevent it and maintain healthy soil for our plant babies. So, what are some of the leading causes of nutrient depletion in soil?
- Leaching – The Culprit of Nutrient Loss: Leaching is a natural process where nutrients in the soil are carried away by water. This happens when water moves through the soil and takes nutrients with it. Rain, irrigation, and other types of water application techniques all contribute to the leaching process. Although it’s a natural process, over-irrigation can exacerbate the problem. When too much water is applied, the excess water will leach out the nutrients that plants need, leaving the soil nutrient depleted.
- Soil Erosion – A Slow but Steady Killer: Soil erosion is the natural process of soil being moved or carried away by wind or water. While this can happen naturally, it can also be accelerated by farming practices like plowing and tilling. When topsoil is removed, the nutrients that were once present in that soil go with it. Soil erosion can be a slow but steady killer of soil fertility. And it can take years or even decades for the effects to become apparent. But it does happen, and plant parents need to be aware of it.
- Monocropping – The Practice of Growing the Same Crop: Monocropping is the practice of growing the same crop in the same field year after year. This depletes the soil of specific nutrients, such as nitrogen, and increases the likelihood of pests and diseases. When the same crop is grown repeatedly in the same field, it uses up the nutrients it needs and eventually depletes the soil of them entirely. This can also cause an imbalance in the soil microbiome, making it more susceptible to pests and diseases.
- Continuous Cropping – Give the Soil a Break: Continuous cropping is the practice of growing crops in the same field without giving the soil a break or rotating crops. This can lead to nutrient depletion, soil compaction, and increased pest and disease pressure. When soil is continuously cropped, it doesn’t have a chance to recover from the previous crop’s nutrient uptake. Soil compaction can occur when heavy machinery is used repeatedly on the same soil, making it more difficult for water and nutrients to penetrate the soil.
- Change in pH – The Soil’s Acidity and Alkalinity Levels: A change in soil pH can affect the availability of essential nutrients for plants. Soil with a high pH can limit the availability of nutrients like iron, while soil with a low pH can limit the availability of nutrients like calcium. When the soil pH changes, it can affect the soil’s ability to hold onto nutrients, making it difficult for plants to access the nutrients they need.
- Burning of Crops – Releasing Nutrients into the Atmosphere: Burning crop residues can release essential nutrients into the atmosphere, reducing the amount of nutrients available in the soil. While burning may be an effective way for many people to manage crop residues, it’s important to be mindful of the nutrients that are being lost. Burning can release nitrogen, sulfur, and other important nutrients into the atmosphere, making them unavailable to the plants that need them.
As you can see, there are several ways that soil can lose its nutrients. Leaching, soil erosion, monocropping, continuous cropping, changes in soil pH, and burning of crops can all contribute to nutrient loss. As plant parents, it’s important to take steps to prevent these causes and maintain healthy soil for our plant kids.
Ways to Prevent Nutrient Loss in Soil
Soil nutrient loss has a significant impact on plant growth and health. Fortunately, there are several ways to prevent nutrient loss in soil and ensure that your plants have the nutrients they need to thrive.
- Use appropriate fertilizers in adequate amounts: One of the easiest ways to prevent nutrient loss in soil is to add nutrients back into the soil using fertilizers. However, it’s important to use appropriate fertilizers in the right amounts. Using too much fertilizer can actually have a negative impact on soil health and plant growth. Over-fertilization can alter the pH of the soil, which can cause further nutrient loss. One perfect choice for most indoor plants is Dyna-Gro Foliage Pro Plant Nutrition from PLANTZ. This complete formula has all six essential macronutrients and 10 micronutrients for optimum plant growth in any medium. Finally, always read the labels on fertilizers and follow the recommended application rates.
- Apply fertilizers at the right time: In addition to using the right amount and proper type of fertilizer, it’s important to apply fertilizers at the right time. Applying fertilizers when heavy rains are expected can lead to leaching of the nutrients, which defeats the purpose of adding them to the soil. It’s best to apply fertilizers when the weather is dry or when only light rain is expected. Plus, heavy rains will wash fertilizers away into ponds, lakes, and streams, which can wreak havoc on an ecosystem.
- Make wetlands or filter beds to recover nutrients: Wetlands or filter beds can be used to recover nutrients from runoff or drainage water. These areas are designed to capture and filter water, allowing nutrients to settle at the bottom where they can be reused. Wetlands and filter beds can be particularly effective in agricultural areas where large amounts of water are used for irrigation.
- Apply fertilizers according to the needs of the soil: Different types of plants require different types of nutrients. And soil conditions can vary from one location to another. To ensure that your plants have the nutrients they need, it’s important to apply fertilizers according to the needs of the soil. This means getting your soil tested for pH and nutrient levels so that you can choose the right fertilizer for your plants.
- Compost: Composting is a great way to recycle organic waste and add nutrients back into the soil. Composting involves collecting organic materials like leaves, grass clippings, and food scraps, and allowing them to decompose over time. Once the compost is ready, it can be added to the soil to improve its nutrient content.
Preventing nutrient loss in soil is essential for ensuring healthy plant growth. By using appropriate fertilizers, applying them at the right time, creating wetlands or filter beds, applying fertilizers according to the needs of the soil, and composting, you can help to maintain the health of your soil and ensure that your plants have the nutrients they need to thrive.
Reviving Old Soil
You may find yourself faced with the challenge of reviving old soil for your plants. Over time, soil can become depleted of nutrients, compacted, and lacking in the organic matter that plants need to thrive. Fortunately, there are several ways to renew old soil and replenish the nutrients your plants need to grow strong and healthy.
- Blend with fresh soil: One of the simplest ways to revive old soil is to blend it with fresh soil. Adding fresh soil to your old soil will help to increase its nutrient content, as well as improve its structure and water-holding capacity. When blending old and new soil, it’s important to mix them thoroughly to ensure that the nutrients are evenly distributed. A good rule of thumb is to use a 1:1 ratio of old soil to new soil, although this can vary depending on the condition of your old soil.
- Mix in more nutrients: Another way to revive old soil is to mix in more nutrients. This can be done by adding organic matter like compost or aged manure to your soil. Organic matter is a rich source of nutrients like nitrogen, phosphorus, and potassium, as well as other micronutrients that plants need to grow. To add organic matter to your soil, simply mix it into the top layer of soil with a garden fork or tiller. Aim to add about 1-2 inches of organic matter to the top layer of soil, and then mix it thoroughly.
- Compost the soil: Composting your old soil is another effective way to renew it. Composting involves breaking down organic matter like leaves, grass clippings, and kitchen scraps into a rich, nutrient-dense soil amendment. To compost your old soil, start by adding organic matter like leaves, grass clippings, and kitchen scraps to a compost bin or pile. Add a layer of old soil on top of the organic matter. And then repeat this process until the bin or pile is full. Over time, the organic matter will break down into compost, which can then be mixed back into your soil. Composting your old soil is a great way to improve its structure and nutrient content, while also reducing waste.
- Add water: Finally, adding water is a simple yet effective way to revive old soil. Soil that is dry and compacted can make it difficult for plant roots to access the nutrients they need to grow. By adding water, you can help to loosen the soil and make it easier for plants to absorb nutrients. To add water to your soil, simply use a watering can or hose to saturate the top layer of soil. Aim to water deeply and infrequently, rather than giving your plants frequent shallow waterings. This will encourage plant roots to grow deeper into the soil, where they can access more nutrients and water.
Reviving old soil is an important part of plant care, and there are several ways to do it effectively. By blending old soil with fresh soil, adding organic matter, composting, and adding water, you can improve the structure and nutrient content of your soil and give your plants the best chance to grow strong and healthy. Remember to always test your soil’s pH and nutrient levels before making any changes, and consult with the experts at PLANTZ if you’re not sure where to start. Happy gardening!
Soil is a crucial component of a healthy plant that requires proper management to maintain its fertility. Nutrient loss in soil can occur due to various factors, such as leaching, soil erosion, monocropping, continuous cropping, changes in pH, and burning of crops. It is essential to prevent nutrient loss to maintain soil health and ensure that plants have access to the essential nutrients they need to grow.
One of the best ways to prevent nutrient loss is to use appropriate fertilizers in adequate amounts. It is essential to apply fertilizers according to the needs of the soil and avoid overusing them, which can affect soil pH and lead to further nutrient loss. Fertilizers should also be applied when heavy rains are not expected, to prevent leaching.
Another way to prevent nutrient loss is by making wetlands or filter beds to recover nutrients from runoff or drainage water. These structures can help filter out pollutants and prevent nutrient loss, thus maintaining soil health.
Getting your soil tested for pH and nutrient levels is also important to determine the right type and amount of fertilizer to use. Soil testing can help you identify any nutrient deficiencies or imbalances, allowing you to take corrective action to improve soil health.
Reviving old soil can be a challenge, but it is not impossible. Adding fresh soil can help improve soil structure and provide essential nutrients. Mixing in more nutrients, such as compost or organic matter, can also help replenish nutrients and improve soil health. Composting is an excellent way to recycle organic waste and provide nutrient-rich material to the soil. Adding water can also help improve soil moisture and support plant growth.
If you’re a plant parent or are aspiring to become one, taking care of your plants’ soil health is crucial for their growth and overall well-being. Whether you’re dealing with nutrient-depleted soil or trying to prevent future nutrient loss, there are steps you can take to ensure your plants thrive. Start by implementing the tips and techniques discussed in this article to revive and maintain healthy soil for your plants. |
Over in Galapagos, scientists have identified nine giant tortoises in captivity that appear to be descended from the long-extinct Floreana tortoise, a variety assumed to have disappeared more than 150 years ago.
Charles Darwin visited Floreana in 1835, though the island’s tortoises were already so thin on the ground that he didn’t run into any. If not already extinct, this species is thought to have gone that way just a few years later. But a few dozen specimens made it into museums, and in 2008 geneticists at Yale University (and elsewhere) recovered ancient DNA from the remains of some 25 animals now housed at the Museum of Comparative Zoology at Harvard and the American Museum of Natural History in New York. This allowed them, in a paper published in Proceedings of the National Academy of Sciences, to describe the genetic makeup of this species, making it possible to begin the hunt for Floreana-like genes amongst living animals.
In that study, they identified several tortoises of Floreana ancestry on the Galapagos island of Isabela (though they only have the blood samples from these individuals and not the animals themselves). Now, in a paper out today in PLoS One, the same core team has combed through the DNA records of 156 captive tortoises of undocumented ancestry held at the Charles Darwin Research Station on the central island of Santa Cruz. Amongst them, they found nine individuals – six females and three males – with a good smattering of Floreana genes.
It’s almost a ready-made founder population. There are some snags, however. Not only would it be an expensive initiative, but it would be at least a decade before such a small captive operation began to churn out baby tortoises, by which time there would already be tortoises (of a necessarily different species) on Floreana, introduced to the island in the next few years as part of the all-embracing Project Floreana.
Whatever happens, it’s another great example of how genetics can suggest serious conservation actions of which we would otherwise be completely ignorant.
My mind is on Galapagos matters just now because I am putting together the Spring/Summer issue of Galapagos News, the biannual charitable magazine for the Friends of Galapagos Organisations like the UK’s Galapagos Conservation Trust and the US-based Galapagos Conservancy. |
Fairy tales have always been used to give lessons about life. The story of Jack and the Bean Stalk is a good lesson about the importance of knowing about money and banks. While you might think that you know the story of Jack, go to Jack and the Beanstalk.
- List the roles and functions of money.
- Apply the definition of money to various alternatives to money.
- Describe the role of banks.
Jack and the Bean Stalk: This site provides the story of "Jack and the Bean Stalk."
The story of Jack asks the question, "What is money?" An old saying in economics is, "Money is what money does." In other words, we know gold coins are money and beans are not, but why?
For something to be accepted as money it must perform three functions. It must be:
- A Medium of Exchange
- A Unit of Account
- A Store of Value
What is meant by these three functions?
First, for money to be a medium of exchange everyone has to accept that "it" is money. A gold coin is money because everyone will take it in trade for goods and services. In some ancient cultures, shells were used as money. Do you think Pokemon cards are money? (Can you pay for your lunch with Pokemon cards?)
Next, money must be a unit of account. This means that it can be broken up into parts and that other goods can be priced in terms of money. That is why we have pennies, dimes, quarters and dollars. If money is not a unit of account, then it becomes hard to trade fairly. For instance, how many Pokemon cards equal a cow? Money allows a shop to price its goods, so you pay $.65 for a quart of milk and $2.00 for a dozen donuts. You can also check what prices other stores charge and then buy at the store with the best price.
Finally, money must be a store of value. If we are to hold money, we must know that it will be worth something tomorrow. Pokemon cards may be a good value today, but how much will they be worth when the Pokemon craze calms down? Apples are another good that does not make good money. If you held onto an apple for two years, would anyone want it? This is why most money is made out of metal or paper.
Answer the following questions after reading the story of Jack and the Beanstalk above.
What did Jack’s mother ask Jack to trade in exchange for the cow?
[Jack's mother asked Jack to trade gold coins in exchange for the cow so that they could buy other goods like food.]
Why did Jack’s mother not like the trade which Jack made?
[Jack's mother did not like the trade because beans can not be traded for goods they would need in the future: beans do not meet the three functions of money.]
Why are beans not money, given the functions of money above?
[1. Beans are not a medium of exchange because most stores will not take beans in trade for other goods. 2. They might serve as a unit of account since they are easily divisible, but there is no standard for how many beans equal a cow. 3. They are not a store of value since the beans will rot.]
What is the opportunity cost of not having money to trade with?
[There are a number of opportunity costs. The most important is the time you will lose finding someone who both wants what you have to trade and has something you want in return.]
Functions of the Bank
A bank has a number of functions as well. One is to protect your money from being stolen. The first banks were run by blacksmiths and goldsmiths. Why do you think that they made good bankers? [Since they were usually the strongest people in a village, they would hide the money or gold under their anvils so no one could steal it.]
A second function is to lend money to others and receive interest in return. Banks make lending and borrowing money easier, just like money makes trading easier. The bank works as a clearinghouse. Those who want to lend money put deposits into their savings accounts, while those who want to borrow go to the bank and take out loans. The bank then makes sure that the loans are paid back and that everyone pays the right amount of interest.
Jack and the Bean Stalk can also be considered as a story of a bank.
Answer the following questions:
How is planting beans in the ground similar to putting money in the bank?
[Planting beans is like putting money in the bank: after time, you end up with more beans than you started with.]
What is the justification given for Jack to "steal" from the Giant in this story: (https://americanliterature.com/childrens-stories/jack-and-the-beanstalk ) ?
[The Giant in one of the stories has killed Jack’s father and taken his money. If Jack’s father had put his money in the bank then the Giant could not have stolen all that Jack had.]
If Jack’s father had put his money in the bank how would that have changed the story?
[It would have changed the story because Jack’s mother would have had money to live on and they would not have had to sell their cow for the beans.]
How is the goose that lays the golden eggs like a bank?
[The goose lays eggs at regular intervals, just like interest payments. The purchase of the goose is like the initial money put into a bank, while the eggs represent the interest.]
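To make the analogy concrete, here is a tiny worked example (the $100 deposit and 5% rate are made-up figures, not from the lesson) showing how a deposit pays out regular interest, just as the goose keeps laying golden eggs:

```python
# Illustrative only: a deposit ("the goose") earning simple annual interest
# payments ("the golden eggs"). The $100 principal and 5% rate are made-up numbers.
principal = 100.00   # the price of the goose / the initial deposit
annual_rate = 0.05   # 5% interest per year

for year in range(1, 6):
    egg = principal * annual_rate  # each year's "golden egg" (interest payment)
    print(f"Year {year}: interest payment of ${egg:.2f}")
# Five years of "eggs" total $25.00, while the original $100 deposit stays in the bank.
```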
Using the answers to the questions above write a short paper on how the story of Jack and the Bean Stalk is related to money and banks. |