Engineers at the University of Washington have created a new communication system that takes advantage of wireless signals to power electronic devices without relying on batteries or wires.
Known as ambient backscatter, the technique enables battery-less devices to exchange information with each other by repurposing existing Wi-Fi, mobile and television signals to act as both a power source and communication medium. The team demonstrates in a video how the technology could be used to facilitate a money transfer through an interaction between two circuit boards equipped with a single antenna each and lacking any batteries.
“Our devices form a network out of thin air,” study co-author Joshua Smith, an associate professor of computer science and engineering and of electrical engineering, said. “You can reflect these signals slightly to create a Morse code of communication between battery-free devices.”
As part of the research project, the circuit boards were tested at different points around the Seattle, Wash. area, in locations ranging from less than half a mile away from a TV tower to about 6.5 miles away. A variety of different locations were used, including the inside of an apartment building, a street corner and the top level of a parking garage. The researchers found that the devices were able to communicate with each other at all locations tested, including the ones farthest from the TV tower. According to test results released by the team, the receiving devices picked up a signal from their transmitting counterparts at a rate of 1 kilobit per second when placed up to 2.5 feet apart outdoors and 1.5 feet apart indoors—a rate the team says is high enough to send information such as sensor readings, text messages and contact information.
“[This technology] is hopefully going to have applications in a number of areas, including wearable computing, smart homes and self-sustaining sensor networks,” lead researcher and study co-author Shyam Gollakota, a University of Washington assistant professor of computer science and engineering, said. Combined with smart sensors placed permanently inside a bridge or building, the technology could also be used to monitor structural integrity and send alerts if a problem is detected. The technology could also be incorporated into devices that rely on batteries, such as smartphones, so the device can still send and receive text messages—even when the battery dies—using power from a Wi-Fi or TV signal.
The results of the study were published at the Association for Computing Machinery’s Special Interest Group on Data Communication 2013 conference, which took place earlier this month in Hong Kong. Gollakota, Smith and their colleagues received the conference’s best-paper award for their research. The team plans to continue with further work aimed at increasing the capacity and range of their ambient backscatter communication network.
Dr. Nasem Dunlop
Treehouse Pediatric Dentistry // Lake Forest, CA
February is Children’s Dental Health Month, and we’re excited to kick things off with some tips on keeping your family’s smiles healthy. Creating good dental habits for your children should start as soon as they wake up.
After breakfast, you should brush your teeth with your kiddos. When children are learning good habits, modeling the behavior can be beneficial. Let them watch you, and then work with them on their skills. Start with a soft-bristle toothbrush and fluoride toothpaste. If your child can spit their toothpaste in the sink, use a pea-sized amount of toothpaste. If you find they’re swallowing their toothpaste, use a very small amount that is about the size of a grain of rice.
Flossing is important because it removes plaque and food that is between teeth. Brushing can only go so far, and flossing does the rest of the job. It’s not only good for your mouth, but helps prevent bad breath, which makes morning cuddles with your kiddos much more enjoyable. Even if your kids’ teeth aren’t touching, you can still work on forming the habit from a young age. Because gums are sensitive, parents should oversee flossing until age 6 or 7 when kids have better control over fine motor skills.
In addition to brushing and flossing twice a day, you can also take extra steps to protect your kids’ teeth. The molar teeth are most susceptible to cavities for two reasons. First, their chewing surface has more pits and fissures than the rest of the teeth. This can make it more difficult for toothbrush bristles to clean the chewing surface. Second, most of the chewing is done with your molars, increasing the chances of food and bacteria getting stuck and causing decay.
To protect your children’s molars, we may recommend an application of sealant when they come in; usually around 6 years old and 12 years old. A report published in July 2017 by the Cochrane Collaboration, a group that studies and analyzes health information, found that the prevalence of cavities was reduced by 51% in children whose teeth were treated with sealant.
We also know that healthy teeth and gums come not only from brushing, flossing, and visiting Treehouse Pediatric Dentistry, but from the inside! What you put on your family’s dinner table has a big impact on their dental health. Be sure to include colorful fruit and vegetables in your family meals.
Lastly, make sure to visit Dr. Nasem twice a year minimum for check ups and a cleaning to ensure no sugar bugs are sneaking up on you!
Love stories that transcend time have a unique allure, captivating audiences across generations. These tales of love and devotion can arouse intense feelings and profoundly affect our hearts. Defined by their capacity to resonate with audiences across history, they examine the depths of human emotions, cultural backgrounds, and the triumphs and tragedies of love. In this post, we’ll set out on a journey through history to examine some of the most enduring love stories that have stood the test of time.
Ancient Love Stories
Pyramus and Thisbe
In the realm of ancient love stories, the tale of Pyramus and Thisbe holds a prominent place. Set in the city of Babylon, this tragic narrative showcases the power of love against insurmountable odds. Pyramus and Thisbe, two young lovers from neighboring families, were forbidden to marry due to a longstanding feud. Communicating through a crack in the wall that separated their houses, their love blossomed in secret.
Laila and Majnu
Moving to the realm of ancient Islamic literature, the story of Laila and Majnu stands as a testament to intense passion and unwavering devotion. Set in the deserts of Arabia, this tale narrates the love between Qais, nicknamed Majnu (the madman), and Laila. Despite societal constraints and family disapproval, their love burned brightly.
Classic Love Stories
Romeo and Juliet
No exploration of timeless love stories would be complete without mentioning William Shakespeare’s masterpiece, Romeo and Juliet. This sad story, which is set in Verona, Italy, centres on Romeo Montague and Juliet Capulet’s misguided love affair. Their romance blossoms against the backdrop of a savage family conflict, setting off a chain of events that ends with their tragic demise.
This iconic play explores the themes of youthful passion, impulsive decisions, and the consequences of hatred. Despite being written centuries ago, Romeo and Juliet continues to captivate audiences with its poetic language, poignant themes, and cautionary tale about the destructive nature of prejudice.
Jane Eyre and Mr. Rochester
In the realm of classic literature, the love story between Jane Eyre and Mr. Rochester stands as a beacon of hope and resilience. Written by Charlotte Brontë, this gothic romance unfolds in the gloomy halls of Thornfield Hall. Jane, a plain and orphaned governess, falls in love with her enigmatic employer, Mr. Rochester. Despite the societal barriers and secrets that threaten to keep them apart, their love endures.
This tale of love, personal growth, and the triumph of individuality resonates with readers, highlighting the power of love to overcome adversity and challenge societal norms.
Modern Love Stories
Allie and Noah (The Notebook)
Transitioning into the realm of modern love stories, The Notebook by Nicholas Sparks takes center stage. Set in the 1940s, this contemporary romance novel explores the enduring love between Allie Nelson and Noah Calhoun.
Despite being from different social backgrounds, their love transcends societal expectations and endures through time. The story follows their journey through young love, separation, and the power of memories.
The emotional impact of The Notebook on readers is undeniable, leaving an indelible imprint on their hearts and reminding us of the profound connections we form in our lives.
Carl and Ellie (Up)
Departing from the traditional narrative format, the animated film Up offers a unique and heartwarming love story between Carl and Ellie Fredricksen. In this unconventional tale, love is depicted as a lifelong companionship built on shared dreams and unwavering support.
The story takes audiences on a journey from the couple’s childhood friendship to their adventures as a married couple. Despite the absence of traditional romantic gestures, their love shines through, captivating the hearts of viewers of all ages and reminding us of the beauty of true companionship.
Timeless Love Stories in Film
Jack and Rose (Titanic)
Among the iconic love stories in film history, the tale of Jack Dawson and Rose DeWitt Bukater in Titanic holds a special place. Set aboard the ill-fated RMS Titanic, this epic romance captures the imagination of audiences worldwide.
The love that blossoms between the spirited Rose, a young socialite, and the free-spirited artist Jack is a poignant reminder of the power of love in the face of impending tragedy. As the ship sinks, their sacrifice and unwavering love for each other transcend time, leaving an indelible mark on cinematic history.
Jamie and Claire (Outlander)
Shifting gears to the small screen, the love story between Jamie Fraser and Claire Randall in the television series Outlander captivates viewers with its time-traveling elements and passionate romance. Spanning centuries, this historical saga follows the journey of Claire, a World War II nurse who finds herself transported back in time to 18th-century Scotland.
There, she meets and falls in love with Jamie, a brave Scottish warrior. The enduring love and resilience they display amidst the challenges of time and the political unrest of their era resonate with fans worldwide, making Outlander a beloved series.
Real-Life Love Stories
Shah Jahan and Mumtaz Mahal
The real-life love story of the Mughal emperor Shah Jahan and his wife Mumtaz Mahal transcends fiction, a testament to everlasting love and magnificent architecture. Shah Jahan had the Taj Mahal constructed as a mausoleum for Mumtaz Mahal, and it stands as a masterpiece of architecture and a tribute to their unwavering love. The monument’s meticulous details and symmetrical beauty capture visitors’ imaginations, offering a glimpse into the depths of love and the lengths one will go to for a loved one.
Johnny Cash and June Carter
In the realm of music, the love story between Johnny Cash and June Carter continues to inspire and touch the hearts of millions. Their tumultuous journey, marked by personal struggles, addiction, and fame, ultimately led to a profound and enduring love. As musicians, their songs echoed their shared experiences and served as a testament to their commitment to each other.
Their legacy as music legends and their unwavering bond serve as an inspiration to many, reminding us that love has the power to heal, transform, and conquer even the darkest of times.
Contemporary Love Stories
LGBTQ+ Love Stories
In recent times, love stories featuring LGBTQ+ characters have gained recognition and representation. These narratives highlight the diverse experiences and challenges faced by the LGBTQ+ community in their pursuit of love and happiness. From overcoming societal prejudice to navigating personal identities, these love stories inspire empathy, acceptance, and understanding.
By sharing these stories, we create a more inclusive and compassionate world, where love knows no boundaries.
Internet Love Stories
With the rise of the digital age, love stories born from online connections have become increasingly prevalent. Online dating platforms and social media have given rise to countless stories of love flourishing across distances and borders.
The difficulties of long-distance relationships and the distinctive feelings that come with meeting and falling in love online have changed the face of contemporary love. These love stories blur the lines between the virtual and real worlds and provide a window into how relationships between people are always changing.
Unforgettable love stories that transcend time have the power to enthrall and inspire us. From ancient tragedies and tales of forbidden love to timeless classics about overcoming social boundaries and heart-rending modern romances, these stories serve as a constant reminder of the transformative power of love.
Through their themes of passion, sacrifice, and resilience, they impart valuable lessons and ignite our own desires for profound connections. As we immerse ourselves in these narratives, we embrace the beauty of love that transcends time, forever etching itself into the fabric of our collective human experience.
Is there evidence for the life of Jesus outside the Bible? Dr. Habermas provides examples supporting 12 key facts from the life of Christ from early historical sources that are accepted by all critical scholars as evidence for the Resurrection of Jesus. Some skeptics would probably concede 20 or more such details. But Dr. Habermas believes you only need four to six of these facts to establish a strong historical basis for saying Jesus lived, died on a cross, and rose again from the dead. Listen as he explains these facts in this program of the Historical Evidence for the Historical Jesus.
Alzheimer’s disease is a form of dementia which primarily affects the parts of the brain that control memory, resulting in progressive and permanent neurological damage. According to the CDC, the disease currently affects more than 5 million Americans today. While research continues to bring us closer to effective Alzheimer’s treatments, there are additional steps that those affected, their families and caregivers can take to help fight this condition now.
- Physical exercise: Engaging in a healthy amount of physical activity has significant health benefits for the brain, as well as the heart, vascular system and body’s physical strength. Studies have shown that exercise can stimulate the brain’s ability to maintain older neural networks and stimulate new connections. It’s recommended that people over 65 years of age do 40 minutes a day of aerobic or non-aerobic exercise to experience the full physical and mental benefits.
- Mental exercise: A healthy body is important, but so is an active mind. Just like a muscle, the brain needs to be regularly challenged in order to maintain a healthy level of cognitive function. Stimulation is also vital to maintaining cognitive pathways and building new connections. Some of the best forms of mental stimulation include reading, doing crossword puzzles, playing games, social interaction and social activities such as going to museums or community events.
- Diet: Research has shown that certain foods can help keep the brain healthy, while others can be harmful to cognitive health. A diet rich in lots of fruit, fish oil, legumes, vegetables (especially broccoli and other cruciferous vegetables) and whole grains is recommended. Foods such as saturated fats and refined carbohydrates (like white sugar) should be avoided, as studies indicate these foods may assist cognitive decline, especially in the areas of the brain focused on learning and memory.
- Early diagnosis: Knowing the signs of early-onset Alzheimer’s is key to maintaining mental and physical health, as is access to professional and medical assistance, which will help ensure your loved one is kept comfortable, healthy and independent for as long as possible. An early diagnosis will allow caregivers to start implementing the best measures available.
Memory care and support services at Lester Senior Living
Housed in our assisted living wing, the memory care suites at Lester are specially designed to support individuals with Alzheimer’s and other forms of dementia. By focusing on customized care plans and activities within our comfortable, apartment-style communities, we maximize your loved one’s dignity, safety and quality of life.
To find out more about our memory care services for Alzheimer’s residents in NJ, contact Lester Senior Living today or visit our website at: https://jchcorp.org/
Compiled by Bill Derby
Spelling tests today are different than when I was in school in the dark ages. Many students must add a definition to the spelling word which makes more sense. I always had trouble spelling ‘refrigerator.’ Many times I just wrote ‘icebox.’
Below are a number of flubs by 6th graders, most likely from the boys sitting in the back row.
1. Ancient Egypt was inhabited by mummies and they all wrote in hydraulics. They lived in the Sarah Dessert. The climate of the Sarah is such that all the inhabitants have to live elsewhere.
2. Moses led the Hebrew slaves to the Red Sea where they made unleavened bread, which is bread made without any ingredients. Moses went up on Mount Cyanide to get the ten commandments. He died before he ever reached Canada.
3. Solomon had three hundred wives and seven hundred porcupines.
4. The Greeks were a highly sculptured people, and without them we wouldn’t have history. The Greeks also had myths. A myth is a female moth.
5. Socrates was a famous Greek teacher who went around giving people advice. They killed him. Socrates died from an overdose of wedlock. After his death, his career suffered a dramatic decline.
6. Julius Caesar extinguished himself on the battlefields of Gaul. The Ides of March murdered him because they thought he was going to be made king. Dying, he gasped out: “Tee hee, Brutus.”
7. Joan of Arc was burnt to a steak and was canonized by Bernard Shaw.
8. Queen Elizabeth was the “Virgin Queen.” As a queen she was a success. When she exposed herself before her troops they all shouted “hurrah.”
9. It was an age of great inventions and discoveries. Gutenberg invented removable type and the Bible. Another important invention was the circulation of blood. Sir Walter Raleigh is a historical figure because he invented cigarettes and started smoking.
10. Sir Francis Drake cumpused the world with a 100-foot clipper.
11. The greatest writer of the Renaissance was William Shakespeare. He was born in the year 1564, supposedly on his birthday. He never made much money and is famous only because of his plays. He wrote tragedies, comedies, and hysterectomies, all in Islamic pentameter. Romeo and Juliet are an example of a heroic couple.
12. Writing at the same time as Shakespeare was Miguel Cervantes. He wrote Donkey Hote. The next great author was John Milton. Milton wrote Paradise Lost. Then his wife died and he wrote Paradise Regained.
13. Delegates from the original 13 states formed the Contented Congress. Thomas Jefferson, a Virgin, and Benjamin Franklin were two singers of the Declaration of Independence. Franklin discovered electricity by rubbing two cats backward and declared, “A horse divided against itself cannot stand.” Franklin died in 1790 and is still dead.
14. Abraham Lincoln became America’s greatest Precedent. Lincoln’s mother died in infancy, and he was born in a log cabin which he built with his own hands. Abraham Lincoln freed the slaves by signing the Emasculation Proclamation. On the night of April 14, 1865, Lincoln went to the theater and got shot in his seat by one of the actors in a moving picture show. They believe the assinator was John Wilkes Booth, a supposingly insane actor. This ruined Booth’s career.
15. Johann Bach wrote a great many musical compositions and had a large number of children. In between, he practiced on an old spinster which he kept up in his attic. Bach died from 1750 to the present. Bach was the most famous composer in the world and so was Handel. Handel was half German, half Italian, and half English. He was very large.
16. Beethoven wrote music even though he was deaf. He was so deaf he wrote loud music. He took long walks in the forest even when everyone was calling for him. Beethoven expired in 1827 and later died for this.
“Once there was the Stone Age, then the Bronze Age, and now we are in the middle of the Plastic Age,” said teenager Boyan Slat. “Ever year, we produce 300 million tons of plastic. Much of it reaches our oceans.”
At 16 years of age, Boyan Slat scuba dived off Greece in the Mediterranean Sea to see more debris floating on and under the surface.
He said, “At first, I thought I was swimming through strange jellyfish. Instead, I swam through more plastic bags than fish.”
Seeing all the ocean trash, he asked himself, “Why not clean it up?”
Slat quit his Aerospace Engineering studies to create www.TheOceanCleanup.com in order to fund his research on how to pick up all the plastic trash floating on the oceans of the world. Researchers discovered that 46,000 pieces of plastic float on every square mile of Earth’s oceans. That plastic debris stems from billions of humans around the planet tossing their plastic into rivers, streams and directly into the oceans. Thousands of ships, boats and luxury cruisers toss millions of pieces of plastic day in and day out across the globe. Plastic does not break down. It oxidizes slowly into smaller pieces, but it never degrades.
Today, we find plastics in the tissue of birds, fish, whales, turtles, dolphins and just about every creature that feeds in the world’s oceans.
Plastic debris constitutes a biological nightmare whose consequences reach decades into the future.
Additionally, with the five major gyres revolving in the oceans of the world, more than 100 million tons of plastic gather in giant ocean-going garbage patches. You may Google “The Great Pacific Garbage Patch,” a patch the size of Texas floating in the Pacific Ocean 1,000 miles off the coast of San Francisco. It grows from 60 to 90 feet deep in places. It kills millions of sea birds, turtles, sharks, dolphins and whales.
Slat said, “We stuff the oceans with plastic equal to the weight of 1,000 Eiffel Towers. It ranges from plastic nets to minuscule pieces. It’s doing tremendous damage to our marine life, reefs and all ocean creatures.”
Being brilliant as well as naive, teenager Boyan Slat decided to design contraptions that would scoop up millions of tons of plastic floating on our oceans. Because of his ambitious ideas, TED invited him to bring them to a wider audience in an 11-minute talk, which shows vividly and shockingly what is happening to our oceans.
Because of my worldwide scuba diving experiences, I saw the progression of plastics since 1965 when corporations first initiated plastics into the biosphere of this planet.
The plastic pollution problem:
- Millions of tons of plastic have entered the oceans.
- Plastic concentrates in five rotating currents, called gyres.
- In these gyres there is on average 6 times more plastic than zooplankton by dry weight.
- 1/3 of all oceanic plastic is within the Great Pacific Garbage Patch.
Slat’s brilliant strategy combines his love of diving with his love of the biology of the oceans. He created a solar-powered trawler in the shape of a manta ray that sweeps through the gyres 24/7 to pick up surface plastic, chew it up and store it in huge bins for collection. He also created floating booms that allow the oceans to sweep the plastics into their lairs for efficient pickup. When you see the designs, it will blow your mind.
“If we want to do something different to save our oceans,” he said. “We have to think differently. Ironically, those who throw their plastics face consequences. Ocean going ships spend $1 billion annually in repairs from plastic clogging their propellers and intakes.”
- At least one million seabirds and one hundred thousand marine mammals die each year due to plastic pollution. The true toll is probably much higher.
- Lantern fish in the North Pacific Gyre eat up to 24,000 tons of plastic per year.
- The survival of many species, including the Hawaiian Monk Seal and Loggerhead Turtle, could be jeopardized by plastic debris.
- Plastic pollution is a carrier of invasive species, threatening native ecosystems.
- Toxic chemicals (including PCBs and DDTs) are adsorbed by the plastic, increasing their concentration up to a million times.
- After entering the food chain, these persistent organic pollutants bio-accumulate in the food chain.
- Health effects linked to these chemicals are: cancer, malformation and impaired reproductive ability.
If ever humanity needed leaders to stand up and be counted, we need more Boyan Slats to lead us away from ransacking our planet home and toward a biologically healthy future. Slat, a teenager, stands at the head of his class in creating solutions for the folly of humanity. He needs your support. Join him.
Read more posts by Frosty Wooldridge here. Frosty is a blogger for JenningsWire.
The planning of the energy transition from fossil fuels to renewables requires estimates for how much electricity wind turbines can generate from the prevailing atmospheric conditions. Here, we estimate monthly ideal wind energy generation from datasets of wind speeds, air density and installed wind turbines in Germany and compare these to reported actual yields. Both yields were used in a statistical model to identify and quantify factors that reduced actual compared to ideal yields. The installed capacity within the region had no significant influence. Turbine age and park size resulted in significant yield reductions. Predicted yields increased from 9.1 TWh/a in 2000 to 58.9 TWh/a in 2014 resulting from an increase in installed capacity from 5.7 GW to 37.6 GW, which agrees very well with reported estimates for Germany. The age effect, which includes turbine aging and possibly other external effects, lowered yields from 3.6 to 6.7% from 2000 to 2014. The effect of park size decreased annual yields by 1.9% throughout this period. However, actual monthly yields represent on average only 73.7% of the ideal yields, with unknown causes. We conclude that the combination of ideal yields predicted from wind conditions with observed yields is suitable to derive realistic estimates of wind energy generation as well as realistic resource potentials.
Citation: Germer S, Kleidon A (2019) Have wind turbines in Germany generated electricity as would be expected from the prevailing wind conditions in 2000-2014? PLoS ONE 14(2): e0211028. https://doi.org/10.1371/journal.pone.0211028
Editor: Paul Leahy, University College Cork National University of Ireland, IRELAND
Received: August 21, 2018; Accepted: January 7, 2019; Published: February 6, 2019
Copyright: © 2019 Germer, Kleidon. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data underlying the results presented in the study were obtained by the authors from third parties. The German monthly energy yield data is available for researchers who meet the criteria for access to confidential data from the operator database (http://www.btrdb.de/). The reanalysis dataset COSMO-REA6 is provided by Germany's National Meteorological Service (DWD, Hans Ertel Centre for Weather Research, https://www.herz-tb4.uni-bonn.de).
Funding: The authors received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
With the Energiewende or energy transition from fossil fuels to renewables, wind energy became a mainstream energy source. It was the second largest renewable energy source after hydropower in 2015, with a total installed capacity of 433 GW globally. According to the EU Energy Roadmap 2050, apart from energy conservation the switch to renewable energy sources is the second major prerequisite for a more sustainable energy system. In 2016, renewable energy sources had a share of 32% of total electrical energy production in Germany, with wind energy contributing about a third to this share. The German legislation plans to further extend renewable energy and sets the target to generate 80% of electrical energy from renewables by 2050. This extension includes additional installation of 2.9 GW capacity per year for onshore wind energy and an increase of installed capacity in offshore areas to 15 GW by the year 2030.
Understanding whether the target of renewable energy generation can be met with such an increase in installed capacity requires estimates of the performance of wind turbines, knowledge about the factors influencing wind energy generation over time, and an understanding of the extent to which wind energy generation can be estimated from large-scale meteorological datasets of wind fields. Turbine performance depends on wind speed distribution and direction, which can vary strongly from day to day due to changes in synoptic activity in the atmosphere, and on air density, which has a comparably much weaker variation. In the long term, factors which potentially decrease wind speed can have a negative effect on wind energy production, such as climate change and changes in surface roughness due to land-use change or the atmospheric effects of large-scale wind energy use [8–11]. It is well known that a concentrated arrangement of wind turbines in wind parks leads to wake effects, reducing energy yield for those turbines standing in the wake of others [12–14]. An increase of average wind park size with time in a region can thus decrease the average performance of its wind turbines as well. In addition, the energy output of wind turbines can be affected by feed-in management that reduces or stops energy feed-in due to insufficient grid capacity, e.g. during periods of high wind speeds. Wind park performance can also decrease due to ageing effects of turbines and increased downtimes toward the end of their lifetime. Such impacts on wind energy yields have received little attention in the past. The age effect has been quantified for selected wind turbines in Sweden and for wind parks in the UK, but not for single wind turbines. Furthermore, there are no country-wide estimates of the size of wind energy losses due to ageing or due to changing wind park sizes with time. Usually, the capacity factor of wind turbines, i.e. the ratio of actual energy generation to the capacity of the turbine, is determined to obtain such estimates and to compare the performance of different wind turbines or track their performance over time. However, the capacity factor also depends on wind availability, which can change from year to year, as well as on the specific power of turbines, that is, the ratio of turbine capacity to rotor swept area. Hence, the capacity factor is only one aspect that characterizes country-wide performance of wind turbines, and only if the mean specific power of all wind turbines does not change over time.
Our aim in this paper is to evaluate the role of these mechanisms that lower turbine yields in observed wind energy generation in Germany for the years 2000 to 2014. To do so, we use a dataset of wind conditions in combination with turbine characteristics to estimate the yield that can be expected for the turbines in the ideal case of no such negative effects. We then use this estimate in combination with a dataset of reported yields of a subset of wind turbines in Germany to attribute deviations from this ideal case to the influence of turbine age, park size and regional installed capacity. We then apply a statistical model that includes these influences to predict turbine efficiencies and wind energy generation for all wind turbines in Germany. We discuss the outcome of this statistical analysis in terms of the different factors that reduce turbine yields. We close with a brief summary and conclusions.
Data sources and preparation
We used German monthly energy yield data on a turbine basis from the 'operator database' (http://www.btrdb.de/) and related them to energy yields predicted on the basis of wind speed and air density calculated from the reanalysis dataset COSMO-REA6 provided by Germany's National Meteorological Service (DWD, Hans Ertel Centre for Weather Research, https://www.herz-tb4.uni-bonn.de).
The operator database includes the location of German onshore wind turbines since 1988 and, for a subset of turbines, the monthly energy yields. This is the only publicly available database of wind energy yields per turbine in Germany. The database consists of a site and a yield table. The site table (here referred to as “BDBsites”) consists of information provided by manufacturers and operators. It includes the location of the turbines in terms of the postal code of the area as well as the manufacturer, capacity, hub height, rotor diameter, and the month of start and end of operation. The BDBsites database does not report exact positions of wind turbines, and therefore the relative position of wind turbines in wind parks to the predominant wind direction is unknown. At the end of December 2014, the total installed capacity of the 25296 turbines registered in the database was 37.6 GW, which is within the range of reported values of 36.6 GW to 40.5 GW.
A selected group of wind turbine operators voluntarily report monthly wind energy yields for about 25% of all wind turbines in Germany, which are continuously added to the yield table of the operator database. The yield dataset used in this study only included time series of monthly energy yields of at least five years' length. To identify the effect of wind farm size, we considered yields only for those months in which all turbines in the wind park were in operation. In addition, yields of months including shutdown periods of turbines due to maintenance or other reasons, as well as yields after wind park extensions, were excluded. This procedure excluded reported output after re-powering. The final yield dataset (here referred to as “BDByield”) included 5498 turbines with a total of 261012 monthly energy yield data entries reported from January 2000 to December 2014. While 531 turbines were single turbines, the rest were grouped in 921 wind parks.
To estimate monthly energy yields from wind and turbine characteristics for all turbines in Germany listed in the BDBsites dataset, we used the regional reanalysis COSMO-REA6 dataset provided by the DWD's Hans-Ertel-Centre for Weather Research. This reanalysis included the assimilation of observations from weather stations, including 10 m wind speeds, so that trends in wind speeds should be accounted for. The spatial resolution of COSMO-REA6 is about 6.8 km and hourly values of wind fields are available. The study period covered years with contrasting wind conditions. While the frequency of high wind speeds at 100 m height above the ground was comparatively high in the years 2007 and 2008, it was low in the years 2004 and 2014 (S1 Fig). On average over all of Germany, the wind speeds at 100 m height decreased by 0.017 m/s per year during our study period (S2 Fig), which is close to the mean trend of reported values reviewed by McVicar et al.
The climate data analysis was performed with the Climate Data Operators (CDO) software of the German Climate Computing Center (DKRZ, https://code.mpimet.mpg.de/projects/cdo/). The data was transformed to a regular grid in order to combine it with the turbine data, which are available on the basis of postal codes. The wind speed data was taken from the three lowest model layers and interpolated to the hub heights of the turbines; the mean wind speed v was then calculated for the postal code area in which each turbine is situated. Analogously, we calculated the air density ρ from temperature fields and surface pressure.
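The paper does not specify the vertical interpolation scheme between model layers. As a minimal, hedged sketch in R, assuming simple linear interpolation and using placeholder numbers rather than actual COSMO-REA6 layer heights or speeds:

```r
# Sketch: interpolate wind speed to hub height, assuming linear interpolation
# between model layers. All numbers are illustrative placeholders, not actual
# COSMO-REA6 model-layer heights or wind speeds.
layer_height <- c(36, 73, 122)    # m above ground (placeholders)
layer_speed  <- c(5.1, 5.9, 6.4)  # m/s for one hour and grid cell (made up)
hub_height   <- 100               # m, taken from the BDBsites turbine table

v_hub <- approx(layer_height, layer_speed, xout = hub_height)$y
```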
To calculate the expected yield from these meteorological conditions, we used an idealized power curve and combined it with the attributes of the wind turbines in terms of hub height, rotor diameter d, and turbine capacity Pmax. We assumed no generation for wind speeds less than v = 3.5 m/s. For greater wind speeds, we calculated the electricity generation rate Pe (in W) through the rotor-swept area A = π (d/2)² of the turbine by:

$$P_e = \eta \cdot \frac{1}{2}\,\rho\, A\, v^3 \quad (1)$$
We assumed a power coefficient of η = 44%, a typical value for a wide range of turbines (the collection of turbine data shown in Carrillo et al. gives a distribution of the power coefficient with an interquartile range of 43–46%; see S3 Fig). We used a generic value because for a number of turbines no specific information on the power coefficient could be obtained, either because the manufacturer does not provide this information, or because the manufacturer no longer exists or was bought up by another manufacturer. Variations in the power coefficient affect the estimate in an approximately proportional way, so that a power coefficient of 40% yields about 10% less generation (S4 Fig).
We further limited electricity generation to the capacity of the turbine at high wind speeds (i.e., Pe ≤ Pmax), and used a cut-out wind speed of v = 25 m/s, assuming that wind turbines are switched off at such wind speeds and above (note, however, that such wind speeds are practically absent in Germany, see S1 Fig).
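As an illustration, a minimal R sketch of this idealized power curve is given below. It implements Eq 1 together with the capacity limit and the cut-in/cut-out speeds described above; this is our sketch, not the authors' code, and the example turbine parameters in the trailing comments are arbitrary.

```r
# Idealized power curve (Eq 1): electricity generation rate in W from wind
# speed v (m/s) and air density rho (kg/m^3) for a turbine with rotor
# diameter d (m), capacity p_max (W) and power coefficient eta.
ideal_power <- function(v, rho, d, p_max, eta = 0.44,
                        v_cut_in = 3.5, v_cut_out = 25) {
  a <- pi * (d / 2)^2                    # rotor-swept area A (m^2)
  p <- eta * 0.5 * rho * a * v^3         # Eq 1
  p <- pmin(p, p_max)                    # limit to turbine capacity
  p[v < v_cut_in | v >= v_cut_out] <- 0  # below cut-in or at/above cut-out
  p
}

# Monthly ideal yield (Wh) and capacity factor from hourly series v_hr, rho_hr
# for an arbitrary example turbine (d = 82 m, p_max = 2 MW):
# yield_wh <- sum(ideal_power(v_hr, rho_hr, d = 82, p_max = 2e6))
# cf_ideal <- yield_wh / (2e6 * length(v_hr))
```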
The turbine characteristics of the BDBsites database enter this estimate through the hub height, which was used to interpolate wind speeds from the COSMO-REA6 dataset; the rotor diameter, which was used to determine the rotor-swept area A; and the turbine capacity, which was used to limit electricity generation at high wind speeds.
Electricity generation was calculated for each wind turbine from each hourly wind speed value, and these estimates were aggregated to the monthly time scale. We refer to this estimate as the “ideal turbine yield”, as it sets a reference without yield-decreasing effects.
As a measure of turbine performance, monthly capacity factors were calculated by dividing actual energy yield per month by maximum possible energy yield per month (the installed capacity times hours per month). Capacity factors were also calculated from the estimated ideal turbine yield derived from climate reanalysis data (CFideal) and from actual turbine yield in the BDByield table (CFactual). To assess the regional effect of installed capacity on energy yield per turbine, the total installed capacity per postal code area and month was calculated using the BDBsites table. In 2014, wind turbines operated in 2328 out of 8199 postal code areas in Germany.
The age was calculated for each month and turbine in BDBsites in decimal years. To estimate the wake effect in wind parks, we assigned a rank to each turbine in a park, where parks were identified through the ID fields in the dataset. For each turbine in the park, we calculated a ratio of capacity factors each month, CFactual divided by CFideal, and normalized them by their median per month and park. Then, for each park, turbines were ranked by their median normalized capacity factor ratio. An example for a single wind park is shown in Fig 1. The normalization was performed in order to eliminate seasonal variability. As a result, the interquartile ranges of monthly normalized capacity factor ratios per turbine are very narrow (see e.g. Fig 1), suggesting that there are significant differences in yield associated with the turbines in a park and that these differences did not change substantially over time.
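A hedged sketch of this normalization and ranking step, assuming a data frame `yields` with one row per turbine and month; the column names are our assumptions about the dataset layout, and assigning rank 1 to the turbine with the highest median normalized ratio is likewise an assumption:

```r
library(dplyr)

turbine_ranks <- yields %>%                       # assumed columns: park_id,
  mutate(ratio = cf_actual / cf_ideal) %>%        # turbine_id, month,
  group_by(park_id, month) %>%                    # cf_actual, cf_ideal
  mutate(ratio_norm = ratio / median(ratio)) %>%  # normalize per park & month
  group_by(park_id, turbine_id) %>%
  summarise(med_norm = median(ratio_norm), .groups = "drop_last") %>%
  mutate(rank = rank(-med_norm, ties.method = "first"))  # within each park
```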
Statistical analysis

The ultimate aim of the data analysis is to estimate energy yields. In order to avoid the influence of seasonality, we opted to first estimate the capacity factors and afterwards use them to calculate energy yields. A linear mixed-effects model was set up to assess the effects of different independent variables on capacity factors simultaneously. This approach is based on capacity factors calculated from actual turbine yields (CFactual) and from ideal turbine yields (CFideal). As yields were reported each month per turbine, their observations and residuals are not independent. Each turbine in the dataset had a unique identification number (ID). To control for non-independence of residuals, the ID of each turbine was treated as a random effect in the model. As fixed effects, turbine age (AGE) and turbine rank (RANK) were included. In addition, main postal code zones (PLZ, Fig 2) were included as fixed effects, as visual data analysis suggested the existence of regional differences. Data visualization also indicated that the age, rank and postal code zone effects depended on the average capacity factors of turbines. The general model formulation was:

$$CF_{actual,im} = \beta_0 + b_{0m} + \left(\beta_1 + \beta_2\,AGE_{im} + \beta_3\,RANK_{im} + \beta_4\,I[PLZ0]_{im} + \ldots + \beta_9\,I[PLZ5]_{im} + b_{1m}\right) CF_{ideal,im} + \varepsilon_{im} \quad (2)$$

where β0 is the common intercept, β1 is the slope of CFactual over CFideal, and β2 to β9 are changes of β1 induced by the single fixed effects. The index i denotes the ith observation and the index m the mth subject (i.e., turbine ID). The parameters b0 and b1 are the random intercept and slope, respectively, which vary with turbine ID, while ε is the error term. The variables AGE and RANK were centered around their means. The variable I[.] is a dummy variable representing the level of the factor postal code zones (PLZ). For instance, I[PLZ2] is a dummy for postal code zone 2. The model uses the mean of all postal code zones as a reference. As a slope is estimated for the reference model, dummy variables are needed only for 6 out of 7 postal code zones. The mixed-effects model was fitted using the “lmer” function from the “lme4” package with maximum likelihood parameter estimation (lme4 notation: lmer(CFactual ~ CFideal + CFideal:AGE + CFideal:RANK + CFideal:PLZ + (CFideal | ID), data = dataset_name, contrasts = list(PLZ = contr.sum), na.action = na.exclude, REML = F)). Normality and homogeneity of variance were tested by examining the normal q-q plots and the residuals versus fitted-values plots, respectively. Regional installed capacity was not included as a fixed effect in the model, as it proved to be a poor predictor when analysing data subsets with the fixed-effects model approach, and because of its high collinearity with the AGE predictor: an increase of installed capacity per postal code area goes along with an increase in the age of existing turbines.
Each main postal code zone is divided into up to 1000 postal code areas. Seven zones were included as a fixed effect in the model: PLZ0 to PLZ5 and PLZ6+. The latter includes PLZ6 to PLZ9, as in the south of Germany the installed capacity is low.
The prediction of capacity factors for all turbines not included in the BDByield dataset, and for months in which yield data are not available, was done using the parameters estimated with Eq 2 in the R function “predict.merMod”. This function uses the fitted mixed-effects model for the prediction of new values. The random effects per turbine could only be estimated for the turbines in the BDByield data, but not for those only present in the BDBsites data. Therefore, all predictions were performed without including the random effects. Annual yields were calculated from predicted capacity factors and compared to the reported electricity generation by wind energy in Germany.
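In lme4, predicting without random effects corresponds to `re.form = NA` in `predict.merMod`. A sketch following the model call quoted above (the data frame names are assumptions):

```r
library(lme4)

fit <- lmer(CFactual ~ CFideal + CFideal:AGE + CFideal:RANK + CFideal:PLZ +
              (CFideal | ID),
            data = bdb_yield, contrasts = list(PLZ = contr.sum),
            na.action = na.exclude, REML = FALSE)

# Capacity factors for all turbines and months in BDBsites; re.form = NA
# drops the random effects, which are undefined for turbines that never
# reported yields.
cf_pred <- predict(fit, newdata = bdb_sites_months, re.form = NA)

# Monthly yield (Wh) from a predicted capacity factor:
# yield_wh <- cf_pred * p_max_watts * hours_in_month
```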
Wind turbine characteristics in Germany from 2000 to 2014
The overall trends in wind turbine characteristics in Germany for the years 2000 to 2014 that are directly calculated from the BDBsites database are shown in Fig 3 (see S1 and S6 Tables for the percentile values shown in Fig 3). While some characteristics of wind turbines or parks changed, others remained rather constant over time. The mean turbine age in the year 2000 was only 3.8±2.7 (±SD) years and it increased to 10.8±5.8 years in 2014 (Fig 3A). While in the year 2000 the mean turbine capacity was 611±401 kW, it increased to 1453±808 kW in the year 2014 (Fig 3B). The mean rotor swept area also increased, from 1513±899 m² to 3742±2237 m², within this time period (Fig 3C). Mean turbine capacity and mean rotor swept area increased by almost the same factors of 2.4 and 2.5 from 2000 to 2014. As a result, the mean specific power, which is the ratio of turbine capacity to rotor swept area, increased only from 0.39±0.05 kW m⁻² in 2000 to 0.40±0.06 kW m⁻² in 2014 (Fig 3D). During the same time period the mean size of individual wind parks increased only slightly, from 2.4±6.2 to 3.1±5.4 turbines per park (this includes single turbines with a park size of 1 turbine; Fig 3E). Mean installed capacity per postal code area, however, increased by a factor of 2.6, from 216±342 to 564±656 kW/km² (Fig 3F).
The panels show (a) the age of wind turbines, (b) turbine capacity, (c) rotor swept area, (d) specific power and (e) park size (in terms of N, the number of turbines per park) and (f) the density of installed capacity per postal code area. The black lines refer to the mean of the distribution of values, while the shaded areas indicate the range of values in terms of the 25%-75% percentile (dark blue) and the 5%-95% percentile (light blue).
The mean monthly capacity factor calculated from the BDByield database fluctuates seasonally between 10 and 30%, reaching values above 40% only a few times in winter (Fig 4). The long-term mean capacity factor is 18.3±7.5%.
Turbine performance by climate-driven estimates
The mixed-effects model approach assessed the effects of age and rank in wind parks simultaneously. The model regresses capacity factors derived from reported monthly yields (CFactual) on those calculated from climate-based estimates (CFideal). The estimate for β1, the coefficient of CFideal, of 0.7372±0.0020 (Table 1) represents the slope of CFactual over CFideal at age zero and rank one, and it is the mean over all postal code zones. Thus, all turbines with reported actual monthly yields on average generate only 73.7±0.2% of what was estimated in the ideal case from wind conditions and turbine characteristics. Note that the given uncertainty only includes the uncertainty of the slope estimation by the statistical model and does not include possible uncertainties of the wind fields, the reported yields and the idealized power curve. Turbine age and rank in the wind parks as well as the postal code zones had a significant influence on the monthly capacity factors per turbine (Table 1). The slope of CFactual over CFideal, and hence turbine performance relative to the ideal case, decreases by 0.63±0.01% per year of turbine age and by 0.49±0.02% per turbine rank.
The estimates of the postal code zones represent deviations from the mean slope in the respective zone (Table 1). The deviation of the slope in region PLZ6+ is 0.0913, as the sum of all deviations from the mean slope needs to equal zero (Table 1). Hence, the difference between actual and ideal turbine yield is greater in Northern Germany (postal code zones 1 to 4), where wind speed and the regional installed capacity are higher, than in Southern Germany (zones 0, 5 and 6+; Fig 2, Table 1).
Energy yield and absolute losses
We next used the estimated parameters of the mixed-effects model and applied them to estimate yields of all wind turbines in Germany to predict the countrywide generation of wind energy. Estimated annual yields increased from 9.1 TWh in the year 2000 to 55.9 TWh in the year 2014 (Table 2, Fig 5). These estimates are very close to the values reported by the German Ministry of Economy and Energy. Using our mixed-effects model, we can quantify two types of losses from ideal to estimated yields. The first type of loss (“other losses” in Fig 5) is related to the slope of CFideal to CFactual. It should be noted that the slope is a result of the statistical model, but we still call it “other loss due to unidentified effects” because the slope itself does not explain the loss, and it could also reflect biases in the wind fields dataset (see discussion below). The second type of loss is related to the age and park effects, which influence the magnitude of the slope. Therefore, “other losses” is the difference between the ideal yield and the estimated yields for the case that all turbines were new and no wake effects between turbines would occur.
The sum of estimated yields and losses due to the age effect, park effect and other effects equals the annual ideal wind energy yield as estimated from wind speed, air density, and turbine characteristics (source of reported yields: German Ministry of Economy and Energy).
The estimated annual energy yield corresponds to the sum of monthly yields of all turbines in BDBsites. Monthly yields of the turbines were derived from capacity factor predictions of the mixed-effects model.
While the loss by the park effect stayed rather constant at 1.9% of total annual energy yield in Germany, the loss by the age effect increased from 3.6% to 6.7% (Table 2, Fig 5). For 2014, the absolute losses from the age and park effects reached 5.6 and 1.6 TWh, respectively. “Other losses” due to unidentified effects accounted for 71% and 79% of the total loss in 2000 and 2014, respectively. Relative to the ideal energy yield, generation losses of about 20% were due to unidentified effects in all years studied. During the study period the installed capacity in Germany increased by a factor of 6.6, from 5.7 GW in the year 2000 to 37.6 GW in 2014. The annual average turbine performance, however, did not change, as indicated by rather constant mean annual capacity factors from 2000 to 2014 (Table 2).
Influence of turbine age, park size and regional installed capacity on turbine performance
We found that turbine age significantly decreased turbine performance, by 0.63±0.01% per year. This effect could, in principle, include other external effects as well, such as increased feed-in management over time or wake effects of newly constructed wind parks upwind of the turbines we considered. As the exact positions of wind turbines were not included in the database, but installed capacity increased continuously from 2000 to 2014 (Fig 3F), yields might have decreased due to the wake effects of newly constructed wind parks. These wake effects would then have been included in the overall ageing effect. An effect of the extension of existing wind parks on turbine performance over time was, however, avoided by excluding yield data recorded after an extension took place.
Feed-in management decreased total wind energy yields in Germany by only 2.1% in the year 2014 and below 1% before 2014 (EEG in Zahlen 2015, www.bundesnetzagentur.de). Excluding the year 2014 before fitting the mixed-effects model did not change the significance of the age effect. We therefore assume that feed-in management can be neglected as a relevant factor in our analysis.
The decline of wind turbine performance with age estimated by our mixed-effects model is lower than the 1.6±0.2% per year reported by Staffell and Green [15] for UK wind parks. Staffell and Green estimated the ageing effect relative to average capacity factors per wind park, which might have led to an overestimation of the ageing effect due to unknown early turbine deaths in wind parks. Olauson et al. [16] estimated a performance decline of 0.15 percentage points per year for a dataset of energy yields per turbine from Sweden. For new turbines with a capacity factor of 0.25 this corresponds to a performance decrease of 0.6% per year, which is consistent with our result. Assuming that the mixed-effects model approach represents the average age effect for wind turbines in Germany, and a turbine design lifetime of 20 years, the performance of wind turbines averaged over their lifetime would be lower by 20/2 x 0.63% ≈ 6.3% compared to a new turbine due to the age effect alone. This age effect can be a crucial factor for wind project planning, as the cost of wind energy is inversely proportional to the capacity factor [27].
The rank assigned to each turbine to represent the wake effect in wind parks led to a significant decrease in turbine performance of 0.49±0.02% per turbine rank. A wind park with 6 turbines would have an average yield loss due to the wake effect of 6/2 x 0.49% ≈ 1.5%. This estimate agrees well with published ones. For instance, Kusiak and Song (Table 4 in [28]) estimated onshore wind energy production with a park-layout optimization model. For a wind park of 6 turbines they found a similar wake loss of 1.6%. For large wind parks, however, wake losses can be much greater. In 2014, 1% of all wind parks in Germany had 22 turbines or more. These 46 wind parks should have had an average yield loss of at least 22/2 x 0.49% ≈ 5.4%, which corresponds to the lower end of the 5–20% estimated for large wind parks with different turbine spacings and site climatologies [29,30].
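Both rules of thumb above are averages over a linear decline and are easy to reproduce. A small sketch following the paper's own approximation (mean turbine rank ≈ n/2):

```python
def lifetime_age_loss_pct(decline_pct_per_year=0.63, lifetime_years=20):
    # Average of a linear decline over the lifetime: T/2 x annual rate
    return lifetime_years / 2 * decline_pct_per_year    # 20/2 * 0.63 ≈ 6.3

def average_wake_loss_pct(park_size, loss_pct_per_rank=0.49):
    # Paper's approximation: mean turbine rank ≈ n/2
    return park_size / 2 * loss_pct_per_rank            # 6 -> ≈1.5, 22 -> ≈5.4

print(lifetime_age_loss_pct())       # ≈ 6.3 %
print(average_wake_loss_pct(6))      # ≈ 1.5 %
print(average_wake_loss_pct(22))     # ≈ 5.4 %
```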
Installed capacity per postal code area had no significant effect on turbine performance, suggesting that wind turbines in Germany do not, in general, strongly affect regional wind speeds. The installed capacity per postal code area might, however, not be the best proxy for regional installed capacity, as the postal code areas differ considerably in size. In addition, areas with high installed capacity next to the coast are expected to be less affected, as turbines there are aligned in rows along the coast. Whether installed capacity is already high enough in some parts of Germany to produce detectable regional effects on turbine performance might be studied in future work, but this would likely require more precise information on turbine positions.
Ideal yield compared to actual yield
In this study, on average 73.7% of the ideal wind energy yield was converted to electrical energy (actual yield), corresponding to average overall losses of 26.3%. This considerable reduction may result from several factors. For instance, it may be due to the generic power coefficient of 44% used here rather than realistic power curves for the turbines, or it may result from biases in the wind fields, which represent a reanalysis dataset rather than observations. Using a power coefficient of, e.g., 40% would reduce the ideal yield by a factor of about 0.91 (40/44), reducing overall losses to roughly 17.3%. However, the reduction of 26.3% is in line with other studies. For instance, Pieralli et al. [31] found that electrical losses amounted to 27% of ideal yield for 19 wind turbines installed in 4 wind parks in Germany. According to their study, most of the losses were attributed to variability in wind direction, while 6% of losses were attributed to turbine errors. For the UK, Staffell and Green [15] found an average difference between ideal and actual yield of 24.5%. Furthermore, an independent comparison of the wind fields of the COSMO-REA6 dataset to wind mast measurements in the range of 10 to 116 m showed that the wind fields in the dataset were realistic [32]. Hence, the reduction of actual yields by 26.3% compared to the ideal yields is realistic and consistent with previous studies.
The combination of ideal wind energy yields with the mixed-effects model to estimate actual yields of all turbines in Germany resulted in estimates of total annual wind energy generation that are very close to reported ones (Fig 4, Table 2). An increase of installed capacity in Germany from 5.7 GW in the year 2000 to 37.6 GW in 2014 led to strong yield increases from 9.1 TWh to 55.9 TWh, but the average performance of wind turbines in Germany did not increase as one might expect from technology improvements. This can be explained by the following considerations. If the turbine capacity is high relative to the rotor swept area, the turbine's specific power is high, but the capacity factor is low in low-wind regions. Decreasing the turbine's capacity would decrease the specific power and increase the capacity factor without generating more energy. Therefore, an increase in a turbine's specific power can lead to decreases in capacity factors that would appear as a performance decrease. However, the average specific power did not change from 2000 to 2014 (Fig 3D); hence, the lack of the expected performance increase must have other reasons. From the year 2000 to 2014, total production losses increased due to an increase in average turbine age over time (Fig 3A). Therefore, expected performance increases due to technology improvements have likely been at least partly offset by performance decreases due to the ageing of wind turbines in Germany.
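The specific-power argument is easiest to see numerically: for a fixed site and rotor (i.e., fixed annual energy), derating the generator raises the capacity factor without producing a single additional kilowatt-hour. A toy example with made-up numbers:

```python
def capacity_factor(annual_energy_mwh, rated_mw, hours_per_year=8760):
    return annual_energy_mwh / (rated_mw * hours_per_year)

ANNUAL_ENERGY_MWH = 6_000  # fixed by the wind resource and rotor in this example
for rated_mw in (3.0, 2.0):  # smaller generator -> lower specific power
    print(f"{rated_mw} MW: CF = {capacity_factor(ANNUAL_ENERGY_MWH, rated_mw):.2f}")
# 3.0 MW: CF = 0.23   2.0 MW: CF = 0.34 -- same energy, higher capacity factor
```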
We have estimated the ideal wind energy yield for Germany for the years 2000 to 2014 using datasets of wind speed and air density as well as turbine characteristics. We then used the ideal wind energy yield and the actual monthly energy yields of a subset of wind turbines to set up a mixed-effects model, which we applied to all wind turbines in Germany to estimate actual monthly energy yields. Annual sums of actual plus estimated yields of all turbines in Germany were very close to reported ones. On average, however, only 73.7±0.2% of the ideal wind energy yield was converted to electrical energy (actual yield).
For the years 2000 to 2014 the average specific power of all turbines in Germany did not change, so the capacity factor can be used as a measure of turbine performance. The mixed-effects model indicated that turbine performance was significantly influenced by turbine age and park size. On average, wind parks in Germany lose 6.3% of total yield over an assumed average turbine lifetime of 20 years. The ageing effect, as defined here, might, however, include other effects such as changes in maintenance quality, wake effects of newer wind parks, and decreases due to feed-in management or other external factors that reduce wind speed. The park effect decreased total annual onshore wind energy yield in Germany by 1.9%. This share stayed constant over time, as did the average park size. Even though the age and park effects led to considerable energy generation losses, losses due to other, unidentified effects made up over 70% of total losses.
The knowledge gained about the ratio of actual to ideal energy yield, as well as the effects of turbine age and park size, should be valuable for the prediction of future energy yields. Assuming that a change in the specific power of turbines does not alter the magnitude of the ageing and wake effects, the mixed-effects model might be used together with ideal energy yield estimations to estimate future wind energy yields for different renewable energy scenarios. Specifically, the model would allow such scenarios to be evaluated with respect to the increase of installed capacity, the effects of turbine ageing, and the effects of park sizes. Such an application of the mixed-effects model to future scenarios would, however, benefit from first assessing its capacity to predict the yield of a set of recently installed, modern turbines with known yields.
S1 Fig. Frequency histograms of hourly wind speeds in Germany for the years 2000 to 2014 (“All”) and for single years within this period, extracted from the COSMO-REA6 dataset at 100 m height.
The median (solid line), mean (dotted line) and the interquartile range (blue area) for the histogram of the entire period are also shown.
S2 Fig. Histogram of the trend in mean wind speeds using hourly wind speeds in Germany for the years 2000 to 2014 extracted from the COSMO-REA6 dataset at 100 m height.
The frequency refers to the number of grid cells of the data set showing the trend of a given magnitude. The median (solid line), mean (dotted line) and the interquartile range (blue area) for the histogram of the entire period are also shown. At top, the cumulative distribution function is shown.
S3 Fig. Distribution of power coefficients taken from the turbine data provided in the review of Carrillo et al. (2013) [22].
The median (solid line), mean (dotted line) and the interquartile range (blue area) are also shown.
S4 Fig. Sensitivity of estimated monthly yield to the power coefficient for the year 2010.
We thank J. Keiler for his support with details regarding the data source “Betreiber-Datenbasis (BtrDB)” and the German Weather Service (DWD) for making the COSMO-REA6 reanalysis data publicly available. We thank two anonymous reviewers for their constructive comments, which helped to improve the manuscript.
- 1. GWEC. Global wind 2015 report—Annual market update. Global Wind Energy Council; 2015. p. 73.
- 2. REN21. Renewables 2016 global status report. Renewable Energy Policy Network for the 21st Century. Paris; 2016.
- 3. European Commission, editor. Energy roadmap 2050. Luxembourg: Publications Office of the European Union; 2012.
- 4. UBA B. Erneuerbare Energien in Deutschland—Daten zur Entwicklung im Jahr 2016. 2017.
- 5. EEG. Erneuerbare-Energien-Gesetz vom 21. Juli 2014 (BGBl. I S. 1066), das zuletzt durch Artikel 1 des Gesetzes vom 17. Juli 2017 (BGBl. I S. 2532) geändert worden ist [Internet]. 2017. Available: https://www.gesetze-im-internet.de/eeg_2014/BJNR106610014.html
- 6. Tobin I, Vautard R, Balog I, Bréon F-M, Jerez S, Ruti PM, et al. Assessing climate change impacts on European wind energy from ENSEMBLES high-resolution climate projections. Clim Change. 2015;128: 99–112.
- 7. Vautard R, Cattiaux J, Yiou P, Thépaut J-N, Ciais P. Northern Hemisphere atmospheric stilling partly attributed to an increase in surface roughness. Nat Geosci. 2010;3: 756–761.
- 8. Miller LM, Brunsell NA, Mechem DB, Gans F, Monaghan AJ, Vautard R, et al. Two methods for estimating limits to large-scale wind power generation. Proc Natl Acad Sci U S A. 2015;112: 11169–11174. pmid:26305925
- 9. Miller LM, Kleidon A. Wind speed reductions by large-scale wind turbine deployments lower turbine efficiencies and set low generation limits. Proc Natl Acad Sci. 2016;113: 13570–13575. pmid:27849587
- 10. Jacobson MZ, Archer CL. Saturation wind power potential and its implications for wind energy. Proc Natl Acad Sci. 2012;109: 15679–15684. pmid:23019353
- 11. Miller LM, Gans F, Kleidon A. Estimating maximum global land surface wind power extractability and associated climatic consequences. Earth Syst Dynam. 2011;2: 1–12.
- 12. Beyer HG, Pahlke T, Schmidt W, Waldl H-P, de Witt U. Wake effects in a linear wind farm. J Wind Eng Ind Aerodyn. 1994;51: 303–318.
- 13. Barthelmie RJ, Jensen LE. Evaluation of wind farm efficiency and wind turbine wakes at the Nysted offshore wind farm. Wind Energy. 2010;13: 573–586.
- 14. Emeis S, Siedersleben S, Lampert A, Platis A, Bange J, Djath B, et al. Exploring the wakes of large offshore wind farms. J Phys Conf Ser. 2016;753: 092014.
- 15. Staffell I, Green R. How does wind farm performance decline with age? Renew Energy. 2014;66: 775–786.
- 16. Olauson J, Edström P, Rydén J. Wind turbine performance decline in Sweden. Wind Energy. 2017;20: 2049–2053.
- 17. BDB. Betreiber-Datenbasis [Internet]. 2016 [cited 26 Nov 2017]. Available: http://www.btrdb.de/
- 18. Bollmeyer C, Keller JD, Ohlwein C, Wahl S, Crewell S, Friederichs P, et al. Towards a high-resolution regional reanalysis for the European CORDEX domain: High-Resolution Regional Reanalysis for the European CORDEX Domain. Q J R Meteorol Soc. 2015;141: 1–15.
- 19. ENTSOE. Yearly statistics & adequacy retrospect 2014 [Internet]. 2015 p. 66. Available: https://www.entsoe.eu/Documents/Publications/Statistics/YSAR/entsoe_ys_ar_2014_web.pdf
- 20. TheWindPowerNet. Germany—Countries—Online access—The Wind Power—Wind energy Market Intelligence [Internet]. 2017 [cited 27 Nov 2017]. Available: https://www.thewindpower.net/country_en_2_germany.php
- 21. McVicar TR, Roderick ML, Donohue RJ, Li LT, Van Niel TG, Thomas A, et al. Global review and synthesis of trends in observed terrestrial near-surface wind speeds: Implications for evaporation. J Hydrol. 2012;416–417: 182–205.
- 22. Carrillo C, Obando Montaño AF, Cidrás J, Díaz-Dorado E. Review of power curve modelling for wind turbines. Renew Sustain Energy Rev. 2013;21: 572–581.
- 23. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing [Internet]. Vienna, Austria; 2015. Available: https://www.R-project.org/
- 24. RStudio Team. RStudio: Integrated Development for R. [Internet]. Boston, MA: RStudio, Inc.; 2017. Available: http://www.rstudio.com/
- 25. Bates D, Mächler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. J Stat Softw. 2015;67.
- 26. AGEE-Stat. Zeitreihen zur Entwicklung der erneuerbaren Energien in Deutschland [Internet]. Bundesministerium für Wirtschaft und Energie; 2017. Available: http://www.erneuerbare-energien.de/EE/Redaktion/DE/Downloads/zeitreihen-zur-entwicklung-der-erneuerbaren-energien-in-deutschland-1990-2016.pdf;jsessionid=B9B44855CCB7A70FC85CE21273609437?__blob=publicationFile&v=13
- 27. Boccard N. Capacity factor of wind power realized values vs. estimates. Energy Policy. 2009;37: 2679–2688.
- 28. Kusiak A, Song Z. Design of wind farm layout for maximum wind energy capture. Renew Energy. 2010;35: 685–694.
- 29. Barthelmie RJ, Pryor SC. An overview of data for wake model evaluation in the Virtual Wakes Laboratory. Appl Energy. 2013;104: 834–844.
- 30. Barthelmie RJ, Hansen KS, Pryor SC. Meteorological Controls on Wind Turbine Wakes. Proc IEEE. 2013;101: 1010–1019.
- 31. Pieralli S, Ritter M, Odening M. Efficiency of wind power production and its determinants. Energy. 2015;90: 429–438.
- 32. Borsche M, Kaiser-Weiss AK, Kaspar F. Wind speed variability between 10 and 116 m height from the regional reanalysis COSMO-REA6 compared to wind mast measurements over Northern Germany and the Netherlands. Adv Sci Res. 2016;13: 151–161. | <urn:uuid:8a830b1c-4501-4457-b983-63c75007839e> | CC-MAIN-2023-50 | https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0211028 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.912986 | 9,110 | 3.40625 | 3 |
Do probiotics for dogs really work? While the veterinary research regarding their effectiveness is still limited and sometimes conflicting, there are indications that probiotics can provide various benefits for our furry friends. From aiding digestion and modulating the immune system to preventing urinary tract infections and reducing allergic reactions, probiotics may have a positive impact on dogs’ overall health.
Understanding Probiotics for Dogs
Probiotics are beneficial microorganisms that reside in the digestive tract, including the gastrointestinal system of dogs. These tiny bacteria and yeast play a crucial role in balancing the internal environment, promoting health, and preventing diseases. They assist in breaking down food, producing nutrients and vitamins, fighting off potential pathogens, strengthening immunity, and interacting with the “gut-brain axis” that influences mood.
The Difference Between Probiotics and Prebiotics
You might have also come across the term “prebiotics.” Prebiotics are types of fiber that nourish and support the growth of the good bacteria already present in the colon. In simpler terms, prebiotics feed probiotics. High-fiber foods are generally rich in prebiotics.
When Do Dogs Need Probiotics?
Ideally, a healthy dog should be able to maintain the balance of digestive microbes naturally. However, during periods of stress, illness, or malnutrition, imbalances can occur. Probiotics are often prescribed to ensure a desirable intestinal microbial balance and to keep the gut health of dogs in check. Many dogs seem to respond well to probiotic supplements when their gut microbes are out of whack.
Types of Probiotics for Dogs
Probiotics for dogs come in different forms. Some dog foods even contain probiotics as part of their ingredients. However, it’s generally recommended to use probiotic supplements specifically formulated for dogs. These supplements are available in powder, capsule, or chew form, allowing for higher numbers of beneficial live microorganisms to be delivered to your dog. Look for species-specific strains such as Enterococcus faecium, Bacillus coagulans, Bifidobacterium animalis, Bifidobacterium longum, Lactobacillus acidophilus, and Lactobacillus rhamnosus.
The Benefits of Probiotics for Dogs
Research has shown that certain species of probiotics can provide specific benefits for dogs. For example, certain strains of Lactobacillus and Bifidobacterium can help manage yeast, support the immune system, prevent anxiety, reduce stress, and provide relief from diarrhea and food allergies. Additionally, some Bacillus species support the immune response, while Enterococcus faecium has been found to shorten the course of diarrhea in dogs.
Using Probiotics for Dog Diarrhea
Probiotics can be used to improve dog diarrhea caused by stress, sudden diet changes, or bacterial imbalances resulting from long-term antibiotic use. They may also be effective for diarrhea caused by infections that lead to bacterial overgrowth in the gut.
Probiotics for Puppies, and Can Dogs Take Human Probiotics?
Puppies can safely take dog-specific probiotics, which can help establish a healthy balance of intestinal bacteria, support their immune system, and reduce digestive tract issues. While dogs can also consume human probiotics, it’s important to note that they may not provide the same benefits as species-specific supplements. Probiotics designed specifically for dogs take their unique gut microbiome into account and provide appropriate dosing instructions on the labels.
Probiotic Foods and Side Effects
While some human foods like plain yogurt, kefir, and fermented vegetables can benefit certain dogs due to their live cultures, it’s generally safer to use a probiotic supplement to avoid potential health problems associated with introducing new foods into a dog’s diet. When starting probiotics, some dogs may experience side effects like digestive discomfort, diarrhea, bloating, gas, or constipation. These symptoms may temporarily worsen before improving. If you have concerns about your dog’s digestive health or their response to probiotics, it’s best to consult with your veterinarian.
Remember, maintaining your dog’s gut health is essential for their overall well-being. Probiotics can play a role in promoting a healthy digestive system, strengthening immunity, and supporting various health benefits. If you’re considering adding probiotics to your dog’s routine, consult with your veterinarian to determine the best approach for your furry friend. | <urn:uuid:1b0391cd-3ba7-47bd-bce0-27f30f00021a> | CC-MAIN-2023-50 | https://kattentrimsalon.com/can-i-give-my-dog-human-probiotics/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.916889 | 921 | 2.59375 | 3 |
If we can instill good judgment in children, they will be more likely to make healthy choices when we are not around. Simply getting them to be obedient lasts only as long as we are in the room because it doesn’t help them understand what to do in new, novel situations. We can’t realistically expect children to know what to do in a new situation if we have taught them only to obey through our enforcing of rules. Our world is complex, so children need to learn to rely on good judgment rather than rules.
Obedience is the goal of punishment. Good judgment is the result of talking with children, helping them figure out how they might have handled a situation differently, and discussing moral dilemmas. Connecting with children after they have made a misstep is the key to these conversations. Listening to how they feel after they have made a mistake and telling them calmly about your feelings and perspective will do more to promote good judgment than simply enforcing a punishment.
A close relationship with someone who offers love and affection while modeling moral values goes a long way toward teaching a child how to develop into a thoughtful, considerate, trustworthy, kind adult with high moral standards. Since close connections are so important to humans, and humans can think and reason, it makes a lot more sense to use this love connection and talking as the foundation of discipline. Discipline is, after all, a form of teaching.
It is notable that recent brain research suggests punishment has the least impact on the children most likely to be punished. Certain children are more impulsive, have a tougher time developing a sense of moral goodness, and experience trouble connecting with people or feeling part of a group. These very traits make them more likely to get into trouble. The children who are more inclined to misbehave need help organizing themselves rather than punishment because punishment further disorganizes these children. To help them organize themselves, young children need the help of another’s calm, physical body—experiencing such contact as a hug or sitting in a lap—along with quiet space and comforting objects like a blanket, pillow, or fluffy toy. For older children, cognitive organization can be encouraged by art projects, safe roughhousing, and a somewhat structured schedule.
Rethink . . . Instill good judgment through connection and teaching rather than punishment. | <urn:uuid:c2461b47-b165-4d37-877b-f74a223957a2> | CC-MAIN-2023-50 | https://kidsareforkeeps.com/impart-good-judgment-2/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.962063 | 475 | 3.734375 | 4 |
August 4, 2011, was a day that will be hard for Latin America to forget. On that Thursday, the principal stock markets in the region and the rest of the world succumbed to fears of a possible new U.S. economic crisis similar to the one that broke out in 2008.
Amid the desperate efforts of market operators to calm investors, the Brazilian stock market index Bovespa in São Paulo fell by 5.72%, the IPSA (Selective Index of Share Prices) in Chile posted a decline of 3.94%, and Mexico's index (the Bolsa Mexicana de Valores) experienced a decline of 3.37%.
The discouraging episode began 24 hours earlier, when the U.S. Congress was debating against the clock over Republican and Democratic plans, each of which would raise the debt ceiling and simultaneously reduce the country's large deficit. Last May, when the U.S. reached its debt ceiling of $14.29 trillion, the White House made it clear that Congress needed to raise the ceiling to reduce its liabilities before August 2, or the government would run out of funds to pay its bills.
After intense debates that plunged economies around the world into deadly suspense, Democrats and Republicans finally reached an agreement to approve a new ceiling for the debt. Nevertheless, the failure of Congress to reach a consensus without delay had created uncertainty that was felt resoundingly in the markets on August 4, a day now known as “Black Thursday.”
The climate of distrust worsened further in the days that followed, after Standard & Poor's decided to lower its rating of the creditworthiness of the U.S. from “AAA” to “AA+.” Analysts at the firm justified the downgrade as necessary because, in their view, the plan that Congress had agreed on was not strong enough to stabilize the condition of the national debt over the long term. Analysts at Standard & Poor's also warned that political and economic tension in the United States could worsen the already difficult situation facing Europe, considering the significant budget imbalances in countries such as Greece, Portugal and Ireland – and, to a lesser extent, Spain and Italy.
Without doubt, the American “debt crisis” led to two important phenomena, says Joseph Ramos, professor of macroeconomics at the University of Chile. “The first is that the market became aware that Republican Congressmen can make it harder to take the steps needed to address the economic recovery, making this process move more slowly.” The second result, he notes, is that expectations that the U.S. economy will rebound and grow by 3% “have been absolutely thrown out, as analysts forecast an expansion of just 1.5% or even less.”
According to Ricardo Patiño, the foreign minister of Ecuador, this slow growth rate is a sign that the United States is not moving in the direction that many had hoped for, and the U.S. runs the risk of having its economy slow down suddenly with no point of return. In an interview with a radio station in his country, he said that global GDP is now close to US$60 trillion, and the foreign debt of the United States exceeds US$14 trillion. “There is a possibility that it will increase to a point where the [U.S.] government cannot cover that debt, and the international upheaval that this would produce would be phenomenal.” He emphasized that the situation is worrisome, and “Latin America must take measures.”
A ‘Poorly Healed Wound’
Some analysts do not share that alarmist view. José Oscátegui, professor of economics at the Pontifical Catholic University of Peru, argues that at the moment, “we are only living on the brink of a slowdown in global economic activity, which is nothing more than a poorly healed wound from the debacle that erupted in 2008.”
In an effort to counteract the unfavorable situation created by the country’s real estate bubble, the U.S. government increased fiscal spending and, with that move, its debt, notes Roberto Durán, professor of international relations at the Pontifical Catholic University of Chile. “This was a reactive measure that partially alleviated national accounts.” The problem, he says, is that the United States has accumulated a deficit “of an extraordinary volume, which has begun to jeopardize the competitive advantages of its economy and [the economies] of the rest of the world.”
In addition, after the global financial crisis broke out, the countries of the European Union (EU) intensified their fiscal stimulus programs, increasing their spending to compensate for the strict monetary policy that was applied by the European Central Bank (ECB). In this way, they were able to mitigate the effects of the crisis, notes Victor Valenzuela, professor of economics and finance at Andres Bello University in Chile. “The result is that the fiscal deficit of Greece, Portugal and Ireland, which were already heavily indebted and had competitiveness problems, has shot up.”
Although the EU and the ECB have been working intensively on a package of measures to avoid the contagion of a massive default in Europe — thus helping to calm financial tensions — such initiatives have not been enough to regain the confidence of investors.
In other words, the international situation provides little encouragement for Latin America, notes Oscátegui. However, he makes it clear that in no way could this reach the devastating levels that characterized the crisis of 2008. “Nor could it wind up dragging down Latin American markets in the way that it did at that time.” The reason for that, he says, is that authorities in both the United States and the EU know what they have to do in order to counteract the impact of the slowdown “and to keep it from becoming a major crisis.” While there are political obstacles, he adds – referring to the Republican opposition in the U.S. Congress — “the authorities are aware of the high costs of not taking measures in this case.”
One of the best examples of this is the joint action taken in mid-September by the European Central Bank, the U.S. Federal Reserve, the Bank of England, the Bank of Japan and the Swiss National Bank, which agreed to loan European commercial banks the amount of dollars that they need in exchange for certain guarantees. Although some analysts said this measure was too late, it was directed precisely at guaranteeing sufficient liquidity for the U.S. currency and diminishing the volatility of the markets. Thus, “it was a demonstration that the central banks are going to do what they must in order to preserve the stability of the system,” noted Christine Lagarde, managing director of the International Monetary Fund, in a statement to the regional press.
In addition, the Fed announced at the end of September that it will maintain low interest rates (between 0.0% and 0.25%), and that it will exchange US$400 billion of short-term Treasury bonds in its power for other bonds that have a longer-term maturity. Its goal is to stimulate the slow, longed-for economic recovery of the U.S.
The Impact on Latin America
Nowadays, volatility and caution dominate the financial markets, which closely follow every move made by monetary authorities in the United States and Europe. However, export activity in Latin America has remained stable, without any major surprises in the prices of raw materials, says Ramos. Nevertheless, Ramos cautions that if the U.S. and Europe officially enter a recession, “that will have an impact on the commodities that the [Latin American] region exports to the United States and Europe [copper, iron, petroleum, soy beans, salmon, coffee, fruits and agricultural products, among others], and it will also have an impact on the investment climate.”
But the consequences will depend on how much exposure each Latin American country has to other markets, adds Javier Bronfman, professor at the School of Government of the Adolfo Ibáñez University in Chile. That’s because “it is very likely that the more open nations and those that ship more primary products to the United States and Europe will experience more problems [than other countries].”
For that reason, economist Ricardo Patiño, Ecuador’s foreign minister, has called on governments of the region to diversify their target markets to include other, new destinations while also increasing their own trade with other countries in the region.
While Oscátegui recognizes that this is an effective strategy, he believes that export diversification and trade integration in Latin America are both challenges that the countries of the region have made progress addressing, “at a slow but steady pace, through numerous significant alliances such as Mercosur, ALCA [the Free Trade Area of the Americas], UNASUR [the Union of South American Nations] and ALADI, the Latin American Integration Association.”
To mitigate the impact of a potential global recession, notes Oscátegui, Latin America should undertake other actions. “One of these is to follow the recommendations of the IMF, which suggests that nations that have low debt and significant foreign currency reserves should apply a counter-cyclical, expansive fiscal policy; expanding their spending while demonstrating their capacity to maintain a long term fiscal equilibrium.”
A Counter-cyclical Policy
According to Valenzuela, Chile and Peru are ideal candidates to apply a counter-cyclical fiscal policy, since these nations' level of indebtedness is low and both countries have significant economic reserves. Since 2007, Chile has had its Fund for Economic and Social Stabilization (FEES), now valued at about US$13 billion, which is intended to finance eventual fiscal deficits given the fluctuations in the global economy. For its part, Peru's Fund for Fiscal Stabilization (FEF in Spanish) currently stands at nearly US$4 billion, “which gives sufficient confidence to both local and foreign investors,” adds Oscátegui. In addition, he says, Peru has been growing at high rates of between 8% and 9% during the past three years, and “it is very likely that as a result of the deceleration, the country will only grow by 6% or 6.5% this year, which is still a good forecast.”
Brazil is another country on track to implement an expansionary fiscal policy, says Oscátegui. It achieved significant economic changes several years ago, “which have enabled the country to become one of the leading powers of the region today.” Among those changes, he notes, are the fact that it kept inflation at bay, strengthened its flexible exchange rate, increased the fiscal surplus, and created reserves in various foreign currencies in order to reduce its external vulnerability.
Argentina is one of the countries of South America that has been growing the fastest in recent times, says Valenzuela. In fact, according to a report released in March by Argentina's National Institute of Statistics and Census (INDEC), the nation's GDP grew by 9.2% in 2010. “In short, the importance of the crisis becomes less if the country is growing, since Argentina also has the capacity to implement a countercyclical fiscal policy [as a result].” However, some local analysts have noted that if the Argentine government opts for an expansionary fiscal policy, it will have to resort to financing from the Central Bank of Argentina (BCRA).
Apparently, there are only a few countries in the region that could follow the recommendations of the IMF. As a result, Bronfman recommends that the other nations monitor their rate of indebtedness, provide state subsidies to their weaker economic sectors and promote hiring, among other measures. | <urn:uuid:d20ccfdd-bce3-4c82-8291-d6663992c898> | CC-MAIN-2023-50 | https://knowledge.wharton.upenn.edu/article/when-the-u-s-and-europe-sneeze-does-latin-america-catch-a-cold-2/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.962743 | 2,432 | 2.53125 | 3 |
Everybody knows “A Christmas Carol” by Charles Dickens, a story about the greedy and miserly Ebenezer Scrooge, who is taught the true meaning of Christmas by a series of ghostly visitors. Dickens created a vivid Christmas fable about the message of goodwill towards mankind. To the left is this book at our presentation.
On page 153 of this book is the other charming story, “A Christmas Tree,” published by Dickens in 1850; ever since, it has had an enduring influence on Christmas traditions, becoming inextricably linked with our celebrations of the festival. To the right is the start of this story, and another photo of Charles Dickens writing it. This is how he started:
“I have been looking on, this evening, at a merry company of children assembled round that pretty German toy, a Christmas tree. The tree was planted in the middle of a great round table and towered high above their heads. It was brilliantly lighted by a multitude of little tapers, and everywhere sparkled and glittered with bright objects.”
The illustrations were done by the world-renowned artist and author from Australia, Robert Ingpen, and they are so beautiful that I could not help gathering them together for a picture show “Robert Ingpen’s Delights”.
At the end of the story: “…I hear a whisper going through the leaves. ‘This, in commemoration of the law of love and kindness, mercy and compassion. This, in remembrance of Me!’”
Power transformers are passive devices that use magnetic fields to transfer energy from one circuit to another, a reliable way to move energy without any physical contact. Power transformers are set apart from other transformer varieties by being designed to meet regulatory standards for operation on mains power. They are ideal for powering machines, as they can withstand mains voltages and heavy currents. A power transformer's most crucial feature is the insulation between the primary and secondary windings, usually specified in kilovolts (kV). This insulation plays a vital role in safeguarding human life from potentially dangerous earth faults.
Power transformers are essential pieces of equipment that allow energy to be transmitted from the generator to the primary distribution circuits. A power transformer adjusts the voltage and current of electricity to ensure that it is properly distributed throughout an electrical or electronic circuit. Power transformers are used in a wide variety of applications, in both residential and industrial settings, making them an invaluable tool for keeping electrical systems running smoothly. To accommodate varying voltage levels in distribution networks, power transformers are often used in step-down and step-up connections.
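The step-up/step-down and current-rating relationships mentioned above reduce to two simple formulas (V_s/V_p = N_s/N_p for an ideal transformer, and I = S/(√3·V) for three-phase full-load current). A rough illustrative sketch; the numbers below are hypothetical and not taken from any KVA Process Transformers datasheet:

```python
import math

def secondary_voltage(v_primary, n_primary, n_secondary):
    # Ideal transformer: V_s / V_p = N_s / N_p (turns ratio)
    return v_primary * n_secondary / n_primary

def full_load_current(kva, line_voltage, three_phase=True):
    # Full-load line current: I = S / (sqrt(3) * V_LL) for three-phase
    s_va = kva * 1000.0
    return s_va / (math.sqrt(3) * line_voltage) if three_phase else s_va / line_voltage

print(secondary_voltage(33_000, 3000, 40))  # 33 kV stepped down to 440 V
print(full_load_current(1000, 11_000))      # ~52.5 A for a 1000 kVA unit at 11 kV
```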
KVA Process Transformers is a leading provider of efficient, reliable and cost-effective medium-voltage (MV) power transformers. We understand the importance of power and have developed a comprehensive range of products to meet our customers' needs. Our transformers are designed to provide high productivity and efficiency, ensuring that users get the most out of their energy consumption. Whatever your industry or application may require, we have a transformer solution for you. Our team of experts has designed a tailored selection of oil-immersed and fluid-filled transformers that goes beyond the standard MV range, giving users customised solutions they can trust and rely on.
- Capacity: Up to 50000 KVA
- No. of Phases: Three phase
- Frequency: 50 Hz
- Voltage: 11kv/22kv/33kv/66kv/132kv
- Taps: On load / Off load as per customer requirement
- Insulations: Class A
- Vector group: As per customer requirements
- Connections: As per customer requirements
- Duty cycle: Continuous
- Winding: Copper
- Terminals: As per customer requirement
Standard fittings and accessories:
- Rating and diagram plate
- Explosion vent with diaphragm
- Jacking pads
- Earthing terminals
- Top filter valve
- Haulage lugs
- Lifting lugs
- Inspection cover
- Magnetic oil gauge with minimum oil marking
- Thermometer pocket
- Silica gel breather
- Sampling valves
- Oil conservator with oil filling hole and drain plug
- Drain valve & bottom filter valve
- Radiator valves
- Air release hole with plug
- Cooling radiators
- Valve schedule plate
- Oil level indicator
- Bi-/uni-directional rollers
- Buchholz relay with alarm and trip contacts
- Oil temperature indicator with alarm & trip contacts
- Winding temperature indicator with alarm & trip contacts
- Fans for ONAF cooling
- Marshalling box
- On-load tap changer / off-load tap changer
- RTCC panel
- Electronic automatic voltage controller (for OLTC)
- Capacitive bushing
Our business takes pride in supplying submerged arc furnace transformers, induction melting furnace transformers, power transformers, dry-type transformers, OLTC-fitted transformers, distribution transformers, earthing transformers, auxiliary transformers, etc., of superior quality at very affordable prices.
- Prompt delivery
- Wider range
- Ethical and transparent business policy
- Experienced team
- Competitive price | <urn:uuid:13ed3db0-f2fd-489f-b17b-30bd503b297c> | CC-MAIN-2023-50 | https://kvatransformer.com/products/power-transformers | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.864368 | 891 | 2.90625 | 3 |
Family problems are common for most people. I have yet to meet someone who came from a perfect family and upbringing. Having said that, it is often not the problems themselves that are the bigger issue, but rather the ways families try to resolve them. In some families, voicing one's opinion is discouraged, while in others the mentality is more that the loudest person wins. If we are fortunate, we have parents who are able to share healthy coping strategies with us—unfortunately, many parents use the strategies they learned from their own parents, even when those strategies did not do them much good when they themselves were growing up.
Oftentimes, solutions lie in our ability to communicate effectively (both speaking and listening) and in establishing healthy boundaries that family members respect. Problems arise when family members have opposing needs and are fighting for the same things (typically parental attention and approval). Learning to identify dysfunctional patterns in behavior and communication is often key to correcting some of these problems.
The 3/2 Polyrhythm
In this video, we learn to count the 3/2 polyrhythm by slapping the left and right hand on the guitar. This lesson is a fun one – and sets us up to play the polyrhythm on strings in the next lesson!
Polyrhythms = using conflicting rhythms
Polyrhythm is the simultaneous use of two or more conflicting rhythms (this is the Wikipedia definition).
In this lesson, we will explore the 3/2 polyrhythm. What this means is that we will play a rhythm that could be counted in “threes” or just as easily could be counted in “twos”.
Visualize using boxes
One way to learn the 3/2 polyrhythm is to think in boxes.
Imagine that you have 6 boxes which can fill or leave empty.
We are now going to fill those boxes with rhythms!
You need to generate two different sounds. In this video, I slap the top of the guitar in two places.
One example is to tap the table in front of you with your left hand and right hand in a way that will give two distinct sounds (e.g. your right hand is holding a pencil that makes the tap)
Now count to 6 and on the ‘1’ and the ‘4’ slap the table with your left hand.
Now play your right hand on the ‘1’, ‘3’ and ‘5’.
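If it helps to see the combined pattern written out before you try it, here is a tiny sketch (any language would do; shown here in Python) that prints the six boxes with the left-hand hits on 1 and 4 and the right-hand hits on 1, 3 and 5:

```python
BOXES = 6
left_hand = {1, 4}       # two evenly spaced hits  -> the "2" side of 3:2
right_hand = {1, 3, 5}   # three evenly spaced hits -> the "3" side of 3:2

for beat in range(1, BOXES + 1):
    l = "L" if beat in left_hand else "-"
    r = "R" if beat in right_hand else "-"
    print(f"box {beat}: {l}{r}")
# box 1: LR, box 2: --, box 3: -R, box 4: L-, box 5: -R, box 6: --
```

Read down the printout and you get the classic 3:2 feel: both hands together on 1, right on 3, left on 4, right again on 5.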
Putting it all together
Now try to do both at the same time.
Take this nice and slowly. It’s really good fun but it does take a little while to master.
Now that you understand the concept and theory of polyrhythms, you are ready to play your first polyrhythm on guitar – the 3/2 polyrhythm.
The next article shows you how to play this on two strings.
Get the 105 page Guitar Fingerpicker eBook
Register for the newsletter and I will send you this book and a free lesson once a month.
You can unsubscribe anytime. | <urn:uuid:5cbe5999-d0ab-4663-bdc7-5348618b39e3> | CC-MAIN-2023-50 | https://learnfingerpicking.com/the-3-2-polyrhythm/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.916077 | 439 | 3.265625 | 3 |
Contrasted greenhouse gas emissions from local versus long-range tomato production
Theurl, M. C., Haberl, H., Erb, K. H., & Lindenthal, T. (2014). Contrasted greenhouse gas emissions from local versus long-range tomato production. Agronomy for Sustainable Development, 34(3), 593-602.
Abstract: Transport from regional production requires less fossil fuel and thus produces lower greenhouse gas emissions. In addition, policies fostering the production of regional goods support rural development. Tomato consumption has increased fast in Europe over the last decade. Intensive production techniques such as heated greenhouses and long-distance transport overcome seasonal constraints in order to provide year-round fresh goods. However, studies that evaluate seasonal and off-season production are scarce. Here, we analyzed the carbon footprint of tomato production systems in Austria, Spain, and Italy using a life cycle approach. We collected data from four main supply chains ending at the point of sale in an average Austrian supermarket. We aimed to identify hotspots of greenhouse gas emissions from agricultural production, heating, packaging, processing, and transport. Our results show that imported tomatoes from Spain and Italy have two times lower greenhouse gas emissions than those produced in Austria in capital-intensive heated systems. On the contrary, tomatoes from Spain and Italy were found to have 3.7 to 4.7 times higher greenhouse gas emissions in comparison to less-intensive organic production systems in Austria. Therefore, greenhouse gas emissions from tomato production highly depend on the production system such as the prevalence or absence of heating.
Default weight: 10
Peer reviewed: Yes
Number of products: Below 5
Meta study: No
Year of study: After 2005
Methodology described: Yes
Reputation of source: High
A lottery is a game in which numbers are drawn to determine the winner of a prize. Prizes can range from money to goods or services. Many governments regulate lotteries, and a portion of the proceeds from the games is often donated to good causes. Americans spend an estimated $80 billion a year on lottery tickets. However, there are some things you should know before playing the lottery.
A number of people have attempted to improve their chances of winning by purchasing more tickets or selecting random numbers that aren’t close together. These tactics may be tempting, but they’re not effective. In fact, according to Harvard statistics professor Mark Glickman, there is only one way to increase your odds of winning the lottery: by buying more tickets.
In the rare case that you win, be prepared to pay large amounts of taxes. Up to half of the prize value can be paid in federal taxes alone. It is also important to diversify your investments and build an emergency fund before spending any of the winnings.
The concept of a lottery is quite ancient, dating back to biblical times. The Old Testament contains a passage in which the Lord instructed Moses to divide land by lot. The practice continued in the Roman Empire, where lotteries were used to distribute prizes at dinner parties and Saturnalian celebrations. One type of lottery was called an apophoreta, in which guests would receive wood tokens stamped with symbols that were then passed around the room. The prize would be a fancy item, such as a set of dinnerware.
Although the modern state lottery was introduced in America by British colonists, it wasn’t an instant success. It was criticized by many Christians, who saw gambling as a sin and a doorway to worse sins. Despite this, state lotteries survived and prospered in the United States. In addition to providing much-needed revenue for state governments, lotteries have become an integral part of American life.
The poorest members of society, those in the bottom quintile of income distribution, spend a larger percentage of their disposable income on lottery tickets than other segments of the population. This is a regressive tax, as the lottery provides an opportunity for these people to gain wealth without putting in decades of hard work. While the money won from a lottery does not make you rich, it can provide an avenue for self-sufficiency and entrepreneurship. However, the most important thing to remember is that true wealth comes from more than just money. It comes from an ability to achieve your goals and create joyous experiences for others. In order to do that, you need to have a solid plan and the discipline to stick with it. This is only possible if you avoid being distracted by the allure of the lottery and put your efforts towards other pursuits, like educating yourself or creating a small business. | <urn:uuid:91c72943-cdd3-4123-91d5-540abeb65101> | CC-MAIN-2023-50 | https://longestspeechever.com/tag/sdy-hari-ini/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.976837 | 579 | 2.859375 | 3 |
As the world becomes more environmentally conscious, sustainable and green travel is now a hot topic in the transportation sector. In particular, there’s a debate over private vs commercial flying, and which is actually more eco-friendly. Let’s cut through all the talk to find the facts about private jet travel and the environment.
Private vs Commercial Flying: Which Is More Eco-Friendly?
The environmental conversations around private vs commercial flying tend to focus on inefficiencies. Specifically, green travel critics point to studies that claim private jet travel produces around 10 times more carbon per passenger than commercial travel.
On the surface, this critique seems logical since commercial aircraft can usually hold much more people than a private jet. However, private aviation doesn’t always produce more carbon per passenger. In a recent piece for Forbes, aviation scribe Doug Golan illustrated an interesting private vs commercial example.
“Last year Berlin attracted 13.5 million visitors with an average spend of $227 per person, perhaps less than you assumed. In other words, a full Boeing 737-800 carrying 150 passengers brings and leaves behind $34,050.
If the flights to and from the German capital were about three hours each way, the commercial airliner would have emitted about 149 tonnes of CO2… At the same time, a Cessna Citation XLS, a midsize private jet that seats seven or eight people, making the same roundtrip would have emitted 12.52 tonnes of CO2.
With a spend of $85,000, it would mean those often ridiculed private fliers would have brought 250% more revenue to the local economy while emitting less than one-tenth the CO2 of a full passenger jet.”
In this private vs. commercial case, a private jet is actually more efficient than a commercial aircraft.
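Running the quoted figures through a quick back-of-envelope check clarifies what each framing measures: in this example the private flight emits far less CO2 in total and earns much more tourist revenue per tonne, even though its per-passenger emissions are higher. A small sketch (the 7–8 seats are averaged to 7.5; all numbers come from the quote above):

```python
flights = {
    "Boeing 737-800 (full)": {"pax": 150.0, "co2_t": 149.00, "revenue": 34_050},
    "Cessna Citation XLS":   {"pax": 7.5,   "co2_t": 12.52,  "revenue": 85_000},
}

for name, f in flights.items():
    per_pax = f["co2_t"] / f["pax"]         # tonnes of CO2 per passenger
    per_tonne = f["revenue"] / f["co2_t"]   # dollars of revenue per tonne of CO2
    print(f"{name}: {per_pax:.2f} t/passenger, ${per_tonne:,.0f} per tonne")
# 737-800: ~0.99 t/passenger, ~$229/t;  Citation XLS: ~1.67 t/passenger, ~$6,789/t
```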
AVIATION and Carbon Emissions
Overall, aviation doesn’t emit as much carbon as you may think. According to Air Transport Action Group (ATAG), the global aviation industry produces just 2% of human-induced CO2 emissions.
But what about private jets? Well, a green travel report by VistaJet shows that private aviation makes up only 2% of the global aviation industry's CO2 emissions. That means private aviation contributes less than 1% (0.04%, to be exact) of total global CO2 emissions.
Additionally, the private jet industry is making a number of green travel efforts to further combat CO2 emissions. One major avenue is carbon offsetting, which allows travelers to purchase credits in order to counterbalance the carbon produced by their trip. Read more about carbon offsetting here.
Learn more about making your private travel green with Magellan Jets by calling 877-550-5387 or visiting magellanjets.com. | <urn:uuid:91b80b78-c17e-4b66-af91-2af9d94b2073> | CC-MAIN-2023-50 | https://magellanjets.com/private-aviation/private-vs-commercial-flying-eco-friendly/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.941684 | 576 | 2.703125 | 3 |
Sankofa! Reflecting on Our Past to Understand Our Present and Future
by Lauren Pitcher
October 16, 1968, is a day forever etched in United States history. African American Olympic track gold medalist Tommie Smith and bronze medalist John Carlos stood on the award podium raising their fists in silent protest against racial injustice during the civil rights movement. Australian silver medalist Peter Norman also wore a pro-human-rights badge to stand in solidarity with Smith and Carlos. All three men were ultimately ostracized for their symbols of protest. This image would go on to become an iconic symbol of the civil rights movement.
Fifty-one years later, the Making Waves Academy (MWA) Black Student Union (BSU) stood with pride, raising their fists to unite with Wave-Makers of various ethnicities in celebration of Black History Month. The BSU chose the theme of “Sankofa” as the anchor for their celebration. The concept of Sankofa, derived from Ghana, means we must go back to our roots in order to move forward. The BSU Black History Month celebration did just that. It was a visual storytelling of the rich history of African Americans, in which BSU students not only learned Black history but were instilled with a sense of pride in Black culture.
The BSU took the packed audience through a fun-filled historical journey of African American history. Wave-Makers recited creative monologues highlighting African kings and queens, performed traditional African dance, and recognized select African American MWA staff members. They used dramatic expression and praise dance to visualize how slavery began, with African descendants being taken from their homeland and forced into slavery. The crowd even danced and sung along as the Wave-Makers highlighted the influence of African Americans in creative arts by performing a decade-by-decade dance compilation.
The Black Student Union provides a source of cultural unity and racial equity for students of all ethnicities that support the end of racial inequality. The first Black Student Union was founded in the 1960s at San Francisco State University. Making Waves Academy continues the tradition by uniting Wave-Makers of all ethnicities to celebrate Black culture and learn Black history—not just in February, but all year long. | <urn:uuid:194fc369-ed4f-445f-b1f0-361f14ce4b72> | CC-MAIN-2023-50 | https://making-waves.org/academy/sankofa-reflecting-on-our-past-to-understand-our-present-and-future/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.949044 | 463 | 3.453125 | 3 |
Will the United States Environmental Protection Agency pass a PFAS Maximum Contaminant Level rule for all municipal water systems in the United States by January 1, 2030?
PFAS were first developed in the 1940s by DuPont. By the 1950s, 3M had begun manufacturing various PFAS (including PFOA and PFOS) for consumer and commercial product applications (including Scotchgard and Teflon). Many products are still manufactured with PFAS today, from food containers to firefighting foam to non-stick cookware.
PFAS can cause multiple detrimental effects including but not limited to reproductive & developmental problems, liver & kidney damage, tumors and immunological effects in laboratory animals. The most consistent findings are increased cholesterol levels among exposed populations.
Studies have shown PFAS to be present in the blood serum of nearly every human tested, as well as in every body of water, rain, snow, and even bottled water - all of which indicates widespread human exposure.
As of September 18, 2020, the United States Environmental Protection Agency (EPA) “ToxCast Chemical Inventory” stated that there are 430 different chemicals in the PFAS group. The EPA collected data on six perfluorinated compounds under the Third Unregulated Contaminant Monitoring Rule (UCMR3), yet has not proposed any Maximum Contaminant Level (MCL) standards since the UCMR3 study.
Number of forecasts: 94
<iframe src="https://metaforecast.org/questions/embed/metaculus-4759" height="600" width="600" frameborder="0" />
Depression is not just the blues; it is a real illness that can afflict anyone going through severe stress, hardship, loss or pain. It is a persistent state of sadness marked by crying spells, loss of energy, changes in appetite, forgetfulness, lack of pleasure (anhedonia), lack of motivation, confusion, feelings of dread, as well as hopelessness and helplessness.
There are three types of depression:
Major Depressive Disorder: A person with major depression exhibits at least five of the depression symptoms within the same two-week period. It can affect a person’s ability to eat, work, sleep or even study.
Persistent depressive disorder (PDD): PDD can be recurrent or ongoing over a period of one’s life and may include most of the symptoms associated with major depressive disorder. It is less severe than a major depression but lasts for a minimum of two years. PDD typically manifests as low energy, lack of appetite or overeating, lack of sleep or oversleeping, as well as feeling irritable, stressed or uninterested in many activities. Its intensity can fluctuate over time.
Bipolar disorder: Bipolar disorder is a cycling mood disorder that causes a person to experience extreme high moods (mania), milder high moods (hypomania) and extreme low moods (depression).
In the United States of America:
6.9% of adults have experienced at least one major depressive episode.[1]
About 14.8 million adults in the U.S., or 6.7% of persons older than 18 years, suffer from major depressive disorder.[1]
Approximately 1.5% of the population older than 18 years have PDD.[1]
At MMCG we treat depression with psychotherapy, including cognitive behavioral therapy, which uses cognitive restructuring to challenge and deprogram the negative beliefs fueling the depression.
Please feel free to take this test to determine the severity of your depression.
Beck’s Depression Inventory – A Test to measure the severity of depression
- Anxiety and Depression Association of America. (2014). Facts and Statistics. Retrieved from http://www.adaa.org/about-adaa/press-room/facts-statistics
The first step towards recovery is to recognize that you are suffering from depression. You are not alone in this struggle. We are here to help relieve you from any emotional pain. Please do not hesitate to contact us at 732-770-4331 if you are experiencing any of these symptoms so that we can start working on relieving you from this emotional pain.
Today we begin the book of 2 Chronicles. It recounts the history that was covered in 1 and 2 Kings. Like 1 Chronicles, it records the history of the southern kingdom of Judah, and it is written from a priestly perspective. It was written after the return from the Babylonian exile, and is meant to show the faithfulness of God to His people. The book begins with the reign of Solomon (1:1). We see that even during the united kingdom, worship was somewhat disjointed. The Ark of God was in the City of David in the tent David had set up (v. 4 – see 2 Sam 6:12, 17). The Tent of Meeting was in Gibeon (v. 3) along with the altar of burnt offering (v. 5). We see that like David before him, Solomon acted as priest to God (v. 6).
Verses 7-12 recount God appearing to Solomon at Gibeon (see 1 Kings 3:5-14). Verses 14-17 repeat 1 Kings 10:26-29. In chapter 2, we see the preparation for the building of the Temple. The chronicler omits the fact that Solomon drafted forced labor from his own people (see 1 Kings 5:13-14), but he does tell us that the 150,000 workers (2:2) were “resident aliens” from the land (v. 17). This may refer to slaves or to converts from the nations. We also see more of the communication between Solomon and Hiram (compare vv. 3-16 with 1 Kings 5:1-9).
Chapter 3 speaks of the building of the Temple, albeit in far less detail than 1 Kings 6. In 3:1, we see that the Temple Mount is actually Mount Moriah, which is mentioned only here and in Genesis 22:2. It is the mountain upon which Abraham was willing to sacrifice Isaac. It is a place where the willing heart matters, not the actual offering. We also see that it is the spot where David said the Temple would be built (1 Chr 22:1), which is the threshing floor of Ornan the Jebusite. It is where God led David to sacrifice in order to stop the pestilence that was punishment for David’s census (see 1 Chr 21). It is a place where David insisted his offering must cost him something. Abraham was ready to offer everything he had as an act of worship. David was unwilling to offer worship that cost him nothing. This is exactly the worship that God required at the Temple.
Chapter 4 records the creation of the Temple furnishings, again in far less detail than in 1 Kings 7:15-50. We see the vessels of the court were made of bronze, and that the vessels closest to God’s presence in the Holy of Holies were made of gold. This represents the purity required to come into God’s presence. It also represents the costly worship God requires. We see the gourds (4:3), the flowers (vv. 5, 21), and the pomegranates (v. 13), all reminders of God’s original dwelling place with man in the Garden of Eden. In 5:1, we see the building of the Temple is completed.
Make Eye Injury Prevention A Priority
Every time we open our eyes we are reminded of the remarkable gift of sight! Whether you’re watching the sunset or seeing your child take their first steps, your eyes allow you to see some pretty amazing things.
Sadly, each year about one million eye injuries occur in the United States, 90 percent of which could have been prevented if protective eyewear had been worn. So, when it comes to avoiding eye injury, remember the two p’s: protection and prevention!
Most Eye Injuries Occur At Home
Did you know that nearly 50 percent of eye injuries occur in the home? Common household objects can be dangerous to our eyes if proper precautions are not taken to protect our vision.
Believe it or not, accidental falls are the leading cause of eye injury. Slipping on slick surfaces or accidents on stairs are the most common reasons for falls. While individuals 60 and older have a higher risk of falling, precautions should be taken in every home to prevent tripping and slipping.
Workshops and yard debris also pose a threat. Power tools, lawn mowers, and weed whackers can all cause damage to your sight if appropriate protection is not in place. When operating tools, be sure to follow all recommended safety procedures for each specific tool. Wear goggles to shield your eyes from dust, debris, fumes, etc. If you work in the yard, clear rocks and debris before mowing or weed whacking.
Always wear eyewear when dealing with household chemicals. Pesticides, bleach, ammonia, and other cleaning agents should all be handled while wearing goggles. Make sure the nozzle is pointed in the right direction before spraying.
Be sure to give your children age-appropriate toys. Avoid toys with sharp points, protruding edges or projectile parts, such as darts, BB guns, slingshots and the like. When handled improperly, toys have been known to cause serious eye injury and even blindness.
Protect Yourself From Eye Injuries Outside The Home As Well
Don’t forget that there are many injuries taking place outside of the home these days, especially in the workplace. Industry workers are at a higher risk of eye injury due to the materials and machines they work with. In fact, over 100,000 workers each year are disabled due to eye injury and subsequent vision loss. Workers should always wear protective eyewear in industrial-related positions.
Sports accidents also lead to many eye injuries. While it may not be the most fashionable thing to wear on the basketball court or the soccer field, studies show that 90 percent of sport-related eye injuries could be prevented by the use of protective eyewear.
Love Your Sight, Protect Your Eyes
Many eye injuries unfortunately lead to lost or impaired vision. Knowing the risk of certain activities can help you know how to best protect yourself and your eyes. Cherish the gift of sight and wear protective eyewear!
Updated on 3 November 2023
Obsessive Compulsive Disorder (OCD) is characterized by a series of unwanted thoughts and fears that push a person to do things in a repetitive pattern. This obsessive and compulsive behaviour pattern causes distress and interferes with day-to-day activities. You may try to stop your obsessive thoughts and compulsive behaviour, but that will only lead to further distress. Ultimately, you will be driven to perform an obsessive-compulsive act in search of some respite from the distress. This process becomes a ritual and leads to the unbreakable OCD cycle.
In this article, we will discuss some characteristic OCD symptoms that a person suffering from Obsessive Compulsive Disorder experiences on a day-to-day basis.
OCD usually includes both obsessions and compulsions, but it is possible for the condition to have only obsessive thoughts at its root. The types of obsessive thoughts that you might have are as follows:
The fear of contamination or dirt is a common obsession. People who clean excessively and feel compelled to remove even a speck of dirt are often driven by this type of obsessive thought about cleanliness.
Persistent doubting and difficulty tolerating uncertainty are also obsessive thoughts. People who constantly doubt others and cannot stand uncertainty may be caught in OCD behaviour; for example, to reduce uncertainty, they may fixate on a question and remain obsessed with it until they find the answer.
Many people are also obsessed with keeping things orderly and symmetrical in their living space. While being organized is useful, obsessing over order is not.
Aggressive thoughts about harming yourself or others in response to certain triggers are another form of obsession and can lead to many issues in your daily life.
Inappropriate, intrusive or unwanted thoughts, including extreme thoughts on religious or sexual themes, are also a kind of obsessive thought cycle.
Compulsive behaviour is a pattern of actions that you feel driven to perform again and again. These physical or mental acts are performed to seek respite in stressful or difficult times. However, compulsions are not behaviour patterns you follow for pleasure; they give you only temporary relief, not complete relief from the stress factor.
Types of compulsive behaviour and the OCD symptoms that accompany them include the following:
Hand washing is normal behaviour, but when it becomes compulsive and excessive, it can leave your hands raw and red.
Anxiety about whether doors and windows are locked when leaving a place can become a compulsive behaviour pattern. Checking once is okay; the compulsive pattern becomes prominent when you keep checking and worrying about it.
Repeatedly checking the stove to confirm that it is off can likewise be a sign of fear and anxiety. The stove should of course be properly shut off, but if you keep worrying about it after checking, the fear of an accident has become compulsive.
Compulsive counting, often in fixed, orderly patterns, is another major sign of OCD.
Being hooked on silently repeating a certain word, prayer or phrase is also a compulsive behaviour pattern. The prayer or phrase might offer you some respite as reassurance, but it remains part of the obsessive-compulsive pattern.
Arranging all the canned goods in your house in one particular way is another very common compulsive behaviour pattern.
You must consult a doctor when OCD symptoms become severe and create major disruptions in your life. The symptoms fluctuate in severity and can also change in pattern and behaviour. If you have been experiencing a lot of stress lately and the symptoms worsen with time, it is vital to see a psychologist. If the quality and pace of your life are affected by these behaviour patterns, you should go to a doctor. Many clinics offer therapy and medication to keep OCD behaviour under control. With the right combination of approaches, a person can break the OCD cycle and regain a normal daily life.
1. Janardhan Reddy YC, Sundar AS, Narayanaswamy JC, Math SB. (2017). Clinical practice guidelines for Obsessive-Compulsive Disorder. Indian J Psychiatry. NCBI
2. Richter PMA, Ramos RT. (2018). Obsessive-Compulsive Disorder. Continuum (Minneap Minn). NCBI
Charu has been a seasoned corporate professional with over a decade of experience in Human Resource Management. She has managed the HR function for start-ups as well as established companies. But aside from her corporate career she was always fond of doing things with a creative streak. She enjoys gardening and writing and is an experienced content expert and linguist. Her own experiences with motherhood and raising a baby made her realize the importance of reliable and fact-based parenting information. She was engaged in creating content for publishing houses, research scholars, corporates as well as for her own blog.
Characterizing Displacement And Voltage Induced On A Piezo/Photodiode Device By Pulsed Laser Excitation Using Time-Resolved Kelvin Probe Force Microscopy
- 31 Oct 2023
- Volume 25
- NANOscientific Magazine, Fall-Winter 2023
Andrea Cerreta1, Zeinab Eftekhari2, Rebecca Saive2, Alexander Klasen1
1Park Systems Europe, Germany
2Inorganic Materials Science, MESA+, University of Twente, Enschede, 7522NB, the Netherlands
Energy from different natural sources (light, wind, heat, etc.) can be converted into measurable electrical quantities via a series of physical phenomena, some of which are at the basis of modern-day technologies. Among devices where energy conversion plays a central role, this application note focuses on piezo actuators and photodiodes.
The piezoelectric effect describes the appearance of a potential difference across two opposite sides of certain materials when a mechanical strain is applied to them. This is intimately related to the fact that the deformation of the crystalline structure results in the creation of dipoles at the unit cell level. The same materials also exhibit the converse effect: a mechanical deformation is observed when a voltage is applied to a piezoelectric sample. Piezoelectricity finds wide application in the fabrication of scanners and positioners.
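As a compact point of reference (standard textbook relations, not taken from the note itself), both effects can be summarized in the one-dimensional strain-charge form:

```latex
% Direct effect: an applied stress T produces a dielectric displacement D.
% Converse effect: an applied field E produces a strain S.
D = d\,T + \varepsilon^{T} E, \qquad S = s^{E} T + d\,E
```

Here d is the piezoelectric coefficient, s^E the compliance at constant field, and ε^T the permittivity at constant stress; the same coefficient d appearing in both relations is what couples the electrical and mechanical domains.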
Photodiodes are semiconductor-based devices which have their main practical use as light detectors. Their working principle lies in the fact that once the photodiode surface is illuminated by light, a voltage proportional to the intensity of the impinging beam is generated. In the case of a time-modulated light signal such as a sinusoidal or pulsed wave, a photodiode allows its amplitude, frequency and phase to be converted into electrical signals that can be measured in analog form. Photodiodes can then be used to wirelessly transfer both energy and information.
Putting the two pieces together, it is possible to conceive a device which combines the effects: 1) receiving a light signal as input, 2) converting it to a voltage, and finally 3) activating a mechanical displacement [1]. We define as piezo-photomotion devices those in which this scheme is present. The fabrication and characterization of such devices is one of the main topics of the research group of Prof. Saive from the Inorganic Materials Science (IMS) department, University of Twente (Netherlands). Such devices could be used in fields such as micro/nanorobotics for biomedical applications [2].
Atomic Force Microscopy (AFM) is an ideal candidate to study the properties of piezo-photomotion devices. AFM is sensitive to topographic variations on a subnanometer scale, hence allowing the detection of even the faintest movements of these devices. Moreover, the AFM setup allows performing Kelvin Probe Force Microscopy (KPFM), which gives information about the surface potential of samples and its variation.
This application note is the outcome of a collaboration between Park Systems and IMS. The goal of this note is to show how the Park Systems NX10 AFM microscope operated in KPFM mode can be used as a platform to provide a thorough time- and position-dependent characterization of piezo-photomotion devices. Details about the device structure and the setup and results of the KPFM experiments will be illustrated in the following sections.
Representative devices were fabricated and measured at IMS. They consist of a silicon photodiode integrated with lead zirconate titanate (PZT), a piezoelectric material, shown as the blue layer in Figure 1 a) [1]. As can be seen in the cross-section of the device, the PZT actuator is sandwiched between lanthanum nickelate (LNO) electrodes on a 50 μm silicon membrane, which constitutes the photosensitive part of the device.
Figure 1. a) Sketch of the cross-section of the piezo-photomotion device. b) top view of the device; the numbering indicates the positions on the active area of the sample where measurements were performed.
The devices were placed into the NX10 microscope during the experiments. Illumination was provided by a light-emitting diode with a wavelength of about 630 nm, which is part of the PhotoCurrent Mapping (PCM) extension of the NX10 setup. The open design of the NX10’s AFM head provides large space above the measurement area and thus enables a lot of freedom in optimizing the angle of the impinging beam and its position on the active area of the device. Since the LNO/PZT/LNO stack is transparent in the spectral range of the diode, the light can reach the photodiode and hence generate a potential difference between the two electrodes, which leads to the deformation of the piezoelectric layer. Park Systems microscopes are designed as sample scanners, meaning that the probe always remains stationary in the center of the tool, while the sample mounted on an XY scanner can be displaced laterally. Since the light-emitting diode and the AFM head are fixed with respect to the sample stage, only one initial alignment is needed to have the emitted beam illuminating the probed area of the sample.
The light switching can be regulated via a triggering voltage. For that, an external function generator (GW Instek SFG-1003) was coupled with the triggering line of the emitter, producing alternating on/off periods of the light at a pre-determined frequency. The rising time of the emitter, i.e. the time needed to reach the maximum power of the emitted light, is evaluated to be less than 1 μs.
Measurements were performed in KPFM mode, where the Contact Potential Difference (CPD) between the AFM probe and the sample is measured at each position during imaging. There are several ways to implement KPFM, all based on applying an oscillating electrical bias with a given amplitude and frequency between the tip and the sample. This electrical excitation creates peaks in the spectrum of the vertical deflection of the cantilever with an amplitude proportional to the CPD. Using a lock-in amplifier fed with this vertical deflection as an input signal, it is possible to evaluate the amplitude at given frequencies and apply an additional potential that counteracts the CPD and thus nullifies the total potential difference between the tip and the sample. In consequence, the deconvoluted amplitude is reduced to zero. For an extended discussion of KPFM, please refer to [3]. In Sideband™ KPFM, the frequency of the exciting bias is selected in such a way as to produce peaks on the sidebands of the mechanical resonance of the probe. As a result, peaks in that range are enhanced. Moreover, the amplitude of these sidebands depends more on the contribution of the tip-sample interaction than on forces acting on other parts of the cantilever [4], leading to a better quantitative and laterally resolved determination of the potential.
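To make the nulling principle concrete, the toy model below simulates the electrostatic force on a biased tip and feeds the demodulated component at the excitation frequency back onto the DC bias until it vanishes. It is a minimal sketch: the CPD, capacitance gradient and excitation parameters are invented for illustration and are not parameters of the NX10 electronics or of this experiment.

```python
"""Toy model of the amplitude-modulation KPFM nulling loop (illustrative only)."""
import numpy as np

V_CPD = 0.20               # "unknown" contact potential difference (V), assumed
DCDZ = -1.0e-9             # tip-sample capacitance gradient dC/dz (F/m), assumed
F_AC, V_AC = 17.0e3, 1.0   # AC bias frequency (Hz) and amplitude (V), assumed

t = np.linspace(0.0, 5.0e-3, 200_000)   # 5 ms trace, 85 full periods of F_AC
ref = np.sin(2.0 * np.pi * F_AC * t)    # lock-in reference at F_AC

def electrostatic_force(v_dc):
    """F = 1/2 dC/dz (V_dc - V_CPD + V_ac sin wt)^2 -- the detected signal."""
    dv = v_dc - V_CPD + V_AC * ref
    return 0.5 * DCDZ * dv**2

def lockin(signal):
    """In-phase amplitude at F_AC: equals dC/dz * V_ac * (V_dc - V_CPD)."""
    return 2.0 * float(np.mean(signal * ref))

# Feedback servo: step the tip DC bias until the F_AC component vanishes,
# at which point V_dc equals the contact potential difference.
v_dc = 0.0
for _ in range(20):
    amp = lockin(electrostatic_force(v_dc))
    v_dc -= 0.5 * amp / (DCDZ * V_AC)    # proportional step toward the null

print(f"recovered CPD = {v_dc:.3f} V (true value {V_CPD:.2f} V)")
```

Because the force component at the excitation frequency is proportional to (V_dc − V_CPD), each proportional step halves the residual, and the loop converges to the CPD after a handful of iterations.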
KPFM can only be performed while scanning the sample in Non-Contact mode. Therefore, the system must be capable of deconvoluting both the amplitude of the resonant peak modulated by the sample topography, and the amplitude of electrically induced peaks modulated by the surface potential. The default Park Systems NX electronics are equipped with four independent lock-in amplifiers that can be run in parallel, plus a feedback servo acting on the tip DC bias. This allows for mapping the topography and potential simultaneously, with no need for a second pass at any scanned line in lift mode, and with greater accuracy due to the closer distance of the tip to the sample. The built-in EFM/KPFM environment in the Park Systems SmartScan™ software allows implementation of the Sideband™ KPFM mode in an effective and intuitive way.
The active area of the fabricated devices is roughly a square with sides measuring a few millimetres. As a first test, the AFM probe was landed at the center of the membrane, in the position marked as 1 in Figure 1 b). Since the main focus of this test was to determine the time dependence of the membrane response, the scanned area was reduced to a single point of zero lateral dimension in order to limit topographic crosstalk, and signals were acquired vs. time. Since an amplitude modulation-based feedback on the tip-sample distance was active during the Sideband KPFM experiments, any vertical change of the surface position induced by the movement of the membrane could be tracked in parallel with the potential shift. Here we define the displacement as the shift of the surface height with respect to its resting position.
Figure 2. a) Comparison of the displacement and photo-induced voltage versus time, with a light pulsed at a switching frequency equal to 1 Hz. The device behaviours in time at frequencies equal to 2 and 4 Hz are shown in b) and c) respectively.
Figure 2 a) shows results when applying the light excitation with a frequency of 1 Hz (0.5 seconds on, 0.5 seconds off). The periods where light was switched on are indicated with a light orange background, while the dark periods are indicated with a light grey background. It can be noticed that whenever the light was on, a vertical displacement of roughly 1 nm upwards could be observed. In the same conditions, an increase in the surface potential of about 200 mV (defined as the difference between the top and bottom plateaux) was measured. This demonstrates the basic principle of the device: the voltage generated by the photodiode via illumination is applied to the bottom electrode resulting in the expansion of the piezoelectric stack. The 50 mV surface potential measured during the dark periods is due to the contact potential difference between the probe and non-excited sample.
More tests were performed at higher pulse frequencies, and results for 2 and 4 Hz are shown in Figures 2 b) and c) respectively. A similar trend can be observed, with the total amount of displacement and photovoltage being noticeably smaller at 4 Hz. It can also be noticed that the plateaux become shorter at higher frequencies, while the transient time needed to reach the new state after light switching becomes more apparent.
Figure 3. Photovoltage vs. time for light pulse frequency of 1, 2, and 4 Hz. The rising time of the photovoltage measured at 4 Hz is represented with a red background.
To better investigate the transient response time, photovoltage vs. time measurements were plotted together in Figure 3. The photovoltage measured with a pulse frequency of 4 Hz is represented in red with a total time range of 1 second. For each pulse cycle, the rising time τ, spanning from the moment the light is switched on to the moment the plateau is reached, is represented with a red background. The rising time has been estimated to be about 80 msec, which is much longer than the response time of the emitting diode, indicating that τ is a characteristic response feature of the device. It is worth noticing that although the photovoltage appears to reach a plateau for all the data plotted, the difference between the two levels at a 4 Hz pulse frequency appears slightly smaller than in the other data. It is currently unclear whether this effect is due to the continuous excitation of the photoresponse at higher frequencies or to other experimental reasons. Until further analysis clarifies this point, we will take 80 msec as a value that well represents the qualitative behaviour of the response. Photovoltages at pulse frequencies equal to 1 and 2 Hz were also plotted. These signals were conveniently shifted on the x-axis to have the light pulse start at the same time. Here, τ is in good agreement among all the measurements. A model to describe the time response of the device is currently under development by IMS, although it has been hypothesized that it may be linked to the quality of the interface between the photodiode and the bottom electrode. The experiments presented here provide crucial information about the expected performance of the device at different communication rates with respect to the exciting light source and about the efficiency in converting the transmitted signal into movement.
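As an illustration of how such a rising time can be extracted, the sketch below fits a first-order exponential step response to a synthetic photovoltage trace built with the plateau levels and the roughly 80 msec time constant reported above. The exponential form is an assumption made for the sketch; as noted, the actual response model is still under development.

```python
"""Extract the rising time tau from a photovoltage step (synthetic data)."""
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def rise(t, v_dark, dv, tau):
    """First-order step response after the light switches on at t = 0."""
    return v_dark + dv * (1.0 - np.exp(-t / tau))

# Synthetic "measurement": ~50 mV dark level, ~200 mV swing, tau = 80 ms
t = np.linspace(0.0, 0.5, 500)          # one light-on half-period at 1 Hz
v = rise(t, 0.05, 0.20, 0.080) + rng.normal(0.0, 2e-3, t.size)

popt, _ = curve_fit(rise, t, v, p0=(0.0, 0.1, 0.05))
print(f"fitted rising time: tau = {popt[2] * 1e3:.0f} ms")
```

With real data, the same fit applied to each pulse cycle would also reveal whether τ drifts with pulse frequency or illumination history.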
Figure 4. a) Mechanical displacement and surface potential at the positions indicated in Figure 1. b) 3D rendering of one quarter of the device representing the vertical displacement of the PZT membrane.
On a similar device, the AFM probe was approached at six different positions progressively farther from the center and numbered as in Figure 1 b), with the goal of measuring the membrane vertical displacement and light-induced photovoltage as a function of the position on the membrane. Light pulses with a frequency of 8 Hz were generated to excite the membrane. The results of this experiment are shown in Figure 4 a). It can be seen that the displacement and photovoltage vs. position have different trends. In the case of the displacement, the maximum value is observed at the center of the membrane, with a decrease to almost zero displacement when reaching the side of the membrane. The displacement at the center is about 700 pm and thus smaller than that reported for the other device, which can be due either to small structural variations among different devices, or to the fact that when using an 8 Hz frequency the light on/off time is 62.5 msec, which is slightly shorter than the estimated 80 msec rising time. Therefore, the higher pulse frequency may prevent the membrane from reaching the displacement plateau.
On the other hand, the photovoltage remained more or less constant at the different positions. This consistency may simply be due to the homogeneity of the silicon layer at the bottom of the device, providing the same photo-response everywhere. Again, additional simulations could shed further light on the underlying mechanisms to support any interpretation of these observations. A hypothesis for the decreasing amplitude of the displacement with increasing distance from the center is a mechanical restriction: since the membrane is attached at its borders to the sample support, its free oscillation is smaller near the edges.
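One simple candidate for such a mechanically constrained profile (an assumption for illustration, not a model proposed by the authors) is the textbook deflection of a uniformly loaded circular plate clamped at its rim, which is maximal at the center and vanishes at the edge:

```latex
w(r) = w_{0}\left[1-\left(\frac{r}{R}\right)^{2}\right]^{2}, \qquad 0 \le r \le R
```

Here w_0 is the central deflection and R the effective membrane radius; fitting the measured displacement-versus-position data against such a profile would be one way to test the clamping hypothesis.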
In this application note, we have demonstrated the working principle of piezo-photomotion devices. We have shown how AFM, and in particular the setup and options offered by the Park Systems NX10, is a suitable choice to explore the properties of such devices. KPFM measurements of the light-induced displacement and photovoltage as a function of time and of the position along the device membrane have been discussed. We reported the expected time response of the membrane expansion and voltage change. We showed the presence of a rising time between the switch-off and switch-on states, allowing us to estimate the highest rate at which the membrane can react to external light excitation at full efficiency. Finally, we commented on the dependence of the displacement on the position along the membrane, linking it to the mechanical constraints given by the geometry of the device, and on the homogeneity of the photovoltage response. The ensemble of these measurements enables the investigation and optimization of piezo-photomotion and other types of light-driven devices.
[1] W. M. Luiten, V. M. Van Der Werf, N. Raza, and R. Saive, “Investigation of the dynamic properties of on-chip coupled piezo/photodiodes by time-resolved atomic force and Kelvin probe microscopy,” AIP Adv., vol. 10, no. 10, Oct. 2020, DOI: 10.1063/5.0028481.
[2] Z. Zhan, F. Wei, J. Zheng, W. Yang, J. Luo, and L. Yao, “Recent advances of light-driven micro/nanomotors: Toward powerful thrust and precise control,” Nanotechnol. Rev., vol. 7, no. 6, pp. 555–581, 2018, DOI: 10.1515/ntrev-2018-0106.
[3] A. Axt, I. M. Hermes, V. W. Bergmann, N. Tausendpfund, and S. A. L. Weber, “Know your full potential: Quantitative Kelvin probe force microscopy on nanoscale electrical devices,” Beilstein J. Nanotechnol., vol. 9, pp. 1809–1819, 2018, DOI: 10.3762/bjnano.9.172.
[4] J. Colchero, A. Gil, and A. M. Baro, “Resolution enhancement and improved data interpretation in electrostatic force microscopy,” Phys. Rev. B, vol. 64, 245403, 2001, DOI: 10.1103/PhysRevB.64.245403.
Here's What You Need To Remember: This would not the be last time that Mauna Loa would feel the wrath of American airpower. The U.S. Air Force returned to bomb Mauna Loa in 1975 and 1976, in an experiment using 2,000-pound bombs to divert lava flows.
On December 27, 1935, the U.S. Air Force attacked its most fiery enemy ever: a Hawaiian volcano.
Technically, it wasn't the air force in 1935, but the U.S. Army Air Corps. Nor had Congress declared war on the volcano, or passed an Authorization of Military Force Against Volcanoes. Nevertheless, when Mauna Loa erupted on November 21, 1935, and an army of lava advanced on the city of Hilo at a rate of one mile per day, the military was called in.
The idea was actually the brainchild of Thomas Jaggar, founder of the Hawaiian Volcano Observatory. Jaggar believed that high explosive would collapse the lava tubes and stanch the molten flow. His first plan was to plant tons of TNT carried to the volcano on mules, except there wasn't enough time.
So, ten bombers were dispatched, each carrying two 600-pound bombs, with each bomb containing 300 pounds of explosive. Though enemy defenses could be described as light, accuracy still left something to be desired, with some bombs landing hundreds of feet from the target.
Nonetheless, six days after the bombing, the lava flow stopped. Jaggar proclaimed the mission was a success. But another geologist, Harold Stearns, who had been aboard one of the bombers, questioned whether the bombs had made a difference. “The tube walls look 25 to 50 feet high and deep in the flow so that I think there would be no chance of breaking the walls,” he wrote. “The lava liquid is low. The damming possibility looks effective but the target is too small.”
Nonetheless, Jaggar maintained the bombs had worked. "I have no question that this robbing of the source tunnel slowed down the movement of the front," he replied. "The average actual motion of the extreme front.… for the five days after the bombing was approximately 1,000 feet per day. For the seven days preceding the bombing the rate was one mile per day.”
It wouldn't be the last time that seismic weapons were used. During World War II, Britain found that regular bombs would bounce off the twenty-five-foot-thick concrete roofs of Nazi U-boat pens. So, British inventor Barnes Wallis devised "earthquake bombs." The 22,000-pound Grand Slam, dropped from a Lancaster bomber 18,000 feet high, would slam into the target at supersonic speed and bore deep before exploding. Rather than the explosive force being dissipated through air, it would be conducted by concrete or earth, thus magnifying the damage.
This would not the be last time that Mauna Loa would feel the wrath of American airpower. The U.S. Air Force returned to bomb Mauna Loa in 1975 and 1976, in an experiment using 2,000-pound bombs to divert lava flows. While there are scientists who argue that bombing volcanoes is effective, others say this would only work under the right conditions.
One interesting question would be whether modern bombs would be more effective in plugging volcanoes. In particular, there is the air force's 30,000-pound Massive Ordnance Penetrator (MOP), the giant bomb designed to smash deep into the Earth and destroy buried installations such as North Korean WMDs. However, the MOP has yet to be tested against anything as big as a volcano.
From anxiety and depression to stress and mood disorders, the spectrum of mental health challenges is complex, to say the least. While genetics and life experiences each play their own roles in shaping our mental health, an often underestimated factor that influences our psychological state is our environment. When we use the term environment, we’re referring to a person’s surroundings. This can include their home, neighborhood, friend group, and even their career. In this blog, we explore the relationship between our surroundings and our mental health and explain how various aspects of our environment can either nourish or erode our mental well-being.
The Environment’s Role in Mental Health
Our environment is a summary of everything that surrounds us: our homes, workplaces, communities, and natural surroundings. Our daily experiences are largely shaped by our environment, whether we are acutely aware of it or not. Consequently, our mental health is shaped by it as well. The way our surroundings affect our mental health is a complex topic. It includes things like how exposure to harmful substances like pesticides and heavy metals can lead to mental health problems. It also involves understanding how stress from natural disasters caused by climate change or past unfair treatment in the environment can impact our mental well-being. On the flip side, we’re also learning how being in green spaces and having nice things in our neighborhoods can make us feel better mentally. Until now, most research on the health effects of environmental issues has mainly looked at physical health problems, not mental ones. However, that’s starting to change. Experts from various fields, including environmental science, psychiatry, genetics, and psychology, are now working together to study how our environment can either harm or help our mental health. They’re looking at different pieces of evidence to understand this relationship better.
- Home: Our living space is where we spend a large chunk of our time, making it one of the most powerful aspects that either hurt or help our mental health. A cluttered or chaotic home can quickly lead to stress and anxiety. On the other hand, a clean and organized home allows for a sense of relaxation. Especially for those who do not have much control over their other environments, making sure that the home is in order can be hugely beneficial.
- Natural Spaces: The term “touch grass” has quite a bit of validity to it, it turns out. Access to nature has been repeatedly linked to improved mental health, regardless of the scenario. Studies have shown that spending time in green spaces, like parks or forests, can reduce stress and increase feelings of happiness and calm. Next time you feel like your mental health is on a downward trend, try taking a trip that immerses you in nature.
- Noise Pollution: Excessive noise pollution, whether that be from traffic, construction, or noisy neighbors, is usually linked to sleep disturbances, which can increase stress and anxiety. A quiet environment, on the other hand, is ideal for relaxation and increased concentration. Living in a noisy environment is inevitable for some, so if you find yourself unable to escape this, try investing in a noise machine or earplugs.
- Social Support: Our relationships and social connections are integral to our mental health. A supportive and nurturing social environment can provide a buffer against stress and improve our overall emotional well-being.
- Social Isolation: On the other side of the coin, social isolation and loneliness are known to have detrimental effects on a person’s mental state. Prolonged isolation leads to depression and anxiety for many.
- Community and Neighborhood: The sense of community and belonging in one’s neighborhood can be immensely helpful when it comes to fighting off poor mental health. A strong, interconnected community offers a sense of safety and support, while a disconnected or unsafe neighborhood contributes to higher levels of stress and anxiety.
- Job Satisfaction: Because most of us spend a large chunk of our waking hours at work, our work environment matters. Job satisfaction, workload, and the culture of our workplace can all impact how we feel mentally on any given day.
- Work-Life Balance: A healthy work-life balance is essential, whether you have a spouse and children or are a single professional. An environment that promotes flexibility and allows employees to balance their professional and personal lives will reduce stress and burnout.
Cultural and Societal Factors
- Cultural Norms: Cultural expectations and norms can impact how mental health is perceived and addressed. In some cases, seeking help for mental health issues is stigmatized, making it that much harder to come forward.
- Societal Pressures: Societal pressures related to success, appearance, and achievement can contribute to stress, anxiety, and depression. These pressures are magnified by the media and advertising, especially in the United States.
- Natural Disasters: Experiencing natural disasters like hurricanes or wildfires can have profound effects on mental health. Survivors are known to experience post-traumatic stress disorder or other mental illnesses.
- Pollution: Exposure to environmental pollutants, including air and water pollution, has been linked to cognitive decline and mood disorders. This is an area of research that is continuing to be explored, especially given our current climate.
Our surroundings can either support or undermine our psychological health, and it is up to us to moderate our environment in order to support our mental health. As individuals, we can take steps to improve our mental health by creating a positive and nurturing environment within our four walls. This could be as simple as decluttering our living spaces or seeking out and spending time in natural settings more often. On a larger scale, advocating for environmental policies that reduce pollution and promote access to green spaces can have a positive impact on the world as we know it. In order to properly address mental health, we each need to recognize the impact of environmental factors and work towards creating environments that are better suited for positive mental health.
New Dimensions Can Help!
New Dimensions provides Intensive Outpatient Treatment Programs (IOP) and Partial Hospitalization (PHP) for adolescents and adults who are struggling with mental health or substance abuse issues. If you are faced with environmental challenges, we can help you get back on track. We have both in-person and virtual online treatment options. To learn more, contact us at 800-685-9796 or visit our website at www.nddtreatment.com. You can also visit www.mhthrive.com to learn more about individual, couples, and family therapy treatment options.
The Government has issued advice on how to lower the risk of infections from Tiger mosquitoes, especially as warmer temperatures have arrived.
The Tiger mosquito measures less than a centimetre and is easily recognisable by the black and white stripes on the body and the legs and its totally black wings.
The Tiger mosquito is active during the day and can be the vector of viral diseases such as chikungunya, dengue and zika. However, the insect has been cleared of carrying coronavirus.
The usual measures for reducing mosquito nuisance also apply to the Tiger variety. Pools of standing water should be eliminated, including water left standing in flower pots. Fortunately, it flies slowly and can be easily swatted.
Other repellent measures can be used, such as nets over open windows.
None of the tropical diseases transmitted by the Tiger mosquito is transmissible directly from person to person, but if the mosquito bites an infected person it can pick up the virus and transmit it to a healthy person through a subsequent bite.
Symptoms of infection with chikungunya or dengue fever include a temperature above 38.5°C, headaches, and joint or muscle pains. Zika is suspected in the event of a rash as well as joint and muscle pains. In the event of suspected infection, medical attention should be sought at the Emergency Department of the Princess Grace Hospital.
Telescopes capture black hole destroying star
ANN ARBOR—A black hole tore apart a star that got too close and a trio of orbiting X-ray telescopes captured the action. This closest “tidal disruption” discovered in a decade is giving astronomers new insights into the extreme environment around black holes, and how they swallow stars.
“This time we caught the real heart of the action and saw entirely new things,” said Jon M. Miller, professor of astronomy in the College of Literature, Science, and the Arts at the University of Michigan and lead author of a study describing the event in Nature. “This one is the best chance we have had so far to really understand what happens when a black hole shreds a star.”
Black holes are ultra-dense astronomical objects with such intense gravitational pull that not even light can escape them. Supermassive black holes lurk at the centers of galaxies and smaller ones are the collapsed remains of the most massive stars.
The culprit in this case is a supermassive black hole at the core of the galaxy PGC 043234, about 290 million light years from Earth in the constellation Coma Berenices. The black hole’s mass is estimated to be a few million times that of the Sun.
To put together a picture of what happened at this event, named ASASSN-14li, researchers combined observations from NASA's Chandra X-ray Observatory and Swift Gamma-Ray Burst Explorer, and ESA's XMM-Newton orbiting telescopes.
When a star comes too close to a black hole, its gravity can rip the star apart. Astronomers refer to these events as "tidal disruptions" because gravity from the black hole varies so much between the near and far sides of the star that the star cannot hold itself together. Some of the stellar debris is flung outward at high speeds. The flying debris causes a distinct X-ray flare that can last for a few years, according to NASA and ESA.
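For a rough sense of what "too close" means, the standard tidal-radius estimate (an illustrative back-of-the-envelope calculation, not a figure from the study) for a Sun-like star and a black hole of a few million solar masses gives:

```latex
r_{\mathrm{t}} \simeq R_{*}\left(\frac{M_{\mathrm{BH}}}{M_{*}}\right)^{1/3}
\approx R_{\odot}\,(2\times 10^{6})^{1/3}
\approx 1.3\times 10^{2}\, R_{\odot}
\approx 0.6\ \mathrm{AU}
```

Since the event horizon of such a black hole lies at only a few hundredths of an astronomical unit, the star is shredded well outside it, which is why the debris can shine in X-rays rather than vanishing silently.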
After the star is destroyed, the black hole pulls most of the remains of the star toward it. This infalling debris is heated to millions of degrees and generates a huge amount of X-ray light. Soon after this surge of X-rays, the amount of light decreases as the material falls beyond the event horizon (that is, the point of no return) of the black hole.
Gas often falls towards black holes by spiraling inward in a disk. But how this process starts has remained a mystery. In ASASSN-14li, astronomers were able to witness the formation of such a disk by looking at the X-ray light at different wavelengths and how that changed over time.
The researchers determined that the X-rays being produced come from material that is either very close to or is actually in the smallest possible stable orbit around the black hole.
“The black hole tears the star apart and starts swallowing material really quickly, but that’s not the end of the story,” said co-author Jelle Kaastra of the Institute for Space Research in the Netherlands. “The black hole can’t keep up that pace so it expels some of the material outwards.”
The X-ray data reveal the presence of a wind moving away from the black hole. The wind is not moving fast enough to escape the black hole’s gravitational grasp. The relatively low speed for the wind may be explained by gas from the disrupted star that’s following an elliptical orbit in a newly formed disk around the black hole.
“These results support some of our newest ideas for the structure and evolution of tidal disruption events,” said Cole Miller, a co-author from the University of Maryland. “In the future, tidal disruptions can provide us with laboratories to study the effects of extreme gravity.”
Astronomers are hoping to find more events like ASASSN-14li, which they can use to continue to test theoretical models about how black holes affect their environments and anything that might wander too close.
“We all feel gravity and everyone has done little experiments, even just throwing a ball, to explore how it works. Black holes make us aware that some places in the universe twist the familiar into the extraordinary. I think this is the root of our fascination with black holes,” said lead author Jon Miller.
These results appear in the Oct. 22 issue of the journal Nature. The paper is titled, “Flows of X-ray gas reveal the disruption of a star by a massive black hole.”
The universe may have been born spinning, according to new findings on the symmetry of the cosmos
ANN ARBOR—Physicists and astronomers have long believed that the universe has mirror symmetry, like a basketball. But recent findings from the University of Michigan suggest that the shape of the Big Bang might be more complicated than previously thought, and that the early universe spun on an axis.
To test for the assumed mirror symmetry, physics professor Michael Longo and a team of five undergraduates catalogued the rotation direction of tens of thousands of spiral galaxies photographed in the Sloan Digital Sky Survey.
The mirror image of a counter-clockwise rotating galaxy would have clockwise rotation. More of one type than the other would be evidence for a breakdown of symmetry, or, in physics speak, a parity violation on cosmic scales, Longo said.
The researchers found evidence that galaxies tend to rotate in a preferred direction. They uncovered an excess of left-handed, or counter-clockwise rotating, spirals in the part of the sky toward the north pole of the Milky Way. The effect extended beyond 600 million light years away.
“The excess is small, about 7 percent, but the chance that it could be a cosmic accident is something like one in a million,” Longo said. “These results are extremely important because they appear to contradict the almost universally accepted notion that on sufficiently large scales the universe is isotropic, with no special direction.”
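A quick sanity check of that "one in a million" figure can be done with a binomial test. The numbers below are assumptions chosen for illustration (the effective sample size is tuned so the arithmetic lands near the quoted odds), not data from the paper:

```python
"""How unlikely is a 7% handedness excess under a fair 50/50 null?"""
from scipy.stats import binomtest

N = 5_000        # assumed effective sample size for this illustration
e = 0.07         # reported excess of left-handed spirals

# Counts split as N(1 + e)/2 left-handed versus N(1 - e)/2 right-handed
n_left = round(N * (1 + e) / 2)
result = binomtest(n_left, N, p=0.5, alternative="two-sided")
print(f"left-handed: {n_left}/{N}, two-sided p-value = {result.pvalue:.1e}")
# -> p on the order of 1e-6, i.e. about one chance in a million
```

With a larger classified sample the same excess would be even more significant, which is why the claimed effect hinges less on statistics than on ruling out systematic biases in how rotation directions are catalogued.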
The work provides new insights about the shape of the Big Bang. A symmetric and isotropic universe would have begun with a spherically symmetric explosion shaped like a basketball. If the universe was born rotating, like a spinning basketball, Longo said, it would have a preferred axis, and galaxies would have retained that initial motion.
Is the universe still spinning?
“It could be,” Longo said. “I think this result suggests that it is.”
Because the Sloan telescope is in New Mexico, the data the researchers analyzed for their recent paper came mostly from the northern hemisphere of the sky. An important test of the findings will be to see if there is an excess of right-handed spiral galaxies in the southern hemisphere. This research is currently underway.
A paper on the findings, “Detection of a Dipole in the Handedness of Spiral Galaxies with Redshifts z~0.04,” is published in Physics Letters B.
AIMS Ghana 2013
Using mathematical science to support farmers in Ghana
The Participatory Integrated Climate Services for Agriculture (PICSA) Approach is one of the key activities of the CCAFS-funded Capacitating African Smallholders with Climate Advisories and Insurance Development (CASCAID) project working in West Africa. It aims to create a basis that supports farmers in their planning and decision making. PICSA is currently thriving in the three northern regions of Ghana.
One of PICSA's three key components is providing farmers with climate and weather information to consider, including historical records and forecasts. After graduating from AIMS, Mr Francis Feehi Torgbor, from Ghana, went on to receive his Research Master's from the University of Cape Coast and is currently working as the AIMS project lead partner and a climatic data analyst with CASCAID.
“Before AIMS, it was all about passing examinations. At AIMS Ghana, I became a critical thinker and a problem solver, and I now feel content solving problems in my field rather than passing examinations without making an impact on society”
“I obtained this position through my accumulated experience and expertise gained from my research work at AIMS Ghana and the University of Cape Coast, which involved varied analyses of historical climatic data. This attracted interest from senior CASCAID project leaders who had seen my work and decided to involve me with the project.” The project is a joint initiative between AIMS Ghana, the Ghana Meteorological Agency and the University of Reading.
He is involved in analysing historical climate data from the Ghana Meteorological Agency (GMET). Furthermore, he trains extension staff who in turn work with farmers on the PICSA ideas, and he also serves as a monitoring and evaluation officer to oversee the successes of PICSA in the Ghana region.
By Susan Sprout
Virginia Bluebells, or Virginia Cowslips, are ephemeral – here in the spring and gone during summer. Look for them blooming now with nodding but showy, blue trumpet-shaped flowers. They arrive early, eager for the higher amounts of unhampered sunlight before the trees above them leaf out to block it. As the name suggests, their bright blue flowers hang in loose clusters like bells, their trumpets shaped by the fusing of five petals. The buds, which usually start out pink, bloom blue. They grow quickly to their eight-to-twenty-four-inch height before dying back and reverting to just their underground parts. They are considered dormant because they are not photosynthesizing, but I will bet the woody roots that we do not see are still busy getting nutrients and water during the summer and fall. With all of their stored resources, they are ready to go when it is spring!
Virginia Bluebells have oval leaves ranging in length from two inches at the top of the plant where they almost clasp on to the stem, then downward to the lower parts where they are eight inches long and tapered. Situated alternately on the stems, the leaves do not shade each other out – more sun for all. Another reason they can grow upwards in such a hurry. Seeds develop at the base of the flowers after they are pollinated by bees, especially bumblebees that look for pollen and nectar early in spring. The bumpy, roundish seed pod turns from green to tan to brown as it and the four seeds in each one mature.
These native perennials tend to grow in masses when water is near, in bottom lands and riverwoods, where the soil is rich and the land occasionally gets flooded. Once established, they will bloom year after year. Their seedlings will flower in their second year. They can be found from E. Canada south to North Carolina and west to Arkansas and Minnesota (lots of lakes there for bluebells to grow near). The native people in those areas used the plant as a treatment for tuberculosis and whooping cough. And, guess what? Deer do not like to eat it!
Alcohol abuse may increase the risk of heart attacks and other cardiac problems even in people who don’t have a family history of heart disease or other known risk factors, a study suggests. After accounting for established risk factors for heart disease, such as smoking, obesity, and diabetes, alcohol abuse was associated with a 40-percent higher risk of heart attack, the study found. Excessive drinking (consistent, long-term, heavy intake), defined as more than 14 units of alcohol each week, was also tied to a two-fold greater risk of atrial fibrillation, or an irregular rapid heartbeat, and a 2.3-fold increased risk of congestive heart failure, a chronic pumping disorder. Even though previous research has linked an occasional or even daily drink to better heart health, these current findings should put to rest any notion that drinking more is better for our health, said senior study author Dr. Gregory Marcus of the University of California, San Francisco.
Source: Whitman IR, et al. Alcohol abuse and cardiac disease. J Am Coll Cardiol 2017;69(1). DOI: 10.1016/j.jacc.2016.10.048.
Moderate Drinking Can Lower Risk of Heart Attack
Moderate drinking can lower the risk of several heart conditions, according to a study that will further fuel the debate about the health implications of alcohol consumption. The study of 1.93 million people in the UK aged over 30 found that drinking in moderation — defined as consuming no more than 14 units of alcohol a week for women and 21 units for men (though other sources say unit guidelines are now the same for both men and women, at 14 units per week) — had a protective effect on the heart compared with not drinking. Previous studies have suggested that alcohol has a positive effect on the levels of good cholesterol in the blood and proteins associated with blood clotting. The research, published in the British Medical Journal, found that moderate drinkers were less likely than non-drinkers to turn up at their doctor with angina, heart attack, heart failure, ischemic stroke, circulation problems caused by a build-up of fat in the arteries, or aortic aneurysm. But the research found that heavy drinking — more than 14 units for women and 21 units for men — increased the risk of heart failure, cardiac arrest, ischemic stroke and circulation problems caused by fatty arteries. The authors of the study, from the University of Cambridge and University College London, welcomed the findings but cautioned: “While we found that moderate drinkers were less likely to initially present with several cardiovascular diseases than non-drinkers, it could be argued that it would be unwise to encourage individuals to take up drinking as a means of lowering their risk.”
Source: Bell S, et al.Association between clinically recorded alcohol consumption and initial presentation of 12 cardiovascular diseases: population based cohort study using linked health records. Br Med J. 2017; doi:10.1136/bmj.j909
What is a Unit of Alcohol?
One unit of alcohol is 10 milliliters (or about 1/3 of an ounce) of pure alcohol. Because alcoholic drinks come in different strengths and sizes, units are a way to tell how strong your drink is. The alcoholic content in similar types of drinks varies a lot. Some ales are 3.5% alcohol, but stronger continental lagers can be 5% or even 6% alcohol by volume (ABV). The same goes for wine, where the ABV of stronger “new world” wines from South America, South Africa, and Australia can exceed 14%, compared to the 13% ABV average of European wines. This means that just one pint of strong lager or a large glass of wine can contain more than three units of alcohol. Here are a few examples of what 14 units of alcohol would look like:
- 6 pints of 4% ABV beer
- 6 glasses (6oz per glass) of 13% ABV wine
- 14 shots (0.8 oz per shot) of 40% ABV vodka
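The arithmetic behind these examples is simple enough to script. Below is a minimal sketch of the standard unit formula — units = volume (ml) × ABV ÷ 1,000 — with the drink sizes above converted to millilitres (the rounded conversions are my own):

```python
def units(volume_ml, abv_percent):
    """UK alcohol units: one unit is 10 ml of pure alcohol."""
    return volume_ml * abv_percent / 1000

# The examples above, using rough metric conversions:
# 1 pint ~ 568 ml, 6 oz ~ 177 ml, 0.8 oz ~ 24 ml.
print(6 * units(568, 4))    # 6 pints of 4% beer    -> ~13.6 units
print(6 * units(177, 13))   # 6 glasses of 13% wine -> ~13.8 units
print(14 * units(24, 40))   # 14 shots of 40% vodka -> ~13.4 units
```

Each works out to roughly 14 units, matching the guideline.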
If you regularly drink as much as 14 units per week, it’s best to spread your drinking evenly over three or more days.
Source: Drinkaware.co.uk site. Alcohol unit guidelines. https://www.drinkaware.co.uk/alcohol-facts/alcoholic- drinks-units/latest-uk-alcohol-unit-guidance/. Accessed 1 May 2017.
Analysis of Variance (ANOVA) is used to analyze the differences between two or more groups in a dataset. It allows us to determine if there are significant variations among the means of different groups, beyond what can be attributed to random chance.
ANOVA helps researchers understand the effect of categorical independent variables on a continuous dependent variable.
In ANOVA, we compare the variation between groups (often referred to as “group variance” or “treatment variance”) with the variation within groups (referred to as “error variance” or “residual variance”).
If the variation between groups is significantly larger than the variation within groups, it suggests that the means of the groups are different.
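To make that comparison concrete, here is a minimal sketch of the standard one-way ANOVA decomposition — the between-group and within-group mean squares, and the F statistic that is their ratio. The three samples are made up for illustration:

```python
import numpy as np

# Three independent samples (made-up numbers).
groups = [
    np.array([78, 85, 92, 88, 81]),
    np.array([72, 69, 75, 71, 74]),
    np.array([70, 73, 68, 75, 72]),
]

k = len(groups)                      # number of groups
n = sum(len(g) for g in groups)      # total observations
grand_mean = np.concatenate(groups).mean()

# Between-group ("treatment") sum of squares and mean square
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-group ("error"/"residual") sum of squares and mean square
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n - k)

f_stat = ms_between / ms_within
print(f"F = {f_stat:.2f}")  # ~19 here, far above the 5% critical value (~3.89 for 2 and 12 df)
```

A large F means the group means spread out more than the noise within groups can explain.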
To illustrate this principle, let’s consider an analogy using a classroom scenario. Imagine you are a teacher evaluating the performance of three different study groups (Group A, Group B, and Group C) in a math test. You want to determine if there are significant differences in the mean scores of the groups.
In this analogy:
- The variation between groups represents the differences in average performance across the study groups.
- The variation within groups represents the differences in individual scores within each study group.
If the variation between groups is large compared to the variation within groups, it suggests that the average performance of the groups is different. This can be visualized as the groups having distinct peaks on a histogram of the test scores. For example, Group A may have higher scores on average, while Group B and Group C may have lower scores on average.
On the other hand, if the variation between groups is similar to or smaller than the variation within groups, it suggests that the average performance of the groups is similar. This can be visualized as the groups having overlapping or similar peaks on the histogram. In this case, it would be difficult to conclude that the group means truly differ.
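In practice you rarely compute the decomposition by hand. A library call runs the same test and also returns a p-value — this sketch uses SciPy with the same made-up scores for the three study groups:

```python
from scipy import stats

group_a = [78, 85, 92, 88, 81]  # hypothetical math-test scores
group_b = [72, 69, 75, 71, 74]
group_c = [70, 73, 68, 75, 72]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (conventionally < 0.05) suggests at least one group mean differs.
```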
Thesis - Open Access
Master of Science (MS)
Robert J. Baer
Large quantities of whey are produced as a by-product from the manufacture of cheese or casein. The United States has an annual cheese production of about 2.2 billion kg. The cheese industry has also experienced a steady increase in cheese production of about 6% a year (44). In the process, nearly 18.1 billion kg of whey are produced. Whey was once a discarded product of little value to the cheese producers. With the advent of laws and regulations governing the disposal of whey, whey became a problem that had few solutions. Even today, with the high cost of disposal and the need to reduce environmental pollution, only about 60% of the whey produced in the United States is processed (72). Concentrating and drying whey eliminates water for easier handling of the product and increases keeping quality. By far, the single largest use of whey solids is in the form of dry whey. Dry whey powders are used as commodity ingredients, mostly in human food applications (65). Use of whey proteins has been limited because of the poor physical and functional properties of the commercial products. Within the last 10 years, the efficient and economical removal of water from whey by membrane filtration has become accepted in the dairy industry. Research results have indicated some advantages for the use of ultrafiltration (UF) to remove some of the milk serum before cheese manufacture. These include increased productivity and improved cheese yields. The largest use of membrane techniques in the dairy industry is to fractionate whey. Most research indicates that whey protein concentrates (WPC) produced by UF have functional properties superior to those of conventional heat-coagulated wheys. The present commercial market for WPC is small, but considerable evidence indicates that more product formulation work is needed to move WPC into the general marketplace. The purpose of this research was to determine if reconstituted WPC could be used as an additive to milk for cheese making to increase yields.
Includes bibliographical references (pages 19-24)
South Dakota State University
In Copyright - Non-Commercial Use Permitted
Baldwin, Kirk Alan, "Evaluation of Yield and Quality of Cheddar Cheese Manufactured from Milk with Added Whey Protein Concentrate" (1985). Electronic Theses and Dissertations. 1319.
By now you probably have a good understanding of what "open source" means. (If you don't, you should read What is open source? before diving into this article.) So what do we mean when we talk about open standards?
In open source software development, open standards act as guidelines to keep technologies "open," especially for open source developers. Sounds simple enough, right? Unfortunately, debate about what qualifies as open and who gets to pick what becomes a standard makes defining what open standards are a little more complicated. Before diving into what open standards are, let's take a closer look at standards.
What are 'standards'?
ISO, the International Organization for Standardization, defines a standard as "a document that provides requirements, specifications, guidelines or characteristics that can be used consistently to ensure that materials, products, processes and services are fit for their purpose."
ISO is an independent, non-governmental international organization that develops international standards. The ISO site explains that international standards give specifications for products, services, and systems, to ensure quality, safety, and efficiency, which is instrumental in facilitating international trade.
Standards are why we are able to use a debit card from a bank in Canada to withdraw cash from a machine in South Africa, share a photo from a Samsung phone to an Apple laptop, buy light bulbs that fit reading lamps and ceiling fans, and access the Internet.
In fact, standards are such a big deal that they even get their own annual holiday on October 14, World Standards Day, an initiative of the World Standards Cooperation (WSC).
Developing standards, on the other hand, is so complicated that it has its own xkcd How Standards Proliferate comic.
How are standards developed?
Well, it's complicated. There are hundreds—perhaps thousands—of standards organizations around the world, and many of them are part of multiple larger standards organizations. And there's no one guidebook to rule them all, which is why we end up with a spectrum of standards, such as open or closed, industry or vendor standards, and so on.
Let's look at ISO, for example. ISO is a small acronym for a much bigger organization, with a membership that includes 163 national standards bodies. Because standards affect everyone, everyday, there are lots of standards developing organizations (SDOs), and not all of them are members of ISO. The SDOs specialize in particular industries or technologies (for example, the Internet), but many SDOs work together to develop specifications that cross areas of expertise and industries.
Imagine the amount of cooperation and collaboration required to develop voluntary, consensus-based international standards. For example, the WSC site explains, the World Standards Cooperation is a high-level collaboration between the ISO, the IEC (International Electrotechnical Commission), and the ITU (International Telecommunication Union). "Under this banner, the three organizations preserve their common interests in strengthening and advancing the voluntary consensus-based International Standards system," the site says. By working together to develop international standards, organizations from different industries are able to implement standards that benefit organizations across industries.
The WSC site also notes that international standards are an important instrument for global trade and economic development. "They provide a harmonized, stable and globally recognized framework for the dissemination and use of technologies. They encompass best practices and agreements that encourage more equitable development and promote the overall growth of the Information Society." And the same holds true when we're talking about open source code and open standards.
Technology interoperability standards are specifications that define the boundaries between two objects that have been put through a recognized consensus process. The consensus process may be a formal de jure process supported by national standards organizations (e.g. ISO, BSI), an industry or trade organization with broad interest (e.g. IEEE, ECMA), or a consortia with a narrower focus (e.g. W3C, OASIS). The standards process is not about finding the best technical solution, and codifying it, but rather to find the best consensus driven solution with which all the participants can live.
The best interoperability standards enable multiple implementations to be delivered into the market. They benefit customers by enabling choice in a marketplace. A successful standard usually has many implementations, and the standard with the most implementations could be considered to "win" from a standards development producer's perspective.
Open standards example: The Internet
In its explanation of open Internet standards, The Internet Society (a global organization that helps drive Internet policy and technology standards) says, "The Internet is fundamentally based on the existence of open, non-proprietary standards. They are key to allowing devices, services, and applications to work together across a wide and dispersed network of networks." The page lists The Internet Engineering Task Force (IETF), The Internet Research Task Force (IRTF), and The Internet Architecture Board (IAB) as the core groups behind the development of the open Internet standards. "These organizations are all open, transparent, and rely on a bottom-up consensus-building process to develop standards. They help make sure open standards have freely accessible specifications, are unencumbered, have open development and are continuously evolving," the page explains.
To get an idea of the huge quantity of standards for the Internet, see the Internet Standard page in the RFC series that is hosted on the RFC Editor site, which has been funded by a contract with the Internet Society since 1998.
Example official Internet standards
Note that not all RFCs (Request for Comments) are standards. As the site explains, "The RFC series contains technical and organizational documents about the Internet, including the specifications and policy documents produced by four streams: the Internet Engineering Task Force (IETF), the Internet Research Task Force (IRTF), the Internet Architecture Board (IAB), and Independent Submissions."
Let's look more closely at STD 3, for example, Requirements for Internet Hosts -- Communication Layers. This early Internet standard dates back to 1989.
The table of contents alone lists a dozen pages of requirements; STD 3 in full is more than 100 pages long.
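One nice property of the RFC series is that every document is published as plain text at a stable URL. STD 3's communication-layers half is RFC 1122, so you can pull it straight from the RFC Editor site — a minimal sketch:

```python
import urllib.request

# STD 3 comprises RFC 1122 (communication layers) and RFC 1123 (application layers).
url = "https://www.rfc-editor.org/rfc/rfc1122.txt"
with urllib.request.urlopen(url) as resp:
    text = resp.read().decode("ascii", errors="replace")

print(text[:400])  # title page: "Requirements for Internet Hosts -- Communication Layers"
```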
Page 4 in the STD 3 introduction says that the standards documents—keep in mind that STD 3 is only one of many Internet standards outlined in these documents—are intended to provide guidance for vendors, implementors, and users of Internet communication software. "They represent the consensus of a large body of technical experience and wisdom, contributed by the members of the Internet research and vendor communities," it explains.
Page 5 of the STD 3 introduction nicely illustrates a few key points about standards:
- they are voluntary,
- when they are developed for the marketplace matters,
- and they evolve over time.
Standards are voluntary
"There may be valid reasons why particular vendor products that are designed for restricted contexts might choose to use different specifications," STD 3 explains. For example, in September 2016, Apple announced that its iPhone 7 and 7 Plus would ship without a 3.5mm headphone port, which has been the standard for most mobile devices, including mobile phones, music players, and laptops.
In his opinion piece on the move for Mashable, journalist Chris Taylor responded, "It has eradicated the most successful, most widespread and best-sounding audio standard in the world in favor of its own proprietary [Lightning] system."
Apple marketing chief Phil Schiller says the move was driven by "courage," but Taylor and many other journalists (and consumers) think the move has more to do with Apple's plan to profit from sales of aux-to-Lightning cable dongles and wireless headphones. Many people like the convenience of wireless headphones, but they aren't considered to be superior when it comes to sound quality. Although Apple positions the move as having a valid reason, not everyone else agrees.
Walli's primer addresses the problem with vendor specifications — like the new Lightning audio connector for the iPhone — as opposed to industry standards. "Vendor specifications enable the vendor's business by encouraging complements around the vendor's technology base. They benefit the vendor over the customer."
Standardization timing matters
The STD 3 introduction illustrates that the Internet was still in its infancy when the standard was developed. "Although most current implementations fail to meet these requirements in various ways, some minor and some major, this specification is the ideal towards which we need to move," it explains.
From a technology-maturity perspective, timing matters. "Many fear that standardizing too early hobbles innovation," Walli says. "Once the point of standardization happens in the market, it is clearly time to codify an aspect of the market, allowing new innovation to build around the stable ‘standardized' base," he explains. Walli points out that standardizing too early or ahead of the market is difficult, saying, "Good standards codify proved ways of accomplishing things. They are based on existing practice and experience."
The STD 3 introduction acknowledges that, as the Internet matured, the standards would evolve, and says, "These requirements are based on the current level of Internet architecture. This document will be updated as required to provide additional clarifications or to include additional information in those areas in which specifications are still evolving."
What makes a standard ‘open'?
Well, that's also complicated because different standards organizations and advocates offer different guidelines. Let's look at one open standards organization, OASIS, for an example.
Whereas ISO is an organization composed of many national standards bodies, OASIS is a nonprofit consortium that drives the development, convergence, and adoption of open standards for the global information society. Let's take a high-level look at the requirements OASIS has for developing open standards.
The OASIS site explains, "OASIS members broadly represent the marketplace of public and private sector technology leaders, users, and influencers. The consortium has more than 5,000 participants representing over 600 organizations and individual members in more than 65 countries."
Under OASIS, technical committees (TCs) develop the standards; for a standard to be adopted by the consortium as an open standard, it must:
- be created by domain experts (not SDO staff);
- be developed under an internationally respected, open process (i.e., be open for public review and debate);
- be easy to access and adopt;
- have allowed anyone affected by the standard to contribute to the development of it;
- not have hidden patents to scare implementers;
- have the ability to implement the standard baked in (i.e., OASIS standards must be verified by multiple Statements of Use);
- and be safe for governments to endorse.
(From the Starting a TC (PDF).)
OASIS is the group behind the development of the OpenDocument OASIS Standard (for example, .odt files), which was approved by ISO and IEC in 2006 (ISO/IEC 26300). As that announcement explains, "OpenDocument defines a genuinely open XML file format for office applications. Suitable for text, spreadsheets, charts, graphs, presentations, and databases, the standard frees documents from their applications-of-origin, enabling them to be exchanged, retrieved, and edited with any OpenDocument-compliant software or tool. The standard will facilitate access, search, use, integration, and development of document content in new and innovative ways."
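Because OpenDocument is an open, documented format, you can inspect a file with nothing but a standard library. An .odt file is simply a ZIP archive of XML files; the sketch below (the filename is a placeholder) lists the archive and reads the document body:

```python
import zipfile

# Any file saved in OpenDocument format will do; "example.odt" is hypothetical.
with zipfile.ZipFile("example.odt") as odt:
    print(odt.namelist())           # typically: mimetype, content.xml, styles.xml, meta.xml, ...
    body = odt.read("content.xml")  # the document body, as ordinary XML
    print(body[:200])
```

No vendor SDK required — that is the practical payoff of an open format.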
OASIS has detailed policies and guidelines to help develop open standards, so browse their site to get a feel for the behind-the-scenes work involved with developing standards, even in "the open." For example, the site includes Antitrust Guidelines and an Intellectual Property Rights Policy, as well as policies regarding conflicts of interest and interoperability demonstrations.
But the high-level overview shows how open standards must be openly created and easy to adopt without restrictions or for use or royalties expectations.
How do other open standards advocates define 'open standards'?
Open source advocate Bruce Perens, for example, lists six criteria:

- Availability: Open standards are available for all to read and implement.
- Maximize End-User Choice: Open Standards create a fair, competitive market for implementations of the standard. They do not lock the customer into a particular vendor or group.
- No Royalty: Open standards are free for all to implement, with no royalty or fee. Certification of compliance by the standards organization may involve a fee.
- No Discrimination: Open standards and the organizations that administer them do not favor one implementor over another for any reason other than the technical standards compliance of a vendor's implementation. Certification organizations must provide a path for low and zero-cost implementations to be validated, but may also provide enhanced certification services.
- Extension or Subset: Implementations of open standards may be extended, or offered in subset form. However, certification organizations may decline to certify subset implementations, and may place requirements upon extensions (see Predatory Practices).
- Predatory Practices: Open standards may employ license terms that protect against subversion of the standard by embrace-and-extend tactics. The licenses attached to the standard may require the publication of reference information for extensions, and a license for all others to create, distribute, and sell software that is compatible with the extensions. An Open standard may not otherwise prohibit extensions.
The Free Software Foundation Europe (FSFE) collaborated with other individuals and organizations in the tech industry, politics, and community to outline a different five-point definition. According to the FSFE, an open standard refers to a format or protocol that is:
- subject to full public assessment and use without constraints in a manner equally available to all parties;
- without any components or extensions that have dependencies on formats or protocols that do not meet the definition of an Open Standard themselves;
- free from legal or technical clauses that limit its utilisation by any party or in any business model;
- managed and further developed independently of any single vendor in a process open to the equal participation of competitors and third parties;
- available in multiple complete implementations by competing vendors, or as a complete implementation equally available to all parties.
The Open Source Initiative (OSI), the organization responsible for reviewing and approving licenses as Open Source Definition (OSD) conformant, says "an ‘open standard' must not prohibit conforming implementations in open source software." OSI provides a list of five criteria an open standard must satisfy. "If an ‘open standard' does not meet these criteria, it will be discriminating against open source developers," the site says:
- No Intentional Secrets: The standard must not withhold any detail necessary for interoperable implementation. As flaws are inevitable, the standard must define a process for fixing flaws identified during implementation and interoperability testing and to incorporate said changes into a revised version or superseding version of the standard to be released under terms that do not violate the OSR.
- Availability: The standard must be freely and publicly available (e.g., from a stable web site) under royalty-free terms at reasonable and non-discriminatory cost.
- Patents: All patents essential to implementation of the standard must:
- be licensed under royalty-free terms for unrestricted use, or
- be covered by a promise of non-assertion when practiced by open source software
- No Agreements: There must not be any requirement for execution of a license agreement, NDA, grant, click-through, or any other form of paperwork to deploy conforming implementations of the standard.
- No OSR-Incompatible Dependencies: Implementation of the standard must not require any other technology that fails to meet the criteria of this Requirement.
How do ‘standards collaborations' differ from ‘open source collaborations'?
Standards and open source projects are different collaborations. They're different economic tools in a marketplace with different goals, outcomes, and processes. As Stephen Walli explains:
1. Standards take longer to develop and change. Whereas open source projects can develop quickly, standards encourage multiple implementations and tend to enter a market with some maturity and competition. Standards and specifications don't change quickly, so they are developed with the expectation that they'll need to last for longer periods of time. For example, moving from the HTML 1.0 standard to HTML5 took about 18 years, and we've had TCP since 1981 with few changes.
2. Standards are consensus-based compromises. Open source projects are driven by contribution and meritocracy.
3. Standards define useful predictable boundaries. Well-run open source projects are the building blocks of rich, varied ecosystems.
Example open standards organizations
- DMTF (Distributed Management Task Force, Inc.)
- IETF (Internet Engineering Task Force)
- NIST (National Institute of Standards and Technology)
- OASIS (Organization for the Advancement of Structured Information Standards)
- The Open Group
- C++ Standards Committee
- DWARF Debugging Standard
- Java Community Process
- OSGi Alliance
- UEFI (Unified Extensible Firmware Interface Forum)
- SPEC (Standard Performance Evaluation Corporation)
- STAC (Securities Technology Analysis Center)
- TPC (Transaction Processing Performance Council)
Thanks to Stephen Walli, Rikki Endsley, Deb Bryant, and Nithya Ruff for contributing to this resource.
The 1930’s were, of course, dominated by the Great Depression. The statistics for that period are grim (unemployment hit as high as 23% in the U.S. and as high as 33% in other countries), but they probably still fall short of relating just how awful things were for those who lived through it. Both of my parents were youngsters during those years and they have indelible memories of the hardships that had to be faced day to day.
But when times are toughest, forms of entertainment that allow the populace to escape their travails for a brief period are often at their best. In the movies, the 30’s gave us Shirley Temple, the Astaire-Rogers musicals, the Marx Brothers, the Busby Berkeley extravaganzas, the great films of Frank Capra, and so much more. Radio reached its peak during those years. And boardgames were there as well to help a battered society pass the hours, including the most popular boardgame of all time, where a fortune in real estate could be won, if only the dice would cooperate…
Kids (and adults) have been playing the pencil and paper game of Battleship for a long time. The origin of this game isn’t clear. The Geek attributes it to a gentleman named Clifford Von Wickler, saying that he created the game in the early 1900’s, but never patented it. This story is disputed by some. Other people suggest it may have been invented by Russian soldiers during the latter years of World War I. Regardless of its source, we do know that the first commercial version of the game was by a U.S. company called Starex, who released it as Salvo, a predrawn pad of pages that the game could be played on, in 1931. The first boardgame version was by Milton Bradley in 1967 and those of us of a certain age well remember the game’s TV commercial, featuring the mournful cry of one of the participants, “You sunk my battleship!”.
I have to wonder: do kids today still play pencil and paper versions of Battleship during study hall breaks at school or is this time spent staring at their phones? A part of me hopes it’s still played this way, but given how mostly mindless the game is, I guess it wouldn’t break my heart if Battleship is one of the victims of our electronic age.
In an earlier article, we mentioned that Monopoly was derived from The Landlord’s Game, a politically motivated board game designed by a woman named Lizzie Magie in the early 1900’s. Here’s a brief history of how Magie’s design became the most popular board game of all time.
Magie had only limited success in selling her game, but it lived on through handmade folk versions and morphed throughout the years, with its greatest popularity being on the East Coast of the U.S. One day in 1932, an unemployed repairman named Charles Darrow played one of these games at a friend’s house in Philadelphia. This version was created by a group of Quakers living in Atlantic City, NJ and the properties were all named after Atlantic City streets. Darrow enjoyed the game and saw it as a way of possibly making some money. He had a cartoonist friend of his create some artwork for the game and began making copies of the game to sell. The updated look of the game represented the only contribution Darrow made to Monopoly. He changed none of the rules and used all of the old property names from the version he had played, including keeping the misspelled Marvin Gardens (Marven Gardens is a neighborhood just south of Atlantic City).
Darrow had success selling his homemade version of the game and used the profits to have some professionally manufactured copies made. He placed these in local department stores, where they continued to sell well. Darrow also sent copies to Parker Brothers and Milton Bradley, but both declined to produce the game, feeling that it was too complicated and too long. However, Monopoly was so popular in the local stores that Parker Brothers decided to buy the game from Darrow in 1935. It sold so well that Parker decided to patent it and only then discovered its relationship to The Landlord’s Game, as well as some spinoff designs. To protect their investment, Parker bought the rights to the spinoff games and George Parker himself, the company’s founder, met with Magie. She was paid $500 for the rights to Monopoly, along with a promise to publish The Landlord’s Game.
Magie soon realized she had been swindled, as Monopoly’s sales soared and Parker Brothers presented Darrow as the game’s sole inventor. She gave two scathing interviews to newspapers about Monopoly’s true origins, but it did nothing to stop the game’s incredible momentum. In 1936 alone, Parker sold over 1.7 million copies of the game. At the height of the Depression! It has been the world’s best-selling game ever since.
Parker Brothers continued to publicize Darrow as the creator of Monopoly for over 50 years. The rags-to-riches story made for great press and Darrow became a millionaire from his royalties. To add insult to injury, while Parker did eventually did publish The Landlord’s Game, they did little to promote it and only a tiny number of copies were ever sold. The true story behind the origins of Monopoly wouldn’t come to light for 40 years. But that’s a tale for a future article.
Like most children of the 60’s, I played Monopoly with my family growing up. It was actually kind of a rite of passage for the kids of the day; it was a significant event when you stopped playing Candyland and other children’s games and got to play an adult game with your parents. Naturally, we never played with the proper rules. We had the pot of money on Free Parking, a $500 fund buttressed by the players’ tax payments and such. I also remember reading the rules as a pre-teen (yeah, I was a Geek even then) and being shocked to find that unpurchased properties were supposed to be auctioned off! Who knew?
But even as a youngster, I belonged to the Cult of the New. Monopoly was soon replaced by other games, which were more exciting and a bit more sophisticated, as I scoured the game shelves for new things to sample. All of these other titles were property games, however, and similar in structure to Monopoly–that was the state of game design in the U.S. 50 years ago. As a result, though, I’ve only played Monopoly once or twice since 1970. So I have no tales of family meltdowns and catastrophic games played over the holidays with crying kids and ill-tempered adults; just another title that was part of my growth as a gamer, but one which was soon abandoned in favor of superior fare.
Regardless of whether Monopoly has helped or hurt our beloved boardgaming community, I have three fond memories of Monopoly I would like to share. First, that of playing a two player game with my brother around elementary school. Even then I was passionate about boardgames and I seem to recall being willing to be massively in debt to my brother but still wanting to keep playing. The second would be a fateful Thanksgiving weekend in college. Not heading home, I brought a friend along to an extended relative’s house, and somehow a game of Monopoly developed. It soon ran far off the rails. I assume I have a strong share in the responsibility, but deals were detailed and wide ranging. I recall my friend subsidizing the construction of my houses and hotels for part of the board with the stipulation that he would always get to stay there for free. The four player game ended up a two-entity competition between some sort of limited-partnership companies. I like to think that our “team” won. Finally, about a decade ago, I put together a custom, family-centered game of Monopoly that I made for my parents. Every color on the board had a thematic connection with our family and within the color. Examples: The Greens were all related to our 4-H activities, the utilities were the two family camps we liked to attend, Jail was “Gone Fishing”, the Railroads were the four states in which my siblings and I live, Boardwalk and Park Place were my father’s and mother’s homesteads. While Boardwalk was my father’s home, the face on the $500 bill was clearly going to be my mom’s. (The poor dog got put on the $1.) The final touch was 3D printing houses in the shape of my parents’ home as well as a custom set of 3D player pawns reflective of our family (a piano, a canoe, etc…) The gift went over very well and my parents pull it out to show off all the time. It’s always hard to find a good gift for one’s parents but it is nice to nail it from time to time.
As for Monopoly, I still have a copy or two around, since it’s a game where you are gifted various themed sets. I also have an old set because those wooden pawns are cool.
Go to the Head of the Class (1936)
This was a simple quiz game for families. It was first released by Milton Bradley in 1936 and stayed in their catalog for over 50 years. It was kind of like an early Trivial Pursuit, except the questions were more scholastic in nature and not really about trivia. The players’ progress was shown by numbered school desks on the board and the object was to reach desk 100. As I recall, there were different difficulties of questions, so, in theory at least, children of different ages, as well as adults, could play together. It was a regular staple during my formative years; my family had a copy, as did most families I knew.
Totopoly (1938)

This is a horse racing game from Waddington’s, the pre-eminent British game publisher of the day. Even though it’s primarily roll and move, there are enough refined mechanics that it resembles, at least in a surface manner, some of the more sophisticated racing games from the 80’s and 90’s. Players are randomly dealt horse and business cards. They can bid for additional cards in an auction. One side of the board is used to “train” their horses (this is done by moving their horses around a track via dice rolls to acquire Advantage or Disadvantage cards–these can be used in the subsequent race). Bids can be placed on which horse will win (possibly including opponents’ horses). The race is then run on the other side of the board. Most money wins. The training portion is mostly luck, but there are some choices that can be made during the race. This was a popular family game and was part of Waddington’s catalog for 45 years.
I actually played this a couple of times when I was growing up. I don’t remember much about the actual plays, but I do recall being fascinated by the concept of the game. A racing game where you actually got to train your horse prior to the race seemed like such a terrific idea. It may not have quite lived up to its promise from my point of view, but by all accounts, it worked very well as a sophisticated family game for several generations.
Bridge wasn’t the only widely played card game during the mid-twentieth century. For a while, we were all quite mad about Canasta.
Canasta is a game from the Rummy family. It uses two decks of ordinary playing cards, includes lots of wild cards and wild scoring, and is fairly rules-heavy for a Rummy game. The main objective is to make melds consisting of seven cards of the same rank; these are called Canastas and are worth a lot of points. The base game doesn’t allow you to meld sequences.
The game was invented in 1939 in Uruguay by an attorney named Segundo Santos and an architect named Alberto Serrato. Their goal was to design a partnership game that was less intense than Bridge. Their creation was an immediate hit and it was soon widely played throughout much of South America. Because of the travel restrictions during World War II, the rest of the world knew little about it at that time. But it emigrated to the U.S. in the late forties and within a few years, had become a major craze. There was a period of several years when Canasta was probably the most popular card game in the world. After about ten years, interest in the game began to wane, but it continues to be reasonably popular to this day.
I played a great deal of Canasta during my youth. We never played with partners, just every player for themselves. It was a fun game to play with my mom and my brother, with a reasonable number of decisions to be made, but no real brain-burning analysis required. We even experimented with some of the crazier variants, including Samba (3 decks, and you could meld sequences) and Bolivia (4 decks, and in addition to sequences, you could also make melds of just wild cards!). It’s probably been a good 30 years since I last played, but I still have some very fond memories of Canasta.
Assessments are assessments. Some are more robust. Some are more formal. But at the end of the day they are meant to help us get a handle on what someone knows and can do in a particular domain. If the primary purpose of an assessment is to diagnose and provide intervention, then it is formative in nature and should not be included in making a summative judgement about the student's level of achievement (especially if there is another opportunity to demonstrate achievement). There are usually some differences in the types of assessments we design for formative and summative purposes, but they could be identical. In the end it's what you do with them that defines them as formative or summative.
Formative assessment (including what we typically refer to as homework) is really about practice. As important as practice is, it is not the same thing as the "game". Is it connected? Sure. But, it's not the same. Allen Iverson had a 29 point per game scoring average in the post season, but one issue that was raised from time to time was his engagement during practice. While one could argue that he might have been even better if he practiced harder, his game (a.k.a. summative performances) put him in the Hall of Fame.
In the second video, a skateboarder has 18 attempts to land a trick. How do you represent that in a grade book? Do you average all the attempts together?
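A toy calculation makes the point. With made-up numbers — seventeen misses and one landed trick — averaging the attempts tells a very different story than the best (summative) performance:

```python
# 18 attempts at a trick: which number belongs in the grade book?
attempts = [0] * 17 + [1]  # 17 misses, then one clean landing (hypothetical)

average = sum(attempts) / len(attempts)
best = max(attempts)

print(f"average of all attempts: {average:.2f}")  # 0.06 -- "failing"
print(f"best attempt:            {best:.2f}")     # 1.00 -- the trick was landed
```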
If one reads The Handbook on Formative and Summative Evaluation of Student Learning (Bloom, Hastings and Madaus, 1971), the biggest distinction (besides purpose) between formative and summative is the level of generalization. Formatives tend to be much smaller and more focused, while summatives tend to be more comprehensive. A problem with trying to combine these types of evidence is that often what separates novices from experts is the ability to work at more comprehensive levels.
For example, when studying the performance of avionics technicians, Lajoie (1993) found that differences between novices and experts occurred in the overall problem solving process when cognitive load was high, such as when troubleshooting test equipment. Both groups performed well on discrete elements of the job; the trouble occurred when they had to juggle multiple elements in the process of achieving a larger goal (p. 264).
The two graphs below are from an honors geometry teacher's grade book. The first one shows the scores for the formative-type assessments (in green) and the larger, more comprehensive assessments (in red). The second image shows the course grade calculated using all the grades (on the teacher's point scale) vs. just the summative assessments. The inclusion of the formatives boosts some students' grades by two letter grades!
Imagine if this were a group of students who haven't had a particularly good experience in math. What if they routinely missed several assignments or just performed poorly on them because they still needed more practice? In those cases you could see the opposite effect, where the formative grades actually suppressed the overall grade.
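The same effect is easy to reproduce with a made-up grade book (the point values and scores below are invented; the size of the swing depends entirely on how much weight the practice work carries):

```python
# Six practice (formative) assignments worth 20 points each,
# three comprehensive (summative) exams worth 100 points each.
formative = [20, 19, 20, 18, 20, 19]  # diligent practice, near-full credit
summative = [62, 58, 65]              # weak comprehensive performance

with_formatives = 100 * (sum(formative) + sum(summative)) / (6 * 20 + 3 * 100)
summative_only = 100 * sum(summative) / (3 * 100)

print(f"all grades counted: {with_formatives:.1f}%")  # ~71.7% -- a C
print(f"summative only:     {summative_only:.1f}%")   # ~61.7% -- a D
```

Here the practice work pulls the course grade up a full letter; flip the scores and it would pull the grade down just as readily.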
Lead is significantly more harmful to the health of children and adults across the world than previously thought. This conclusion is suggested by a modeling study presented to the World Bank by Norwegian development economist Bjorn Larsen and Ernesto Sánchez-Triana, PhD, a Colombian environmental specialist. Their work was published in The Lancet Planetary Health.
As Larsen and Sánchez-Triana report, the economic consequences of increased exposure to lead are already immense, especially in low- and middle-income countries (LMICs). The study was financed by the Korea Green Growth Trust Fund and the World Bank’s Pollution Management and Environmental Health Program.
Intellectual, Cardiovascular Effects
“It is a very important publication that affects all of us,” German pediatrician Stephan Böse-O’Reilly, MD, of the Institute and Polyclinic for Occupational, Social, and Environmental Health of the Ludwig Maximilian University Hospital in Munich, Germany, told Medscape Medical News. “The study, the results of which I think are very reliable, shows that elevated levels of lead in the blood have a much more drastic effect on children’s intelligence than we previously thought.”
It is well known that lead affects the antenatal and postnatal cognitive development of children, the doctor explained. But the extent of this effect has quite clearly been underestimated before now.
On the other hand, Larsen and Sánchez-Triana’s work could prove that lead may lead to more cardiovascular diseases in adulthood. “We already knew that increased exposure to lead increased the risk of high blood pressure and, as a result, mortality,” said Böse-O’Reilly. “This study now very clearly shows that the risk of arteriosclerosis, for example, also increases through lead exposure.”
Figures From 2019
“For the first time, to our knowledge, we aimed to estimate the global burden and cost of IQ loss and cardiovascular disease mortality from lead exposure,” wrote Larsen and Sánchez-Triana. For their calculations, the scientists used blood lead level estimates from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019.
They estimated IQ loss in children younger than 5 years using the internationally recognized blood lead level–IQ loss function. The researchers subsequently estimated the cost of this IQ loss based on the loss in lifetime income, presented as cost in US dollars and percentage of gross domestic product (GDP).
Larsen and Sánchez-Triana estimated cardiovascular deaths due to lead exposure in adults aged 25 years or older using a model that captures the effects of lead exposure on cardiovascular disease mortality that is mediated through mechanisms other than hypertension.
Finally, they used the statistical life expectancy to estimate the welfare cost of premature mortality, also presented as cost in US dollars and percentage of GDP. All estimates were calculated according to the World Bank income classification for 2019.
Millions of Deaths
As reported by Larsen and Sánchez-Triana, children younger than 5 years lost an estimated 765 million IQ points worldwide due to lead exposure in this period. In 2019, 5,545,000 adults died from cardiovascular diseases caused by lead exposure. The scientists recorded 729 million of the IQ points lost (95.3%) and 5,004,000 (90.2%) of the deaths as occurring in LMICs.
The IQ loss here was nearly 80% higher than a previous estimate, wrote Larsen and Sánchez-Triana. The number of cardiovascular disease deaths they determined was six times higher than the GBD 2019 estimate.
“These are results with which the expert societies, especially the German Society of Pediatrics and Adolescent Medicine and the German Cardiac Society, and the corresponding professional associations need to concern themselves,” said Böse-O’Reilly.
Although blood lead concentrations have declined substantially since the phase-out of leaded gasoline, especially in Western countries, lead still represents a major health issue, even in Germany, because it stays in the bones for decades.
European Situation Moderate
“We need a broad discussion on questions such as whether lead levels should be included in prophylactic assessments in certain age groups, what blood level is even tolerable, and in what situation medicinal therapy with chelating agents would possibly be appropriate,” said Böse-O’Reilly.
“Of course, we cannot answer these questions on the basis of one individual study,” he added. “However, the work in question definitely illustrates how dangerous lead can be and that we need further research into the actual burden and the best preventive measures.”
In this respect, the situation in Europe is still comparatively moderate. “Globally, lead exposure has risen in recent years,” said Böse-O’Reilly. According to an investigation by the Planet Earth Foundation, outside of the European Union, lead can increasingly be found in toys, spices, and cooking utensils, for example.
“Especially in lower-income countries, there is a lack of consumer protection or a good monitoring program like we have here in the EU,” said Böse-O’Reilly. In these countries, lead is sometimes added to spices by unscrupulous retailers to make the color more intense or to simply add to its weight to gain more profit.
Recycling lead-acid batteries or other electrical waste, often transferred to poorer countries, constitutes a large problem. “In general, children in Germany have a blood lead level of less than 1 μg/dL,” explained Böse-O’Reilly. “In some regions of Indonesia, where these recycling factories are located, more than 50% of children have levels of more than 20 μg/dL.”
According to Larsen and Sánchez-Triana, the global cost of increased lead exposure was around $6 trillion USD in 2019, which was equivalent to 6.9% of global GDP. About 77% of the cost ($4.62 trillion USD) comprised the welfare costs of cardiovascular disease mortality, and 23% ($1.38 trillion USD) comprised the present value of future income losses due to IQ loss in children.
“Our findings suggest that global lead exposure has health and economic costs on par with PM2.5 air pollution,” wrote the authors. This places lead as an environmental risk factor on par with particulate matter and above that of air pollution from solid fuels, ahead of unsafe drinking water, unhygienic sanitation, or insufficient handwashing.
“This finding is in contrast to that of GBD 2019, which ranked lead exposure as a distant fourth environmental risk factor, due to not accounting for IQ loss in children — other than idiopathic developmental intellectual disability in a small subset of children — and reporting a substantially lower estimate of adult cardiovascular disease mortality,” wrote Larsen and Sánchez-Triana.
“A central implication for future research and policy is that LMICs bear an extraordinarily large share of the health and cost burden of lead exposure,” wrote the authors. Consequently, improved quality of blood lead level measurements and identification of sources containing lead are urgently needed there.
Improved Recycling Methods
Böse-O’Reilly would like an increased focus on children. “If children’s cognitive skills are lost, this of course has a long-term effect on a country’s economic position,” he said. “Precisely that which LMICs actually need for their development is being stripped from them.
“We should think long and hard about whether we really need to send so much of our electrical waste and so many old cars to poorer countries, where they are incorrectly recycled,” the doctor warned. “We should at least give the LMICs the support necessary for them to be able to process lead-containing products in the future so that less lead makes it into the environment.
“Through these global cycles, we all contribute a lot toward the worldwide lead burden,” Böse-O’Reilly continued. “In my opinion, the German Supply Chain Act is therefore definitely sensible. Not only does it protect our own economy, but it also protects the health of people in other countries.”
This article was translated from Medscape’s German Edition.
Hepatitis means inflammation of the liver. The liver is a vital organ that processes nutrients, filters the blood, and fights infections. When the liver is inflamed or damaged, its function can be affected. Heavy alcohol use, toxins, some medications, and certain medical conditions can cause hepatitis.
What are the 5 types?
There are 5 main hepatitis viruses, referred to as types A, B, C, D and E. These 5 types are of greatest concern because of the burden of illness and death they cause and the potential for outbreaks and epidemic spread.
Is hepatitis A an STD?
Hepatitis A is a virus found in human faeces (poo). It's normally passed on when a person eats or drinks contaminated food or water. It's also a sexually transmitted infection (STI), passed on through unprotected sexual activities, particularly anal sex.
What are the 3 types?
There are at least six different types of hepatitis (A–G), with the three most common types being A, B and C. Hepatitis A is an acute infection, and people usually improve without treatment.
Type A symptoms are often similar to those of a stomach virus, but most cases resolve within a month. Types B and C can cause sudden illness. However, they can also lead to liver cancer or a chronic infection that can cause serious liver damage, called cirrhosis.
Paleontologists Uncover Fossils of Earth's Earliest Parasites
More than 500 million years ago, parasites were already stealing from their hosts.
The relationship between parasite and host is as old as many of the world's earliest animals. In a new study, researchers present the oldest known evidence of a parasitic relationship, dating back 512 million years.
Fossils show that tubelike worms attached to the outside of brachiopods — marine animals that look similar to modern mollusks — and effectively stole their food.
This early instance of parasitism was happening relatively shortly after the Cambrian explosion, some 540 million years ago when Earth's first animals diversified from one another. That means parasite-host relationships are as old as many lineages of animals.
Today, parasitic relationships continue to be common in nature — think of fleas on a dog or head lice. But it can be difficult to identify parasites in fossils since researchers need to infer that a relationship was parasitic from appearance alone.
However, when researchers found a cluster of Neobolus wulongqingensis — an ancient brachiopod — in Yunnan, China, that's exactly what they discovered. This finding was published in the journal Nature.
Grooves in the brachiopod's shell show that the parasites lived on their exterior, but didn't bore into the brachiopod.
Meanwhile, the tubes weren't found on other nearby hosts, like trilobites. This suggests there was a special relationship at play, report the study authors.
These fossils demonstrate a type of parasitism called kleptoparasitism. It's just what it sounds like: The parasite essentially steals food from its host, thereby weakening the host.
In this case, the tube-dwelling parasites seem to have attached themselves vertically to the outside of the brachiopod, positioning their mouths at the same level as the brachiopod's opening where it would take in food. That way, they could divert some of that food to their own mouths.
Several of the tubelike worms would become encrusted on a single brachiopod to feed, the fossils show. Meanwhile, the brachiopod attached itself to the ocean floor thanks to a fleshy ligament, or pedicle.
The researchers also found evidence that tube-encrusted brachiopods didn't grow as large as those without tubes on them, helping to rule out the possibility that the relationship was either mutually beneficial or neutral for the host.
Rather, a parasitic effect "is the most strongly supported probable cause," the researchers write.
ANIMAL THIEVES - Kleptoparasites are common in nature, and the organisms that steal food don't always live on their host. In fact, you've probably encountered this phenomenon at the beach, if you've ever been bombarded by a flock of hungry seagulls.
Among birds, kleptoparasitism is one of the ways the animals can adapt their behavior to make them better suited to living in urban environments, researchers reported in March 2020.
In the same way that a hyena might steal a carcass from a lion today, more than 500 million years ago, tubes were siphoning off food from brachiopods. The latest ancient evidence supports the idea that parasitism has deep roots in nature, not just humans. | <urn:uuid:591eb35a-8436-4ea0-9fcd-29602cb131c5> | CC-MAIN-2023-50 | https://paleontologyworld.com/exploring-prehistoric-life-paleontologists/paleontologists-uncover-fossils-earths-earliest-parasites?qt-latest_popular=0 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.968973 | 713 | 4.09375 | 4 |
COVID-19: Could Aspirin Be of Any Help in the Early Stage?
7 pages. Posted: 21 Apr 2020; last revised: 16 Jun 2020
Date Written: April 14, 2020
This paper discusses the current scientific background for starting antiaggregation with aspirin in the very early phase of COVID-19 infection, to prevent thrombotic infarctions before irreversible damage occurs in the target organs.
Several authors are now describing the presence of endothelial cells in the bloodstream. This might indeed represent an epiphenomenon of the underlying mechanisms explaining the incidence of fatal systemic thrombotic complications.
Antiaggregation with ASA could have a significant prophylactic role, ideally before anticoagulation and/or intensive care become the only options left; dedicated trials should therefore urgently be conducted to confirm this hypothesis.
Note: Funding: None to declare
Declaration of Interest: None to declare
Keywords: COVID-19, Aspirin, antiaggregation, thrombosis, viral sepsis, endothelial cells
Progress Report on Implementation of the Recommendations of the Panel on the Ecological Integrity of Canada's National Parks
Communicating the Benefits of Ecosystem Conservation to Canadians
The Panel emphasized that interpretation and outreach are critical management tools for conveying the significance of protected areas and for raising public awareness of hard realities concerning the serious environmental problems affecting national parks. The challenge is to use scientific data from research, monitoring and active management programs to develop meaningful learning experiences about appropriate and sustainable use. The desired outcomes are to change personal behaviours and motivate people to advocate ecosystem protection because it is relevant to their lives.
The Panel observed that not only does Parks Canada lack capacity to produce social and natural sciences information, but it has also lost many of the skilled professional interpreters needed to develop and deliver conservation messages. As well, some of the information, messages, facilities and media now being used are out of date; thus, they are often not successful in capturing people's attention.
Parks Canada's strategy seeks funds to implement its Heritage Presentation Renewal Program. This would include further research to segment audiences and determine how best to communicate with them. It would also entail identifying the types of messages to which audiences are receptive and the most effective communication techniques, media and locations to deliver them. Other priorities include delivering messages in urban communities through community outreach programming and facilities; enhancing the content of Parks Canada's very successful Internet site; and reaching out to Canada's youth by developing school curricula jointly with teachers. The common building block for these priorities is a cadre of trained professionals to develop and deliver specific conservation messages. Re-establishing this capacity is central to the long-term strategy.
Like humans, dogs may be susceptible to lung conditions such as pneumonia. With dogs, however, it might not be obvious if you aren’t looking out for the particular signs of the condition. But first, what exactly is pneumonia?
Pneumonia can be defined as a condition that causes inflammation both in the lung’s air sacs and the surrounding tissue. Often, this results in a high fever, coughing, and difficulty in breathing.
So how does pneumonia affect dogs? Let us get to know the causes, symptoms, and treatment of pneumonia in dogs below.
How Do Dogs Get Pneumonia?
Anything that leads to inflammation in a dog’s lungs and airways can be a cause of pneumonia. Among the possible causes are upper-respiratory infections, bacteria inhaled from contaminated food, inhaling grass seeds by accident, tick-borne infections, and fungal infections, among others. The most common types of pneumonia are:
A bacterial infection in the lungs is the most common cause of pneumonia in dogs, with Bordetella (the bacteria that causes kennel cough) being one of the leading culprits. This underscores how important it is to have your dog vaccinated with the Bordetella vaccine.
Research suggests that there is a complex association between viral respiratory infections, the environment, and developing bacterial and respiratory diseases in dogs.
Among the common signs of bacterial pneumonia in dogs are: breathing difficulties, coughing, high fever, lethargy, and exercise intolerance. Other signs may include: rapid and loud breathing, nasal discharge, loss of weight, dehydration, and anorexia.
This can be acquired from getting viral lung infections. Among the common viruses that dogs may be susceptible to are canine influenza and canine distemper, viruses that you can get your dog vaccinations for.
The symptoms of viral pneumonia will depend on the cause, but the common clinical signs of pneumonia in dogs are: difficulty in breathing, coughing, fever, and general weakness.
This type of pneumonia can be acquired when your dog inhales a foreign substance. The severity of aspiration pneumonia would depend on the type of foreign material inhaled as well as how far it is able to spread in a dog’s lungs.
Common causes of this type of pneumonia include unsatisfactory administration of liquid medications. Other risks of getting aspiration pneumonia include: when your dog attempts to drink or eat while they are partially choking or when they breathe vomit in.
Swallowing hindrances, such as when a dog is under anesthesia, when they are comatose, or when they have a cleft palate, may also cause this type of pneumonia. In addition to this, esophagus or pharynx disorders may also make a dog more susceptible to aspiration pneumonia.
Among the signs of aspiration pneumonia in dogs are exercise intolerance, labored breathing, coughing, fever, and a rapid heart rate. Other signs, such as airway spasms and bluish mucous membranes, may also be experienced.
You may also notice a sweet and odd-smelling breath which becomes more pronounced as the disease progresses. This may be associated with the dog having a nasal discharge that may have traces of red, green, or brown.
Although rarer than the other types of pneumonia, a dog may also develop pneumonia when they get a fungal infection in their lungs. Those with immune systems that are compromised are more prone to such fungi, but it may also affect healthy dogs.
It’s commonly caused by inhaling spores that can spread across your dog’s blood as well as their lymph systems. As for the source, most fungal infections are caught in the soil rather than from one dog to another.
The development of fungal pneumonia in dogs is usually gradual. Among the signs of the disease are: a thick discharge from the nose and a moist and short cough. As it progresses, loss of weight, difficulty in breathing, and weakness, in general, may be experienced by the affected dog.
In more serious cases, there may be inflammation in the airways, making it more difficult to breathe. Breath sounds may be almost impossible to detect, and a dog may experience periodic fever as well. In some cases, signs may also appear in the bones, skin, and eyes.
Knowing when your dog is not feeling well is essential, and finding means to monitor your pet may go a long way. One way to monitor your pet anytime and anywhere is through innovative pet cameras such as the Petcube Cam. First of all, it’s affordable. Second, it has smart and HD features that allow you to see every movement of your pet while being able to communicate with them too.
Diagnosing pneumonia in dogs involves physical examinations and laboratory tests. Alongside this, your vet will ask questions such as how your dog is and what symptoms you have noticed them exhibiting. As much as possible, provide your vet with as many details as needed, such as any medications or supplements your dog is taking or any change in their environment.
If pneumonia is suspected, your vet may recommend imaging studies as well as laboratory tests to determine if it is the case. These tests may include chest x-rays, blood work, or testing if there is fluid in the lungs.
Treatment and Recovery
When pneumonia is confirmed, the treatment would be dependent on the type of pneumonia, what is causing it, and how far it has spread in your dog’s lungs. Generally, dogs that have pneumonia can be home-treated unless they are very sick or contagious. Below are some treatments that your vet may recommend:
- Humidification to help loosen the secretions in the lungs;
- Increase your dog’s water/fluid intake to help in cleaning their lungs as well as balancing their body;
- Restrict their activity;
- Antibiotics, anti-fungal therapy, or parasite control treatments;
- Physical therapy.
It is essential to follow your vet’s recommendations for treatment and complete the medications prescribed, even if your dog seems to have recovered already. Also, don’t forget your dog’s follow-up checkup/s as recommended by your vet. By doing so, your dog may recover faster.
If your dog is very ill and is highly contagious, your vet may recommend hospitalization. Treatments upon hospitalization may include IV fluids, antibiotics, oxygen therapy, or surgery to remove any foreign objects. Meanwhile, if your dog’s pneumonia is because of an underlying condition, such conditions need to be treated as well.
When it comes to how to help a dog with pneumonia at home, it is important to keep your pet in a warm and comfortable environment until they can visit a vet. Provide them with food and water, and don’t self-medicate, as this may interfere with the medications that your vet may prescribe.
Prevention goes a long way in minimizing the chances of your dog getting pneumonia. Ways to prevent pneumonia in dogs include:
- Take your dog to the vet for a checkup every 6 months.
- Keep your dog’s vaccines up to date.
- Maintain parasite control measures all year round.
- Make sure that your dog’s living space has good air quality, away from molds and dust.
- If your dog has a condition that increases their risk of getting pneumonia, follow the recommendations of your vet to prevent your dog from getting secondary pneumonia.
Pneumonia in Puppies
Puppies may also be susceptible to pneumonia, particularly aspiration pneumonia. Some instances where aspiration pneumonia may happen to puppies include:
- Bottle-fed pups may choke when milk pours out from the bottle too fast.
- Force-feeding a puppy that isn’t able to swallow properly.
- Cases when a puppy with a cleft palate drinks milk, then it travels from the nasal cavity and into the lungs.
Because a puppy’s immune system isn’t fully developed yet, it is crucial to bring your pup to the vet as soon as you notice signs of pneumonia. The same also applies to elderly dogs and dogs who are immunocompromised. The earlier it is detected, the better chances of them recovering from the disease.
In emergency cases, such as with serious cases of pneumonia, having an emergency fund helps ease worries involving veterinary bills and veterinary care. A great example of an emergency fund that looks out for both pets and pet owners is Petcube’s Pet Emergency Fund.
With Petcube’s Pet Emergency Fund, you get $3,000 for pet emergencies once a year, covering up to 6 pets (not just 1). There are also no restrictions as both dogs and cats, regardless of age, medical history, or breed, are covered. The service also features fast coverage payment, giving direct payment to the vet clinic during the time of the emergency (no headaches in claiming).
What are the natural ways to help a dog with pneumonia?
While it is possible to treat your dog with pneumonia at home, it is important to consult with a veterinarian first for a proper diagnosis and treatment. Dogs with pneumonia usually need medications, and other treatments may be prescribed by your vet if necessary.
Pneumonia is a serious condition, but when a dog has a strong immune system and if they are provided with medications, supportive care such as natural remedies (only the ones approved by your vet), and other treatments that may be helpful, this gives them a good chance of recovering from the disease.
What if a dog with pneumonia is not responding to antibiotics?
Usually, a type of pneumonia that doesn’t respond to antibiotics is fungal pneumonia. Once confirmed, this is usually treated with anti-fungal drugs. Fungal pneumonia may prove to be challenging to recover from and may take 2-6 months to fully remove it from your dog’s lungs.
How to avoid aspiration pneumonia in dogs?
It’s easier to prevent aspiration pneumonia in dogs than to treat it. For example, veterinarians usually recommend fasting prior to a dog having surgery to lessen the risk of them choking when under anesthesia. When oral medications are given, be mindful of the speed of giving the medicine to match the speed of your dog’s capacity to swallow to prevent them from inhaling into their lungs.
Quantum computers of the future hold promise in solving all sorts of problems. For example, they could lead to more sustainable materials, new medicines, and even crack the hardest problems in fundamental physics. But compared to classical computers in use today, rudimentary quantum computers are more prone to errors. Wouldn't it be nice if researchers could just take out a special quantum eraser and get rid of the mistakes?
Reporting in the journal Nature, a group of researchers led by Caltech is among the first to demonstrate a type of quantum eraser. The physicists show that they can pinpoint and correct for mistakes in quantum computing systems known as "erasure" errors.
"It's normally very hard to detect errors in quantum computers, because just the act of looking for errors causes more to occur," says Adam Shaw, co-lead author of the new study and a graduate student in the laboratory of Manuel Endres, a professor of physics at Caltech. "But we show that with some careful control, we can precisely locate and erase certain errors without consequence, which is where the name erasure comes from."
Quantum computers are based on the laws of physics that govern the subatomic realm, such as entanglement, a phenomenon in which particles remain connected to and mimic each other without being in direct contact. In the new study, the researchers focused on a type of quantum-computing platform that uses arrays of neutral atoms, or atoms without a charge. Specifically, they manipulated individual alkaline-earth neutral atoms confined inside "tweezers" made of laser light. The atoms were excited to high-energy states—or "Rydberg" states—in which neighboring atoms start interacting.
"The atoms in our quantum system talk to each other and generate entanglement," explains Pascal Scholl, the other co-lead author of the study and a former postdoctoral scholar at Caltech now working at a quantum computing company in France called PASQAL.
Entanglement is what allows quantum computers to outperform classical computers. "However, nature doesn't like to remain in these quantum entangled states," Scholl explains. "Eventually, an error happens, which breaks the entire quantum state. These entangled states can be thought of as baskets full of apples, where the atoms are the apples. With time, some apples will start to rot, and if these apples are not removed from the basket and replaced by fresh ones, all the apples will rapidly become rotten. It is not clear how to fully prevent these errors from happening, so the only viable option nowadays is to detect and correct them."
The new error-catching system is designed in such a way that erroneous atoms fluoresce, or light up, when hit with a laser. "We have images of the glowing atoms that tell us where the errors are, so we can either leave them out of the final statistics or apply additional laser pulses to actively correct them," Scholl says.
The theory for implementing erasure detection in neutral atom systems was first developed by Jeff Thompson, a professor of electrical and computer engineering at Princeton University, and his colleagues. That team also recently reported demonstrating the technique in Nature.
By removing and locating errors in their Rydberg atom system, the Caltech team says that they can improve the overall rate of entanglement, or fidelity. In the new study, the team reports that only one in 1,000 pairs of atoms failed to become entangled. That's a factor-of-10 improvement over what was achieved previously and is the highest-ever observed entanglement rate in this type of system.
Ultimately, these results bode well for quantum computing platforms that use Rydberg neutral atom arrays. "Neutral atoms are the most scalable type of quantum computer, but they didn't have high-entanglement fidelities until now," says Shaw.
The new Nature study, titled "Erasure conversion in a high-fidelity Rydberg quantum simulator," was funded by the National Science Foundation (NSF) via the Institute for Quantum Information and Matter, or IQIM, based at Caltech; the Defense Advanced Research Projects Agency; an NSF CAREER award; the Air Force Office of Scientific Research; the NSF Quantum Leap Challenge Institutes; the Department of Energy's Quantum Systems Accelerator; a Taiwan–Caltech Fellowship; and a Troesh postdoctoral fellowship. Other Caltech-affiliated authors include graduate student Richard Bing-Shiun Tsai; Ran Finkelstein, Troesh Postdoctoral Scholar Research Associate in Physics; and former postdoc Joonhee Choi, now a professor at Stanford University.
Written by Whitney Clavin | <urn:uuid:91be0c1e-76c1-4808-8068-2e82964d1e9a> | CC-MAIN-2023-50 | https://pma.caltech.edu/news/a-new-way-to-erase-quantum-computer-errors | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.942095 | 945 | 3.65625 | 4 |
This image is a transparent PNG with a resolution of 1016x820.
You can download this image in full resolution from this page and use it for design and web design.
You can download this helicopter PNG with a transparent background for free; just click the download button.
A helicopter is a type of rotorcraft in which lift and thrust are supplied by rotors. This allows the helicopter to take off and land vertically, to hover, and to fly forward, backward, and laterally. These attributes allow helicopters to be used in congested or isolated areas where fixed-wing aircraft and many forms of VTOL (vertical takeoff and landing) aircraft cannot perform.
Helicopters were developed and built during the first half-century of flight, with the Focke-Wulf Fw 61 being the first operational helicopter in 1936. Some helicopters reached limited production, but it was not until 1942 that a helicopter designed by Igor Sikorsky reached full-scale production, with 131 aircraft built. Though most earlier designs used more than one main rotor, it is the single main rotor with anti-torque tail rotor configuration that has become the most common helicopter configuration. Tandem rotor helicopters are also in widespread use due to their greater payload capacity. Coaxial helicopters, tiltrotor aircraft, and compound helicopters are all flying today. Quadcopter helicopters pioneered as early as 1907 in France, and other types of multicopter have been developed for specialized applications such as unmanned drones.
In this gallery, you can download more free helicopter PNG images.
The routing solvers in the ArcGIS Network Analyst extension—namely the Route, Closest Facility, and OD Cost Matrix solvers—are based on the well-known Dijkstra's algorithm for finding shortest paths. Each of these solvers implements two types of path-finding algorithms. The first type is the exact shortest path, and the second is a hierarchical path solver for faster performance. The classic Dijkstra's algorithm solves a shortest-path problem on an undirected, nonnegative, weighted graph. To use it in the context of real-world transportation data, this algorithm is modified to respect user settings such as one-way restrictions, turn restrictions, junction impedance, barriers, and side-of-street constraints while minimizing a user-specified cost attribute. The performance of Dijkstra's algorithm is further improved by using better data structures such as d-heaps. In addition, the algorithm must model the locations anywhere along an edge, not just on junctions.
The classic Dijkstra's algorithm solves the single-source, shortest-path problem on a weighted graph. To find a shortest path from a starting location, s, to a destination location, d, Dijkstra's algorithm maintains a set of junctions, S, whose final shortest-path cost from s has already been computed. The algorithm repeatedly selects the junction outside S with the minimum shortest-path estimate, adds it to S, and updates the shortest-path estimates of all neighbors of that junction that are not in S. The algorithm continues until the destination junction is added to S.
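A minimal sketch of this settled-set loop in Python, using the standard library's binary-heap `heapq` module, may help make the bookkeeping concrete. The adjacency-dict graph format and the function name are illustrative assumptions, not Network Analyst internals (which, as noted, use tuned d-heaps and must also handle locations along edges):

```python
import heapq

def dijkstra(graph, source, target):
    """Exact shortest path on a graph with non-negative edge costs.
    graph: dict mapping junction -> iterable of (neighbor, edge_cost)."""
    dist = {source: 0.0}   # best-known cost estimates
    prev = {}              # predecessors for path recovery
    settled = set()        # the set S of finalized junctions
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue       # stale heap entry; skip it
        settled.add(u)
        if u == target:
            break          # destination finalized; stop early
        for v, w in graph.get(u, ()):
            nd = d + w
            if v not in settled and nd < dist.get(v, float("inf")):
                dist[v] = nd            # relax edge (u, v)
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if target not in settled:
        return None, []    # unreachable
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]
```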
Route uses the well-known Dijkstra's algorithm described above.
Closest Facility uses a multiple-origin, multiple-destination algorithm based on Dijkstra's algorithm. It has options to only compute the shortest paths if they are within a specified cutoff or to solve for a fixed number of closest facilities.
OD Cost Matrix
OD Cost Matrix uses a multiple-origin, multiple-destination algorithm based on Dijkstra's algorithm. It has options to only compute the shortest paths if they are within a specified cutoff or to solve for a fixed number of closest destinations. The OD Cost Matrix solver is similar to the Closest Facility solver but differs in that it does not compute the shape of the resulting shortest path for less overhead and faster performance.
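Both solvers above amount to a one-to-many search with early termination. Here is a hedged sketch of that variant, reusing the assumed graph format from the previous example; the parameter names are illustrative, not the product's API:

```python
import heapq

def od_costs(graph, origin, destinations, cutoff=None, max_found=None):
    """Costs from one origin to many destinations, honoring an optional
    impedance cutoff and an optional cap on the number of closest
    destinations returned."""
    targets = set(destinations)
    found = {}
    dist = {origin: 0.0}
    settled = set()
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        if cutoff is not None and d > cutoff:
            break      # heap pops in cost order; everything left exceeds the cutoff
        if u in targets:
            found[u] = d
            if max_found is not None and len(found) >= max_found:
                break  # fixed number of closest destinations reached
        for v, w in graph.get(u, ()):
            nd = d + w
            if v not in settled and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return found
```

Running one such search per origin fills an origin-destination matrix row by row; skipping the reconstruction of path shapes is precisely the overhead the OD Cost Matrix solver avoids.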
Finding the exact shortest path on a nationwide network dataset is time-consuming due to the large number of edges that need to be searched. To improve performance, network datasets can model the natural hierarchy in a transportation system where driving on an interstate highway is preferable to driving on local roads. Once a hierarchical network has been created, a modification of the bidirectional Dijkstra is used to compute a route between an origin and a destination.
The overall objective here is to minimize the impedance while favoring the higher-order hierarchies present in the network. Hierarchical routing does this by simultaneously searching from both origin and destination locations, as well as connection or entry points into higher-level roads, and then searching the higher-level roads until segments from both origin and destination meet. As the search is restricted to the upper hierarchy, a smaller number of edges are searched, resulting in faster performance. Note that this is a heuristic algorithm; its goal is fast performance and good solutions, but it does not guarantee that the shortest path will be found. For this heuristic to be successful, the top-level hierarchy must be connected, as it will not descend to a lower level if a dead end is reached.
Generally, it makes sense to use this solver on a hierarchical network where the edge weights are based on travel time. This mimics the way people normally drive on a highway network.
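A toy version of the hierarchy restriction can be grafted onto the same search: allow any edge while the search is still near its endpoint, but only top-level edges once it has climbed. The sketch below is one-directional for brevity (the real heuristic searches from both origin and destination simultaneously, so local streets near the destination are reached by the backward search) and assumes each edge carries a hierarchy level, with 1 as the highest road class:

```python
import heapq

def hierarchical_cost(graph, origin, dest, local_radius):
    """graph: junction -> iterable of (neighbor, cost, level).
    Beyond `local_radius` of accumulated cost, only level-1 edges are
    relaxed. Fast and usually good, but optimality is not guaranteed."""
    dist = {origin: 0.0}
    settled = set()
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        if u == dest:
            return d
        for v, w, level in graph.get(u, ()):
            if d > local_radius and level != 1:
                continue   # restrict the search to the upper hierarchy
            nd = d + w
            if v not in settled and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None            # dead end; a connected top level avoids this
```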
Traveling salesman problem option for the Route solver
The Route solver has the option to generate the optimal sequence of visiting the stop locations. This is the traveling salesman problem, or TSP. The TSP is a combinatorial problem, meaning there is no straightforward way to find the best sequence. Heuristics are used to find good solutions to these types of problems in a short amount of time. The TSP implementation in Network Analyst also handles time windows on the stops; that is, it finds the optimal sequence to visit the stops with a minimum amount of lateness.
The traveling salesman solver starts by generating an origin-destination cost matrix between all the stops to be sequenced and uses a tabu search-based algorithm to find the best sequence of visiting the stops. Tabu search is a metaheuristic algorithm for solving combinatorial problems. It falls in the realm of local search algorithms. The exact implementation of the tabu search is proprietary, but it has been researched and developed extensively at Esri to quickly yield good results.
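The exact search is proprietary, but its general shape (evaluate neighborhood moves, forbid recently applied moves for a "tenure" of iterations, and remember the best tour seen) can be sketched with 2-opt segment reversals over an OD cost matrix. Everything below is a toy under those assumptions: no time windows, no aspiration criterion, and naive cost recomputation:

```python
import itertools
import random

def tour_cost(od, tour):
    """Total cost of a closed tour; od[i][j] = cost from stop i to stop j."""
    return sum(od[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_tsp(od, iterations=500, tenure=15, seed=0):
    """Toy tabu search over 2-opt reversals for sequencing n stops."""
    random.seed(seed)
    n = len(od)
    tour = list(range(n))
    random.shuffle(tour)
    best = list(tour)
    tabu = {}                            # move -> last iteration it is forbidden
    for it in range(iterations):
        move, move_cost, keep = None, float("inf"), None
        for i, j in itertools.combinations(range(1, n), 2):
            if tabu.get((i, j), -1) >= it:
                continue                 # move is tabu this iteration
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            c = tour_cost(od, cand)
            if c < move_cost:
                move, move_cost, keep = (i, j), c, cand
        if move is None:
            break                        # every move is currently tabu
        tour = keep                      # accept best non-tabu move, even uphill
        tabu[move] = it + tenure         # forbid re-applying it for a while
        if move_cost < tour_cost(od, best):
            best = list(tour)
    return best, tour_cost(od, best)
```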
Vehicle routing problem with time windows
The vehicle routing problem (VRP) is a superset of the TSP. In a TSP, one set of stops is sequenced in an optimal fashion. In a VRP, a set of orders must be assigned to a set of routes or vehicles such that the overall path cost is minimized. It also must honor real-world constraints including vehicle capacities, delivery time windows, and driver specialties. The VRP produces a solution that honors these constraints while minimizing an objective function composed of operating costs and user preferences, such as the importance of meeting time windows.
The VRP solver starts by generating an origin-destination matrix of shortest-path costs between all order and depot locations along the network. Using this cost matrix, it constructs an initial solution by inserting the orders one at a time onto the most appropriate route. The initial solution is then improved by resequencing the orders on each route, as well as moving orders from one route to another, and exchanging orders between routes. The heuristics used in this process are based on a tabu search metaheuristic and are proprietary, but they have been under continual research and development at Esri for many years and quickly yield good results.
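The construction step (inserting orders one at a time onto the most appropriate route) is recognizably a cheapest-insertion heuristic. Below is a stripped-down sketch, leaving out the capacity, time-window, and specialty constraints the real solver honors; the names and matrix layout are assumptions:

```python
def cheapest_insertion(od, depot, orders, route_count):
    """Seed each route as depot -> depot, then place every order at the
    position, across all routes, where it adds the least travel cost."""
    routes = [[depot, depot] for _ in range(route_count)]

    def added_cost(route, pos, order):
        a, b = route[pos - 1], route[pos]
        return od[a][order] + od[order][b] - od[a][b]

    for order in orders:
        best_route, best_pos, best_delta = None, None, float("inf")
        for route in routes:
            for pos in range(1, len(route)):
                delta = added_cost(route, pos, order)
                if delta < best_delta:
                    best_route, best_pos, best_delta = route, pos, delta
        best_route.insert(best_pos, order)
    return routes
```

The improvement phase then resequences orders within routes and moves or exchanges them between routes, as described above.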
The service area solver is also based on Dijkstra's algorithm to traverse the network. Its goal is to return a subset of connected edge features such that they are within the specified network distance or cost cutoff. In addition, it can return the lines categorized by a set of break values that an edge may fall within. The service area solver can generate lines, polygons surrounding these lines, or both.
The polygons are generated by buffering the lines by a user-provided distance. Optionally, users can create high-quality polygons such that the polygons contain all the area that is closer to the traversed lines than the nontraversed lines, up to a user-provided distance.
Location-allocation is a solver for the facility location problem. That is, given N candidate facilities and M demand points with a weight, choose a subset of the facilities, P, such that the sum of the weighted distances from each M to the closest P is minimized. This is a combinatorial problem of the type N choose P, and the solution space grows extremely large. Optimal solutions cannot be obtained by examining all of the combinations. For example, even a small problem such as 100 choose 10 contains over 17 trillion combinations. In addition, the location-allocation solver has options to solve a variety of location problems such as to minimize weighted impedance, maximize coverage, or achieve a target market share. Heuristics are used to solve the location-allocation problems.
The location-allocation solver starts by generating an origin-destination matrix of shortest-path costs between all the facilities and demand point locations along the network. It then constructs an edited version of the cost matrix by a process known as Hillsman editing. This editing process enables the same overall solver heuristic to solve a variety of problem types. The location-allocation solver then generates a set of semirandomized solutions and applies a vertex substitution heuristic (Teitz and Bart) to refine these solutions, creating a group of good solutions. A metaheuristic combines this group of good solutions to create better solutions. When no additional improvement is possible, the metaheuristic returns the best solution found. The combination of an edited matrix, semirandomized initial solutions, a vertex substitution heuristic, and a refining metaheuristic quickly yields near-optimal results. | <urn:uuid:bca91856-1358-4b73-a8eb-1cc9c11742f0> | CC-MAIN-2023-50 | https://pro.arcgis.com/en/pro-app/2.7/help/analysis/networks/algorithms-used-by-network-analyst.htm | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.93031 | 1,728 | 2.71875 | 3 |
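The vertex-substitution core attributed to Teitz and Bart is simple to state: repeatedly try swapping a chosen facility for an unchosen candidate, keeping any swap that lowers the weighted cost. Below is a sketch for the minimize-weighted-impedance (p-median) case, without the Hillsman editing, the semirandomized restarts, or the combining metaheuristic layered on top; the data layout is an assumption:

```python
import random

def weighted_cost(od, weights, facilities):
    """Sum over demand points of weight times cost to the nearest chosen
    facility. od[d][f] = shortest-path cost from demand point d to f."""
    return sum(w * min(od[d][f] for f in facilities)
               for d, w in weights.items())

def teitz_bart(od, weights, candidates, p, seed=0):
    """candidates: set of candidate facility ids; choose p of them."""
    random.seed(seed)
    chosen = set(random.sample(sorted(candidates), p))
    current = weighted_cost(od, weights, chosen)
    improved = True
    while improved:
        improved = False
        for out in list(chosen):
            for inn in list(candidates - chosen):
                trial = (chosen - {out}) | {inn}
                cost = weighted_cost(od, weights, trial)
                if cost < current:
                    chosen, current = trial, cost
                    improved = True
                    break    # first improvement: rescan from the top
            if improved:
                break
    return chosen, current
```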
The Lego-like way to get CO2 out of the atmosphere
Researchers argue that reducing greenhouse gas emissions is not enough to combat climate change
A Guide to Six Greenwashing Terms Big Ag Is Bringing to COP28
Stop Giving Big Oil a Carbon Fig Leaf
Carbon dioxide removal is not a current climate solution
A Fossil Fuel Economy Requires 535x More Mining Than a Clean Energy Economy
Too big, too heavy and too slow to change: road transport is way off track for net zero
Exclusive: Shell pivots back to oil to win over investors -sources
The multinational companies that industrialised the Amazon rainforest
Democracy is the solution to vetocracy - by Sam Bowman
Decapitalising our minds: the key to addressing climate change
CarbonPositive: Can We Halve Carbon in the Built Environment? | Architect M
How Meat and Fossil Fuel Producers Watered Down the Latest IPCC Report
The Planet Can Do Better Than the Electric Car
Can We Make Bicycles Sustainable Again?
Tackling Australia’s food waste
Revealed: more than 90% of rainforest carbon offsets by biggest provider are worthless, analysis shows
Global forest accelerator.
Emissions by sector
Paris Conundrum: How to Know How Much Carbon Is Being Emitted?
An environmentalist gets lunch
Many countries have decoupled economic growth from CO₂ emissions, even if we take offshored production into account
Life after Fossil Fuels: A Reality Check on Alternative Energy (Lecture Notes in Energy Book 81)
Climate disinformation and greenwashing
Opinion: The Messy Truth About Carbon Footprints
Carbon Positive Australia
Your reusable coffee cup might not be so green after all
The scientists hired by big oil who predicted the climate crisis long ago
You might remember ethics from your high school philosophy classes. I sure do.
The paper “Communication Ethics: Principle and Practice” by Robert Beckett talks about the importance of ethical communication in the information age.
It states that communication in our day and age is becoming more and more devoid of morality, and that we need to learn how to communicate better, or more ethically, which is what this article is about.
In this article, we will cover everything you need to know about ethical communication in the workplace.
So, let’s start with the basics — the principle of ethics.
Table of Contents
Ethics is a philosophical discipline that, simply put, differentiates between right and wrong. More often than not, ethics go hand in hand with the term “morals”. Although they are different terms, they are usually used interchangeably.
Let’s paint a picture.
Ethics is essentially that little voice in our minds that’s whispering to us “Is this the right thing to do?” or “Are we sure we want to go down this road?”
Ethics is much more than just listening to your instinct, it is honoring the truth — both the general truth and your own personal one.
This philosophical discipline, with its principles, is a tool, a means to help you make a decision and lead an honest and better life.
The Pallipedia Dictionary defines the principles of ethics in a few points:
- Duty to respect your own choices when making decisions that you believe are in your best interest,
- Duty to act in the best interest of your patient, client, or subordinate,
- Duty to not do any harm, and
- Duty to treat yourself and others fairly in society.
Now that we have a better grasp of what ethics are, let’s talk about ethical communication.
Ethical communication is:
- Transparent, and
- Morally correct communication between parties.
In a world where we have information under our fingertips, sometimes it’s easy to confuse what’s the truth and what’s misinformation. That’s why it helps to know we can count on ethical communication.
We should use ethical communication in every aspect of our lives, and it is crucial to use it in the workplace too.
Now that we know what ethical communication is, let’s learn about ethical miscommunication in the next paragraph.
Ethical miscommunication is any form of communication that is misleading or not wholly truthful.
For instance, in today’s media and marketing campaigns, we can notice a lot of ethical miscommunication. If people or companies in any way over-exaggerate their product, device, or feature, then it’s safe to say that is ethical miscommunication. In other words, they are not truthful or transparent with their audience, and they are promising something that they know will not happen. Remember those commercials that would try and sell you diet supplements promising amazing weight loss results in under 10 days? Or the commercials selling anti-age skincare products with actors that have undergone botox treatments? Those are only some of the examples of unethical behavior.
You should avoid ethical miscommunication at all times because not being open with your consumers, clients, or even colleagues creates a gap in the trust that you are building. Once that trust is broken, it will take you a lot more work to fix it than it would have taken you to build that trust up from ground zero in the first place.
So, how do we avoid ethical miscommunication?
It’s simple. As long as we follow the basic rules of ethical communication, which we will discuss later in this article, you should be miscommunication-free.
The workplace and business relationships alike have sets of rules that you must follow.
As much as there are rules for your behavior and code of conduct, there are rules for communication in the workplace. Communication is one of the things in the workplace that should be constantly upgraded since it is the oil that runs the machine. When communication is neglected, you risk a communication breakdown happening.
Rambling on and on to your coworkers, clients, or even friends will get you nowhere.
Ineffective and unethical communication won’t get you the results you want to achieve, which is why active communication is vital to all of us.
When you apply the basics and principles of ethical communication, you will be able to:
- Get your ideas across easier,
- Connect with colleagues and clients,
- Build stronger relationships with your business partners and your colleagues,
- Gain trust from your customers, clients or consumers.
Being transparent with your colleagues, clients, superiors, and subordinates leaves no room for misunderstanding. When you tell the truth and are honest about your decisions and feelings in the workplace, you become trustworthy and, eventually, an important part of your team.
For example, it’s easy to get the boundaries blurred if your coworker is also a personal friend.
How should you go about this situation? How should you communicate with your work friend?
Ethical communication can be of help here.
Most importantly, you should consider where you are. While you shouldn’t be discussing personal information in the office, you also shouldn’t be discussing sensitive work information over coffee with your work friend.
Ethical communication is here to help you to establish boundaries and make a difference between what’s right and what is wrong.
So, what are some of the benefits of ethical communication in the workplace?
In every type of communication, and especially in workplace communication, being clear, precise, and direct should be your holy grail.
When we speak in a precise manner, to our co-speaker, our intentions are clear. But, you must be wondering now, why should your intentions be clear to your co-speaker? Well, let’s consider this situation: you are a supervisor and you want to give a task to your employees. Since the task is a bit more complicated, you now have to explain what you want your employees to achieve. If you speak in a clear and direct manner, your employees will understand your vision and carry it out.
Set your intentions clearly so there is no room for confusion — and that leads us to our next point.
Ethical communication, as a branch of the philosophical discipline that is ethics, naturally, answers the question of what is right and wrong in the way that we communicate. If we take a closer look at ethical communication, we can see that it is a manner, a way of speaking that lets our co-speaker see and understand that we are honest.
An office, whether it is a remote one or an in-person one, relies on good and transparent communication.
When there is no confusion between two speakers, then there is no miscommunication that can lead to misunderstandings.
Treating others with respect, talking to the with respect, and hearing out their ideas, thoughts, and opinions is a great part of ethical communication. Being rude to our co-workers and colleagues, or even clients, will never bring us the respect that we want.
We have to give respect to receive respect.
There is this huge misconception that fear equals respect. Respect follows respect — and when we implement ethical communication in our daily interactions, we achieve the goals we had of being liked and respected, because ethical communication reinforces our ethical code.
When the lines between personal and work-appropriate topics get blurred, it is easier to succumb to misunderstandings and disagreements. When we implement ethical communication in the workplace, we can clearly establish with our colleagues what we are comfortable and not comfortable discussing. You have friends outside of work, and maybe it's better if Susan from Accounting doesn't know the details of your dinner date last night.
Ethical communication is present in every aspect of our lives. Usually, we don't even notice how pervasive it is unless we stop and examine our own words.
For some, the principles of ethics are something of second nature to them, but for those who aren’t as familiar with it, here are the rules that should be followed for correct ethical communication.
Just as you would build your house from the ground up, the same metaphor can be implied to ethical communication.
Honesty is the foundation of good ethical communication.
Without honesty, the relationships you build won’t last.
Truth always comes out at some point, so instead of preparing for the inevitable consequences and doing damage control, start with honesty.
With that being said, being truthful doesn’t mean that you should spew words aimlessly and thoughtlessly. Being honest does not equal being cruel.
You should be truthful but still remain professional and polite.
Just because you can hide behind the phrase “I am just being honest” to convey your rude words, it doesn’t mean that is the best foundation for ethical communication.
Being honest means that you value your opinion, as well as other people’s, and you are bound by your ethics code to express yourself truthfully.
We are all responsible for our own words, opinions, and statements, but we are responsible for the effect that our honesty or dishonesty has on other people too. If you are direct, honest, and professional, then you have got the foundation of ethical communication down, and we can continue together to the next rule.
It is in human nature to conceal and hide away the things in our personality or work that we deem as faulty or not worthy enough.
In business ethical communication, this translates to the need for transparency.
We like to make everything perfect, shiny, and worthy, but sometimes we can get carried away. For example, we may exaggerate the features of our products or claim our product will change the world, to make an impact on our consumers.
This need is understandable, and it is hard to not embellish and polish our words to make them easier to ingest.
However, transparency in ethical communication is an important rule that goes hand in hand with the truth rule.
So, disclose any faults that may arise, and notify your clients of the issue.
Language was invented to help us describe our reality.
Although, reality usually changes a lot faster than language. So while we wait for our language capacities to catch up to our realities, we should strive to understand and be understood.
That is easier said than done. But, what’s the issue?
Usually, our problem with understanding is the lack of clarity.
With direct communication, there is no room for misunderstanding.
But, to be clear and direct, we have to know and believe in what we are trying to explain so that we can translate our ideas.
As we have mentioned in the first rule, which talks about honesty, being respectful matters.
To achieve respectful communication, you should keep in mind that it isn’t just what you say but how you phrase it and when you say it.
If you aren’t respectful towards your colleagues and tactful with your words, you might run into problems. Words are a great and beautiful tool, however, when we misuse them, we can create more problems than we can solve.
Accepting responsibility is a huge bite that we all have to learn to chew.
We all can indeed get carried away, especially in business communication.
Accepting responsibility, especially in workplace disagreements, for saying the wrong thing or embellishing more than you should have, shows more heart and courage than ignoring the problem you created.
When we talk about business ethical communication, there is a certain degree of privacy we have to adhere to.
From NDAs to respecting colleagues’ privacy, ethical communication covers it all.
It might be common sense, but not discussing the company’s sensitive information with an outsider or not gossiping about your coworkers’ private lives plays a great part in ethical communication.
If you have asked yourself this question, then you are on the right path to becoming an ethical communicator.
Since communication is an essential part of our days, from start to finish, it’s great to know what will make us even better — in this case, ethical — communicators.
Keeping the rules we mentioned above in mind, an ethical communicator should be:
- Honest and respectful, and
- Mindful.
We should all hold ourselves to the rules of ethical communication, especially when we bring business into the equation.
The thing about business ethical communication is that being professional takes care of half of the work. The other half is effective communication. Or, in other words, be polite, respectful and precise whether you are speaking to the boss or the janitor. A good ethical communicator treats everyone with respect and honor.
Be honest with your clients about your products and what they can do for your consumers.
Don’t give in to the sensational factor.
If you are honest, direct, and believe in what you are saying, that will translate to your colleagues and clients.
Make it a habit to fact-check everything you read online, as well as to take everything with a grain of salt, unless you see concrete proof that the article or a statement you read is true. Especially if you want to use the information you read online in the workplace. If you don’t fact-check, still use the information you gathered, and it turns out it wasn’t true, your credibility will take a huge hit.
Ethical communication is a philosophical discipline that will help you to better use your words, in the workplace or at home.
Being clear, direct, and honest in any aspect of your life is a great way to make sure that misunderstandings are minimal.
However, be careful not to accidentally lose compassion when you start implementing ethical communication in your life. Life is much more than just white or black.
Ethical communication is a tool that is here to help us communicate better — to know what is right and wrong to say.
However, remember that feelings are not your enemy.
So, if you are implementing ethical communication more in your day-to-day life — be mindful of your co-communicators.
✉️ Ethical communication can be tricky. Have you tried to implement it in your business communication? Are you happy with the way you communicate with your colleagues now?
Share your experience and tips at firstname.lastname@example.org and we may include your answers in this or future posts. If you liked this blog post and found it useful, share it with someone you think would also benefit from it. | <urn:uuid:e366f36f-7401-4d5a-b1c0-a9bf130cf865> | CC-MAIN-2023-50 | https://pumble.com/blog/ethical-communication/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.94642 | 3,037 | 3.375 | 3 |
The use of additive manufacturing (AM) processes, such as direct metal laser sintering, provides the design freedom required to incorporate complex cooling schemes in gas turbine components. Additively manufactured turbine components have a range of cooling feature sizes and, because of the inherent three-dimensionality, a wide range of build angles. Previous studies have shown that AM built directions influence internal channel surface roughness that, in turn, augment heat transfer and pressure loss. This study investigates the impact of AM on channel feature size and builds direction relative to tolerance, surface roughness, pressure losses, and convective cooling. Multiple AM coupons were built from Inconel 718 consisting of channels with different diameters and a variety of build directions. An experimental rig was used to measure pressure drop to calculate friction factor and was used to impose a constant surface temperature boundary condition to collect Nusselt number over a range of Reynolds numbers. Significant variations in surface roughness and geometric deviations from the design intent were observed for distinct build directions and channel sizes. These differences led to notable impacts in friction factor and Nusselt number augmentations, which were a strong function of build angle.
The primary goal of these experiments with 1-methylnaphthalene was to prove the feasibility of performing experiments with polynuclear aromatic species in the Princeton flow reactor. After elimination of some problems with the evaporator system, several successful experiments were performed; the results from these preliminary experiments are presented. Also, a partial mechanism for the oxidation of 1-methylnaphthalene is discussed, and some of the reactions show an analogy to the reactions of benzene and toluene under similar conditions.
Original language: English (US)
Journal: Chemical and Physical Processes in Combustion, Fall Technical Meeting, The Eastern States Section
State: Published - 1985
When you’re starting out, it’s easy to get inventory and fixed assets confused. Here’s what you need to know to ensure you treat these two groups correctly in your accounting practices.
Inventory vs Fixed Assets: What is the Difference?
Difference between inventory and fixed assets
First and foremost, to make the most out of your inventory and fixed assets, you need to understand how they differ:
- Fixed assets are property your business owns and uses to produce income, like machinery, for example. In your accounting, fixed assets are reported in the long-term section of your balance sheet, typically under headings like ‘property, plant and equipment’.
You record fixed assets at their net book value, that is, the original cost, minus accumulated depreciation and impairment charges (a small worked sketch of this calculation follows this list).
- Inventory is your product and goods used to create it. There are generally four types: raw materials for manufacturing, work in process, finished goods and merchandise purchased from suppliers. You record inventory as a current asset on your balance sheet, at the amount paid to purchase it.
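To make the net book value arithmetic mentioned in the first bullet concrete, here is a small sketch assuming straight-line depreciation (one of several accepted methods; your accountant may use another):

```python
def net_book_value(cost, salvage_value, useful_life_years, years_in_service):
    """Net book value = original cost - accumulated depreciation."""
    annual_depreciation = (cost - salvage_value) / useful_life_years
    accumulated = annual_depreciation * min(years_in_service, useful_life_years)
    return cost - accumulated

# A $50,000 vehicle with a $5,000 salvage value and a 5-year useful life
# depreciates $9,000 per year, so after 2 years of service:
print(net_book_value(50_000, 5_000, 5, 2))   # 32000.0
```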
Why inventory and fixed assets are important
Managing your inventory is critical to hit profit targets. For many companies, turning over inventory, by selling it or using it in production, is a primary revenue source.
Having too much inventory for long periods can be risky, as products can spoil, become damaged over the time you store and don’t sell them, or simply become obsolete.
However, by having too little inventory, you may not have enough products to sell if market demand is up, and your business could risk losing sales and market share. Meanwhile, your fixed assets have a finite life and are always depreciating: for example, the value of a commercial vehicle you've purchased falls over time due to wear and tear.
Equipment used to keep the business going, like computers and maintenance on printers, can be treated as a fixed asset. However, things like stationery or consumables can be considered a part of inventory as they are quick moving.
It is important to understand the difference between the two and also to track them so you have accurate numbers on your financial statements come tax time.
Adopting a tracking system
The key to managing inventory and fixed assets is to adopt a robust tracking system as part of your accounting process.
A tracking system enables you to calculate depreciation, monitor maintenance needs and schedule repairs on your fixed assets. For inventory, it helps you avoid running out of stock and can even control theft of your goods.
Using tracking to boost profits
Once you’ve learned the difference between the two, the next step to make the most of your inventory is to use the information you gain through tracking to improve sales and profits.
Watch out for items that sell well and need regular restocking, slow sellers that you should consider putting on sale, and items that have increased sales – you can capitalize on these by increasing orders during relevant periods, for example.
Cloud-based accounting software like QuickBooks Online can help you better manage your inventory and help you accurately track your fixed assets together with your inventory items, so you’re always on top of all your assets at any one point.
It pays to understand what makes up your fixed assets, and especially what makes up your consumable inventory, which loses value the longer it is held in the business.
While it is a fact that the more inventory you have, the higher your current and total asset value, your inventory should be sold as quickly as possible to earn revenue. | <urn:uuid:71d3bf9f-65d7-484b-bdbf-e61e4195d395> | CC-MAIN-2023-50 | https://quickbooks.intuit.com/in/resources/accounting/inventory-and-fixed-assets/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.95194 | 732 | 2.75 | 3 |
Fun Fact Friday
In honour of St. Patrick’s Day, here is today's fun fact!
Did you know that...
"This momentous occasion has been celebrated in Canada since 1759, when the one of the first St. Patrick’s Day parades in North America was held in Montreal by Irish soldiers shortly after the British conquest. ” (Explore Canada, 2021)
Explore Canada. (2021, Mar 11). The History of St. Patrick's Day in Canada. Found on Travel Top6: https://traveltop6.com/travel-guides/the-history-of-st-patricks-day-in-canada
USS GERALD R FORD
- The USS Gerald R Ford is the largest warship ever built.
- It is 337 m long, 78 m wide (measured at the flight deck) and 76 m high.
- The carrier’s size allows it to support up to 90 aircraft.
- To conduct all operations aboard the carrier, a crew of over 4,500 personnel is needed.
- INS Vikrant operates a total of 36 aircraft and is run by a crew of roughly 1,650.
WHY GAZA IS KNOWN AS THE WORLD’S BIGGEST ‘OPEN AIR PRISON’
- It is a strip of land wedged between the Mediterranean Sea to the west, Israel to the north and east, and Egypt to the south.
- It is home to more than 20 lakh Palestinians.
- It has been under military occupation since 1967.
- Even though Israel maintains that it pulled out in 2005, the United Nations, the European Union and other international organisations still consider Gaza as occupied territory.
- The conditions created by the occupation and the blockade have led many to refer to Gaza as an “open air prison”.
The beginning of the Gaza blockade :
- In the Six-Day War of 1967, Israel captured Gaza from Egypt, and began its military occupation of the territory.
- Between 1967 and 2005, Israel built 21 settlements in Gaza and urged Palestinian residents, through coercive measures as well as by giving financial and other incentives, to leave the territory.
- However, that period saw rising Palestinian resistance, both violent and non-violent, against the Israeli occupation.
- In 2005, Israel withdrew its settlements from Gaza.
- Between then and 2007, it imposed temporary blockades on the movement of people and goods into and out of Gaza on multiple occasions.
Oslo Agreement :
- Under the 1993 Oslo Agreement, the Palestinian Authority got administrative control over Gaza after Israel pulled out, and an election was held in 2006.
- The voting took place at a time when an Israeli blockade was in force, and the militant group Hamas won a majority of seats.
- Following the election, deadly violence broke out between Hamas and Fatah, another Palestinian political faction, leading to the death of hundreds of Palestinians.
- In 2007, after Hamas assumed power in Gaza, Israel made the blockade permanent.
- Egypt, which also has a border crossing with Gaza, participated in the blockade. This effectively meant that most people could not go into or out of Gaza and that the movement of goods and aid was highly restricted.
- Israel justifies the blockade as being necessary for its security.
Walls and crossings :
- With walls on three sides and the Mediterranean on the fourth, Gaza Strip is surrounded by physical barriers.
- In 1994, Israel built a 60-km-long fence along its border with Gaza.
- Walled-off from the north and the east by Israel, Gaza’s southern border also got a wall when Egypt, with the help of the US, started constructing a 14-km steel border barrier.
- In the west, Israel controls the sea route into Gaza and doesn't allow it to be used for the transfer of people or goods.
- Currently, there are three functional border crossings between Gaza and the outside world – Karem Abu Salem Crossing and Erez Crossing controlled by Israel, and Rafah Crossing controlled by Egypt.
- Since the attack on Israel, all three crossings have been effectively sealed.
Densely populated and impoverished :
- The Gaza Strip is 41 km long and 12 km wide at its widest point. More than 20 lakh residents live in a total area of just around 365 sq km, making it one of the most densely populated areas in the world.
- According to a report published last year by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA), the blockade has “undermined Gaza’s economy, resulting in high unemployment, food insecurity and aid dependency”.
- The blockade also makes it very difficult for people from Gaza to go to the bigger Palestinian territory of West Bank, where many have familial and business connections.
- Many in Gaza also rely on going to the West Bank for medical treatment, but under the blockade, this is only possible after a long verification process conducted by Israel, which has a high rate of rejections.
‘KANITAMIL 24’ CONFERENCE
- The Tamil Nadu government will be organising a three-day ‘KaniTamil 24’ conference in February 2024.
- It is in line with an announcement made in the budget 2023 that an international conference on Tamil computing would be organised.
- The conference would elaborate on the latest advancements in computing and explore the possibilities of using Tamil in natural language processing, artificial intelligence, machine learning, machine translation, sentiment analysis, large language models and automatic speech recognition.
- The conference is planned for February 8, 9 and 10 at the Chennai Trade Centre.
- It will be organised by the Tamil Virtual Academy (TVA).
- TVA was born as a result of ‘TamilNet 99’.
- It was a conference on Tamil computing organised in 1999, when M. Karunanidhi was the Chief Minister.
- The second such conference on Tamil computing is being organised after 25 years.
OPERATION AJAY
- India has launched Operation Ajay to facilitate the return of Indian citizens from Israel who wish to come home.
- Special charter flights and other arrangements were put in place.
- A 24-hour Control Room has been set up by the Ministry of External Affairs to monitor the situation and provide information and assistance.
HOW NEW ROYALTY RATES FOR STRATEGIC MINERALS LITHIUM, REES CAN HELP CUT THEIR IMPORTS
- The Centre has approved an amendment to a key law in order to specify competitive royalty rates for the mining of three strategically significant minerals: lithium, niobium, and rare earth elements (REEs).
- The decision comes after the government removed six minerals, including lithium and niobium, from the list of ‘specified’ atomic minerals.
- It could set the stage for participation of the private sector through the auctioning of concessions for these minerals.
- These changes are meant to ease the issuing of mining leases and composite licences for 24 critical and strategic minerals.
- They are vital in key supply chains that include electric vehicle batteries, energy storage devices, and high-end motors.
- Lithium resources of 5.9 million tonnes were established in Jammu & Kashmir.
- It is the largest deposit of the white alkali metal in India.
- Lithium is a vital ingredient of rechargeable lithium-ion batteries that power electric vehicles, laptops, and mobile phones.
Significance of move
- The specification of new royalty rates by amending the Second Schedule of the Mines and Minerals (Development and Regulation) Act, 1957, effectively aligns India’s royalty rates with global benchmarks.
- It paves the way for commercial exploitation of these minerals through auctions, which can be conducted by the Centre or states.
- A competitive royalty rate ensures that bidders would be attracted to the future auctions.
- Item No. 55 of The Second Schedule of the MMDR Act, 1957 specifies a royalty rate of 12% of the average sale price (ASP) for minerals that are not specifically listed in that Schedule.
- This rate is much higher than global benchmarks.
Lower royalty rates
- After the Cabinet's decision, lithium mining will attract a royalty of 3% based on the London Metal Exchange price (see the worked comparison after this list).
- Niobium too, will be subject to 3% royalty calculated on the ASP.
- REEs will have a royalty of 1% based on the ASP of the Rare Earth Oxide (the ore in which the REE is most commonly found).
- These critical minerals are also seen as an important prerequisite for India to meet its commitment to energy transition, and to achieve net-zero emissions by 2070.
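To make the scale of the change concrete, here is a toy comparison of the old default 12% ASP royalty against the new 3% rate (in Python; the sale price is an invented figure, and lithium's 3% is actually pegged to the London Metal Exchange price rather than the ASP, so the same number is used here purely for illustration):

```python
def royalty_per_tonne(price_per_tonne, rate):
    """Royalty payable per tonne at the given ad valorem rate."""
    return price_per_tonne * rate

price = 1_000_000  # hypothetical average sale price per tonne

old_default = royalty_per_tonne(price, 0.12)  # Item No. 55 default: 12%
new_lithium = royalty_per_tonne(price, 0.03)  # amended lithium rate: 3%
print(f"Old default royalty: {old_default:,.0f} per tonne")
print(f"New lithium royalty: {new_lithium:,.0f} per tonne")
# A four-fold reduction per tonne, which is what is expected to
# attract bidders to the future auctions.
```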
Push for lithium
- India currently imports all the lithium it needs.
- The domestic exploration push goes beyond the J&K find.
- It includes exploratory work to extract lithium from the brine pools of Rajasthan and Gujarat, and the mica belts of Odisha and Chhattisgarh.
- China is a major source of lithium-ion energy storage products that are imported into the country.
- India is a late mover in attempts to enter the lithium value chain.
REEs value chain
- The rare earths constitute another hurdle in the EV supply chain.
- Much of the worldwide production is either sourced from or processed in China.
- In an EV, the rare earth elements are used in the motors and not the batteries.
- Rare earths are typically mined by digging vast open pits, which can contaminate the environment and disrupt ecosystems.
- When poorly regulated, mining can produce waste-water ponds filled with acids, heavy metals, and radioactive material that might seep into groundwater.
Niobium: for alloys
- Niobium is a silvery metal with a layer of oxide on its surface.
- This makes it resistant to corrosion.
- It is used in alloys, including stainless steel, to improve their strength, particularly at low temperatures.
- Alloys containing niobium are used in jet engines, beams and girders for buildings, and oil and gas pipelines.
- Given its superconducting properties, it is also used in magnets for particle accelerators and MRI scanners.
- The main source of this element is the mineral columbite.
- It is found in countries such as Canada, Brazil, Australia, and Nigeria.
AGRI INSTITUTE TO BE NAMED AFTER MS SWAMINATHAN
- Thanjavur-based Agricultural College and Research Institute will be renamed after the iconic scientist, Dr MS Swaminathan.
- It will be called the Dr MS Swaminathan Agricultural College and Research Institute.
- An award will be instituted in Swaminathan’s name to honour toppers in plant propagation and genetics in the Tamil Nadu Agricultural University.
- Dr M S Swaminathan is a recipient of a number of national and international recognitions, including the Padma Vibhushan and the Magsaysay Award.
PROTECTED AGRICULTURAL ZONE
- The Tamil Nadu assembly adopted a Bill to amend the Tamil Nadu Protected Agricultural Zone Development Act to declare Mayiladuthurai as one of the districts in the protected agricultural zone in the Cauvery delta.
- The Bill also includes ‘animal husbandry and inland fishery’ within the ambit of the term ‘agriculture’.
- The State government had enacted the Tamil Nadu Protected Agricultural Zone Development Act, 2020.
- The act bans new industrial activities in the protected agricultural zone.
- This includes exploration, drilling and extraction of oil and natural gas (including coal-bed methane, shale gas and other similar hydrocarbons), as well as the ship-breaking industry.
- The protected zone includes Thanjavur, Thiruvarur and Nagapattinam districts and some blocks in Cuddalore and Pudukottai districts. | <urn:uuid:f766836e-dcde-4402-829c-47cad89e6c9e> | CC-MAIN-2023-50 | https://rajivgandhiiasacademy.com/current-affairs/october-12-2023/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.928608 | 2,349 | 2.890625 | 3 |
Sunday, January 14, 2007
In a scrubby field in St Leonards-on-Sea, East Sussex, I found these cola nut galls (Andricus lignicola) yesterday, growing on some young oak trees. The trees were most probably the hybrid (Quercus x rosacea) between pedunculate oak (Quercus robur) and sessile oak (Quercus petraea), not that I think this is of any significance so far as the galls are concerned.
They are caused by a small wasp and chemicals injected by the female at egg-laying time induce the galls to form, thus providing food for the larvae. The cola nut is a plant from tropical Africa and there is only a very superficial resemblance between it and these galls.
Although described as widespread and common, I have been unable to find any earlier Sussex records and, though seemingly not so frequent as the marble gall, I am sure it is overlooked rather than rare.
Tuesday, January 02, 2007
I came across this small desert on top of the wall of the bridge that carries Linton Road across Braybrooke Terrace (where the cars below are parked) in Hastings today.
I have watched plants like this colonise bare stone. Often they start as the tiniest pieces along a seam or crack that retains a little more water than areas nearby, then spread out over a few years to form small cushions. Eventually they may join up and make a thin layer of soil where vascular plants can get a foothold and in no time at all you have a forest.
The cushions even at the stage they are at in the picture are often well populated with fauna such as springtails, nematodes, black fungus gnat larvae and the larvae of the parthenogenetic midge Bryophaenocladius furcatus. All these must be able to withstand long periods of desiccation when the moss cushions dry up in summer.
I have been reading Animate Earth by Stephan Harding (2006) and the following passage on life during interglacial periods seemed to be illustrated by these mosses: "Plants grow well in the new high carbon dioxide atmosphere. They send their roots deep in search of nutrients, cracking open rocks with sheer brute force and with the subtle but relentless dissolving powers of their acidic chemical exudations. One can almost hear the gentle grinding noise of the increased weathering as plants all over the planet pummel and pulverize the rock, releasing nutrients on a scale unknown during the time of ice. Myriads of phosphorus, iron, silicon, calcium atoms are captured by plant roots to be sucked up into the growing green biosphere which, in its heedless growth, draws out more and more carbon dioxide from the atmosphere."
Are you sepsis aware?
19 Sep 2018
Up to 44,000 people in the UK die every year from sepsis, and as many as 250,000 people are affected by the condition annually. At the UK Sepsis Trust, we are passionate about raising awareness of this potentially life-threatening condition. Previously known as blood poisoning, sepsis is the body's reaction to an infection, which causes it to attack its own tissues and organs. The good news is that sepsis can be treated effectively with antibiotics if it is caught promptly.
Raising awareness of the condition can and does save lives. People with dental infections are an at-risk group so it is imperative that those professionals who are caring for patients are able to spot the signs and symptoms of sepsis and act quickly. With dental infection being a risk factor, it is important to know what ‘worse’ looks like.
So what causes sepsis?
Normally your body's immune system responds to infection by working to fight any germs (bacteria, viruses, fungi), or to prevent infection. However, for reasons we do not fully understand, sometimes the immune system goes into overdrive. It can happen as a response to any injury or infection, anywhere in the body. It can result from:
•A chest infection like pneumonia.
•A urine infection (UTI).
•A problem in the abdomen like gastroenteritis, or problems like a burst ulcer or a hole in the bowel.
•An infected cut or bite.
•A wound from trauma or surgery.
•A leg ulcer or cellulitis.
•A dental infection.
Sepsis can be caused by a huge variety of different germs, like Streptococcus, E. coli, MRSA or C. diff. Most cases are caused by common bacteria, which normally don't make us ill.
Sepsis is a major cause of death in the UK, and 14,000 sepsis-related deaths are preventable. Patients with known infections are vulnerable, and their sepsis symptoms are often mistaken for other self-limiting conditions such as flu or gastroenteritis, potentially resulting in delayed treatment.
When treating younger patients, language and comprehension can be a communication barrier. Children can often compensate well during a disease process like sepsis. This means that subtle changes can be missed until they suddenly become extremely unwell. It is so important to trust your instincts.
Signs and Symptoms
As a healthcare provider, it is important that you, your staff and patients are aware of the signs and symptoms and are able to seek the appropriate medical care without delay when infection is diagnosed. Early recognition, diagnosis and treatment dramatically improve outcomes from sepsis.
Raising awareness of sepsis in your dental setting, or operating theatre, not just amongst staff but also patients, will save lives. At the UK Sepsis Trust our goal is to end preventable deaths from sepsis and improve outcomes for sepsis survivors.
With help from supporters we are putting sepsis on the national and global agenda. We encourage you to think about sepsis when you are caring for a patient that you suspect or already has an infection. If you are unsure, always ask: could it be sepsis?
In September, with the help of the NHSBSA, a dental-specific awareness poster will be disseminated; please do download it and display it in your dental practice or hospital ward.
Melissa Mead works for the UK Sepsis Trust and has campaigned tirelessly since the preventable death of her one-year-old son William, in December 2014. Melissa hadn't heard of sepsis before William died, and this is something she wants to change for the general public. Together with the UK Sepsis Trust, they work with large stakeholders, like the RCS, to ensure that this vital information reaches the people who really need it. For more information, or to find out how you can get involved, please contact Melissa on: Melissa@sepsistrust.org
It is a connected world that we all live in. The technology around us is changing everything and making life 'smarter' and better. The new wave hitting all of us today is Big Data, which is changing how we look at a piece of information and make decisions. How have our lives changed with Big Data around? What are its impacts? What are some concerns? On World Telecommunication and Information Society Day, we look at how Big Data is making big impacts on our lives.
The Internet has revolutionised our way of life, enabling things that were hard to imagine earlier. With the Internet, one can work sitting at home, doctors can treat patients anywhere in the world, and with the advent of smartphones, simple activities like commuting have taken on a whole new meaning. Social media platforms like Facebook, Twitter and YouTube have redefined communication and made reaching out to a large number of people very easy. Any of us can be an 'influencer' with the potential to be heard by millions of people.
Written brainstorming levels the playing field for introverts
Prof. Bernd Rohrbach
Pose a central question, such as 'What actions should we take in the next iteration to improve?'. Hand out paper and pens. Everybody writes down their ideas. After 3 minutes everyone passes their paper to their neighbour and continues writing on the one they've received. As soon as they run out of ideas, they can read the ideas already on the paper and extend them. Rules: no negative comments, and everyone writes each idea down only once. (If several people write down the same idea, that's okay.)
Pass the papers every 3 minutes until everyone has had every paper. Pass one last time. Now everyone reads their paper and picks the top 3 ideas. Collect all top 3s on a flip chart for the next phase.
Mock test is a term that is widely heard during your school and college days. The test results do not affect your actual grade; however, a lot of emphasis and value is associated with the test.
A mock test is a practice test taken before the actual exam to check your level of preparation. The question structure in such tests is designed to be identical to that of the actual upcoming exam.
The questions in a mock test are similar to those in the actual exam and are marked in the same way. In most cases, a mock test is conducted a few weeks before the actual exam to help the student get a clear picture of the exam and the question structure.
In simple words, a mock test is an exam where the marks do not count towards an actual grade; however, it helps teachers set a guide for the exam, and students can practice for upcoming exams. Mock tests are conducted everywhere, from tuition centers, colleges and classes to private home tuition.
All educational institutions place a very high value on mock tests. The students are pressured to get high grades on mock tests as well. Many tutors conduct extra classes and adjust the teaching schedule based on the performance of the students in the mock tests. Here is why taking a mock test will help you get a better grade in your examinations:
A mock test will help you evaluate and analyze your current level of preparation.
A mock test will help you evaluate your performance.
A mock test will help you manage your time in the actual exam.
A mock test will help you plan your solutions.
A mock exam will help you clear your doubts regarding exams.
Increased confidence and reassurance.
Mock tests help you revise the entire syllabus.
One of the domains of Rolling Nexus, the Rolling CAT (Computer Adaptive Test) is one of the best platforms to take a mock test if you are a student studying on your own. The tests are created by qualified professionals who work with the Rolling team to create mock tests just for you.
The mock tests are carefully designed to help you work on your timing, formulas, and core concepts of many popular exams.
You can take a mock test through Rolling CAT by downloading the Rolling Nexus app, or you can simply visit the website here. https://rollingnexus.com/tests
Rolling CAT also offers many skill tests and exam preparatory tests as well. There are a total of 6 types of tests that are facilitated by the Rolling CAT:
Entrance preparatory test
Mock tests are a very important part of exam preparation. You can never be fully confident when you sit for an exam if you have not appeared for at least one mock test beforehand; and when you are not fully confident in your preparation, you may be stressed, which may cause a decline in your actual grade as well.
This programming language may be used to instruct a computer to perform a task.
A Polyglot program is a program whose source is a valid program in two or more languages, producing the same results when run in the different languages.
PL/I and PL/M
Although similar, PL/I and PL/M are not the same language. A relatively simple pre-compiler could probably handle the differences for simple programs, but why write a pre-compiler when features of the languages/compilers can be exploited to make sources that are valid in both PL/I and PL/M (even if some stylisation is required)?
Here, the PL/M language as implemented by Gary Kildall's original 8080 PL/M compiler will be considered.
8080 PL/M features of interest (the first two are illustrated by a sketch after this list):
- The compiler ignores everything after column 80 of a source line.
- The compiler treats lower case letters and many "special" characters as spaces.
- PL/M has a parameterless macro facility.
- The PL/M source must end with an EOF keyword - everything after it is ignored.
- The source must start with 100H: which sets the origin for CP/M programs.
- A PL/M program is a sequence of statements and declarations, not a main procedure.
- PL/M has no builtin I/O statements - under CP/M it is possible to call OS routines.
- MOD is an operator, AND, OR and NOT are keywords.
- The only types are BYTE and ADDRESS - unsigned 8 and 16 bit integers.
- Identifiers cannot contain underscores - the PL/M compiler treats them as spaces - dollar signs can appear but are ignored.
- Keywords are reserved in PL/M.
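The first two quirks, truncation at column 80 and lower-case letters reading as spaces, carry most of the polyglot trick. Here is a small sketch (in Python, purely illustrative; the retained character set is a simplification of the real compiler's rules) of what the 8080 PL/M compiler effectively "sees" in a source line:

```python
import string

def plm_view(line: str) -> str:
    """Approximate the 8080 PL/M compiler's view of one source line:
    columns 81+ are ignored, and lower-case letters (plus characters
    outside PL/M's character set) are treated as spaces."""
    kept = string.ascii_uppercase + string.digits + "$=.:;/()+-'*,<> "
    return "".join(c if c in kept else " " for c in line[:80])

# The polyglot header: "(main);" is pushed out to column 81.
header = "n100H: procedure options" + " " * 56 + "(main);"
print(plm_view(header))
# PL/I reads "n100H: procedure options (main);" (a main procedure);
# PL/M sees only " 100H:" plus blanks - the CP/M origin label it needs.
```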
PL/I features of interest:
- PL/I is not (usually) case sensitive.
- PL/I has a powerful in-built pre-processor, however this is not implemented in all PL/I compilers.
- A PL/I program consists of a main procedure which contains all declarations and statements.
- The main procedure is declared as having "options(main)".
- I/O statements are built in to the language.
- MOD is a function, &, | and ^ (or ¬) are used for AND, OR and NOT.
- PL/I has a range of types, none of which are called BYTE or ADDRESS.
- Identifiers can contain underscores and some implementations allow dollar signs.
- Keywords are not reserved in PL/I.
In PL/I when an array is declared, the lower bound can be omitted and defaults to 1. The upper bound is the dimension specified in the declaration.
declare a ( 100 ) fixed binary; declares an array of 100 integers, the subscripts range from 1 to 100.
In PL/M when an array is declared, the lower bound cannot be specified and is always 0. The upper bound is one less than the dimension specified in the declaration.
DECLARE A( 101 ) ADDRESS; declares an array of 101 integers, the subscripts range from 0 to 100.
PL/M only allows arrays with a single subscript to be declared. PL/I allows multi-dimensional arrays.
The following strategy could be used (a sketch below demonstrates the column mechanics):
- The program will start with
n100H: procedure options (main);
where the "(main)" starts in column 81.
- The procedure header will be in lower case except for the final "H" of "100H" - there will be no digits in the procedure name, other than the final "100"
- The final "end" of the program will be labelled EOF, in upper-case.
- Code that is specific to PL/M will be commented out by having the opening "/*" of the comment appear in column 81.
- The code that is specific to PL/M will have "/* */" at the end - the "/* */" will terminate before column 81.
- Code that is specific to PL/I will generally be commented out by placing "/* */" in column 78, so that the "*/" is invisible to the PL/M compiler.
- Additionally, some PL/I code can be commented out by using a macro to add a "/*" to a PL/I keyword and following the code with "/* */".
Note that for some PL/I compilers, it may be necessary to specify a compiler option to set the margins for the code, so the source line can be up to say, 120 characters wide.
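Since the whole scheme hinges on exact column positions, it may help to let a script do the bookkeeping. A minimal sketch (in Python; the emitted lines are a skeleton illustrating the column mechanics, not a compiler-tested polyglot):

```python
def pad_to_column(text: str, tail: str, column: int) -> str:
    """Return `text` with `tail` placed so it starts at `column` (1-indexed)."""
    if len(text) >= column:
        raise ValueError("text already extends past the target column")
    return text + " " * (column - 1 - len(text)) + tail

# Header: PL/I sees "n100H: procedure options (main);";
# PL/M ignores "(main);" at column 81 and reads just "100H:".
header = pad_to_column("n100H: procedure options", "(main);", 81)

# Hiding PL/M-only code from PL/I: an opening "/*" placed at column 81
# is invisible to PL/M but opens a comment for a wide-margin PL/I compiler.
plm_only_opener = pad_to_column("", "/*", 81)

assert header.index("(main);") == 80      # 0-indexed offset 80 = column 81
assert plm_only_opener.index("/*") == 80  # likewise column 81
print(header)
```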
A "lowest common denominator" approach will be used with a common set of procedures providing the I/O and a limited range of types used.
As noted above PL/M only has 8 and 16 bit unsigned integers. The PL/M BYTE type is 8 bits and can be used where a character( 1 ) or bit( 1 ) item would be used in PL/I.
PL/I allows file inclusion via the %include statement, but the original 8080 PL/M compiler does not support file inclusion, so the relevant definitions must be included in each program.
A suitable file for PL/I definitions could be:
/* pg.inc: PL/I definitions for "polyglot PL/I and PL/M programs" compiled with PL/I */
%replace true by '1'b, false by '0'b;
declare lower_case character( 26 ) static initial( 'abcdefghijklmnopqrstuvwxyz' );
declare upper_case character( 26 ) static initial( 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' );

/* print a character */
prchar: procedure( c );
    declare c character( 1 );
    put edit( c )( a( 1 ) );
end prchar;

/* print a newline */
prnl: procedure;
    put skip;
end prnl;

/* print a number in the minimum field width */
prnumber: procedure( n );
    declare n binary( 15 )fixed;
    if n < 10 then put edit( n )( f( 1 ) );
    else if n < 100 then put edit( n )( f( 2 ) );
    else if n < 1000 then put edit( n )( f( 3 ) );
    else if n < 10000 then put edit( n )( f( 4 ) );
    else put edit( n )( f( 5 ) );
end prnumber;

/* print a "$" terminated string */
prstring: procedure( s );
    declare s character( 80 )varying;
    declare ( p, len ) binary( 15 )fixed;
    declare c character( 1 );
    len = length( s );
    if len > 1 then do;
        p = 1;
        c = substr( s, p, 1 );
        do while( p <= length( s ) & c ^= '$' );
            call prchar( c );
            p = p + 1;
            if p <= len then c = substr( s, p, 1 );
        end;
    end;
end prstring;

/* read a character from the keyboard, with a carriage-return following it */
rdchar: procedure( dummy )returns( character( 1 ) );
    declare dummy binary( 15 )fixed;
    declare c character( 1 );
    get edit( c )( a( 1 ) );
    get skip;
    return ( c );
end rdchar;

/* allows PL/M code to say "CALL PRSTRING( SADDR( 'ABC' ) );" */
/* where SADDR is declared LITERALLY '.' */
saddr: procedure( s )returns( character( 80 )varying );
    declare s character( 80 )varying;
    return ( s );
end saddr;

/* returns a MOD b */
modf: procedure( a, b )returns( binary( 15 )fixed );
    declare ( a, b ) binary( 15 )fixed;
    return ( mod( a, b ) );
end modf;

/* returns not p */
not: procedure( p )returns( bit( 1 ) );
    declare p bit( 1 );
    return( ^ p );
end not;

toupper: procedure( c )returns( character( 1 ) );
    declare c character( 1 );
    return ( translate( c, upper_case, lower_case ) );
end toupper;

/* end pg.inc */
For PL/M, the following definitions would be used, with the appropriate subset cut-and-pasted into the program:
DECLARE BINARY LITERALLY 'ADDRESS', CHARACTER LITERALLY 'BYTE';
DECLARE FIXED LITERALLY ' ', BIT LITERALLY 'BYTE';
DECLARE STATIC LITERALLY ' ', RETURNS LITERALLY ' ';
DECLARE FALSE LITERALLY '0', TRUE LITERALLY '1';
DECLARE HBOUND LITERALLY 'LAST', SADDR LITERALLY '.';

BDOSF: PROCEDURE( FN, ARG )BYTE;
    DECLARE FN BYTE, ARG ADDRESS;
    GOTO 5;
END;

BDOS: PROCEDURE( FN, ARG );
    DECLARE FN BYTE, ARG ADDRESS;
    GOTO 5;
END;

PRSTRING: PROCEDURE( S );
    DECLARE S ADDRESS;
    CALL BDOS( 9, S );
END;

PRCHAR: PROCEDURE( C );
    DECLARE C CHARACTER;
    CALL BDOS( 2, C );
END;

PRNL: PROCEDURE;
    CALL PRCHAR( 0DH );
    CALL PRCHAR( 0AH );
END;

PRNUMBER: PROCEDURE( N );
    DECLARE N ADDRESS;
    DECLARE V ADDRESS, N$STR( 6 ) BYTE, W BYTE;
    N$STR( W := LAST( N$STR ) ) = '$';
    N$STR( W := W - 1 ) = '0' + ( ( V := N ) MOD 10 );
    DO WHILE( ( V := V / 10 ) > 0 );
        N$STR( W := W - 1 ) = '0' + ( V MOD 10 );
    END;
    CALL BDOS( 9, .N$STR( W ) );
END PRNUMBER;

RDCHAR: PROCEDURE( DUMMY )BYTE;
    DECLARE DUMMY ADDRESS;
    DECLARE C BYTE;
    DECLARE X BYTE;
    C = BDOSF( 1, 0 );
    DO WHILE( C = 0DH OR C = 0AH );
        CALL PRNL;
        C = BDOSF( 1, 0 );
    END;
    X = C;
    DO WHILE( X <> 0DH AND X <> 0AH );
        X = BDOSF( 1, 0 );
    END;
    CALL PRNL;
    RETURN ( C );
END RDCHAR;

TOUPPER: PROCEDURE( C )BYTE;
    DECLARE C BYTE;
    IF C >= 97 AND C <= 122 THEN RETURN ( ( C - 97 ) + 'A' );
    ELSE RETURN ( C );
END TOUPPER;

MODF: PROCEDURE( A, B )ADDRESS;
    DECLARE ( A, B )ADDRESS;
    RETURN( A MOD B );
END MODF;

MIN: PROCEDURE( A, B ) ADDRESS;
    DECLARE ( A, B ) ADDRESS;
    IF A < B THEN RETURN( A );
    ELSE RETURN( B );
END MIN;

MAX: PROCEDURE( A, B ) ADDRESS;
    DECLARE ( A, B ) ADDRESS;
    IF A > B THEN RETURN( A );
    ELSE RETURN( B );
END MAX;
Note the lack of comments in the PL/M "include" file - this is because the definitions will be commented out for PL/I compilers by having a "/*" starting in column 81 preceding the definitions and a "/* */" following them.
See below for some examples.
Pages in category "Polyglot:PL/I and PL/M"
The following 9 pages are in this category, out of 9 total. | <urn:uuid:0c217890-edc5-4cd4-aeac-20bea4281cc1> | CC-MAIN-2023-50 | https://rosettacode.org/wiki/Polyglot:PL/I_and_PL/M | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.709865 | 2,582 | 3.75 | 4 |
ngettext, dngettext, dcngettext - translate message and choose plural form
#include <libintl.h>

char * ngettext (const char * msgid, const char * msgid_plural,
                 unsigned long int n);
char * dngettext (const char * domainname, const char * msgid,
                  const char * msgid_plural, unsigned long int n);
char * dcngettext (const char * domainname, const char * msgid,
                   const char * msgid_plural, unsigned long int n,
                   int category);
The ngettext, dngettext and dcngettext functions attempt to translate a text string into the user's native language, by looking up the appropriate plural form of the translation in a message catalog. Plural forms are grammatical variants depending on a number. Some languages have two forms, called singular and plural. Other languages have three forms, called singular, dual and plural. There are also languages with four forms.

The ngettext, dngettext and dcngettext functions work like the gettext, dgettext and dcgettext functions, respectively. Additionally, they choose the appropriate plural form, which depends on the number n and the language of the message catalog where the translation was found. In the "C" locale, or if none of the used catalogs contain a translation for msgid, the ngettext, dngettext and dcngettext functions return msgid if n == 1, or msgid_plural if n != 1.
If a translation was found in one of the specified catalogs, the appropriate plural form is converted to the locale's codeset and returned. The resulting string is statically allocated and must not be modified or freed. Otherwise msgid or msgid_plural is returned, as described above.
errno is not modified.
The return type ought to be const char *, but is char * to avoid warnings in C code predating ANSI C.
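To get a feel for the plural-form lookup without writing C, the same behaviour can be reproduced through Python's gettext bindings (a sketch, not the C API itself; with fallback=True and no compiled catalog on disk, ngettext falls back to exactly the msgid / msgid_plural rule described above):

```python
import gettext

# With no "demo" catalog under ./locale, fallback=True yields a
# NullTranslations object whose ngettext returns msgid when n == 1
# and msgid_plural otherwise.
t = gettext.translation("demo", localedir="locale", fallback=True)

for n in (1, 2, 5):
    message = t.ngettext("%d file removed", "%d files removed", n)
    print(message % n)
# -> 1 file removed / 2 files removed / 5 files removed
```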
Bats are highly unusual creatures. They’re the only mammals with the gift of powered flight; different species have adapted to feast on a wide variety of foods ranging from mosquitoes to fruit to blood; and, as probes into the origin of the COVID-19 pandemic emphasize, they can harbor myriad viruses that are dangerous or fatal to other mammals without getting sick themselves.
According to research published today (November 23) in Science Advances, bats’ ability to survive as so-called viral reservoirs may stem in part from unique mutations, including the duplication of the gene encoding an antiviral protein called protein kinase R (PKR). That second copy stems from an ongoing evolutionary “arms race,” according to the study, resulting in bats’ adaptation to and seeming immunity from a wide range of viruses over the course of their evolutionary history.
“The biggest surprise to me is the extra copies of PKR in the genomes of some bat species,” study coauthor Nels Elde, a geneticist at the University of Utah and the Howard Hughes Medical Institute, tells The Scientist over email. “Even cooler is the new evidence that these copies diverge and can become less vulnerable to virus-encoded inhibitors of PKR. It looks like two PKRs can be better than one.”
The researchers set out to identify how genetic similarities among bats, as well as differences between bats and other vertebrates, influenced their viral immunity. Specifically, they searched genomes for sequences encoding PKR; study coauthor Stéphanie Jacquet, an evolutionary biologist at Claude Bernard University Lyon 1 in France, explains in an email that the team chose it for the comparison because it is conserved across invertebrates and important to immunity.
Focusing on 33 of the more than 130 different species of mouse-eared bats (genus Myotis), the researchers first had to sequence and assemble the genomes of 15 bat species, as bat genomes are particularly scarce in the literature.
To me these results are another ‘aha’ as to the possible mechanisms into how and why bats are so cool!
—Riley Bernard, University of Wyoming
“We are still in early days sampling bat genetic diversity for comparative studies of modern species,” says Elde. “In the meantime, we have to do some off roading and collect nucleic acids from bat species to get datasets that give us insight into evolutionary signals like the ones found in this study for PKR.”
With that genomic data in hand, the researchers found that the gene EIF2AK2, which encodes PKR, rapidly evolved and underwent at least one duplication event early enough in bats' evolutionary history that the extra copy was present in every species they sampled. Some species had more than two copies of EIF2AK2 or closely related sequences, many of which encoded paralogs of PKR that share its primary function as a frontline defense against viral invaders, blocking the translation of viral DNA and RNA. Comparing these sequences to those of humans, mice (Mus musculus), cows (Bos taurus), and dogs (Canis lupus familiaris), the team found that PKR duplication is indeed unique to bats.
PKR’s unique trajectory in the animals “suggests that while bats have evolved to tolerate some viruses, they have also evolved to efficiently control viral infections—in response to past pathogenic viruses,” Jacquet says.
The bat-virus arms race
To test the function of bats’ multiplicity of PKRs, the researchers gene-edited yeast to produce various bat PKRs or its orthologs, then exposed the cells to known kinase antagonists taken from bat-infecting viruses, including poxviruses, herpesviruses, and orthomyxoviruses. They found that PKR deploys an array of mechanisms to combat various viruses, suggesting that over time, viruses evolved to counteract bats’ existing defense mechanisms, and bats evolved new-and-improved PKRs in response. Alexa Sadier, a University of California, Los Angeles, evolutionary developmental biologist who didn’t work on the study, explains that this finding is a clear-cut example of the Red Queen hypothesis, named after a character in Alice in Wonderland, which posits that a sort of evolutionary arms race occurs between predators and prey, or in this case viruses and their host, in which the selective pressure imposed by an adaptation in one imposes new pressures—and adaptations—in the other. “The host will adapt and the virus will adapt,” she says. “This is really aligned with what we know.”
Functionally, having multiple copies of the gene allowed the extras to diverge and produce proteins that are more resistant to viral inhibitors, Elde says. “Almost like a game of evolutionary hot potato where if the virus blocks one copy of PKR, the other one might be more active during infections. If the virus blocks the other, the original copy of PKR might be more effective.”
This mechanism makes sense as an explanation for why bats are seemingly immune to so many viruses, experts tell The Scientist.
“To me these results are another ‘aha’ as to the possible mechanisms into how and why bats are so cool!” University of Wyoming zoologist and physiologist Riley Bernard, who didn’t work on the study, tells The Scientist over email. “There are over 1,400 species of bats, the second most diverse group of mammals, so naturally there are going to be a lot of diseases that have coevolved with these various species over time. Not only that, but bats are so diverse in foraging type (ranging from insects and nectar to blood and fish!), body size, reproductive output, migratory capabilities. The fact that they have evolved these mechanisms to combat infection, or minimize morbidity and mortality caused by infection is not surprising.”
Amy Wray, a bat biologist who recently earned her PhD at the University of Wisconsin-Madison and who also didn’t work on the study, shares a similar sentiment: “Since bats are a diverse group and have so many unique traits—ranging from their genomes to their morphology and even their behaviors—it isn’t too surprising (but it is always very exciting) to discover another unusual adaptation in bats,” she says.
What makes bats unique?
The origins of the PKR duplication—and the reason it didn’t occur in other mammals—remain unknown. One leading hypothesis is that bats’ unusual immune capabilities may be related to the other defining trait that sets them apart from the rest of their mammalian cousins: their capacity for powered flight.
“We think that because of the flight, they have different physiological needs like high energy, these kinds of things,” suggests Sadier. “They [may] have evolved things differently for that reason.”
“Moreover, some bats could be more prone to gene duplications [than other mammals], for example because of their higher rates of transposable elements that are known to facilitate duplications,” says Jacquet.
Understanding the mechanisms of host-virus interactions, especially in such a prominent viral reservoir as bats, can lead to new strategies to prevent viral spillover from bats into other species, Jacquet suggests.
“We have changed the environment so much that it is up to us to think holistically, not just ‘conserve the animals’ but in a one-health approach,” Bernard says. “A healthy ecosystem leads to healthy wildlife and healthy humans.” | <urn:uuid:8f595ba2-321a-4bf3-8cda-bbcfa7d95755> | CC-MAIN-2023-50 | https://sciencenewshubb.com/2022/11/25/duplicated-gene-helps-bats-survive-arms-race-with-viruses/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.950159 | 1,589 | 3.8125 | 4 |
Because Jurassic Park (Isla Nublar) is a theme park in more ways than one. The true work of breeding and farming the dinosaurs is actually taking place on Isla Sorna in a hidden base known as "Site B".
In the sequel, "The Lost World" Ian Malcolm identifies that there are simply too many dinosaurs to have been bred in the miniature laboratories seen at Jurassic Park.
Taken together, the whole complex had a utilitarian quality that reminded Thorne of an industrial site, or a fabrication plant. He frowned, trying to put it together. "Do you know what this is?" Thorne said to Malcolm. "Yes," Malcolm said, nodding slowly. "It's what I suspected for some time now." "Yes?" "It's a manufacturing plant," Malcolm said. "It's a kind of factory." "But it's huge," Thorne said.

"You see," Malcolm said, "visitors to Hammond's park at Isla Nublar were shown a very impressive genetics lab, with computers and gene sequencers, and all sorts of facilities for hatching and growing young dinosaurs. Visitors were told that the dinosaurs were created right there at the park. And the laboratory tour was entirely convincing.

"But actually, Hammond's tour skipped several steps in the process. In one room, he showed you dinosaur DNA being extracted. In the next room, he showed you eggs about to hatch. It was very dramatic, but how had he gotten from DNA to a viable embryo? You never saw that critical step. It was just presented as having happened, between rooms.
Given that they were suffering high levels of unexplained deaths (and becoming increasingly desperate to improve the mortality rate), the Site B staff released non-carnivorous dinosaurs from the factory bays.
The book explicitly mentions that they were working with both male and female embryos, so it's reasonably likely that the Site B scientists were allowing them to breed in mated pairs or flocks, which explains how they'd know what the mating calls sound like.
Out of universe, the film's producers created the various mating calls and dinosaur noises using sounds from real-life amphibians, mammals and birds. It's at least possible that the Jurassic Park scientists used the same technique, mixing contemporary sounds and using trial and error until they found something that regularly attracted the female dinosaurs. | <urn:uuid:23c13c30-9377-48d4-9d3a-ec6b0745eab7> | CC-MAIN-2023-50 | https://scifi.stackexchange.com/questions/71255/how-did-they-know-what-a-hypsilophodont-mating-call-sounds-like | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.975851 | 503 | 2.671875 | 3 |
The Rule of Law
The school’s behaviour policy, reinforced in every lesson, enables pupils to distinguish right from wrong and to respect the order according to which our school functions which helps them in turn to understand how important the respect of rules is in the country.
Students are reminded of the Health and Safety laws in Britain, and that such protections often do not apply in other countries, where working conditions are poor, in particular for the people who grow crops for food, mine for ores and work in factories. We consider long hours, poor rates of pay and child labour. We also talk about pensions, sick pay and government help, which workers in most other countries do not receive.
Food: In food we teach about freedom of choice to be vegetarian or vegan. The freedom to buy food from wherever you want to. The lack of availability of food and choice in a dictatorship. The lack of human rights in the manufacturing of food in third world countries.
Textiles: In textiles we teach about the freedom to choose the job you want; in China, if your parent works in a textile factory, you will too, unless you have a particular talent. Designs and products bought in this country are chosen by the client. We also consider the use of propaganda in Pop Art.
Resistant Materials: Pupils gain experience of discussing the design work and prototypes of individuals or groups by showing respect to others opinions and feelings. This is a major consideration in Year 7 where pupils design, make and test a wooden bridge to carry a load of 75kgs. Very often the success of the bridge relates to how well the pupils can work together.
Food: Freedom of Speech: pupils are encouraged to give their opinion about aspects of other cultures compared to British culture, for example about eating habits and the way food is made and prepared, whilst ensuring students are respectful to others. When/if the opportunity occurs, discussions around events in other countries (ceremonies, traditions) and the way people live in dictatorships, help them understand the consequences of radical or extremist views and the implications of such actions. At all times, students are reminded of an expectation of respect for all others.
Textiles and Resistant Materials: We encourage students to give their opinions of their own work and others. They are given freedom to choose materials and designs within a design specification. They are encouraged to have their own opinions about techniques and materials.
Tolerance of Other Faiths & Beliefs
Food: In Food we actively promote pupils' understanding of their own culture through comparison of the food culture of various countries, such as Italy, Mexico, India and China, with that of Britain. We also teach about our multicultural society and the way different foods influence the way we cook today. The students learn about Fair Trade, religious beliefs and dietary requirements arising from religion, illness and freedom of choice. We teach the importance of taking these requirements into account when cooking for other people, in particular the use of Halal and Kosher foods.
Textiles and Resistant Materials: We look at and discuss the work of designers and the influence of symbolism upon the design of products and materials.
Food: In food technology the various topics that we study mostly centre around a healthy lifestyle, personal tastes, the ability to provide ingredients for practicals, and family meals, so we have to establish from the beginning, in the classroom, an atmosphere of trust and respect for each other. The students have to work together in a practical setting using dangerous equipment and have to be able to trust each other. Pupils must be able to share information about their background, their beliefs or simply their way of life, safe in the knowledge that their peers will respect and accept them for who they are. Pupils are encouraged to recognise an individual's strength and support their development; they are also encouraged to embrace diversity and treat all others with respect both in and out of the classroom.
Textiles and Resistant Materials: We peer assess each other’s work and trust that constructive criticism and praise is given. Pupils learn to work collaboratively, accepting difference in others and treating everyone with respect and understanding. | <urn:uuid:d179b4d9-dc31-4ccf-9ccd-5e936d66573e> | CC-MAIN-2023-50 | https://scissettmiddle.com/pupil-welfare/british-moral-values/technology | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.953963 | 836 | 3.65625 | 4 |
Climate and Earth's Energy Balance
Part C: Explore the Greenhouse Effect
Once incoming solar energy reaches Earth's atmosphere, what parts of the Earth system absorb and hold the energy, warming the planet? In this lab, you will explore some of the elements that absorb solar energy, the greenhouse gases (GHG). These gases include: carbon dioxide, methane, nitrous oxide, and water vapor. You will see that each greenhouse gas responds differently to electromagnetic radiation. This is an important asset of our current atmospheric composition. It allows some forms of solar radiation to pass through the "atmospheric window," to the planet's surface and also back out to space, while retaining other wavelengths of energy to warm the atmosphere and Earth's surface. In this lab, you will further your understanding of the Earth's energy balance that you investigated in the previous labs.
First, watch the narrated video, describing the importance of the greenhouse effect in making the planet habitable, or able to support life. After viewing the video, discuss the question listed below with your classmates.
After viewing the video, discuss the following question with your neighbor or classmates:
- How do greenhouse gases impact life on Earth?
What are the greenhouse gases?
Next, learn more about greenhouse gases and how they contribute to global warming by viewing the short video and reading the background article linked below. Once you have finished reading, answer the Checking In questions below.
The NASA article, A blanket around the Earth, gives detailed information about the greenhouse gases and explains the expanded greenhouse effect.
- Which of the greenhouse gases (GHG) is most abundant in the atmosphere?
- Which of the long-lived greenhouse gases is the most important?
Investigate the greenhouse effect
Now that you have some background information about the factors that control the greenhouse effect, you are ready to try an experiment! Begin by reading the instructions and information in the flash interactive, shown below.
Explore the features of the animation
*This video replaces a Flash interactive.
Once you are on the second screen of the interactive, there are three sliders that control the concentrations of greenhouse gases in the atmosphere. Move them and observe the changes that occur in the graphic. Note both the changes in the temperature and the color of the atmosphere.
After you have explored the sliders impact on temperature, use the three radio buttons to view Greenhouse Gas concentrations and average Earth surface temperatures; record your answers to the Stop and Think questions, below.
For more information about the data used in the interactive click the info button at the top of the screen, or view the information shown below.
The Atmosphere Today
Begin with the Year button set to Today. Note the average global (land and ocean) temperature, shown in the thermometer above the graphic. Record the concentration of the three primary greenhouse gases, CO2, CH4, and N2O in the table on your answer sheet. Note that the N2O and CH4 concentrations are in parts per billion. In other words, there is very little of these two gases in the atmosphere as compared to CO2.
The Atmosphere in 1850
Next, click the 1850 radio button to select the period around 1850. Note the global temperature as well as the composition of the atmosphere. Record the composition of the atmosphere in the table on your answer sheet.
The Atmosphere in 2100
Next, click the 2100 radio button to select the period around 2100. Note the global temperature as well as the composition of the atmosphere. Record the composition of the atmosphere in the table on your answer sheet.
Stop and Think
8. Complete the table on your answer sheet. Record the average global temperature and each of the greenhouse gas concentrations.
Greenhouse gas slider
Once you have a sense of these three atmospheric states, explore the variable GHG concentration slider. As you reduce the greenhouse gases to zero, what happens to the temperature of Earth?
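The zero-greenhouse end of the slider can be sanity-checked with a back-of-the-envelope energy balance. Setting absorbed sunlight equal to emitted infrared, S(1 - A)/4 = sigma * T^4, gives Earth's temperature with no greenhouse effect at all (a sketch using standard textbook values):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 1361.0         # solar constant (W m^-2)
ALBEDO = 0.30      # fraction of sunlight reflected straight back to space

# Absorbed solar = emitted infrared:  S(1 - A)/4 = SIGMA * T^4
t_bare = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"No-greenhouse temperature: {t_bare:.0f} K ({t_bare - 273.15:.0f} C)")
# ~255 K (about -18 C), versus an observed mean near 288 K (+15 C):
# the natural greenhouse effect supplies roughly 33 C of warming.
```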
Stop and Think
9. Why are greenhouse gases (GHG) important to life on the planet?
10. In the simulation, which was the most potent of the three greenhouse gases (i.e., which caused the greatest change in temperature), and how did you discover this? (Hint: note the concentrations of the gases.)
Altering the energy balance
Both instrumental and satellite data show each decade since 1980 has been warmer than the preceding decade, with the most recent (2010–19) being around 0.2°C warmer than the previous (2000–09). The seven warmest years on record have all occurred in the past seven years, since 2014, and 2020 was among the three warmest years on record since the 1800s. In fact, 9 of the 10 warmest years on record have occurred during the 21st century. (Source: NOAA State of the Climate 2020) What could be causing the heating of the planet? Which parts of the balance have changed? This NASA video, Piecing Together the Temperature Puzzle, (5:48 minutes) explains the scientific understanding of how various elements of the Earth-Sun system, including changes in the solar cycle, alterations in snow and cloud cover, and rising levels of heat-trapping gases, may be contributing to these new records.
As you watch the video, consider how the individual changes in Earth's climate are like a series of puzzle pieces that, when connected, begin to form a recognizable pattern. As you watch this video, you will also gain an appreciation for the contribution that NASA satellites have provided towards the solving of the global climate puzzle.
Preview the following discussion questions before watching the video. Use the controller to review sections of the video as needed.
After completing this lab, discuss your thoughts about the material covered in this lab with your classmates. Consider the following questions:
- Why do we study the planet as one interconnected system?
- How do we know that the Earth's climate is changing, and what is the role of greenhouse gases in that change?
Another, more complex, greenhouse gas interactive can be accessed here: Greenhouse gas interactive. This Java applet has several layers of complexity and includes a visualization of molecular interactions with photons.
Additional information about Greenhouse gases, their sources, and role in global warming can be found on this NOAA page.
The following graphic shows the sources of the greenhouse gases by sector. An interesting exercise would be to research each sector and consider ways to reduce the emissions of these gases. | <urn:uuid:b4c89b77-57b2-405e-84a9-febdeb16af99> | CC-MAIN-2023-50 | https://serc.carleton.edu/eslabs/weather/2c.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.897288 | 1,321 | 4.25 | 4 |
Eczema is a chronic condition that results in dry, itchy, red, and inflamed skin.
Eczema affects 1 in 10 Americans, from infants to adults 65 and older, according to the American Academy of Dermatology (AAD). There are several types of eczema, including:
Atopic dermatitis: The most common form of eczema, atopic dermatitis is caused by a weakened natural barrier of the skin, leaving you more vulnerable to irritants and allergens. Atopic dermatitis can be caused by environmental factors, a weakened immune system, or genetics.
Contact dermatitis: Contact dermatitis can be caused by an allergic reaction to something you touch or by chemicals and harsh substances you may come into contact with. This can be caused by certain cleaning products (like bleach), poison ivy, skin care products, latex, or nickel metal.
Hand eczema: Hand eczema, as its name suggests, is eczema that only affects the hands. It can often be caused by cleaning products, hair products, or laundry products.
Neurodermatitis: The cause of neurodermatitis is unknown. It can occur along with chronic skin conditions and may be triggered by stress. The irritated area becomes itchier as it is scratched, leading to wounds or skin infections.
Nummular eczema: Nummular eczema describes a skin condition that results in itchy, coin-shaped spots on the skin. These spots can become crusty, scaly, or leak fluid. Nummular eczema can be caused by irritation from a bug bite, an allergic reaction, or excessively dry skin.
Stasis dermatitis: According to the AAD, about 15-20 million people above the age of 50 live with stasis dermatitis. Stasis dermatitis results in affected skin that is rough, itchy, and red around varicose veins. Stasis dermatitis usually occurs due to poor blood flow in the legs. This skin condition can worsen and cause adverse side effects such as wounds, discoloration, and pain. | <urn:uuid:c2483acb-a58c-4881-943b-ded3f1e22734> | CC-MAIN-2023-50 | https://sesamecare.com/rio-grande-city-tx/specialty/dermatologist | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.938026 | 434 | 3.453125 | 3 |
Personal Corporation Number — I will walk you through how to create an entity of your own free will that you can control alongside, or instead of, carrying the weight of the PERSON you were assigned with your BIRTH CERTIFICATE and SOCIAL SECURITY ADMINISTRATION number.
1 a : being, existence; especially : independent, separate, or self-contained existence
b : the existence of a thing as contrasted with its attributes
2 : something that has separate and distinct existence and objective or conceptual reality
3 : an organization (such as a business or governmental unit) that has an identity separate from those of its members
Entification - (v.) "The action of giving objective existence to something"
An entity is something that exists as itself, as a subject or as an object, actually or potentially, concretely or abstractly, physically or not. It need not be of material existence. In particular, abstractions and legal fictions are usually regarded as entities.
"Entities" are simply "things that are." Whether they obtain a "conceptual reality," "distinct existence," or any level of "personification" is entirely up to the creator.
Entities have been created by people since ancient times and it has been a hidden subject up until now. The reason why it should seem appropriate to expose this truth now is that we have unseen friends and enemies. There is a realm of actual beings that the human eye is not capable of seeing. Some call this another dimension, but let's not get into that. The plain English truth is that the human eye is only able to see a fraction of the spectrum of light. Which is to say that there are PHYSICAL things and beings that we just cannot see! Therefore, let us leave the whole dimensions subject alone.
Entities are seemingly sentient, subjectively experienced as a separate being with their own agency, emotions, preferences, thoughts, and character. They can be likened to a separate mental consciousness, existing alongside its creator. Entities are the product of intentional creation, starting with an idea of their characteristics, and developed and made capable of meaningful interaction through meditation, focus, and practice.
So then, after you have finished with all of the details for building your own entity or entities, it is time to bring it to life and send it on its way to manifest your heart's desires.
Last Updated on June 6, 2020 by Amit Abhishek
There are different ways to perform the boiler flame failure test, but the most common of them is to gently take out the flame eye to get an alarm and verify that the trip is working.
It is part of the periodic maintenance and checks to verify that all the boiler safety systems are working.
The normal procedure for any boiler, in the event of flame failure, is to shut down and raise an alarm. It is a trip that helps prevent the accumulation of fuel in the furnace chamber.
Thus it helps avoid a source of potential explosion when the boiler is fired again. When a flame failure does take place, it is the responsibility of the operating personnel to take immediate action.
The flame failure trip in a typical boiler can be due to one of the following reasons: loss of ignition; fuel valve closed, unstable flame, low airflow, very high furnace pressure, dirty flame sensor and loss of electrical supply.
It has been seen time and again that a great number of explosions happen due to a lack of proper purging or a faulty flame sensor.
This is why it is more than vital to test and ensure the working of the flame failure alarm and emergency trip.
Before we get straight into these methods of testing, one must also know what a flame failure device is and how it works in a marine boiler.
What Is A Flame Failure Device?
A flame failure device or FFD is a safety device used around the world in industrial boilers and burners. It cuts off the fuel supply, whether gas or fuel oil, thus avoiding hazardous situations.
You can consider it as a safe trip that restricts the flow of fuel to the burner in the event flame goes off unexpectedly.
It is so essential for one's safety and wellbeing that it is considered illegal to operate an industrial or domestic burner without having some form of flame failure device preinstalled.
In essence, a flame failure device is nothing but a photoelectric diode; one that converts light energy to electrical energy. These again come in different types, for example, IR or UV.
One of the key benefits of a flame failure device is that your burner does not call for fuel without a proper flame.
Under normal conditions, the flame sensor detects the presence of a flame and sends electrical signals to the flame detector circuit. This then sends signals to keep the fuel valve open.
If the sensor does not detect a proper flame for 5 to 10 seconds, it raises an alarm and calls for an emergency shutdown. The burner can be ignited again only once the trip is reset.
How Does A Flame Detector Work?
There are different types of flame detectors used in burners; of which two are most common: Ionization current flame detector and photocell operated flame detector.
An ionization type flame detector consists of a bimetallic strip or a rod insulated with a ceramic mix. It measures the intensity of ionization by the flame in the form of a weak DC signal.
Now, this weak signal is first amplified and sent to the central controller which operates the fuel valve. Such a type of fire sensor is mostly fitted in household furnaces or burners.
Modern industrial and marine boilers utilize different types of flame sensing systems, often called the "Flame Eye".
It is an electronic system consisting of a photocell, an electronic controller and a preprogrammed unit. Together they control the air blower, fuel ignition and solenoid operated fuel valve.
For those who do not know or have forgotten what a photocell is: it is a type of resistor that changes its resistance based on the intensity of light falling onto it.
Now, in a marine boiler, when the intensity of light falling onto the sensor stops or reduces significantly, the conductivity of the resistor drops, the actuating current to the solenoid stops, and the fuel valve closes.
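To make the trip sequence concrete, here is a minimal sketch of the flame-failure watchdog logic in Python. The timeout, threshold, and I/O functions are illustrative assumptions for this sketch only, not values or interfaces from any particular boiler control system.

```python
import time

FLAME_TIMEOUT_S = 8.0   # assumed trip delay; the text cites roughly 5-10 seconds
LIGHT_THRESHOLD = 0.4   # assumed normalized photocell level meaning "flame present"

def read_photocell() -> float:
    """Placeholder for the flame eye: return light intensity in the range 0.0-1.0."""
    raise NotImplementedError

def close_fuel_valve() -> None:
    """Placeholder: de-energize the solenoid so the fuel valve shuts."""

def flame_watchdog() -> None:
    last_flame_seen = time.monotonic()
    while True:
        if read_photocell() >= LIGHT_THRESHOLD:
            last_flame_seen = time.monotonic()    # flame present: reset the timer
        elif time.monotonic() - last_flame_seen > FLAME_TIMEOUT_S:
            close_fuel_valve()                    # stop fuel before it accumulates
            print("ALARM: flame failure, emergency shutdown, manual reset required")
            break                                 # burner stays locked out until the trip is reset
        time.sleep(0.1)
```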
Methods To Perform Boiler Flame Failure Test
A boiler flame failure test can be conducted on a ship or in industry in three major ways. In one method we simply test the electrical conductivity across the terminals, while in the other two we actually perform a flame failure trip.
In the first method, we need to check for continuity at both ends of the electrodes of a photocell. Make sure it is tested only in the presence of a light source.
Now, if continuity is found, it is proven that the flame detector is working properly, and there is no need to start and trip the boiler unnecessarily.
In the second method, first fire the boiler normally and let it operate for 1-5 minutes. Then gently take the flame eye sensor out of the boiler and cover it with a wrap, paper, or cloth.
You will notice that within seconds of pulling out the flame eye, an alarm sounds along with an emergency shutdown. This indicates that the flame failure trip is working satisfactorily.
In the third and final method, we first start and run the boiler at high flame. After a few minutes of normal operation, we slowly throttle the fuel valve (non-solenoid/manual) to the stop position.
You will notice that within a few seconds an alert appears on your boiler control panel, followed by the emergency shutdown. This indicates everything is in order and the test has concluded successfully.
- Marine Boilers Safety Devices (Alarms and Trips)
- Flame Arrester – It’s Working & Why Is It Required
- Boiler Mountings And Their Function – Complete List
- What are BFP ( Boiler Feed Pump ) – Parts & Working | <urn:uuid:73300884-b843-4f9b-a4e5-9796822bdd40> | CC-MAIN-2023-50 | https://shipfever.com/boiler-flame-failure-test/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.929484 | 1,153 | 2.953125 | 3 |
A lottery is a game of chance where winning prizes are drawn through random selections. These games are popular in many countries, and are sometimes even run by the government as a means of raising tax revenues.
The term “lottery” was first used in the English language around 1569. Initially, it was used to describe a lottery of tickets that were distributed at dinner parties, but the word can also be applied to other forms of gambling where multiple people buy a ticket for a small sum of money in hopes of winning a large amount of cash.
Lotteries are a form of gambling that is similar to sports betting, except that the chances of winning are much higher. There are several types of lottery games, each with its own rules and odds.
Most lotteries involve a draw of numbers and a prize is awarded to the person who has the most matches. This can be in the form of a jackpot, which increases as more numbers are chosen. Alternatively, there are smaller jackpots that are paid out in regular intervals over a period of time.
In the United States, federal and state lotteries are the largest players in the lottery market. This is a good thing because it means that every American has an equal opportunity to play the lottery.
When a lottery is started, it usually starts with a small number of games and eventually expands to include more and more games as revenues rise. This can be a good thing for the public, as it can lead to new and interesting games that may attract people to the lottery.
However, there are also concerns about the impact of lotteries on society. These concerns range from issues of promoting gambling to the negative consequences that can result from it. They also include the problem of problem gamblers and alleged regressive effects on lower income groups.
These concerns are often rooted in the fact that lotteries can be deceptive in their advertising, making it difficult for people to judge whether they are actually getting value for their money. They also have the tendency to overstate the odds of winning the jackpot, and the prizes are often eroded by inflation and taxes.
This can make the lottery a poor financial choice for people who have trouble saving or are financially struggling. In addition, there are a variety of other financial problems that can arise from playing the lottery, including the potential tax implications and the risk of being bankrupt in a short amount of time if you win.
The most important way to avoid the pitfalls of lottery is to build up a strong emergency fund before you spend any money on it. This can help to prevent you from becoming a debt slave and wasting your hard-earned money on lottery tickets.
If you have any questions about lotteries or need help deciding whether to play, we encourage you to contact your local or state government office. These offices can be found on the government website or by calling them directly. They will be able to provide you with more information about the lottery and the laws regarding it. | <urn:uuid:9912cf6d-a1d6-4110-8149-daea8b817436> | CC-MAIN-2023-50 | https://shonnsshotgun.com/what-is-a-lottery-4/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.980516 | 609 | 3 | 3 |
Grade 3 Mental Maths | Activities & Games | Version 3
Mental maths is vital for students to improve in Mathematics. This resource has addition, subtraction, multiplication, algebra, reading numbers, number patterns, addition by 10s, doubling & halving. Mental math needs to be done daily and this resource helps! These grade 3 mental math games & activities ensure students get important repetition of a number of mathematical concepts. It’s a great resource to use for daily mental maths. These activities are engaging and are a great way to get your students involved in mental maths! Click here to preview.
Math concepts included: Four operations, Algebra, Reading Numbers, Number Patterns, Addition by 10s, Doubling / Halving
I hope you find this resource helpful! | <urn:uuid:c6f70423-ab63-40c8-bfac-6960d0979900> | CC-MAIN-2023-50 | https://slamboresources.com/product/grade-3-mental-maths-activities-games-version-3/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.912136 | 159 | 3.984375 | 4 |
Maintaining good oral health is crucial for a healthy body and mind. Dental procedures such as fillings, extractions, root canals, crowns, and implants are necessary to preserve and protect teeth. While some people may be tempted to try these procedures themselves, it is essential to seek the help of a professional dentist.
Fillings are commonly used to treat cavities. A cavity occurs when bacteria in the mouth produce acid that erodes the enamel on teeth, creating a hole or cavity. Fillings involve removing the damaged portion of the tooth and replacing it with a filling material such as composite resin, porcelain, or amalgam. A professional dentist has the necessary skills and equipment to remove the decayed portion of the tooth without causing further damage to the tooth.
Extractions are necessary when a tooth is severely damaged or infected beyond repair. In some cases, teeth may need to be extracted to make room for orthodontic treatment or to prevent overcrowding. Trying to remove a tooth at home can result in a host of complications, including infection, excessive bleeding, and damage to surrounding teeth and tissues. A professional dentist can remove the tooth safely and painlessly and can provide options for replacing the missing tooth.
Root canals are used to treat infected or damaged pulp inside a tooth. The pulp is the soft tissue inside the tooth that contains nerves and blood vessels. When the pulp becomes infected or damaged, it can cause severe pain and can lead to abscesses and other complications. During a root canal procedure, the dentist removes the damaged pulp, cleans the inside of the tooth, and seals it with a filling. While root canals have a reputation for being painful, modern techniques and anesthesia make the procedure relatively painless. Attempting a root canal at home is dangerous and can lead to serious complications.
Crowns are used to restore damaged or decayed teeth. A crown is a cap that covers the entire tooth, providing protection and restoring the tooth’s function. Crowns are typically made from materials such as porcelain, ceramic, or metal. A professional dentist can create a custom-made crown that fits perfectly and blends in with the surrounding teeth. Trying to create a crown at home is not only difficult but can lead to improper fitting and further damage to the tooth.
Implants are a popular option for replacing missing teeth. An implant is a metal post that is surgically placed into the jawbone. The post serves as a foundation for a replacement tooth, such as a crown or bridge. Implants are a permanent solution to missing teeth and can last a lifetime with proper care. Attempting to place an implant at home is not only dangerous but can lead to implant failure and other complications.
In conclusion, dental procedures such as fillings, extractions, root canals, crowns, and implants are essential for maintaining good oral health. While some people may be tempted to try these procedures themselves, it is crucial to seek the help of a professional dentist. Attempting to perform dental procedures at home can lead to serious complications, including infection, excessive bleeding, and damage to surrounding teeth and tissues. A professional dentist has the necessary skills, knowledge, and equipment to perform these procedures safely and painlessly, ensuring the best possible outcome for the patient. Remember, it is always better to seek the help of a professional dentist for all your dental needs.
After the grapes are sorted, they are ready to be de-stemmed and crushed. For many years, men and women did this manually by stomping the grapes with their feet. Nowadays, most winemakers perform this mechanically. Mechanical presses stomp or tread the grapes into what is called must. Must is simply freshly pressed grape juice that contains the skins, seeds, and solids. Mechanical pressing has brought tremendous sanitary gains as well as increased the longevity and quality of the wine.
For white wine, the winemaker will quickly crush and press the grapes in order to separate the juice from the skins, seeds, and solids. This is to prevent unwanted colour and tannins from leaching into the wine. Red wine, on the other hand, is left in contact with the skins to acquire flavour, colour, and additional tannins.
After crushing and pressing, fermentation comes into play. Must (or juice) can begin fermenting naturally within 6-12 hours when aided with wild yeasts in the air. However, many winemakers intervene and add commercially cultured yeast to ensure consistency and predict the end result.
Fermentation continues until all of the sugar is converted into alcohol and dry wine is produced. To create a sweet wine, winemakers will sometimes stop the process before all of the sugar is converted. Fermentation can take anywhere from 10 days to one month or more.
Once fermentation is complete, clarification begins. Clarification is the process in which solids such as dead yeast cells, tannins, and proteins are removed. Wine is transferred or “racked” into a different vessel such as an oak barrel or a stainless steel tank. Wine can then be clarified through fining or filtration.
Fining occurs when substances are added to the wine to clarify it. For example, a winemaker might add a substance such as clay that the unwanted particles will adhere to. This will force them to the bottom of the tank. Filtration occurs by using a filter to capture the larger particles in the wine. The clarified wine is then racked into another vessel and prepared for bottling or future ageing. | <urn:uuid:b317b862-1c7e-4529-a355-7899b400cb16> | CC-MAIN-2023-50 | https://spotswoodwines.co.za/crushing-and-pressing/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.95973 | 439 | 3.28125 | 3 |
While playing a game of chess, there will be occasions when you may find yourself running out of chess pieces. During such instances, one may think about playing a more offensive style by involving the king. At the latter stages of a match, there may arise a situation when you may be tempted to attack the rival king with your own king. So, the question is – ‘Can a king kill a king in chess?’
One may think that going after the opponent’s king with your own king is the ideal platform for a decisive victory. However, this is not as simple as it looks from the outside. In fact, the game’s rules make it rather tricky to go ahead with such a game plan. In this Square Off article, we shall go over the circumstances under which a king can be used offensively to attack rival chess pieces.
Chess Rules Concerning the King
As its name suggests, the chess ‘king’ is the most important chess piece. This piece is central to the game, and its existence can be traced to the very earliest days of chess. The chess king is known as the ‘Shah’ in Persian and has different names in different languages. An entire game of chess revolves around the centrality of the king under constant threat from the opposition.
As many of you have noticed while playing chess, the king rarely moves out of its position in the match’s early stages. It is only during the middle and the latter phases that it enters the gameplay.
Can a King Kill a King in Chess?
To return to the day’s topic – ‘Can kings kill a king in chess? The direct answer will be a ‘no’. A better way to define a situation when you are about to finish a chess match is by using the term ‘capture’. A chess king can capture an enemy chess piece one block in any given direction. However, a king can accomplish this task only if it is not allowing itself in check or expose a discovered attack to do likewise.
Throughout the progress of a chess match, two kings are routinely manoeuvred to be at a safe distance from each other. In other words, chess players from the word ‘go’ try to avoid their kings from meeting each other on the chess board. However, this cannot happen all the time, and there are occasions when two rival kings can get too close to each other.
When a king faces the opposite king, it is termed as a ‘direct opposition’. Two other variations of this case also exist in chess; one is called ‘diagonal opposition’, and the other is known as ‘distant opposition’.
For a king to kill another king, they have to be in close contact with one another. In a chess match, such a situation arises when the game reaches the final stages. At this point, both players are usually left with very few chess pieces. Both players have no choice but to engage their respective kings in the gameplay to eke out a win. On many occasions, the two kings have just a handful of pawns to support them on the chessboard.
Can a King be Next to a King in Chess?
On a chessboard, two rival kings can never move directly adjacent to each other. The rules of the game state very clearly that two kings can never create a mutual impediment on the chessboard. When a tight blockade is set up on the chess board in situations like this, the chess player who gets the chance to not make a move is said to ‘have the opposition’.
The player mandated to make a move at such a juncture is said to be at a disadvantage. In chess jargon, this condition is called a ‘zugzwang’, which is German for ‘compulsion to move’.
Can the King Kill in Chess When in Check?
Yes, the king can kill a rival chess piece at any game stage, even if it is in check. The only thing to consider is whether the rival piece attempting to check the king is supported by another rival piece. Suppose an opposition chess piece comes to check your king without the backup of any other rival chess piece; then, you will be free to capture it.
The piece you are being checked with could be a queen, a rook, a bishop, a knight, or a pawn. If left unguarded, your king will have the liberty to defend itself by capturing any piece that approaches to check it.
Can a King Kill Diagonally in Chess?
The king in chess can move only one square/tile/block in any direction. Similarly, the king can capture a rival chess piece in any order, one block at a time.
A king can capture a rival chess piece – forwards, backwards, sideways, and diagonally, only when the captured piece is not defended by another rival piece.
What Can Kill a King in Chess?
A king can be killed or captured by any given opposition chess piece during gameplay in a chess game. To end a chess match, any chess piece can strike a decisive blow on the rival king, from the pawn to the queen.
In the case of a pawn, it has to be nearest to the rival king to corner it. So, it needs to be backed up by another chess piece of the same colour. If that is not the case, the king will have the power to capture the pawn.
Get Great Deals at Square Off Website
Now that you have read about today’s topic – ‘Can a king kill a king in chess?’, check out the Square Off website for more informative chess blogs. Visit the website today and find great deals on your favourite AI-powered automated chess boards!
Online Chess with Square Off
You can now watch professional chess tournaments live right here, with Square Off Live and learn master strategies as you stream them and you can also test your chess prowess and improve your strategies with Square Off Puzzles as you take on these challenging puzzles with varying difficulty levels | <urn:uuid:38b3591e-aaba-4b75-91c0-5e3ab8fc4745> | CC-MAIN-2023-50 | https://squareoffnow.com/blog/can-a-king-kill-a-king-in-chess/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.954802 | 1,253 | 2.703125 | 3 |
The declaration of disbelief (fatwa of Kufr), that is declaring certain groups outside the pale of Islam has been commonplace in the history of Islam after the decline of its glory. In fact, these declarations have contributed immensely to the decline of Islam because of the enmity and fight it created among the different groups declaring each other as kuffaar (disbelievers). The Gambia has not been immune to such declarations. Today on the face of the earth, there is not a single Muslim group that has not been declared kaafir by some other Muslim group. The logical conclusion that can be drawn if these declarations are authentic is that there is no Muslim on the face of the earth because of the declarations and counter-declarations of disbelief against each other. The questions that every sincere Muslim must ask is who owns Islam and has the authority to accept someone into it or expel him out of it? What counts someone inside the pale of Islam? Who is a Muslim according to the Authority of Islam (and Who is that Authority)?
Islam was brought by Muhammad Rasoolullaah (peace and blessings of Allah be upon him) over 1400 years ago.
Let it be noted that technically, all prophets were Muslims and came with Islam. But Islam as we know and practice on the basis prescribed in the Holy Qur’an and the Sunnah was brought by Muhammad Rasoolullaah (peace and blessings of Allah be upon him). The primary source of the teachings of Islam is the Holy Quran, which was revealed by Allah the Almighty. All Islamic teachings and practices are embedded in the Holy Qur’an. The second most authentic source of Islamic teachings is the Sunnah of Muhammad Rasoolullaah (pbuh). In fact, his Sunnah is intertwined with the teachings of the Holy Qur’an. There is no teaching of the Holy Qur’an that cannot be found in his practice and he practiced nothing that cannot be found in the Holy Qur’an. That is why on answering the question about the behaviour, attitude and morality of Muhammad Rasoolullaah (saw), the Mother of the Faithful, Hadrat Aisha (may Allah be pleased with her) said that his morality, behaviour and attitude was the Holy Qur’an.
If indeed Islam is embedded in the Holy Qur’an and practiced by Muhammad Rasoolullaah (pbuh), then it could be concluded that the answers to the above questions must have been answered by the Holy Qur’an and Muhammad Rasoolullaah (pbuh). In other words, the Authority to declare who belongs or not to Islam is the Holy Qur’an and Muhammad Rasoolullaah (pbuh). Any declarations that contradict their declarations must be considered null and void and in fact an attack on their authority. If, apart from the Holy Qur’an and Muhammad Rasoolullaah (saw), any person has the authority to make such declarations, then Islam clearly has no right to claim that it is based on the teachings of the Holy Qur’an and the Sunnah of Muhammad Rasoolullah (pbuh); in that case Islam would be a man-made religion, with individuals, organisations, and institutions having the authority to declare what it is or not. As a believer of the pristine teachings of Islam, I consider any behaviour that portrays such identity of Islam, to be very unjust to it.
Now, if the Holy Qur’an and Muhammad Rasoolullaah (pbuh) are the fundamentals on which every decision regarding Islam should be based, what are the answers they provide to the above questions? The Holy Qur’an says:
O ye who believe, when you go forth in the cause of Allah, make proper investigation and do not say to anyone who greets you with the greeting of peace, ‘Thou art not a believer.’ You seek the goods of this life, but with Allah are good things in plenty. Such were you before this, and Allah conferred His special favour on you; so do make proper investigation. Surely, Allah is well aware of what you do (Surah An-Nisaa: Verse 95)
This verse is mentioned in the context of war. Even in such situations, the Holy Qur’an prohibits one to call another a non-believer without proper investigation. Something important is mentioned in this verse: anyone who greets you with the greeting of peace. It does not even say anyone who calls himself a Muslim. A Muslim should not be declaring people, who greet him with the greeting of peace, as non-believers even in a state war. Then how could one be allowed to call someone a non-believer, who does not only greet you with the greeting of peace but goes further to declare that he is a Muslim? It is wrong according to the Holy Qur’an and the Sunnah. Here, let me give an example from the Sunnah of Muhammad Rasoolullah (pbuh) that throws more light on this issue. We read in Sahih Bukhari, the book considered by Muslims as the most authentic after the Holy Qur’an:
Hadhrat Usama bin Zaid (ra) relates that the Holy Prophet (pbuh) sent us to the oasis of Juhaina tribe. We caught them early in the morning at their water-fountains. An Ansari and I chased one of them and apprehended him. When we overpowered him, he exclaimed: La Ilaha Illallah (there is none worthy of worship except Allah) which caused my Ansari Companion to restrain his hand from him, but I pierced him with a spear and killed him. When we returned to Medina and the Holy Prophet (pbuh) came to know of the incident, he asked: “O, Usama! Did you kill him in spite of the fact that he had recited La Ilaha Illallah?” I replied: “O, Prophet of Allah! He was saying (these words) merely to ensure his safety.” The Holy Prophet (pbuh) kept on repeating his question to a point when I wished I had not become a Muslim before that day. (Another tradition relates) The Holy Prophet (pbuh) said, “You still killed him, even though he had affirmed La Ilaha Illallah?” I clarified, “O, Prophet of Allah! He had said that because he was afraid of the weapon.” The Holy Prophet (pbuh) exclaimed: “Why didn’t you cut his heart open to make sure if he had said it from the core of his heart?” The Prophet of Allah repeated this remark so many times that I wished I had not become a Muslim before that day.”
(Bukhari, Book of Al Maghaazi, Chapter Ba’ath al-Nabi, Usaamah bin Zaid ilal Harqaat min al-Juhaina)
The readers are also referred to: Sahih Muslim, Kitaabul Imaani, Baabu Tahriimi Qatlil Kaafri Ba’da Qaulihi: Laa Ilaaha IllaAllah.
Muhammad Rasoolullaah condemned the killing of an individual in his last moments, during a skirmish with Muslims, who declared the Kalima There is no god but Allah. He did not even go to the extent of declaring that Muhammad (pbuh) was the Messenger of Allah. Muhammad Rasoolullaah (pbuh) emphasized in this hadith that belief is a matter of the heart and only Allah knows what lies in a person’s heart. A person must therefore be judged on what he says and does; not what one thinks lies in his heart.
If this is the position of Muhammad Rasoolullaah (pbuh), then who else has the mandate and authority to declare someone a disbeliever who does not only declare la Ilaha Illallaah but further declares Muhammad Rasoolullaah (pbuh)? Going contrary to this position of Muhammad Rasoolullaah (pbuh) could mean that one believes he loves Islam more than Muhammad Rasoolullaah (pbuh) or believes that this position of Muhammad Rasoolullaah is deficient. Na’uudhu billaah, may Allah protect us from any such action.
If Muhammad Rasoolullaah (pbuh) condemns the killing and thought of disbelief about a person who only declared there is no god but Allah when he was confronted by a Muslim with a sword in hand, what would be his position regarding the one who does not only stop at declaring the kalima but goes further to practice other pillars of Islam particularly Salaat which is considered as the distinction between a believer and disbelief? Of course, we do not have to speculate the answer; Muhammad Rasoolullaah has provided us the answer. This Hadith is also narrated in Sahih Al-Bukhari that the Holy Prophet Muhammad Rasoolullaah (pbuh) said:
One who observes the same prayer as we do, faces the same Qibla (in prayer) as we do, and partakes from the animal slaughtered by us, then such a one is a Muslim concerning whom there is a covenant of Allah and His Messenger; so you must not seek to hoodwink Allah in the matter of this Covenant. (Bukhari, Kitabus-Salat, Baab Fazl Istiqbal il-Qibla)
It is wrong to call such a person a non-Muslim. Doing so is a clear insult to the status of Muhammad Rasoolullaah (pbuh).
In fact, Muhammad Rasoolullaah (pbuh) has said that anyone who calls himself a Muslim should be considered a Muslim. The following statements will prove the point. When a census of Madinah was being conducted, Muhammad Rasoolullaah (pbuh) instructed:
Write down for me the name of every such individual who claims to be a Muslim by the word of his own mouth (Sahih Bukhari, Kitaabul Jihaadi Was Siyar, Baabu Kitaabatil Imaaminnaasa).
You are also referred to Sahih Muslim, Kitabul Iman, Babu Jawazil Istisrari Bil Imani Lil Kha’ifi.
This is the man about who the Holy Qur’an says that he never speaks of his own desire, whatever he says is a pure revelation from the Most High, Allah the Almighty. These statements of Muhammad Rasoolullaah (pbuh) clearly tells us that to be counted as a member of Islam the religion, it is enough to call oneself a Muslim. The veracity of the claim now rests with Allah to judge. There are technical definitions of a Muslim but these relate to the heart and the ultimate Judge of who fulfills the technical definition of Islam is none other than Allah. That is not a human domain; it is purely the domain of Allah.
Based on these statements of Muhammad Rasoolullaah (pbuh) it is sinful to refer to a declarant of the kalima as a kaafir. In fact, he has instructed Muslims not to do that. Here are a few Ahadith regarding that:
“Three things are the basis of faith. [One is] to withhold from one who says `There is no god but Allah’ — do not call him kafir for any sin, nor expel him from Islam for any misconduct.” (Abu Dawud, Book of Jihad).
“Withhold [your tongues] from those who say `There is no god but Allah’ — do not call them kafir. Whoever calls a reciter of `There is no god but Allah’ as a kafir, is nearer to being a kafir himself.” (Tabarani, reported from Ibn Umar).
So many references regarding this issue could be cited but I believe these are sufficient. I look forward to the answers of the Gambia Supreme Islamic Council to the following questions.
Now my question to the ‘scholars’ of Islam in The Gambia, particularly the scholars of the Gambia Supreme Islamic Council: What is their basis, from the clear statements of the Holy Qur’an and the Sunnah of Muhammad Rasoolullaah, in declaring fatawa of kufr? Where does the Holy Qur’an and the Sunnah give authority to a human being or an organization to declare a declarant of the kalima as a kaafir? Who owns Islam and has the authority to make such declarations? Can they back their position in contradistinction to the above facts from the life of Muhammad Rasoolullaah (pbuh)? Can they cite one example from the life of Muhammad Rasoolullaah (pbuh) where he calls a declarant of the kalima a non-believer?
I must conclude with a quote. A general quote that can refer to anyone.
“Claiming wisdom in the presence of God, only shows your ignorance and insanity.”
Our last call is: All praise belongs to Allah, the Lord of the universe; and peace be upon those who follow the guidance from Allah. Aameen. | <urn:uuid:6ad16fe3-ed23-4928-a4cc-e98dd786b9ab> | CC-MAIN-2023-50 | https://standard.gm/owns-islam-authority-accept-someone-expel-question-scholars-gambia-supreme-islamic-council/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.954902 | 2,748 | 3 | 3 |
Commercial and industrial establishments produce tons of waste material and by-products in their daily operations. However, waste management may not be among a company’s prioritised operations, as it’s typically seen as a chore and an extra cost that doesn’t provide any profit. But numerous companies have shown that proper waste management can be beneficial to both the company and the environment.
Proper waste management can cut a company’s costs through recycling materials and even wastewater. Recycled materials and wastewater lead to lower production costs, thus allowing the company to produce at lower prices or higher profit margins. Additionally, a company that’s known to properly manage its waste and practice recycling is seen as environmentally responsible or “green,” which helps the company’s image, attracting customers and motivating employees.
So let’s take a look at tips and strategies your company could employ to properly manage your business’ wastes and reap the benefits of proper waste management:
Know Your Waste
It’s possible that your company isn’t entirely aware of how much and what kind of wastes your operations produce, minding only how much you’ve been paying for waste disposal and transportation. But to improve your company’s waste management plan and come up with waste management solutions, you’ll want to account for the volume and types of waste you regularly produce. Check out each part of your operations, see what wastes they produce, and determine the types of wastes you can recycle. You should also look into how much water your company uses and how much wastewater it produces.
After inspecting your company’s production processes and operations that produce waste, you’ll be able to identify the most common wastes being produced and could perhaps streamline or improve your operations to reduce wastes. For example, if you see that your office operations produce volumes of paper waste on reports and documents that don’t necessarily need to be printed out, revise your company’s paper-and-paperless policies to limit paper wastes. The bottom line is that you need to identify areas of your operations to reduce excessive and wasteful use of raw materials and goods.
Once you’ve identified and accounted for all the wastes your company produces, you’ll have to determine which ones can be recycled. If your company produces large volumes of wastewater in its operations, see if an on-site water recycling system could be installed and integrated into your operations to reuse wastewater and cut the costs of fresh water and of transporting wastewater. If your company produces a lot of solid wastes (that are not toxic, biohazardous, or chemical), such as wiring, wood chips, or metal shavings, try to find ways in which you can reuse these wastes in other products or as spare parts for repairs, to cut costs and reduce the waste going to landfills.
Partner With A Waste Management Company
If you lack the expertise, time, or manpower for waste management, you can always partner with a waste management company to help you out with dealing with your company’s waste. That way, your wastes are professionally and legally taken care of, and they can sometimes help you identify ways to recycle or reduce your waste, as well as help you develop your company’s waste management plan.
Proper waste management could be advantageous to your company, environment, and society. So make sure to keep these basic tips and strategies in mind when developing and improving your company’s waste management planning and practices. | <urn:uuid:ae1b346c-f4bb-479c-8f06-66e383ae20b5> | CC-MAIN-2023-50 | https://startsavingoninsurance.com/tips-in-managing-your-business-wastes/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.943555 | 724 | 2.53125 | 3 |
- There are two layers of tax on investment income. First, corporations pay the corporate income tax on their profits. Second, shareholders pay an income tax on the dividends they receive (dividends tax) and capital gains they realize (capital gains tax).
- On average, in the OECD, long-term capital gains from the sale of shares are taxed at a top rate of 19.1 percent, and dividends are taxed at a top rate of 24.4 percent.
- To encourage long-term retirement savings, countries commonly provide tax preferences for private retirement accounts. These usually provide a tax exemption for the initial principal investment amount and/or for the investment returns.
- Tax-preferred private retirement accounts often have complex rules and limitations. Universal savings accounts could be a simpler alternative—or addition—to many countries’ current systems of private retirement savings accounts.
Long-term savings and investment play an important role in individuals’ financial stability and the economy overall. Taxes often impact whether, and what share of, income individuals set aside for savings and investments. There are various factors that determine the amount of taxes one is required to pay on these savings and investments, such as the type of asset, the individual’s income level, the period over which the asset has been held, and the savings purpose.
While long-term savings and investment can come in many forms, this paper generally focuses on the tax treatment of stocks in publicly traded companies. Each OECD country approaches the taxation of stocks differently, but most countries levy some form of capital gains and dividend taxes on individuals’ income from owning stocks. Capital gains and dividend taxes are levied after corporate income taxes are paid on profits at the entity level, and thus constitute a second layer of taxation.
However, lawmakers have recognized the need to incentivize long-term savings—particularly when it comes to private retirement savings. Thus, OECD countries commonly provide tax preferences for individuals who save and invest within dedicated private retirement accounts—usually by exempting the initial principal investment amount or the investment returns from tax. These tax-preferred private retirement accounts play a significant role when looking at an economy’s total savings and investments. For example, in the United States, about 30 percent of total U.S. equity is held in tax-preferred retirement accounts. Foreigners hold 40 percent of U.S. equity, and only about 25 percent is estimated to be in taxable accounts.
This paper will first explain how dividends and capital gains taxes impact one’s investment income, and how tax-preferred private retirement accounts lower the tax burden on such investments. Second, a survey of capital gains taxes, dividends taxes, as well as the tax treatment of private retirement accounts shows how the taxation of savings and investments differs across OECD countries. Finally, we briefly highlight the importance of simplicity when it comes to retirement savings and explain how universal savings accounts could be a step in that direction.
Understanding the Tax Treatment of Savings and Investment
Savings and investment can come in many forms. This paper focuses on saving in the form of owning stocks in publicly traded companies. Stocks provide two ways for investors to get income.
The first is by buying a stock and selling it later at a higher price. This results in a capital gain. An investor who buys a stock for $100 and later sells it for $110 has earned a $10 capital gain.
The second way to get income from stocks is to purchase stocks in companies that regularly pay out dividends to shareholders. A company that pays out annual dividends at $1 per share would provide an individual that owned 10 shares of that company $10 each year.
Two types of taxes apply to those different earnings: capital gains taxes and taxes on dividends, respectively. A capital gains tax applies to the $10 in gains the investor made, and a dividends tax applies to the $10 in dividends that were paid out.
Both taxes create a burden on savings. If an individual has a savings goal and needs an 8 percent total return on investment to reach that goal, a capital gains tax would require that individual’s actual return on investment to be higher than 8 percent to meet the goal. If the capital gains tax is 20 percent, then the individual’s before-tax return on investment would need to be 10 percent.
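As a quick check of that arithmetic, the required before-tax return is simply the target return grossed up by one minus the tax rate (a stylized one-period view, matching the example above):

```python
target_after_tax = 0.08   # the saver needs an 8% return to reach the goal
cgt_rate = 0.20           # 20% capital gains tax

required_before_tax = target_after_tax / (1 - cgt_rate)
print(f"{required_before_tax:.0%}")   # -> 10%
```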
Similarly, taxes on dividends reduce earnings for investors.
For workers who are investing their money after paying individual income taxes, taxes on capital gains and dividends represent an additional layer of tax on their earnings.
However, when it comes to retirement savings, governments regularly provide tax exemptions for either the wages used to contribute to a savings account or an exemption on the gains.
Table 1 shows there are four basic tax regimes for investors. The two dimensions of taxation concern the principal, or the initial deposit, and the returns to investment. Systems generally fall into one of the four categories in the table.
Some investments are taxed both on the initial principal and on the return. These include investments in brokerage accounts. For this type of investment there is usually no exemption or deduction for the initial cost of purchasing stocks and the income from the investment (whether a capital gain or a dividend) is taxable.
Private retirement savings, on the other hand, usually face an exemption from tax on the initial principal investment amount or on the returns to that investment. In the U.S. this is referred to either as “Traditional” or “Roth” treatment for Individual Retirement Arrangements (IRAs). With traditional treatment, there is no tax on the initial investment principal, but there is a tax on the total amount (principal plus gains) upon withdrawal. Roth treatment includes taxable principal investments and no tax upon withdrawal.
In the U.S., health savings accounts provide an exemption from tax both on the principal and the returns upon contribution as well as withdrawal, representing the fourth type of tax treatment on investment where neither the principal nor the returns are taxed at any point.
| | Tax on Principal Investment Amount | No Tax on Principal Investment Amount |
|---|---|---|
| Tax on Returns/Withdrawal | Individual Brokerage Accounts | Defined Benefit Pensions, Traditional IRAs, and 401(k)s |
| No Tax on Returns/Withdrawal | Roth IRAs and Roth 401(k)s | Health Savings Accounts |
The Multiple Layers of Taxes on Investment
Individual investors who save outside of a retirement account will face several layers of taxation. If an investor buys stock in a corporation, that company will owe the corporate income tax, and the investor will owe dividends tax on any dividend income or capital gains tax if the investor sells the stock at a higher price.
The following example shows how $47.47 in tax would apply to $100 in corporate profits when accounting for both corporate taxes and taxes on dividends. First, the corporation earns $100 in profits. If it is a U.S. company and faces the combined state and federal corporate income tax rate, it would pay $25.77 in corporate taxes on that income.
This leaves $74.23 available for a dividend. The shareholder would owe an additional $21.70 in dividend taxes.
From the $100 in profits, just $52.53 in after-tax profit remains for the shareholder in the form of a dividend.
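The figures in this example can be reproduced with a few lines of Python. The 25.77 percent corporate rate is the combined rate cited above; the 29.23 percent dividend rate is the top combined rate implied by the $21.70 figure, not a number stated explicitly in the text:

```python
profits = 100.00
corporate_rate = 0.2577   # combined U.S. federal and average state corporate rate
dividend_rate = 0.2923    # top combined dividend rate implied by the example

corporate_tax = profits * corporate_rate     # $25.77 paid at the entity level
dividend = profits - corporate_tax           # $74.23 available to distribute
dividend_tax = dividend * dividend_rate      # ~$21.70 paid by the shareholder
after_tax = dividend - dividend_tax          # ~$52.53 kept by the shareholder
total_tax = corporate_tax + dividend_tax     # ~$47.47 of tax on $100 of profit

print(round(corporate_tax, 2), round(dividend_tax, 2),
      round(after_tax, 2), round(total_tax, 2))
```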
In a similar way, the capital gains tax is an additional layer on corporate income.
However, some countries have integrated tax systems. This means that if a company pays corporate taxes on its profits, an investor can claim a (partial or full) credit against taxes on capital gains and dividends. This results in investors only paying taxes to the extent that capital gains or dividends tax liabilities are more than the (partial or full) credit for corporate taxes paid.
Tax Treatment of Private Retirement Accounts
Most individuals in OECD countries can utilize a tax-preferred savings account to build up individual retirement savings—often in addition to public pensions. Two general forms of tax treatment are the most common and fall into the categories discussed earlier.
One approach allows individuals to contribute to retirement accounts using money that has already been taxed as wages. However, returns on the investment and withdrawals from the account are tax-exempt. This is what is called a Taxed, Exempt, Exempt (or TEE) approach, referring to the policy’s treatment of contributions, returns on investment, and withdrawals from a retirement account. In the U.S., this is referred to as “Roth” treatment for retirement savings.
The other approach allows individuals to contribute to accounts with either pretax earnings or provide a tax deduction for contributions. Returns on the investment do not face tax, but withdrawals from the account (principal plus earnings) are taxed. This is called an Exempt, Exempt, Taxed (EET) approach. In the U.S., “Traditional” retirement vehicles follow this approach.
Figure 2 compares how these two preferences for retirement savings impact an investor and compares them to an investor who is saving outside a retirement account.
In each scenario, $1,000 is the initial deposit. In the first and second scenarios, a 20 percent tax applies to that initial deposit. Think of this as a tax on the wages that are being used to fund the investment.
So, right off the bat, scenarios 1 and 2 have $800 for investing. Scenario 3 does not include a tax on wages used for contributing to a retirement account and allows the full $1,000 to be invested because it is an EET approach (meaning that contributions are tax-exempt).
In each scenario, the investor leaves the funds in their investment account for 20 years and earns a 7 percent annual return. At the end of this period, both scenarios 1 and 2 have the same amount of money in their investment account, $3,095.75. Because scenario 3 started off with a larger initial deposit, that scenario has $3,869.68 in their investment account.
Now, when funds are withdrawn, taxes apply both to amounts withdrawn in scenario 1 and scenario 3, but not scenario 2. Scenario 2 operates as a TEE account, so withdrawals are exempt from tax.
Upon withdrawal, Scenario 1 pays a 20 percent tax on the gains (final amount minus the $800 initial investment). This results in final, after-tax earnings of $2,636.60. Scenario 2 does not owe taxes on gains or principal upon withdrawal; the final earnings are $3,095.75. Scenario 3 owes a 20 percent tax on the withdrawn amount which includes both the principal and gains—so the total withdrawal amount—and has final earnings of $3,095.75, the same as in Scenario 2.
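The three scenarios can be reproduced as follows (a minimal sketch using the example's stylized assumptions of a 20 percent tax rate and a 7 percent annual return over 20 years):

```python
def grow(principal, rate=0.07, years=20):
    return principal * (1 + rate) ** years

deposit, wage_tax, withdrawal_tax = 1000.0, 0.20, 0.20

# Scenario 1: fully taxable account - tax on the principal and on the gain.
p1 = deposit * (1 - wage_tax)              # $800.00 invested
v1 = grow(p1)                              # $3,095.75 before the withdrawal tax
s1 = v1 - (v1 - p1) * withdrawal_tax       # $2,636.60 after 20% tax on the gain

# Scenario 2: TEE ("Roth") - tax on the principal, none on withdrawal.
s2 = grow(deposit * (1 - wage_tax))        # $3,095.75

# Scenario 3: EET ("Traditional") - no tax on the principal, tax on withdrawal.
s3 = grow(deposit) * (1 - withdrawal_tax)  # $3,869.68 * 0.8 = $3,095.75

print(round(s1, 2), round(s2, 2), round(s3, 2))
```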
This example shows two things. First, because fully taxable accounts have more than one layer of taxes, they result in lower after-tax investment earnings. Second, if the tax rate on the principal in Scenario 2 and the tax rate on principal and gain upon withdrawal in Scenario 3 is the same, then the earnings from both will be equivalent.
The tax rates on deposit and withdrawal may not always be the same, however. Many tax systems have a progressive rate structure for wages which may mean an individual will be in a different tax bracket when the investment is made than when they have retired and begin making withdrawals.
If an individual faces a 30 percent tax rate when they invest, but a 15 percent tax rate when they withdraw their earnings, it would be advantageous to use an investment account as in Scenario 3.
Other Types of Tax-Preferred Savings Accounts
In addition to retirement accounts, some countries offer tax preferences for other savings purposes. Examples include savings for future education and health-related costs.
For example, the United States offers so-called “qualified tuition plans” for future education cost, also known as “529 plans.” Depending on the U.S. state and type of 529 plan, savers may be able to deduct contributions from state income tax or receive matching grants; gains are not subject to tax; and withdrawals are exempt from state and federal income tax.
Similarly, Canada offers a Registered Education Savings Plan (RESP), which exempts earnings as they accrue, and a government savings bonus is paid (earnings and bonus are taxed at the student’s tax rate upon withdrawal).
In the United States, there is also a Health Savings Account (HSA), which can be used to pay for qualified medical expenses. As shown in Table 1, contributions are made from pretax earnings, gains are tax-exempt, and withdrawals are not taxed either.
Survey of Capital Gains Taxes, Dividend Taxes, and Retirement Savings in OECD Countries
While most OECD countries levy some form of tax on savings and investment, the tax treatment differs not only between countries but also between types of investment income and savings purpose.
For example, the average top long-term capital gains tax rate in the OECD is 19.1 percent, while dividends face an average tax rate of 24.4 percent. When it comes to private retirement savings, the tax treatment as well as contribution limits also vary significantly.
Capital Gains Tax Rates
Many OECD countries tax capital gains at various rates depending on the holding period, the individual’s income level, and the type of asset sold.
Recognizing the importance of long-term savings, some OECD countries tax the gains from long-term savings at a lower capital gains tax rate than those from short-term savings. For example, in Slovenia, capital gains on the disposition of immovable property, shares, or other capital participations are taxed at 27.5 percent if held up to five years, at 20 percent if held between five and 10 years, at 15 percent if held between 10 and 15 years, at 10 percent if held between 15 and 20 years, and at 0 percent if held for more than 20 years.
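As a quick illustration, Slovenia's schedule can be written as a simple rate lookup. This is a hypothetical helper; how holdings falling exactly on a cutoff are treated is our assumption:

```python
# Encodes the Slovenian holding-period schedule described above.
def slovenia_cgt_rate(years_held: float) -> float:
    if years_held > 20:
        return 0.0      # exempt after 20 years
    if years_held > 15:
        return 0.10
    if years_held > 10:
        return 0.15
    if years_held > 5:
        return 0.20
    return 0.275        # held up to five years

print(slovenia_cgt_rate(7))   # 0.2
print(slovenia_cgt_rate(25))  # 0.0
```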
While some countries levy flat capital gains tax rates regardless of an individual’s income level, others include capital gains when calculating personal income taxes—which in most countries results in a progressive taxation of capital gains. Still other countries have a separate progressive capital gains tax structure. Some countries have an annual exempt amount for capital gains. For example, in the United Kingdom the first £12,300 (US $15,800) of realized capital gains are tax-free.
Many OECD countries exempt owner-occupied residential property from capital gains tax.
Table 2 shows the top marginal capital gains tax rates levied on individuals in the OECD, taking into account exemptions and surtaxes. If a country has more than one capital gains tax rate, the table shows the tax rate applying to the sale of listed shares after an extended period of time.
Denmark levies the highest top marginal capital gains tax on long-held shares in the OECD, at a rate of 42 percent. Chile’s top capital gains tax rate is the second highest, at 40 percent, followed by Finland and France, at 34 percent each.
Roughly one-fourth of all OECD countries do not levy capital gains taxes on the sale of long-held shares. These are Belgium, the Czech Republic, Korea, Luxembourg, New Zealand, Slovakia, Slovenia, Switzerland, and Turkey.
On average, long-term capital gains from the sale of shares are taxed at a top marginal rate of 19.1 percent in the OECD.
Table 2. Top Marginal Capital Gains Tax Rates on Individuals Owning Long-Held Listed Shares without Substantial Ownership (includes Exemptions and Surtaxes)

| OECD Country | Top Marginal Capital Gains Tax Rate | Additional Comments |
|---|---|---|
| Australia (AU) | 23.50% | Capital gains are subject to the normal PIT rate; there is a 50% exemption if the asset was held for at least 12 months. |
| Belgium (BE) | 0.00% | Capital gains are only taxed if they are regarded as professional income. |
| Canada (CA) | 26.75% | Capital gains are subject to the normal PIT rate, but only 50% of the gains are included as taxable income. |
| Chile (CL) | 40.00% | Only certain gains on the sale of traded shares of Chilean corporations are tax-exempt. |
| Colombia (CO) | 10.00% | The 10% rate applies for assets that were held for two or more years; otherwise, capital gains are taxed as ordinary capital income at 31%. |
| Czech Republic (CZ) | 0.00% | Capital gains are included in PIT but exempt if shares of a joint stock company were held for at least three years (five years if a limited liability company). |
| Denmark (DK) | 42.00% | Capital gains are subject to PIT. |
| Estonia (EE) | 20.00% | Capital gains are subject to PIT. |
| France (FR) | 34.00% | Flat 30% tax on capital gains, plus 4% for high-income earners. |
| Germany (DE) | 26.38% | Flat 25% tax on capital gains, plus a 5.5% solidarity surcharge. |
| Hungary (HU) | 15.00% | Capital gains are subject to the flat PIT rate of 15%. |
| Israel (IL) | 28.00% | Flat 25% tax on capital gains, plus a 3% surtax for high-income earners. |
| Japan (JP) | 20.32% | Flat 20.315% tax on capital gains (15.315% national tax and 5% local inhabitant's tax). |
| Korea (KR) | 0.00% | Capital gains on listed shares owned by non-large shareholders are not taxed. Other types of capital gains are taxed. |
| Lithuania (LT) | 20.00% | Capital gains are subject to PIT, with a top rate of 20%. |
| Luxembourg (LU) | 0.00% | Capital gains are tax-exempt if a movable asset (such as shares) was held for at least six months and is owned by a non-large shareholder. Taxed at progressive rates if held for less than six months. |
| Netherlands (NL) | 31.00% | Net asset value is taxed at a flat rate of 31% on a deemed annual return (the deemed annual return varies by the total value of assets owned). |
| New Zealand (NZ) | 0.00% | Does not have a comprehensive capital gains tax. |
| Norway (NO) | 31.68% | Capital gains are subject to PIT (an adjustment factor applies). |
| Slovakia (SK) | 0.00% | Shares are exempt from capital gains tax if they were held for more than one year and are not part of the business assets of the taxpayer. |
| Slovenia (SI) | 0.00% | Capital gains rate of 0% if the asset was held for more than 20 years (rate up to 27.5% for periods less than 20 years). |
| Switzerland (CH) | 0.00% | Capital gains on movable assets such as shares are normally tax-exempt. |
| Turkey (TR) | 0.00% | Shares that are traded on the Stock Exchange and that have been held for at least one year are tax-exempt (two years for joint stock companies). |
| United Kingdom (GB) | 20.00% | – |
| United States (US) | 29.20% | 29.2% applies if the asset was held for more than one year; includes federal and state taxes on capital gains, as well as the 3.8% Net Investment Income Tax (NIIT) for high-income earners. |

Note: "PIT" refers to personal income tax.

Sources: Bloomberg Tax, "Country Guide," https://www.bloomberglaw.com/product/tax/toc_view_menu/3380/; and PwC, "Worldwide Tax Summaries Online," https://www.taxsummaries.pwc.com/.
Dividend Tax Rates
While some countries tax dividends at the same rate as capital gains, other countries differentiate between the two forms of income. In addition, as previously mentioned, several OECD countries have integrated their taxation of corporate profits and dividends paid. Table 3 shows the top marginal dividends tax rates levied in each OECD country, taking into account credits and surtaxes.
As with capital gains tax, some OECD countries levy personal income taxes on dividend income, while others levy a flat, separate dividends tax. Exemption thresholds are also relatively common. For example, the United Kingdom provides a £2,000 ($2,600) dividend allowance, above which a progressive dividend tax is levied.
On average, OECD countries levy a top marginal tax rate of 24.4 percent on dividend income. However, as with capital gains, there is significant variation. Ireland’s top dividend tax rate is the highest among OECD countries, at 51 percent.
Estonia and Latvia are the only OECD countries that do not levy a tax on dividend income. This is due to their cash-flow-based corporate tax system. Instead of levying a dividend tax, Estonia and Latvia impose a corporate income tax of 20 percent when a business distributes its profits to shareholders.
Of the OECD countries with a tax on dividend income, Greece’s is the lowest, at 5 percent. Second and third are Slovakia and Colombia, at 7 percent and 10 percent, respectively.
Table 3. Top Marginal Dividends Tax Rates on Individuals (Includes Credits and Surtaxes)

| OECD Country | Top Marginal Dividends Tax Rate |
|---|---|
| Czech Republic (CZ) | 15.00% |
| New Zealand (NZ) | 15.28% |
| United Kingdom (GB) | 38.10% |
| United States (US)* | 29.20% |

Note: Japan's and the U.S.' dividends tax rates for 2021 were not available in the OECD dataset. The 2020 dividends tax rates were used instead. Colombia's rate was researched individually, as it was also missing in the OECD's dataset.

Source: OECD, "Tax Database: Table II.4. Overall statutory tax rates on dividend income," column "Net personal tax," updated Apr. 29, 2021, https://www.stats.oecd.org/Index.aspx?QueryId=59615.
Tax Treatment of Retirement Savings in the OECD
In addition to universal pension systems, most OECD countries provide tax preferences for private retirement savings. As explained above, the most common tax treatment of retirement savings accounts is TEE (contributions are taxed, but gains are tax-exempt and there is no tax upon withdrawal) and EET (contributions and gains are tax-exempt, but withdrawals—principal plus gains—are taxed).
OECD countries generally limit the amount of savings one can place in tax-preferred retirement accounts. This is done through annual contribution caps. For example, in Spain, total employer and employee contributions made to personal and occupational pension plans are limited to €8,000 ($9,100) per year. Ireland and the United Kingdom are the only two OECD countries that also have a lifetime contribution limit for tax-preferred retirement savings accounts, at €2 million ($2.3 million) and £1,073,100 ($1.4 million), respectively.
Some countries impose penalty fees on withdrawals made before a certain age is reached. For example, in the United States early withdrawal from an Individual Retirement Account (IRA) prior to age 59½ is subject to being included in gross income plus a 10 percent additional tax penalty.
Details on the tax treatment of private retirement savings in each OECD country can be found in Appendix Table 1.
Long-term savings and investments play an important role in individuals’ financial stability and the economy overall. Lawmakers have recognized the need to incentivize savings through tax- and non-tax-related policies. However, in many cases, tax-preferred savings accounts come with a myriad of complex rules and limitations, which ultimately may deter individuals from opening such tax-preferred savings accounts and potentially lower the amount of total savings.
Universal Savings Accounts
Universal savings accounts can significantly simplify a country’s tax-preferred savings system. These accounts are not limited to a certain type of savings (e.g., retirement savings) and have no income limitations or withdrawal penalties. Returns to the account would not be subject to tax, mirroring the tax treatment of most tax-preferred private retirement savings accounts in the OECD. Annual contribution limits could be set to ensure that the tax benefits are capped at a certain level.
Since 2009, Canada has had a universal savings account, the so-called “Tax-Free Savings Account (TFSA).” The annual contribution limit in 2021 is $6,000 CAD ($4,500). Contributions are made with after-tax dollars, earnings grow tax-free, and withdrawals can be made for any reason without triggering additional taxes or penalties. If someone makes less than the maximum contribution one year, the remaining contribution eligibility is added to the next year’s maximum contribution.
The United Kingdom has had a similar program of Individual Savings Accounts (ISAs) since 1999. ISAs have an annual contribution limit of £20,000 (approximately $25,700). As with TFSAs, contributions are made with after-tax dollars, and earnings grow tax-free; unlike with TFSAs, however, the rollover option is not allowed.
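The practical difference between the two designs is easiest to see in how unused contribution room is handled. The sketch below uses illustrative numbers and hypothetical function names:

```python
TFSA_LIMIT = 6_000    # CAD per year (2021)
ISA_LIMIT = 20_000    # GBP per year

def tfsa_room(past_contributions: list[float]) -> float:
    """Unused TFSA room carries forward and adds to this year's limit."""
    carried = sum(TFSA_LIMIT - c for c in past_contributions)
    return TFSA_LIMIT + carried

def isa_room(past_contributions: list[float]) -> float:
    """The ISA allowance resets each year; unused room is lost."""
    return ISA_LIMIT

# Someone who contributed only 2,000 last year:
print(tfsa_room([2_000]))  # 10000 CAD available this year
print(isa_room([2_000]))   # 20000 GBP, regardless of past years
```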
There have been several proposals by U.S. lawmakers to introduce a universal savings account, though none have been enacted. Establishing a universal savings account in the United States that has an annual contribution limit of $2,500 per year, uses after-tax contributions, and allows earnings to grow tax-free would reduce federal revenue by about $15.1 billion from 2022 to 2031. Due to the annual contribution limit of $2,500, the losses in federal tax revenue would be relatively small. However, it would slightly increase the after-tax return to saving, leading to small increases in output and after-tax incomes.
Due to the importance of long-term savings and investment for individuals as well as the economy overall, dividend and capital gains taxes should be kept at a relatively low level—particularly when taking into account the corporate taxes paid at the entity level. On average in the OECD, long-term capital gains from the sale of shares are taxed at a top rate of 19.1 percent, and dividends are taxed at a top rate of 24.4 percent.
To encourage private retirement savings, OECD countries commonly provide tax-preferred retirement accounts. However, in many countries, including the United States, the system of tax-preferred retirement accounts is complex, which may deter savers from using such accounts—and potentially lower overall savings. Canada and the United Kingdom have implemented universal savings accounts, and thus provide an example of how the system of tax-preferred retirement accounts can be simplified while providing more flexibility for what the funds can be used for.
Often, stocks or ownership shares of private companies or other tradeable properties receive similar tax treatment to that of publicly traded stocks.
Steve Rosenthal and Theo Burke, "Who's Left to Tax? US Taxation of Corporations and Their Shareholders," New York University School of Law, Oct. 27, 2020, https://www.law.nyu.edu/sites/default/files/Who%E2%80%99s%20Left%20to%20Tax%3F%20US%20Taxation%20of%20Corporations%20and%20Their%20Shareholders-%20Rosenthal%20and%20Burke.pdf.
Taylor LaJoie and Elke Asen, “Double Taxation of Corporate Income in the United States and the OECD,” Tax Foundation, Jan. 13, 2021, https://www.taxfoundation.org/double-taxation-of-corporate-income/.
Named for the late Senator William Roth (R-DE).
Named for the relevant section of the U.S. tax code.
For more details on universal savings accounts, see Robert Bellafiore, “The Case for Universal Savings Accounts,” Tax Foundation, Feb. 26, 2019, https://www.taxfoundation.org/case-for-universal-savings-accounts/.
Government of Canada, “The Tax-Free Savings Account,” https://www.canada.ca/en/revenue-agency/services/tax/individuals/topics/tax-free-savings-account.html.
See H.R. 937, H.R. 6757, and S. 232 from the 115th Congress. | <urn:uuid:bf62de81-1faf-49bc-a975-233dfe14faec> | CC-MAIN-2023-50 | https://stockfellas.com/2021/05/27/savings-and-investment-the-tax-treatment-of-stock-and-retirement-accounts-in-the-oecd/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.933246 | 6,053 | 2.5625 | 3 |
The reliability of our faucets providing water every time we turn them on can make water seem like a magical, never-ending resource.
“Four billion people today already live in places that are affected by water scarcity at least part of the year,” said Rick Hogeboom, executive director of the Water Footprint Network, an international knowledge center based in the Netherlands. “Climate change will have a worsening influence on the demand-supply balance,” he said.
“If all people were to conserve water in some way, that would help ease some of the immediate impacts seen from the climate crisis,” said Shanika Whitehurst, associate director of sustainability for Consumer Reports’ research and testing. Consumer Reports is a nonprofit that helps consumers evaluate goods and services.
“Unfortunately, there has been a great toll taken on our surface and groundwater sources, so conservation efforts would more than likely have to be employed long term for there to be a more substantial effect.”
Yes, businesses and governments should play a part in water conservation by, respectively, producing goods “water efficiently” and allocating water in a sustainable, equitable way, Hogeboom said.
But “addressing the multifaceted water crises is a shared responsibility. No one actor can solve it, nor is there a silver bullet,” he added. “We need all actors to play their part.”
Contrary to what you might think, the water used directly in and around the home makes up a minor portion of the total water footprint of a consumer, Hogeboom said.
“The bulk — typically at least 95% — is indirect water use, water use that is hidden in the products we buy, the clothes we wear and the food we eat,” Hogeboom said. “Cotton, for instance, is a very thirsty crop.”
Of the 300-plus gallons of water the average American family uses every day at home, however, roughly 70% occurs indoors, according to the EPA—making the home another important place to start cutting your use.
Here are some ways to reduce your water footprint as you move from room to room and outdoors.
Since the kitchen involves dishwashing, cooking and one of the biggest water guzzlers — your diet — it’s a good place to start.
An old kitchen faucet can release 1 to 3 gallons of water per minute when running at full blast. Instead of rinsing dishes before putting them in the dishwasher, scrape food into your trash or compost bin. Make sure your dishwasher is fully loaded so you only run as many wash cycles as necessary and make the most use of the water.
With some activities you can save water by not only using less but also upgrading the appliances that deliver the water. Dishwashers certified by Energy Star, the government-backed symbol for energy efficiency, are about 15% more water-efficient than standard models.
If you do wash dishes by hand, plug up the sink or use a wash basin so you can use a limited amount of water instead of letting the tap run.
If you plan on eating frozen foods, thaw them in the fridge overnight instead of running water over them. For drinking, keep a pitcher of water in the fridge instead of running the faucet until the water’s cool — and if you need to do that to get hot water, collect the cold water and use it to water plants.
Cook foods in as little water as possible, which can also help retain flavor, according to the University of Toronto Scarborough.
When it comes to saving water via what you eat, generally animal products are more water-intensive than plant-based alternatives, Hogeboom said.
“Go vegetarian or even better vegan,” he added. “If you insist on meat, replace red meat by pig or chicken, which has a lower water footprint than beef.”
It takes more than 1,800 gallons of water to produce 1 pound of beef, Consumer Reports’ Whitehurst said.
The bathroom is the largest consumer of indoor water, as the toilet alone can use 27% of household water. You can cut use here by following this adage: "If it's yellow, let it mellow. If it's brown, flush it down."
“Limiting the amount of toilet flushes — as long as it is urine — is not problematic for hygiene,” Whitehurst said. “However, you do have to watch the amount of toilet paper to avoid clogging your pipes. If there is solid waste or feces, then flush the toilet immediately to avoid unsanitary conditions.”
Older toilets use between 3.5 and 7 gallons of water per flush, but WaterSense-labeled toilets use no more than 1.28 gallons per flush. WaterSense is a partnership program sponsored by the EPA.
“There’s probably more to gain by having dual flush systems so you don’t waste gallons for small flushes,” Hogeboom said.
By turning off the sink tap when you brush your teeth, shave or wash your face, you can save more than 200 gallons of water monthly.
Cut water use further by limiting showers to five minutes and eliminating baths. Shower with your partner when you can. Save even more water by turning it off when you’re shampooing, shaving or lathering up, Consumer Reports suggests.
Replacing old sink faucets or showerheads with WaterSense models can save hundreds of gallons of water per year.
Laundry rooms account for nearly a fourth of household water use, according to the EPA. Traditional washing machines can use 50 gallons of water or more per load, but newer energy- and water-conserving machines use less than 27 gallons per load.
You can also cut back by doing full loads (but not overstuffing) and choosing the appropriate water level and soil settings. Doing the latter two can help high-efficiency machines use only the water that's needed. If you have a high-efficiency machine, use high-efficiency (HE) detergent or measure out regular detergent, which is more sudsy and, if too much is used, can cause the machine to use more water, according to Consumer Reports.
Nationally, outdoor water use accounts for 30% of household use. This percentage can be much higher in drier parts of the country, particularly in the West, and in more water-intensive landscapes, according to the EPA.
If you prefer to have a landscape, reduce your outdoor use by planting only plants appropriate for your climate or ones that are low-water and drought-resistant.
“If maintained properly, climate-appropriate landscaping can use less than one-half the water of a traditional landscape,” the EPA says.
The biggest water consumers outside are automatic irrigation systems, according to the EPA. To use only what’s necessary, adjust irrigation controllers at least once per month to account for weather changes. WaterSense irrigation controllers monitor weather and landscape conditions to water plants only when needed.
This story was originally published in The Kitchener Waterloo Community Foundation’s Annual Report
Learning to read is one of the essential building blocks of life. Strong Start’s programs aim to help children reach their potential and strengthen our community through literacy.
‘Healthy Children and Youth’ is one of Wellbeing Waterloo Region’s key priority areas. For Strong Start’s Executive Director, Machelle Denison, recognizing the connection between literacy and healthy brain development in children is vital.
Two high-impact programs offered by Strong Start are Letters, Sounds and Words™ and Get Ready for School™.
Letters, Sounds and Words is a 10-week program that targets children in Senior Kindergarten and Grade 1 who need a literacy boost. They are paired with trained community volunteers who go into schools to work one-on-one with children, playing carefully designed games and activities. The program is organized in four strands that help children recognize letters, a sound each one represents, how to learn words by sight and how to learn a word by using the sounds of its letters.
Get Ready for School is a program for pre-schoolers during the six- month period before entering Junior Kindergarten. Through the 44 classes, children build vocabulary, learn letter sounds and practise classroom behaviours. The program is particularly beneficial for children who are learning English as a Second Language or for those who are from a low to middle socio-economic background.
According to Machelle, one of the key features of these programs is that they are completely free of charge. “It really is the great equalizer. No matter what your background is, if you are given the resources and help you need to learn to read, you have an equal chance at success.”
Strong Start's programs also support many of the United Nations' Sustainable Development Goals, including 'No Poverty', 'Quality Education', 'Decent Work and Economic Growth', and 'Reduced Inequalities'.
“When you consider the immediate and long-term impact that being able to read has on a human being and their life trajectory, there are many linkages to these goals,” says Machelle.
In many cases, the programs not only have an impact on the children, but their parents as well. “I was concerned prior to this program that my son wouldn’t be ready for Junior Kindergarten in the fall due to a speech delay,” says Michelle Delahunty, whose son was in the Get Ready for School program. “This program has not only given me the confidence that he is more than ready for school, but my son is now talking and has taken more from this program than I could have ever imagined.”
Grants from Kitchener Waterloo Community Foundation have allowed Strong Start to add an additional Get Ready for School program in a designated high-needs neighbourhood, and another program servicing rural areas of Waterloo Region. The funding has also assisted the Letters, Sounds and Words program by replenishing the learning materials needed to operate the program as well as train new volunteers.
Since its inception in 2001, Strong Start has helped nearly 39,000 children learn to read with the help of over 28,000 volunteers.
“This really is a great example of a community rallying around its children, with its time and its money, to help them learn to read. It really does take a village to raise a child.” – Machelle. | <urn:uuid:af6289fc-693f-4c71-8d3e-5f94d1fff7d6> | CC-MAIN-2023-50 | https://strongstart.ca/news/childrens-literacy-the-great-equalizer/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.968679 | 726 | 3.234375 | 3 |
Dig a hole in the City of London and a few metres down you will hit Londinium, the original city, because before the Romans there seems to have been no major settlement.
For nearly 400 years from around 45CE this was a major Roman centre, and although it declined from the late 2nd century onwards (and was destroyed at least twice, the first time in 60CE by Boudicca, and then by a huge fire around 125) it boasted temples, a forum (the biggest north of the Alps), a governor's palace, nearly 4km of wall, an amphitheatre that could hold 6,000 spectators and, because the Romans loved bathing, a number of bath houses.
These were more on the line of saunas than pools, the bather progressing from the frigidarium (ambient temperature), to the tepidarium (around 30℃) to the caldarium (the hot room – 40℃ to 50℃). Follow that with a dunk in a plunge pool and afterwards perhaps a massage with oil, or a variety of body treatments (think ‘spa day’). And it was a social activity – friends met, gossip exchanged, business discussed. Bathing was part of what it meant to be Roman, so Londinium would have had numerous public and private bath houses from very soon after its foundation.
Quite a few of these have been found by archeologists over the years (including a large public one on Huggin Hill, under the present Cleary Gardens), but there is only one that is open to the public.
This is over the road from the old Billingsgate market building on Lower Thames Street (a stone’s throw from the Tower). It was first discovered in 1848 during the construction of the New Coal Exchange, but it took until the demolition of that building in the late 1960s for proper archaeological work to be done.
And what a find! It is the only domestic (as distinct from public) structure from Londinium that is still in situ. There is a high-status dwelling (probably built in the second half of the 2nd century) that would have fronted onto the Thames. It is thought this had three 'wings' (imagine a square 'U') and there was another building within the open space between the wings. Some 100 years after the original construction of the dwelling this central building was converted to a bath house (we don't know what it was before) by adding a tepidarium and caldarium to the end of it. It is not known why this was done – was it the 'homeowner' adding a private bath house for his own use, or did the main building become some sort of 'inn' which created the bath house to attract guests?
The site is still in the basement of an office block so is only accessible on Saturdays over the summer (April to November). As it is a scheduled ancient monument one moves on walkways over the dwelling and bath house, looking down on the walls of the building, the tesserae on the floor of the frigidarium (imagining Roman Londoners walking barefoot over these little tiles) and the furnaces and hypocausts for both the bath house and the main dwelling.
We don't know how long the bath house remained in use, but coins have been found that date from the last decade of the 4th century, as well as fragments of amphorae (which came from the eastern Mediterranean) that have been dated to 410-415CE. As the last Roman legions withdrew in 407 (the end of the Roman Empire in Britain is traditionally dated to 410, when the Emperor Honorius responded to a plea for help from his erstwhile colony with "you're on your own now"), we're talking about occupancy of the site right at the end of 'Roman Britain'.
Londinium seems to have been completely abandoned by the middle of the 5th century and the anglo-saxon settlers did not inhabit the area within the walls until the time of Alfred the Great in 886.
The site, expertly introduced and guided by City of London Guides, is a recommended visit for anyone with an interest in Roman Londinium. Details here. There’s a short film about the site (produced for Open House) below. | <urn:uuid:b2c40a9d-e46a-4a8e-b83d-ac3d9c4875cf> | CC-MAIN-2023-50 | https://stuffaboutlondon.co.uk/london/londinium-the-billingsgate-roman-bath-house/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.974653 | 924 | 2.78125 | 3 |
American Heart Association
News Release (AHA)
June 27, 2013
- Non-invasive brain stimulation may help stroke survivors recover language function.
- Survivors treated with the technique regained more language function than those who did not get treatment.
Non-invasive brain stimulation may help stroke survivors recover speech and language function, according to new research in the American Heart Association journal Stroke.
Between 20 percent to 30 percent of stroke survivors have aphasia, a disorder that affects the ability to grasp language, read, write or speak. It's most often caused by strokes that occur in areas of the brain that control speech and language.
"For decades, skilled speech and language therapy has been the only therapeutic option for stroke survivors with aphasia," said Alexander Thiel, M.D., study lead author and associate professor of neurology and neurosurgery at McGill University in Montreal, Quebec, Canada. "We are entering exciting times where we might be able in the near future to combine speech and language therapy with non-invasive brain stimulation earlier in the recovery. This could result in earlier and more efficient aphasia recovery and also have an economic impact."
In the small study, researchers treated 24 stroke survivors with several types of aphasia at the rehabilitation hospital Rehanova and the Max-Planck-Institute for neurological research in Cologne, Germany. Thirteen received transcranial magnetic stimulation (TMS) and 11 got sham stimulation.
The TMS device is a handheld magnetic coil that delivers low intensity stimulation and elicits muscle contractions when applied over the motor cortex.
During sham stimulation the coil is placed over the top of the head in the midline, where there is a large venous blood vessel rather than a language-related brain region. The stimulation intensity was lower so that participants still had the same sensation on the skin, but no effective electrical currents were induced in the brain tissue.
Patients received 20 minutes of TMS or sham stimulation followed by 45 minutes of speech and language therapy for 10 days.
The TMS group's improvements were on average three times greater than the non-TMS group's, researchers said. They used German-language aphasia tests, which are similar to those used in the United States, to measure the patients' language performance.
"TMS had the biggest impact on improvement in anomia, the inability to name objects, which is one of the most debilitating aphasia symptoms," Thiel said.
Researchers, in essence, shut down the working part of the brain so that the stroke-affected side could relearn language. "This is similar to physical rehabilitation where the unaffected limb is immobilized with a splint so that the patients must use the affected limb during the therapy session," Thiel said.
"We believe brain stimulation should be most effective early, within about five weeks after stroke, because genes controlling the recovery process are active during this time window," he said.
Thiel said the result of this study opens the door to larger, multi-center trials. The NORTHSTAR study has been funded by the Canadian Institutes of Health Research and will be launched at four Canadian sites and one German site later in 2013.
The Walter and Marga Boll and Wolf-Dieter-Heiss Foundations funded the current study.
Co-authors are Alexander Hartman, M.D.; Ilona Rubi-Fessen, M.Sc.; Carole Anglade, M.Sc.; Lutz Kracht, M.D.; Nora Weiduschat, M.D.; Josef Kessler, Ph.D.; Thomas Rommel, M.D.; and Wolf-Dieter Heiss, M.D. Author disclosures are on the manuscript. | <urn:uuid:868a7e94-75d5-4dca-bd56-f87e4a5e1268> | CC-MAIN-2023-50 | https://superdoctors.com/article/Stimulating-Brain-May-Help-Stroke-Survivors-Recover-Language-/65ce4d6f-0a6e-4641-9f5d-7acc1311a052.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.941392 | 764 | 2.578125 | 3 |
Cell Membrane (J.O. Plowe (1931) called it plasmalemma):
- Its presence was first recognised by Nägeli & Cramer, who gave the term 'plasma membrane'.
- Gorter and Grendel (1925) postulated that it is made of a "bimolecular lipid layer".
- 1) A thin, delicate membrane of about 70 Å to 100 Å thickness made almost entirely of proteins and lipids. The most common lipids are Phospholipids.
2) Each phospholipid molecule consists of a polar head containing phosphate, and two non-polar hydrocarbon tails from the fatty acids used to make the molecule.
3) The head is hydrophilic (water-loving) and the tails are hydrophobic (water-hating); in the presence of water they form a bilayer.
4) Membrane proteins have been classified as Integral (Intrinsic) or Peripheral (extrinsic) according to the degree of their association with the membrane and the methods by which they can be solubilized.
5) Peripheral proteins represent about 30% of the membrane proteins and can be separated by mild procedures. e.g. spectrin of erythrocytes, cytochrome c of mitochondria.
6) Integral proteins represent more than 70% of the membrane proteins and require drastic procedures for isolation. e.g. most membrane-bound enzymes, drug and hormone receptors, histocompatibility antigens (glycophorins).
Chill Balance: Exploring the Dynamic Duo of Freezers and Refrigerators
In the heart of each and every kitchen, two silent stalwarts quietly go about their business, ensuring the freshness and longevity of our food – the freezer and refrigerator. As indispensable members of the modern household, these appliances have transformed the way in which we approach food storage and preservation. This article delves in to the symbiotic relationship between freezers and refrigerators, unraveling the intricacies of the functions, technological advancements, and providing insights into how they come together to keep our culinary delights at their best.
I. The Cooling Ballet: Distinct Roles in Culinary Preservation
The refrigerator and freezer engage in a fine dance of temperature control, each with a unique role.
Refrigerators maintain a temperature range of approximately 35°F to 38°F (1.7°C to 3.3°C), suitable for slowing the spoilage of perishables.
Freezers operate at colder temperatures, around 0°F (-17.8°C) or lower, freezing items and extending their shelf life.
Refrigerators create an environment that preserves the quality of fruits, vegetables, dairy, and prepared foods, slowing the growth of bacteria.
Freezers take preservation a step further by halting the aging process through freezing, enabling extended storage.
II. Technological Marvels: Innovations in Refrigeration
Enter the era of intelligent appliances with the advent of smart refrigerators.
Equipped with touchscreens, Wi-Fi connectivity, and advanced sensors, these refrigerators offer features such as inventory tracking, recipe suggestions, and remote-control capabilities.
Modern refrigerators prioritize energy efficiency with features like LED lighting and inverter compressors.
Energy Star-rated appliances not only reduce electricity consumption but also contribute to environmental sustainability.
III. Organizational Brilliance: Maximizing Storage Space
Refrigerators feature adjustable shelves to allow for items of varying sizes.
The capability to customize shelf configurations allows for efficient space utilization and easy organization.
Door Storage Strategies:
Refrigerator doors are optimized for convenience, providing storage for frequently accessed items such as condiments and beverages.
Adjustable bins and shelves in the doors enhance flexibility, catering to containers of different shapes and sizes.
IV. Strategies for Refrigeration Mastery
Regularly check and adjust thermostat settings to maintain optimal temperatures.
Refrigerators should be set between 35°F and 38°F, while freezers operate best at 0°F or lower.
Group similar items together for quick access and visibility.
Clear containers assist in quickly identifying the contents of the refrigerator and freezer.
V. Conquering Common Challenges
Regularly defrost freezers to prevent excessive frost buildup.
Frost-free freezer models automate the defrosting process, minimizing the necessity for manual intervention.
Ensure proper ventilation around the appliances to prevent overheating.
Periodically clean condenser coils to maintain optimal performance and temperature control.
VI. Embracing Sustainability in Refrigeration
Opt for appliances with Energy Star certifications to reduce electricity consumption.
Consider models with eco-friendly refrigerants for a greener option.
Minimizing Food Waste:
Implement proper storage practices and organizational strategies to minimize food spoilage.
Regularly check expiration dates and consume perishables before they reach their limit.
In the culinary symphony of our homes, freezers and refrigerators play the role of conductors, ensuring the seamless harmony of freshness and preservation. Through technological innovations, organizational brilliance, and a commitment to sustainability, these appliances have become essential companions in our daily lives. By understanding their roles, implementing effective storage strategies, and embracing innovations, we can unlock the full potential of this dynamic duo, keeping our food fresh, our beverages chilled, and our kitchens vibrant centers of culinary exploration.
Ordinary Portland Cement (OPC) is the most widely used cement in the construction world. It is the basic ingredient for producing concrete, mortar, stucco, and non-specialty grouts. Ordinary Portland Cement is graded based on its strength. The grade indicates the compressive strength of the mortar cube that will be attained after 28 days of setting.
Grades of Ordinary Portland Cement
The different grades of OPC are discussed below:
1. OPC 33 Grade Cement
This grade of cement is used for general construction under normal environmental condition. But low compressive strength and availability of higher grades of cement have impacted the use and demand of OPC 33.
Compressive Strength of OPC 33 - The average compressive strength of at least three mortar cubes, having a face area of 50 sq.cm is taken into account while checking the compressive strength. These mortar cubes are composed of one part of cement and three parts of standard sand.
| Age | Compressive Strength |
|---|---|
| 72 +/- 1 hour | Not less than 16 N/mm2 |
| 168 +/- 2 hours | Not less than 22 N/mm2 |
| 672 +/- 4 hours | Not less than 33 N/mm2 |
IS Code - IS 269 : 1989 for Ordinary Portland Cement, 33 Grade.
2. OPC 43 Grade Cement
This grade of cement is the most popular cement used in the country today. OPC 43 is used for general RCC construction where the grade of concrete is up to M30. It is also used for the construction of precast items such as blocks, tiles, asbestos products like sheets and pipes, and for non-structural works such as plastering, flooring etc.
Compressive Strength of OPC 43 -
| Age | Compressive Strength |
|---|---|
| 72 +/- 1 hour | Not less than 23 N/mm2 |
| 168 +/- 2 hours | Not less than 33 N/mm2 |
| 672 +/- 4 hours | Not less than 43 N/mm2 |
IS Code - IS 8112: 1989 for 43 Grade Ordinary Portland Cement.
3. OPC 53 Grade Cement
OPC 53 is used when we need higher strength concrete at very economical cement content. In concrete mix design, for concrete M20 and above we can achieve 8 to 10% saving in cement with the use of OPC 53. This cement grade is used for specialized works such as prestressed concrete components, precast items such as paving blocks, building blocks etc, runways, concrete roads, bridges, and other RCC works where the grade of concrete is M25 and above.
Compressive Strength of OPC 53
| Age | Compressive Strength |
|---|---|
| 72 +/- 1 hour | Not less than 27 N/mm2 |
| 168 +/- 2 hours | Not less than 37 N/mm2 |
| 672 +/- 4 hours | Not less than 53 N/mm2 |
IS Code - IS 12269 : 1987 for Specification for 53 grade ordinary portland cement
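Pulling the three strength schedules together, the grades a given cement sample qualifies for can be checked mechanically. The following is a minimal Python sketch (a hypothetical helper, not part of any IS code), with the minimums taken from the tables above:

```python
# Minimum mortar-cube compressive strengths (N/mm2), keyed by curing age in hours.
REQUIREMENTS = {
    "OPC 33": {72: 16, 168: 22, 672: 33},
    "OPC 43": {72: 23, 168: 33, 672: 43},
    "OPC 53": {72: 27, 168: 37, 672: 53},
}

def grades_met(results: dict[int, float]) -> list[str]:
    """Return the grades whose minimums are all met by {age_hours: strength} results."""
    return [
        grade
        for grade, minima in REQUIREMENTS.items()
        if all(results.get(age, 0.0) >= minimum for age, minimum in minima.items())
    ]

# Example: cube strengths of 28, 39 and 54 N/mm2 at 3, 7 and 28 days.
print(grades_met({72: 28.0, 168: 39.0, 672: 54.0}))
# ['OPC 33', 'OPC 43', 'OPC 53']
```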
Physical Properties of OPC Cement
Physical requirements other than compressive strength are as follows:
Fineness: When tested for fineness by Blaine's air permeability method, the specific surface of cement shall not be less than 225 m2/kg. This is applicable to all grades of cement.
Soundness: When tested by the 'Le-Chatelier' method, unaerated cement shall not have an expansion of more than 10mm. When tested by the Autoclave test, unaerated cement shall not have an expansion of more than 0.8 percent.
When tested by Vicat apparatus method, the setting time of cement shall conform to the following requirement:
a) Initial setting time - not less than 30 mins.
b) Final setting time - not more than 600 mins.
The setting time requirements mentioned above are applicable to all grades of cement.
Chemical Requirements of OPC Cement
The chemical requirements for OPC 33, OPC 43 and OPC 53 are as follows:
| Sl. No. | Characteristic | OPC 33 Grade | OPC 43 Grade | OPC 53 Grade |
|---|---|---|---|---|
| 1 | Ratio of percentage of lime to percentages of silica, alumina and iron oxide | Not greater than 1.02 and not less than 0.66 | Not greater than 1.02 and not less than 0.66 | Not greater than 1.02 and not less than 0.8 |
| 2 | Ratio of percentage of alumina to percentage of iron oxide | Not less than 0.66 | Not less than 0.66 | Not less than 0.66 |
| 3 | Insoluble residue, percent by mass | Not more than 4 | Not more than 2 | Not more than 2 |
| 4 | Magnesia, percent by mass | Not more than 6 | Not more than 6 | Not more than 6 |
| 5 | Total sulphur content calculated as sulphuric anhydride (SO3), percent by mass: | | | |
| 5(a) | When tricalcium aluminate is less than or equal to 5 | Not more than 2.5 | Not more than 2.5 | Not more than 2.5 |
| 5(b) | When tricalcium aluminate is greater than 5 | Not more than 3 | Not more than 3 | Not more than 3 |
| 6 | Total loss on ignition | Not more than 5% | Not more than 5% | Not more than 5% |
- IS 269 : 1989 for Ordinary Portland Cement, 33 Grade.
- IS 8112 : 1989 for 43 Grade Ordinary Portland Cement.
- IS 12269 : 1987 for Specification for 53 grade ordinary portland cement. | <urn:uuid:16790f96-8536-448f-8153-c8badc7aae18> | CC-MAIN-2023-50 | https://theconstructor.org/building/grades-of-ordinary-portland-cement-is-codes/31909/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.865594 | 1,185 | 3.203125 | 3 |
The American Staffordshire Terrier is a muscular dog breed that is popular for its strong and large size. Despite their ferocious appearance, they are loving and affectionate dogs, especially with their human family. This breed loves being with the humans they care most about, whether they are playing in the yard, cuddling up on the couch, or going out for a jog, they love being around with people who care for them. American Staffordshire Terriers likes to please; that is why they are highly trainable. But, taking care of an intelligent dog means that they need also needs mental stimulation along with physical exercise, or you’ll see them using their strong jaws and chewing anything they can find because they are bored. In this article, we are going to find out the American Staffordshire Terrier’s history, characteristics, and ways on how to take care of them.
History of American Staffordshire Terrier
The roots of the modern American Staffordshire Terrier can be traced back to England. This dog breed was the result of a mix between Bulldogs and Terrier breeds, which earned them several names such as the Pit Bull Terrier, Half and Half, and Bull-And-Terrier Dog. As the years passed, breeders eventually settled on one name and decided to call them Staffordshire Bull Terriers. This is because these dogs were first kept by butchers in order to help them manage bulls. The breed also allowed hunters to take down wild boars and, aside from that, helped farmers with farm work while serving as ratters as well as family companions. Years later, people started to use them in the cruel sports of bear-baiting and bull-baiting because of their courage, strength, and muscular build. When these dangerous and evil sports eventually became illegal, Stafford Terriers were once again used in dogfighting rings, a practice which unfortunately still continues in illegal events up until today. This misuse by humans is one of the reasons why these dogs have a reputation for being an aggressive breed. By 1850, several Stafford Terriers had made their way to America, and this is where they started to be called Pit Bull Terriers, American Bull Terriers, and American Pit Bull Terriers. By the turn of the 20th century, the United Kennel Club recognized them as American Pit Bull Terriers.
On the other hand, the American Kennel Club recognized this dog breed as the Staffordshire Terrier in 1936. In 1972, the American Kennel Club changed the breed's name to American Staffordshire Terrier. This is because Americans were able to breed more of these dogs compared to the original Staffordshire Bull Terrier. Today, the American Pit Bull Terrier and the American Staffordshire Terrier still have several things in common, even though they have been bred separately for over 50 years. American Staffordshire Terriers are now commonly used as watchdogs, K9 dogs helping with police work, or as family pets. They still, however, have a bad reputation for being aggressive dogs.
Characteristics of American Staffordshire Terrier
Height: 18-19 inches in male, 17-18 inches in female
Weight: 55-70 pounds in male, 40-55 pounds in female
Life Expectancy: 12-16 years
Male American Staffordshire Terriers can stand 17 to 19 inches in height, while females can be a bit smaller, with an average height of 16 to 18 inches. Their average weight is between 40 and 60 pounds. Despite having the reputation of being an aggressive dog, this breed is still popular for being a family dog that loves to spend time around humans. American Staffordshire Terriers are happiest when they spend time with their families, whether they are having a vigorous play session or taking a long walk. The American Staffordshire Terrier's muscular build and reputation as an aggressive Pit Bull intimidate burglars and keep them away. With that being said, many American Staffordshire Terrier owners say that dogs from this breed are great judges of character and can sense people's intent, which is why they make excellent watchdogs. Dogs from this breed can be intense dogs that will chew, dig, bark, and even pull when they are bored. Keep in mind that they are athletic and strong dogs, which is why they can be quite difficult to take for a walk, and they can have the tendency to pull their walker wherever they go. That's why, if you're planning to have an American Staffordshire Terrier for a pet, you need to be a confident and assertive trainer who can handle them on a leash, as well as give them proper mental and physical stimulation. This breed also needs early socialization with humans and other animals. This is because they can be confrontational with other dogs when they are not used to socializing.
Taking Care of American Staffordshire Terrier
American Staffordshire Terriers are known for being prone to bad breath, which is why you need to brush their teeth at least once a week, or even more frequently, in order to keep germs from growing. You should also trim their nails as needed. Still, this task can be a little bit difficult because American Staffordshire Terriers tend not to like getting their paws touched. This is why training them to be comfortable with touching and grooming while they are still puppies will help a lot. You should also check their ears for wax buildup and debris weekly and clean them up as needed in order to prevent ear infections as well as pest infestation.
Translated from the Third Reich original Wikinger by Bernhard Kummer. This brief, sympathetic look at the Viking era from a National Socialist perspective presents the start of the Viking era as largely a response to Charlemagne‘s bloody subjugation of the pagan Saxons. The nine original illustrations, four of them by the famous SS artist W. Petersen, are included.
In those years when Kaiser Charlemagne subjugated the pagan Saxons for the Pope in decades of struggle, and then in Rome, on Christmas Day of the year 800, tricked in prayer, had to accept the emperor's crown from the Pope's hand, the Viking storm against France broke out in the rear of the fighting and subjugated Saxons. It then raged for over two centuries, fleets conquered cities and harbors, armies took land and founded states, and finally, in piracy that became ever more unsystematic, warriors and "princes without land" campaigned and plundered or feuded among themselves until they perished without honor and victory. But we will now explain how it could come to this final tragedy so alien to Germanic nature. Seen as a whole, this great Norse storm at the brink between paganism and Christianity is a great Nordic struggle of Nordic nature against south and east, a continuation of those earlier journeys and struggles of Nordic folks whom we already saw fight and perish in the Far East or in the Mediterranean region. In part, the same hostile aliens who faced the Nordic peasants in India and Persia, in Greece and Rome, had penetrated across the Alps and eastern trade routes and on Hun assaults into Germanic core land. An alien worldview and a new priesthood, a morality of alien blood and a new ideal, an alien view of folk community and rule, of peasant freedom and tyrant right, reached everywhere, openly and secretly, into Germanic life. The unrest of the folk wandering, which had threatened the south, was banished. From the mixed-race Franks, the Catholic state idea and faith united the Germanic tribes; the sword of the converters slew Alemannic resisters at Cannstadt and Saxon ones at Verden in a horrible manner, sufficient news of which certainly spread to all Germanic people. The Emperor Charlemagne in Aachen had planned to advance into the pagan north with conversion and subjugation as well. During his fighting against the Saxons many of them had fled to Denmark and reported there about the horrible enemies. But already centuries earlier, Norsemen had fought against the south and brought home precise news about all the heroic deeds. So now, too, one had clearly enough realized in the north that the struggle was about faith and freedom, and that one had to employ full energy against an enemy who after the so horrible subjugation of the Saxons now directly threatened Northern Germanic man. Only so are the great Viking campaigns to be understood, with which the north, always a sea power, intervened in the struggle of the period.
Which empresario lost his rebellion against Mexico?
After his contract was revoked in 1826, Edwards and his brother declared the colony to be the Republic of Fredonia. He was forced to flee Mexico when the Mexican army arrived to put down the rebellion, and did not return until after the Texas Revolution had broken out.
What led to the Fredonian Rebellion?
The Fredonian Rebellion (December 21, 1826–January 31, 1827) was caused by the desire of Anglo settlers in Texas to separate from Mexico as new immigrants moved into their land.
Who led the rebellion against the Mexican government in 1825?
Miguel Hidalgo y Costilla, a Catholic priest, launches the Mexican War of Independence with the issuance of his Grito de Dolores, or "Cry of Dolores". The revolutionary proclamation, so called because it was read publicly by Hidalgo in the town of Dolores, called for the end of 300 years of Spanish rule in Mexico.
Which Mexican official was sent to Mexican Texas to investigate conditions following the Fredonian Rebellion?
The increasing number of settlers from the United States in Texas, the Fredonian Rebellion, and the US offer to buy Texas caused concern among Mexican nationalists. In 1828, government leaders sent General Manuel Mier y Teran, a respected commander, to investigate conditions in Texas.
What was the first attempt to secede from Mexico?
The Fredonian Rebellion (December 21, 1826 – January 23, 1827) was the first attempt by white English settlers in Texas to secede from Mexico.
Who was the leader of the Fredonian Rebellion?
The Fredonian Rebellion (December 21, 1826 – January 23, 1827) was the first attempt by English settlers in Texas to secede from Mexico. The settlers, led by empresario Haden Edwards, declared independence from Mexican Texas and created the Republic of Fredonia near Nacogdoches.
Who was the leader of the rebellion in Nacogdoches?
On November 22, 1826, local militia colonel Martin Parmer and 39 other Edwards colonists entered Nacogdoches and arrested Norris, Sepulveda, and the commander of the small Mexican garrison, charging them with oppression and corruption.
Anyone involved in an online course, whether that person is the instructor or a student, can find the absence of face-to-face communications challenging for one primary reason: interaction or the lack thereof. To address the interaction issue, Krause (2020) stresses the importance of building a community through exercises that involve connectedness and shape relational trust. Wehler (2018) reinforces this notion with the reminder that developing community in an online environment can be accomplished by encouraging interaction between students and faculty and among students themselves. One way to increase interactions and communication, thus building community, in online classes is by using digital tools such as memes, infographics, or avatars. Specifically, supporting the use of memes, Tu, Sun, and Levin's study (2022) suggests that memes can be used to promote peer-socialization.
Link to example artifact(s)
Even though I never meet my online students in person, I want to foster a relationship with and among them that extends beyond the surface. To do this, I ask them to introduce themselves in an accessible, interactive manner. For this example, I challenge them to create a meme that highlights a few key details about who they are (you may want to try the Meme Generator tool). Because I want them to get to know one another, I then ask them to share their meme by posting it on a class Padlet wall where they can not only view each other's memes but also make comments. The Padlet wall, as a discussion board, allows students to interact and navigate in a much more dynamic manner than a traditional discussion board does.
Specifically, I begin by first sharing my own meme and then I present the following challenge:
Now it’s your turn! Tell me about who you are by using a meme to introduce yourself. Choose a meme image that resonates with you & include a few details about your life. Once you are finished, post the link for your meme to the course Padlet page.
- Meme Generator (or branch out and find one of your own)
- Course Padlet page (direct link for the class Padlet page is provided for students)
- Padlet step-by-step (see the “How do I create a Padlet” tutorial at Padlet Help, see direct link below)
The example described was completed with students enrolled in a graduate-level course in literacy education delivered solely online. Students were in-service, licensed teachers or post-baccalaureate students seeking initial certification.
Class Padlet Wall:
Padlet Tutorial at Padlet Help: https://padlet.help/l/en/article/f5of9fy9lc-how-do-i-create-a-padlet
My Meme: https://imgflip.com/i/4sp64h
While memes and the Padlet wall are great for introductions, they are not the only digital tools you can use for introductions. You may want to try one of the following:
Avatars: Voki Avatar allows students to select/design an avatar and then use their own voice to create a brief introduction. Instead of reciting a complete intro, I ask students to describe themselves in eight nouns. Voki link: https://l-www.voki.com/
Link to scholarly reference(s)
Krause, C. (2020, April 15). How to forge a strong community in an online classroom. Edutopia.
Tu, K., Sun, A., & Levin, D. (2022). Using memes to promote student engagement and classroom community during remote learning. Biochemistry and Molecular Biology Education, 51(2), 202-205.
Wehler, M. (2018, July 11). Five ways to build community in online classrooms. Faculty Focus.
Comer, M. (2023). Who We Are: Building Community in an Online Class, One Meme at a Time. In deNoyelles, A., Bauer, S., & Wyatt, S. (Eds.), Teaching Online Pedagogical Repository. Orlando, FL: University of Central Florida Center for Distributed Learning. | <urn:uuid:8da44089-7477-4a14-b495-bdd243e4a3bd> | CC-MAIN-2023-50 | https://topr.online.ucf.edu/who-we-are-building-community-in-an-online-class-one-meme-at-a-time/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.920646 | 862 | 2.703125 | 3 |
You might have heard a thing or two about how you can save the planet by making certain decisions and lifestyle changes. It probably sounded cool to be saving the planet from your own small corner.
Over the years, several ideas have come up on how we can save the planet from exploitation by humans. One of the most popular of these ideas has been maintaining a green environment by recycling.
However, in recent times, have you heard of how there is a need to pay attention to what happens to your old phones?
Yes! Taking this action is said to play a role in ensuring that Gorillas do not go into extinction. Since this information has been circulated through various means (such as the internet), many users are left wondering about the correlation between phones and Gorillas.
Recycling of phones has specifically been called for, and here is the question on the lips of most users: "Is it true that recycling of used phones/E-waste will save Gorillas?"
Yes, it is true that recycling of E-waste will save Gorillas.
Now, you must be wondering how this whole thing works. The questioning definitely doesn’t end here, as you would now be wondering how recycling will save Gorillas.
Basically, your mobile phones are made with coltan, a substance obtained largely from the Democratic Republic of Congo. Due to the high demand for mobile phones and the production of new models, the mining of coltan hardly ever stops.
This substance, coltan, is found in the natural habitat of Gorillas (the largest living primates), and some miners have even reported finding it beneath the soil where these animals roam. Gorillas are commonly found in the eastern Democratic Republic of Congo, and this is where the mining takes place. The mining destroys the natural habitat (the comfort zone) of these Gorillas, predisposing them to death. Also, some of them are killed in the mining process.
This continual mining is not surprising, as the number of cell phone users in the United States is up to 270 million. There is also a record of about 4.1 billion cell phone users around the globe.
The number is not about to decline either, as the number of users was set to reach an estimated 4.68 billion by 2020. This also implies that more Gorillas will be killed, because existing users want phone upgrades and more people want to start using cell phones.
Where are all your used phones? I guess you have no idea where any of them are.
If you are in the United States, you probably change your phone once every 18 months. All those times you had to change your phone, did you ever think of selling your Samsung phone, as opposed to forgetting it somewhere in your house until it becomes an item to be disposed of?
Have you ever wondered where all your used cell phones currently are?
You probably never bothered about used phones because you had no reason to do so. Now, however, you certainly have a reason. Every species deserves to live—including Gorillas.
By selling your used phone or properly disposing of it in a recycling bin (for appropriate recycling), you are definitely making a huge difference in the world, right from your corner.
Now that you know that the information on recycling E-waste can save Gorillas and make a change in the world around you, what will your reaction be? | <urn:uuid:b5c9f862-3b27-44c4-a8ef-061b4e67a27a> | CC-MAIN-2023-50 | https://totlol.com/category/tech/page/3/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.963684 | 701 | 3 | 3 |
Hawthorn A Wild Edible You Can Forage
Hawthorn (Crataegus), also known as hawberry, quickthorn, whitethorn, and thornapple, is a member of the rose family and a wild-growing plant that is used for food and medicine. All parts of hawthorn, a wild edible, are edible, and foraging for hawthorn has become increasingly popular due to its versatile uses as food and herbal medicine. A quick search of the USDA Plant Database provides information for approximately 150 different species of hawthorn, ranging from shrubs to small trees that can reach upwards of 30 feet. Even more interesting, I have read there are well over 200 different types of hawthorn, many of which can be found somewhere in North America. If you are interested in foraging, get to know the types of hawthorn that grow in your area.
Hawthorn is a term that encompasses multiple species. In general, they are shrubs to small trees growing to around 20 feet or more. As members of the rose family, the branches are covered with thorns. The branches develop deep fissures that reveal an orange interior under the gray-brown exterior. The berries look much like rose hips – red and round – but can also be yellow, orange, blue, or black.
The plant leaves are wedge-shaped; on some species they have 5-7 lobes with fine teeth at the tip, while on others they are more "leaf-like" with small serrations on the edges.
Hawthorns bloom in May and are covered with clusters of small white to red flowers (depending on the specific species). The flowers give off a strong scent that is described in two very different ways – some say the blooms smell sweet and pleasant, while others describe the scent as that of a rotting corpse. Both sides agree that the fragrance of a hawthorn tree in bloom is a strong scent that can be smelled from a distance.
Wild Growing Location
Hawthorn is native to Europe and can be found in Asia, Africa, Australia, and North America. The shrub grows wild along the edges of wooded areas and thickets and grows best in moist soil that is loose and rich with decomposed plant matter.
Hawthorn growing in the wild often create a natural living fence along the edge of a wooded area and is often planted as a living fence in large landscapes.
Flavor and Uses
Hawthorn is a wild edible: its berries have a tart flavor, while the plant leaves have a light floral flavor. The berries and leaves are used in the making of tea, wine, jelly, jam, ketchup, infused oil, and vinegar.
The young leaves and flowers are gathered in the spring and used in a fresh green salad. The leaves can be harvested anytime for making tea.
The berries ripen in early fall and will be at their peak flavor after the first frost of fall. They can be harvested before frost but will have a tarter flavor.
The leaves, flowers, and berries are used to make tea for drinking or tinctures. The tea can also be used to add flavor to foods like rice or pasta by using it as a cooking liquid.
The edible plant parts are rich in vitamins B and C, fiber, and loaded with antioxidants. Antioxidants neutralize the free radicals (unstable molecules) in the body that are precursors to many chronic diseases, including cancer, heart disease, and diabetes.
Hawthorn is also a powerful anti-inflammatory that helps reduce the amount of inflammation in the body. Chronic inflammation can lead to debilitating diseases like diabetes, cancer, and asthma.
Hawthorn extract (tincture) has been shown in studies to significantly reduce the amount of blood fat in the body. Lowering the blood fat reduces high cholesterol to help reduce the risk of heart attack and stroke.
The natural fiber content of the berries aid in digestion and help improve gut health. The berries keep food moving swiftly through the digestive process for better elimination. Hawthorn extract has been shown in studies to provide a protective coating on the lining of the stomach to help treat and/or prevent stomach ulcers.
Hawthorn extract is rich in polyphenols (micronutrients) that are beneficial for skin and hair. One study shows that hawthorn extract is good for stimulating hair growth because it increases the size and number of hair follicles.
To harvest the leaves and flowers, prune off some of the branches from the tree in spring when the shrub is in bloom. If you are on the side of describing the flowers as smelling bad, the smell will fade as the flowers dry and the dried flowers don’t taste as bad as they smell.
Place the small branches with flowers and leaves intact in a paper bag and hang the bag upside down in a warm location until they dry. The dried leaves and flowers will be easy to remove from the branches, just be careful of the thorns.
Harvest the berries by carefully picking them off the plant in late summer or fall. Place them in a single layer in a warm location to dry or use a dehydrator to dry.
Grow Your Own
Plant hawthorn seeds in late February. Mix compost and leaf mold into the soil, plant 2 seeds in a hole that is 2-inches deep, and water well. Keep the soil moist until the seeds germinate.
You can start a new plant by taking a cutting from an older plant. Take a 10-inch cutting in spring, remove leaves, dip the cut end into rooting hormone and insert 2-inches deep into a container of potting soil. Place container in a shaded area and allow the roots to develop then transplant outdoors.
Hawthorn a Wild Edible Notes of Interest
* Hawthorn has long been used as a natural way to control high blood pressure, lower high cholesterol, improve circulation, and increase blood flow to the heart. Hawthorn widens the blood vessels and increases the amount of blood that is pumped out of the heart during contractions.
* Hawthorn supplements typically include all parts of the plant. The leaves and flowers contain more antioxidants than the berries.
* Honey bees love the hawthorn shrub when it’s in full bloom. The abundant pollen produced by the flowers helps the bees create dark, nut-flavored honey known as ‘Hawthorn honey’.
*Tinctures and salves are also made from various parts of the hawthorn plant to treat skin disorders, like boils and open sores. | <urn:uuid:25a789c0-959d-42ef-a2fc-78f1f64839bd> | CC-MAIN-2023-50 | https://traderscreek.com/tag/survival-training/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.948107 | 1,351 | 2.6875 | 3 |
Double Mer Point Archaeology
This project was initiated by the community of Rigolet in 2013 to provide a destination for tourists to learn about Inuit history. The community invited Lisa and her team to excavate the late 18th century Labrador Inuit winter village to learn as much as possible about the lives of its inhabitants. Once the village has been reconstructed, this information will be used for interpretation and education within Rigolet and for tourists. The Double Mer Point winter village is composed of three Inuit sod-walled dwellings and associated middens (garbage/discard areas) located near Rigolet. Nearby there are also the remnants of summer tent camps which were used by Inuit when the weather was warmer. The site itself marks the end, or destination, of the boardwalk from town, approximately 8.5 kilometers to the northeast of the modern community of Rigolet. This site is particularly interesting because it was first occupied when the Inuit operated a long-distance coastal trade network; the network served as a link between their traditional communities, and offered a means to exchange Inuit-produced goods with European fishermen and traders in southern Labrador, who offered European goods. During the life of the occupation, its inhabitants would have witnessed the arrival of Moravian missionaries, and ultimately the first European settlers in the region. This project has a strong focus on Rigolet history and culture, which seeks to stimulate tourism activity, revive local traditions, train youth, engage Elders, and celebrate the strong culture of the community.
The Double Mer project has been very active since the initial 2013 community invitation; it has offered multiple employment opportunities for community youth and boat drivers, provided locally relevant education opportunities for community members, and promoted local tourism. The establishment of the Net Loft Museum in the centre of town has been particularly helpful for education and tourism promotion; this field lab/museum was developed in collaboration with the Rigolet Heritage Society and has become a frequent destination for interested community members and tourists who arrive in the summer by cruise ship and coastal ferry. Community input has been instrumental in identifying objects which archaeologists have not previously encountered. Local volunteers who wanted to spend time working at the site have also been welcomed and trained. Most importantly, there has been the opportunity for community members of all ages to gather in the Net Loft Museum to discuss daily finds, engage with history and help the archaeological team understand local traditions; this has encouraged knowledge sharing between youth and Elders, as well as between local knowledge bearers and archaeological students. Excavations at the village have also provided opportunities to train a new generation of archaeology students who work alongside high school students from Rigolet. Three students have gone on to produce valuable Master's theses describing the life of Inuit residing in each of the three houses. Once the excavation is complete, the community will begin the work to reconstruct the village as a tourist destination and a teaching place.
While still ongoing, the excavations of the Double Mer Point winter village have revealed much about the daily lives of its inhabitants. We now know that the village was occupied from the late 18th century to the early 19th century. Villagers survived the long winters by hunting seal and caribou as well as fishing for salmon. They butchered their food using traditional tools such as ulus and cooked in soapstone pots hung over kudliks. Much of the winter would be spent hunting, sewing clothes, producing elaborate beadwork, beautiful carvings, and playing games. The families from Double Mer Point participated in long-distance trade, helping to move European-manufactured goods, like beads, dishes and hunting equipment acquired from British and French fishers and traders in southern Labrador to Inuit communities along the coast. They were well connected to the global economy, one of the houses even contained a Turkish pipe.
The excavations at Double Mer Point allowed a wonderful opportunity for the archaeology team from Memorial University and community members from Rigolet to get to know one another. Each year we held community meetings to explain our finds and interpretations to the town. The Net Loft Museum provided a place for archaeologists to interact with community members daily to learn about the use and importance of the artifacts we found. The annual Rigolet Salmon Festival provided a great opportunity to discuss the excavations with people returning home for the celebration. The support from the Rigolet community was overwhelming: they housed us, provided us with research space, took us fishing and exploring, and shared their pride and enthusiasm for their history and culture.
At the request of the Rigolet Inuit Community Government we have developed an all-weather interpretive plaque in Inuttitut and English to describe life at the Double Mer Point winter village. More plaques will be added at destinations along the boardwalk and at other archaeological sites soon.
In 2016 the cast and crew of the APTN series Wild Archaeology (Archaeology from an Indigenous Eye) spent a week in Rigolet filming and interviewing community members and archaeologists about the excavations at the Double Mer Point site. Their work resulted in a two-episode series The Inuit of Rigolet broadcast nationally by APTN in January 2017.
Excavations only take place in the summer, but a lot of archaeological work stabilizing artifacts, reconstructing them and learning more about them goes on in the laboratory during the winter. From the beginning of the project we wanted to develop a way for all interested community members to find out more about the archaeology at all stages. We created the Rigolet Community Archaeology Facebook page to share pictures of the excavation, the artifacts, the team and all the lab work at every stage of the process.
Media & Publications
Memorial University President’s Award for Engaged Partnership; Presented to Mayor Jack Shiwak of the Rigolet Inuit Community Government and Lisa Rankin for outstanding collaboration between the University and the community of Rigolet.
Community Project: Connecting the Present and Past in Rigolet; This is an informational video produced by the OPE department of Memorial University, with footage recorded by a local Rigolet Company, Bird’s Eye Inc. | <urn:uuid:f6671e60-3e42-426f-85a0-59f74573909c> | CC-MAIN-2023-50 | https://traditionandtransition.com/projects/double-mer-point-archaeology/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.960585 | 1,261 | 2.84375 | 3 |
Adolescence is a crucial period that requires careful monitoring and support to protect the well-being of teenagers. In Australia, suicide has tragically become the leading cause of mortality among teenagers, and self-harm is alarmingly prevalent, affecting 18% of adolescents aged 14-17. However, risk assessment and intervention strategies have been limited in their effectiveness, particularly for adolescents outside healthcare settings.
In an effort to address this critical issue, researchers from UNSW Sydney, the Ingham Institute for Applied Medical Research, and the South Western Sydney Local Health District (SWSLHD) have made groundbreaking progress in the field of mental health. They have developed machine learning (ML) models that significantly improve the ability to predict the risk of suicide and self-harm attempts in adolescents, surpassing the accuracy of standard approaches that rely solely on previous attempts as a risk factor.
Machine learning algorithms have provided a powerful framework for analyzing vast amounts of patient data in mental health. By detecting potential risk factors and evaluating their predictive capability regarding suicide and self-harm attempts, ML algorithms offer valuable insights into identifying at-risk individuals.
Dr. Daniel Lin, a leading psychiatrist and mental health researcher affiliated with UNSW, the Ingham Institute, and SWSLHD, emphasized the importance of utilizing machine learning algorithms to process and interpret an overwhelming amount of information beyond the capacity of clinicians alone.
In their study, the researchers analyzed data from the Longitudinal Study of Australian Children, a comprehensive research initiative tracking children nationwide since 2004. The study comprised 2809 participants divided into two age groups: 14-15 years and 16-17 years. Data from questionnaires completed by the children, their caregivers, and their instructors unveiled significant insights. Within the past 12 months, 10.5% of participants reported an act of self-harm, while 5.2% reported attempting suicide.
Furthermore, Dr. Lin underlined the challenge of underreporting these behaviors, suggesting that the actual figures may be even higher. By analyzing over 4,000 potential risk variables, including mental health, physical health, social interactions, and the home and school environment, the researchers employed a sophisticated machine learning approach known as the random forest classification algorithm.
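To make the modelling approach concrete, here is a minimal, hypothetical sketch — not the study's actual code, data, or variable list — of how a random forest classifier can rank candidate risk factors by importance. It assumes scikit-learn and NumPy; the feature names, the synthetic outcome model, and all numeric settings are invented for illustration.

```python
# Hypothetical sketch: random-forest risk classification with
# feature-importance ranking, in the spirit of the study described
# above. Data, feature names, and settings are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for questionnaire data: 2809 adolescents and a
# handful of illustrative risk variables (the real study screened
# over 4,000 candidate variables).
n = 2809
features = [
    "depressed_mood", "emotional_problems", "behavioral_problems",
    "self_perception", "school_relationship", "family_relationship",
]
X = rng.normal(size=(n, len(features)))

# Synthetic outcome: self-harm risk loosely driven by the first three
# variables, with a base rate near the reported 10.5%.
logits = 0.9 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2] - 2.4
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# class_weight="balanced" compensates for the rarity of positive cases.
model = RandomForestClassifier(
    n_estimators=500, class_weight="balanced", random_state=0
)
model.fit(X_tr, y_tr)

# AUC is preferred over raw accuracy for imbalanced outcomes like these.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"ROC AUC: {auc:.3f}")

# Rank candidate risk factors by how much the forest relies on them.
for name, imp in sorted(
    zip(features, model.feature_importances_), key=lambda t: -t[1]
):
    print(f"{name:22s} {imp:.3f}")
```

In a real screening setting, the probability outputs would additionally need calibration and careful threshold selection before any clinical use.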
Through their analysis, the researchers identified depressed moods, emotional and behavioral issues, self-perception, and school and family relationships as the most influential risk factors for suicide and self-harm attempts. Their findings underscore the significant role played by an individual’s environment, offering opportunities for prevention and intensified support measures.
Parental and school support emerged as crucial protective factors, prompting the need for society to prioritize initiatives that enhance both parenting and education. Recognizing the impact of a lack of self-efficacy on suicide and emotional regulation on self-harm, researchers insist on the importance of empowering adolescents to take control of their environment and emotions.
While further studies are warranted to validate the effectiveness of these machine learning models in therapeutic settings, these advancements offer promising prospects for revolutionizing risk assessment and intervention strategies. Applying the models to real-world clinical datasets and investigating the influence of different risk factors on behavior will shape a more comprehensive understanding of the complex factors contributing to adolescent suicide and self-harm.
What is the current leading cause of mortality among teenagers in Australia?
Suicide is currently the leading cause of mortality among teenagers in Australia.
What percentage of Australian adolescents aged 14-17 engage in self-harm?
18% of Australian adolescents aged 14-17 engage in self-harm.
How have machine learning models contributed to predicting suicide and self-harm risk in adolescents?
Machine learning models have significantly enhanced the prediction of suicide and self-harm risk in adolescents. By analyzing vast amounts of patient data and identifying potential risk factors, these models offer invaluable insights into at-risk individuals.
What are the most relevant risk factors for suicide and self-harm in adolescents?
Depressed moods, emotional and behavioral issues, self-perceptions, and school and family relationships are identified as the most relevant risk factors for suicide and self-harm in adolescents.
How can parental and school support promote prevention and support measures?
Parental and school support play a crucial role in protecting adolescents. Prioritizing initiatives that enhance parenting and education is essential to better equip younger generations in navigating their environment and emotions. | <urn:uuid:3296f253-8bc5-426f-a36f-d5ac6e4fdec5> | CC-MAIN-2023-50 | https://ts2.ai/machine-learning-models-to-predict-adolescent-suicide-and-self-harm-risk/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.93597 | 873 | 3.0625 | 3 |
Florida is a popular tourist destination due to its scenic beaches, theme parks, and natural landscapes, but beneath its sunny exterior lies a darker side. The state is home to several abandoned sites that are now in a state of decay and are known for their spooky histories. One such place is the G. Pierce Wood Memorial Hospital, a former psychiatric institution located on the Carlstrom military airfield premises. This unsettling relic is a reminder of its grim past and the horrors that once took place within its walls.
The History of Carlstrom Field
The G. Pierce Wood Memorial Hospital has a disturbing past that dates back to its origins as Carlstrom military airfield. The airfield was founded in 1917, located just south of Arcadia, in response to the United States’ involvement in World War I. It was primarily used as an advanced school for pursuit pilots, providing a six-week course that could accommodate up to 400 students. Following the conclusion of World War I in November 1918, the airfield’s activities gradually decreased and eventually led to its closure in 1926.
In March 1941, the Riddle Aeronautical Institute took over the operation of Carlstrom Field due to the high demand for primary pilot training during World War II. Under their operation, the 53d Flying Training Detachment was activated and led by Brigadier-General Junius Wallace Jones. Interestingly, Jones had learned how to fly at Carlstrom himself.
Carlstrom Field, built alongside remnants of World War I-era structures, boasted a unique layout. The buildings were grouped within a circular road, while the southern perimeter was encircled by five hangars. Interestingly, flying operations were carried out from a vast 1-square-mile grass field, as no paved runway was ever built.
After World War II had ended, Carlstrom Field stopped its operations and was later transformed into the G. Pierce Wood Memorial Hospital in 1947.
The Horror of G. Pierce Wood Memorial Hospital
George Pierce Wood Sr., a former Florida House of Representatives Speaker and a staunch supporter of mental health, was the namesake of the hospital. Despite this, mental health advocates claimed that the hospital did not provide adequate care. The hospital was investigated by the U.S. Department of Justice’s Civil Rights Division in the mid-1990s, following reports of abuse that included resident deaths, sexual assaults, and beatings.
Back in 1995, Governor Lawton Chiles received a letter from the U.S. Justice Department that shed light on the disturbing reality of the facility. The letter detailed nine deaths that had occurred there, each one more tragic than the last. One of the cases involved a man who was suffering from delusions and amnesia. He managed to flee the facility, which lacked proper fencing, only to be found dead a month later near a tree that was a mile away. Another patient, who also fled, ended up taking their own life in a nearby orange grove. These incidents serve as a harsh reminder of the dire consequences that can arise when proper care and security measures are not in place.
In a heartbreaking incident, a woman who had fled was hit by a car and lost her life on the Pennsylvania Turnpike. What makes this even more alarming is that some patients, who were supposed to be on a soft food diet, choked to death on solid food. Sadly, these were not isolated incidents, and as a result, the state closed down the facility within six years primarily due to financial reasons. As a result, around 300 patients had to be relocated to other state-run mental institutions and community treatment programs.
The Eeriness of G. Pierce Wood Memorial Hospital Today
As of today, the G. Pierce Wood Memorial Hospital stands alone, abandoned, and forlorn. The once vibrant and functional buildings are now covered in graffiti, with broken windows and peeling paint, and surrounded by overgrown vegetation and ominous barbed wire fences. The remnants of the past can still be seen in certain rooms, with old furniture, medical equipment, and the personal belongings of former patients and staff strewn about. Of particular note is the hospital’s seventh floor, which used to house a psychiatric ward that included a receiving unit for Baker Act patients. These patients were subjected to involuntary examination and treatment under Florida law due to mental health concerns.
For many years now, brave urban explorers and those fascinated by the paranormal have dared to enter the hospital’s premises. They have reported experiencing eerie occurrences, such as unidentifiable sounds, ghostly whispers, and even sightings of apparitions. There are those who firmly believe that the hospital is indeed haunted, with the spirits of those who suffered and died within its walls still wandering its corridors. Alternatively, some believe that the hospital is a place where malevolence thrives, and darkness looms.
Florida’s G. Pierce Wood Memorial Hospital is a site shrouded in mystery and dark history. Its rundown appearance and disturbing past are constant reminders of the injustices that have plagued the mental health system, leading to tragic outcomes for many. The hospital’s walls hold untold stories that continue to intrigue visitors to this day. Regardless of one’s beliefs in the supernatural, there’s an undeniable chill that overtakes anyone who sets foot in this eerie, abandoned institution. | <urn:uuid:aadc7ce1-6039-440e-9a41-04acc0079ced> | CC-MAIN-2023-50 | https://ucreview.com/this-abandoned-florida-psych-ward-is-one-of-the-eeriest-places-in-the-state/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.973687 | 1,080 | 2.828125 | 3 |
How the Church Fathers Read the Bible: A Short Introduction by Gerald Bray
Read the Scriptures with the insight of our forebears. Christians live in the house built by the church fathers. Essential Christian doctrines were shaped by how figures such as Justin Martyr, Irenaeus, and Augustine read the Bible. But appreciating patristic interpretation is not just for the historically curious, as if it were only a matter of literary archaeology. Nor should it be intimidating. Rather, the fathers gleaned insights from Scripture that continue to be relevant to all Christians.
How the Church Fathers Read the Bible is an accessible introduction to help you read Scripture with the early church. With a clear and simple style, Gerald Bray explains the distinctives of early Christian interpretation and shows how the fathers interpreted key Bible passages from Genesis to Revelation. Their unique perspective is summed up in seven principles that can inspire our Bible reading today. With Bray as your guide, you can reclaim the rich insights of the fathers with reverence and discernment.
- Publisher : Lexham Press (April 13, 2022)
- Language : English
- Hardcover : 184 pages
- ISBN-10 : 1683595831
- ISBN-13 : 978-1683595830
- Item Weight : 1.74 pounds | <urn:uuid:1861a866-3dfe-4c32-8c2f-e0bdcfbc7650> | CC-MAIN-2023-50 | https://ue.masters.edu/collections/books/products/how-the-church-fathers-read-the-bible-a-short-introduction | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.897506 | 262 | 2.75 | 3 |
Nasa officials said they have received no reports of damage or injury so far from the reentry, which occurred in the wee hours of the morning in Sudan.
Most of the 660-pound (300-kilogram) satellite, called Rhessi, was expected to burn up while plummeting through the atmosphere. But experts anticipated some pieces would survive and slam into the ground.
Launched in 2002, Rhessi was turned off in 2018 following a communication problem. Before falling silent, it studied solar flares and coronal mass ejections from the sun.
Rhessi stands for the Reuven Ramaty High Energy Solar Spectroscopic Imager. | <urn:uuid:13d9a128-b590-4cdc-a378-6b9834992ab8> | CC-MAIN-2023-50 | https://ufcfight.online/sports/nasa-old-nasa-satellite-plunges-to-earth-over-sahara-desert/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.959326 | 136 | 2.8125 | 3 |
Blue Planet Law is the global and future-oriented environmental law needed to face the global environmental crisis in the Anthropocene, with particular attention to the link between climate action (SDG 13) and ocean sustainability (SDG 14). This open access book focuses on means of overcoming global environmental problems such as climate change, ocean degradation and biodiversity loss, and the consequent risks for human life, health, food and wellbeing. It explores how environmental law, at the international, European and national levels, might set economic and technological development on a more sustainable path. Law must engage in dialogue with other areas such as philosophy, economics, ecology, and biology. This book highlights protection of the climate and the oceans and sustainable use of natural resources, through new policies, economies and technologies, including biotechnology, with a view to the preservation of life, health, food and a healthy environment for present and future generations. The book may be seen as a contribution to UN Sustainable Development Goals 13 and 14 and a tribute to the Declaration of the United Nations Conference on the Human Environment, also known as the Stockholm Conference (1972), on its 50th anniversary.
This book is included in DOAB.
This work has been downloaded 8 times via unglue.it ebook links.
- PDF (CC BY) at Unglue.it — 8 downloads.
- Biology, Life Sciences
- Climate change law
- Earth sciences
- Ecological science, the Biosphere
- EU Green Deal
- Global environmental sustainability
- green taxes
- Hydrology & the hydrosphere
- International Environmental law
- International law
- Law and renewable energy
- Life sciences: general issues
- Marine genetics and living resources
- Mathematics & science
- Ocean sustainability
- Oceanography (seas)
- Public International Law
- SDG 13
- SDG 14
- The environment
| <urn:uuid:5cca234f-6922-43a2-b163-dbdf8091bd95> | CC-MAIN-2023-50 | https://unglue.it/work/569339/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.887873 | 446 | 2.90625 | 3
Which of the following can you not conclude from the information given in the passage?
The compact disc, or CD, has become an important part of the music industry. For a long time, music recordings were stored on large “records,” called LPs for “long-playing.” Next, we used 8-track and cassette tapes to play music. Finally, the CD was invented and it has taken over the musical storage industry. It is much smaller and takes less space to store than any of the previous recording methods.
Cassette tapes came before CDs.
Cassette tapes came before 8-track tapes.
The CD is a modern invention.
The CD is easy to store.
| <urn:uuid:da4a1090-42be-4ca1-b708-961e700cc0bf> | CC-MAIN-2023-50 | https://uniontestprep.com/accuplacer-test/practice-test/esl-reading-skills/pages/40 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.969167 | 164 | 3.15625 | 3
Head of working group
AG Hydrogeology and Landscape hydrology
IBU, Fk. V, Building A1
Carl von Ossietzky Universität
Room: A1 1-130
Phone: ++49 (0) 441 / 798 - 4236
Fax: ++49 (0) 441 / 798 -3769
University for Children (Kinder-Universität)
Explanation to the film 'Is there water under the ground?'
The film is intended to prove that there is also water underground (often very close to the surface in Germany). This water is called groundwater; it is normally not visible and is therefore a "secret treasure".
Explantation to the film "Water balance Germany":
The film is intended to show how high the water would stand on average in Germany if it accumulated over a year (up to the shoulder of the 1.20 m tall Lina). In the next step, the part of the precipitation that evaporates on average in Germany disappears again (most of it; the water now still stands up to the top edge of the rubber boots). If you subtract the portion that corresponds to the surface runoff, about 10 cm of water height (water up to the ankle) remains, which seeps into the subsoil and forms new groundwater.
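As a rough, back-of-the-envelope reading of the film (the numbers here are illustrative approximations, not official figures), the heights correspond to the simple annual water balance

G = P − ET − R

where P is precipitation, ET is evapotranspiration, R is surface runoff and G is groundwater recharge. Plugging in round values of P ≈ 800 mm, ET ≈ 500 mm and R ≈ 200 mm gives G ≈ 100 mm — the roughly 10 cm of ankle-deep water left at the end of the film.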
Explanation to the film 'Water balance Australia':
The film shows how high the water would stand on average in drier climates in comparison to Germany (values from South Australia, even if the background picture actually does not come from South Australia) if it accumulated over a year (only up to the lower legs of the 1.20 m tall Lina). Next, the part of the precipitation that evaporates disappears again (almost everything; water only up to the soles of the feet). Subtracting the portion that corresponds to the above-ground runoff (about 1 cm), about 0.3 cm of water height remains (i.e. at most a few puddles), which seeps into the subsoil and forms new groundwater.
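Read the same way (again with purely illustrative round numbers), the South Australian balance G = P − ET − R might look like P ≈ 250 mm, ET ≈ 237 mm and R ≈ 10 mm, leaving G ≈ 3 mm — the few puddles' worth of groundwater recharge shown in the film.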
Explanation to the film 'How does it look under the ground?':
Using a sand pit as an example, the film is intended to illustrate what it often looks like underground (at least in the Geest): below the ground, many metres of sand follow, which were deposited as meltwater sands during the ice ages. Ulrike removes sand for an experiment which is supposed to show that water seeps away in such sands, or rather can flow as groundwater (as soon as the sands are at groundwater level). | <urn:uuid:d0162fd7-1352-4638-a2e0-eea6aa71d46f> | CC-MAIN-2023-50 | https://uol.de/en/hydrogeology/pictures-movies-simulations/university-for-children | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.924358 | 543 | 3.65625 | 4
Enhanced by citizen science data, the Environment Agency is using water quality monitoring activity to prioritise how and where to tackle phosphate pollution on the River Wye.
The Wye catchment is an iconic location and hugely important for biodiversity, principally due to the wide range of rare river wildlife.
Over 60 per cent of the phosphate load in the catchment is from diffuse agricultural pollution from livestock manure and nutrients washing into the river during heavy rain.
A range of partners through the Wye Nutrient Management Board, including the Environment Agency, Natural England and Natural Resources Wales, are working collaboratively to address concerns about phosphate levels in the catchment and drive forward nature recovery.
Based on the latest water quality monitoring report, which has been enhanced by new data obtained by an ongoing citizen science monitoring programme, the Environment Agency has made a series of recommendations on where actions are most needed and the locations across the catchment where those actions can have the most impact.
This includes a recommendation that partners take a catchment-based approach targeting five upstream areas of the river that have high phosphate levels relative to the wider catchment.
Evidence also indicates that efforts to increase shade by tree planting and better management of riparian trees could help mitigate high temperatures.
The Environment Agency is developing an algal bloom early warning system to respond to excessive temperatures, with advice for anglers and river users.
The latest water quality monitoring report, along with analysis, activities and plans to improve water quality in the Wye and Lugg river catchments, can be found on the Environment Agency’s recently launched Wye Water Monitoring webpage.
Grace Wight, Area Environment Manager for the Environment Agency, said: “The inclusion of citizen science data in our latest monitoring report is a fantastic step in combining more data sources and is already helping identify where measures can be targeted to reduce the inputs of pollution to the river.
“The scale and complexity of the challenges in the catchment means solutions will take time and must be founded on a strong evidence base.
“Through our new website and ongoing engagement work, we will continue to update our partners and stakeholders so that we can work together to tackle pollution and ensure the future health of the river.”
The Environment Agency continues to scale up its monitoring operation to inform next steps on the Wye.
£100,000 has been secured for the Wye and Lugg catchments from a national fund that supports agricultural regulation.
The additional funds will supply monitoring equipment that will capture real-time, high-resolution data over the catchments.
The equipment comprises 10 sondes and 6 auto-samplers that will be positioned across the rivers Wye, Lugg and Arrow.
| <urn:uuid:02de0577-de93-471d-8d71-81afa0aeaf5e> | CC-MAIN-2023-50 | https://vanlifematters.co.uk/environment-agency-boosts-water-quality-monitoring-to-help-tackle-pollution-on-the-river-wye | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00000.warc.gz | en | 0.932063 | 588 | 3.125 | 3