“Ease the Tension - Quell the Conflict - Stop the Killing - End the Suffering!”
Each year in Sri Lanka, human-elephant conflicts kill 80 people and 200 wild elephants and force countless orphaned and poached baby elephants into a grim, chained life in captivity. Since 1995, the Sri Lanka Wildlife Conservation Society has implemented creative solutions to human-elephant conflicts, which save elephants by helping people. We engage with people at the grassroots level and in government agencies, working systematically to build capacity, foster leadership and empower Sri Lankans at all levels to support sustainable, long-term conservation and animal welfare success.
How we helped
Our new, groundbreaking EleVETS initiative will enhance the health and welfare of Sri Lanka’s captive elephants through veterinary care training and mentorship. For millennia, Sri Lankans have maintained a sacred fellowship with Asian elephants. They are part of their culture, economy, ecosystem and traditions. Yet, there is a paradoxical reality behind this longstanding relationship: captive elephants are forced into a lifetime of chained servitude characterized by chronic pain, isolation and suffering. More than 250 Asian elephants live in captivity in Sri Lanka. Many receive no professional veterinary care. This often leads to chronic disorders: irreversible joint damage, painful foot abnormalities, parasitic disease, intestinal illness, infections and abscesses, and stereotypical behaviors—common symptoms of neglect of the complex biological, emotional, and physical needs of elephants.
Lush support will allow the Sri Lanka Wildlife Conservation Society (SLWCS) and its key partners to work hand-in-hand with Sri Lanka’s practicing veterinarians, educational institutions, wildlife management departments and elephant owners to bring about sustainable improvements in the health, care and management of captive elephants, and lay essential groundwork for the development of “New Life Elephant Sanctuary”, Sri Lanka’s first sanctuary for captive elephants.
There is an unofficial motto at SLWCS: “Never let an opportunity go by that could make a difference.” With more than 20 years of grassroots success in Sri Lanka, and with the support of good friends like Lush, SLWCS is ideally positioned to lead EleVETS to success and make a lasting difference.
COYLE, Mary (ca 1814 – 1846)
Mary Coyle was born in Ireland around 1814, the daughter of Peter Coyle and Mary Brennan. At some point, she emigrated from Ireland to Quebec.
From 1830 onwards, Coyle was imprisoned over and over again in the Quebec common gaol – almost 80 times between 1830 and 1846, for periods ranging from eight days to two months. She was almost always accused of being “loose, idle and disorderly”, a catch-all offence which covered a wide range of undesirable behaviours, from vagrancy to prostitution. Middle-class morality in the nineteenth century was less and less tolerant of women such as Coyle who did not conform to feminine ideals. In all, she spent over half of her life in prison between 1830 and 1846.
Coyle was part of a small group of “disorderly” women, mainly of Irish origin, often prostitutes, who made up a large proportion of the female prisoners in the Quebec common gaol. In 1829, a separate building was set aside as a women’s prison. The prison was much more than just a place of punishment. It also served as a refuge in a city where there were few resources available for poor, “immoral” women, especially during the long, cold winters. There were even many women who asked to be locked up in prison, by voluntarily confessing to being “loose, idle and disorderly”. This was better than dying outside. In prison, most of these women, like Coyle, were put to hard labour. This was less rigorous than one might think: for women, it meant a few hours a day sewing, or picking apart a limited quantity of oakum (old hemp ropes, used to make caulking for ships).
Coyle briefly stepped out of anonymity in 1837, when she was one of a group of women who signed a petition against the matron of the women’s prison, Elizabeth Cooke and her husband, Thomas. The petition, sent to the colony’s governor, Lord Gosford, accused the Cookes of a series of misdeeds. Among others, the petitioners accused the Cookes of imposing arbitrary punishments on them and of illegally selling them provisions, at inflated prices. They also claimed that the Cookes were drunkards who constantly fought each other, to the point that it was the prisoners who had to separate them. The petition was a good example of how women prisoners were willing to resist an administration which did not correctly manage “their” prison. Such resistance, however, had serious limits. Gosford ordered an investigation by the sheriff, William Smith Sewell (who was responsible for the prison). Sewell’s report cleared the Cookes of any wrongdoing, and though Thomas left the institution soon afterwards, Elizabeth stayed on as matron until 1840. As for Mary Coyle, she went back to the usual routine of incarceration, release, and incarceration again.
This hard life eventually killed Mary Coyle. She died at the Hôtel-Dieu hospital in Quebec City on April 8, 1846, aged 32. She had been imprisoned for the last time on March 16, at her own request, for two months at hard labour. The prison register does not show when she was transferred to the Hôtel-Dieu, but she evidently became ill while she was in prison (though not necessarily because of conditions in the prison itself). Although the women’s prison was a refuge, it was far from an ideal one for women like Mary Coyle.
– Donald Fyson, June 2015
- Bibliothèque et Archives nationales du Québec, «Le fichier des prisonniers des prisons de Québec au 19e siècle», <http://www.banq.qc.ca/archives/genealogie_histoire_familiale/ressources/bd/instr_prisons/prisonniers/index.html> (transcription of the Quebec common gaol registers, the originals of which are in Bibliothèque et Archives nationales du Québec, Centre de Québec, E17,S1).
- Library and Archives Canada RG4 A1 vol. 508 file 1837-04
- Fyson, Donald. “Prison Reform and Prison Society: The Quebec Gaol, 1812-1867”. In Louisa Blair, Patrick Donovan and Donald Fyson, From Iron Bars to Bookshelves: A History of the Morrin Centre (Montreal: Baraka Books, forthcoming).
Much of what is known about the past often rests upon the chance survival of objects and texts. Nowhere is this better illustrated than in the fragments of medieval manuscripts re-used as bookbindings in the sixteenth and seventeenth centuries. Such fragments provide a tantalizing, yet often problematic glimpse into the manuscript culture of the Middle Ages. Exploring the opportunities and difficulties such documents provide, this volume concentrates on the c. 50,000 fragments of medieval Latin manuscripts stored in archives across the five Nordic countries of Denmark, Finland, Iceland, Norway and Sweden. This large collection of fragments (mostly from liturgical works) provides rich evidence about European Latin book culture, both in general and in specific relation to the far north of Europe, one of the last areas of Europe to be converted to Christianity.
As the essays in this volume reveal, individual and groups of fragments can play a key role in increasing and advancing knowledge about the acquisition and production of medieval books, and in helping to distinguish locally made books from imported ones. Taking an imaginative approach to the source material, the volume goes beyond a strictly medieval context to integrate early modern perspectives that help illuminate the pattern of survival and loss of Latin manuscripts through post-Reformation practices concerning reuse of parchment. In so doing it demonstrates how the use of what might at first appear to be unpromising source material can offer unexpected and rewarding insights into diverse areas of European history and the history of the medieval book.
Table of Contents
1. Piecing Together the Past: The Accidental Manuscript Collections of the North
[Tuomas Heikkilä and Åslaug Ommundsen]
2. Reflections on Nordic Latin Fragment Studies Past and Present Together with Three Case Studies
3. The Recycling of Manuscripts in Sixteenth-Century Sweden
4. From Fragments Towards the Big Picture: Reconstructing Medieval Book Culture in Finland
5. The Problem of the Provenance of Medieval Manuscript Fragments in Danish Archives
[Michael H. Gelting]
6. A Norwegian – and European – Jigsaw Puzzle of Manuscript Fragments
7. Latin Fragments Related to Iceland
[Guðvarður Már Gunnlaugsson]
8. Danish Fragments in Norway and their Connections to Twelfth-Century Lund
9. Iceland and Norway: Separate Scribal Cultures versus Cultural Exchange
10. Birgittine Books in the Nordic Fragment Collections
Åslaug Ommundsen is a researcher and project leader at the University of Bergen.
Tuomas Heikkilä is the director of the Finnish Institute in Rome. He holds docentships at the Universities of Helsinki (general and church history) and Turku (Finnish history).
Head and neck cancer occurs when certain cells within the head and neck grow in an uncontrolled, abnormal manner. The majority of head and neck cancers begin in cells lining the mouth and nose, including the lips, tongue, gums, sinuses, nasal cavity, pharynx and larynx.
According to the National Cancer Institute, more than 55,500 new cases of head and neck cancer were diagnosed and approximately 12,598 people died from the disease in 2009. Head and neck cancers account for approximately 3 to 5 percent of all cancers in the United States.
Treatment of head and neck cancers includes surgery, radiation therapy and chemotherapy. New developments in molecular imaging technologies are dramatically improving the ways in which head and neck cancer is diagnosed and treated. Research in molecular imaging is also contributing to our understanding of the disease and directing more effective care of patients with head and neck cancer.
What molecular imaging technologies are used for head and neck cancers?
The most commonly used molecular imaging procedure for diagnosing or guiding treatment of head and neck cancer is positron emission tomography (PET) scanning, which is often used in conjunction with computed tomography (CT) scanning, and sentinel node biopsy.
What is PET?
PET involves the use of an imaging device (PET scanner) and a tiny amount of radiotracer that is injected into the patient’s bloodstream. A frequently used PET radiotracer is fluorodeoxyglucose (FDG), which the body treats like the simple sugar glucose. It usually takes between 30 and 60 minutes for the FDG distribution throughout the body to become fixed. A CT is also obtained on the same machine so the FDG and CT scans can be fused together and compared.
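One practical consequence of using FDG is timing: fluorine-18, the radioactive label in FDG, decays with a physical half-life of roughly 110 minutes, so the injected dose fades quickly and scans are scheduled promptly after the uptake period. The short sketch below illustrates that decay; the 10 mCi dose and the time points are hypothetical examples, not clinical guidance.

```python
import math

F18_HALF_LIFE_MIN = 109.8  # approximate physical half-life of fluorine-18, in minutes

def remaining_activity(initial_mci: float, minutes_elapsed: float) -> float:
    """Activity left after simple exponential radioactive decay (same units as input)."""
    decay_constant = math.log(2) / F18_HALF_LIFE_MIN
    return initial_mci * math.exp(-decay_constant * minutes_elapsed)

# Hypothetical example: a 10 mCi injection followed by a 60-minute uptake period.
for t in (0, 30, 60, 110, 220):
    print(f"{t:>3} min after injection: {remaining_activity(10.0, t):.2f} mCi")
```

After one half-life (about 110 minutes) roughly half the activity is gone, which is one reason the 30- to 60-minute uptake window matters.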
How is PET used for head and neck cancer?
PET scans are used to:
- diagnose and stage: by determining the location of the cancer and where it has spread in the body; patients with head and neck cancer are scanned from the top of the head to the thighs
- plan treatment: by determining a site that is appropriate for biopsy and, in research studies, by helping to select the best therapy based on the unique biology of the cancer and of the patient
- evaluate treatment: by showing how the cancer responds to treatment and distinguishing changes due to radiation therapy from a cancer recurrence, which can be difficult to determine with CT alone
- manage ongoing care: through early detection of cancer recurrences.
What are the advantages of PET for people with head and neck cancers?
- PET scanning is the most significant advance in the staging of head and neck cancers in recent years
- PET is a powerful tool for diagnosing and determining the stage of many types of head and neck cancers
- PET and PET-CT scans prompt changes in the treatment of more than one-third of patients registered in the National Oncologic PET Registry (NOPR)
- The National Comprehensive Cancer Network (NCCN) has incorporated PET-CT into the practice guidelines for most malignancies
- By detecting whether lesions are benign or malignant, PET scans may eliminate the need for surgical biopsy or, if biopsy is necessary, identify the optimal biopsy location
- PET scans help physicians choose the most appropriate treatment plan and assess whether chemotherapy or other treatments are working as intended
- PET scans are currently the most effective means of detecting a recurrence of cancer.
What is the future of molecular imaging and head and neck cancers?
In addition to increasing our understanding of the underlying causes of disease, molecular imaging is improving the way disease is detected and treated. Molecular imaging technologies are also playing an important role in the development of:
- screening tools, by providing a non-invasive and highly accurate way to assess at-risk populations
- new and more effective drugs, by helping researchers quickly understand and assess new drug therapies
- personalized medicine, in which medical treatment is based on a patient’s unique genetic profile
In the future, molecular imaging will include an increased use of:
- fusion or hybrid imaging, in which two imaging technologies are combined to produce one image
- optical imaging
- new probes for imaging critical cancer processes
- reporter-probe pairs that will facilitate molecular-genetic imaging
- PET-CT to help administer more targeted radiation treatments
How It Works
Laparoscopically placed around the upper part of the stomach, the band divides the stomach into a small upper pouch above the band and a larger pouch below the band. This small pouch limits the amount of food that a patient can eat at any one time, and will result in a feeling of fullness after eating a small amount of food.
Adjusting the size of the opening between the two parts of the stomach controls how much food passes from the upper to the lower part of the stomach. This opening (stoma) between the two parts of the stomach can easily be decreased or increased, by injecting or removing saline from the band. The band is connected by a tube to a reservoir placed beneath the skin during surgery. The surgeon or nurse practitioner can later control the amount of saline in the band by piercing the reservoir through the skin with a fine needle. The ability to adjust the band is a unique feature of gastric banding and is a normal part of follow-up.
Because the band is removable, adjustable and does not permanently alter the anatomy, it provides an option for patients who may not otherwise consider surgery for treatment of their obesity. Other advantages include a shorter hospital stay and no effects on the absorption of nutrients.
Expected Weight Loss
Estimated weight loss is approximately 80 to 150 pounds over two years.
In general, most patients find they are unable to easily tolerate red meat, pasta, rice, fresh bread and fibrous foods. You will be asked to eat three meals a day with one to three planned snacks, chew your food very well and swallow slowly. You must drink only calorie-free or low-calorie beverages and wait at least one hour after eating to drink.
The Lap-Band procedure also results in an effective resolution of major illnesses that ordinarily accompany obesity. Nearly 85 to 90 percent of overweight patients suffering from hypertension, diabetes, sleep apnea and other major illnesses will see a significant improvement or resolution after undergoing Lap-Band.
GASTRIC BAND SURGERY
Gastric banding, which is performed laparoscopically, is one of the least invasive approaches to obesity because neither the stomach nor the intestine is cut.
The amount of weight you lose with gastric band surgery depends on your motivation and commitment to a new lifestyle and eating habits. Gastric banding can help you achieve longer lasting weight loss by:
limiting the amount you can eat
reducing your appetite
Biomass sources & availability
The New Zealand biomass energy sector is based on a fundamental principle: responsible sourcing. That means that biomass fuel is produced entirely from a combination of the waste or residuals left from harvesting and sawmilling activities, the limited quantities of low-quality logs that need to be removed for forest enhancement or salvage projects and material that can’t be used for any other purpose.
In forestry or agriculture the residual biomass is that matter left behind after the primary production is extracted. In wood processing the residual biomass is that which is not able to be used to make other higher value timber products.
Sources of biomass
Generally, New Zealand biomass fuel is sourced from residues from the forest growing and processing sector, and from organic waste. The bioenergy and biofuels sector is a downstream processor of plantation forestry, using residues from primary wood producers.
Because the biomass fuel is sourced from wood or waste residues, it is sustainable and provides additional revenue to primary agriculture and forestry land uses. Biofuel from organic waste residues is also sustainable because it is based on what remains after material that can be recycled or reused is extracted.
Biomass from a wide range of sources can be processed into a solid biofuel. Wood pellets, chip or hog fuel are the most common solid biofuels and can be produced from any woody biomass. The least amount of pretreatment is required for biomass sourced from wood processors. Biofuels sourced from herbaceous material or organic waste are generally processed into pellet or briquette form for ease of handling.
The market for biofuels from biomass is emerging at a fast rate as the demand for solid biofuels increases. Biomass is in demand for other competing uses such as animal bedding, engineered wood products and the extraction of biochemicals for the production of bio-based materials to replace plastics currently produced from fossil fuel sources.
Municipal solid organic waste can also be chipped or briquetted and used as a solid biofuel. Paper and cardboard can be pelletised into a clean fuel.
Information on the potential availability of solid biofuels across New Zealand can be found in:
Forestry and harvest residues
- Bioenergy Options for New Zealand - Pathways Analysis
- Bioenergy Options for New Zealand - Situation Analysis
- Residual biomass fuel projections for New Zealand
Biomass from municipal waste
- NZ Biomass Resource Atlas, Vols. 1 & 2
- Pelletisation of paper and cardboard for use as a solid biofuel.
Information on biomass resources in Australia is found on the Biomass Producer website biomassproducer.com.au
In New Zealand, biomass is derived from residues from three sustainable sources: municipal waste, agricultural and food processing residues, and plantation forestry and wood processing.
Using the organic waste or residues from communities and manufacturing is a key objective of sustainable living. It also makes sound sense and can lead to the economic benefits of new products and employment. We are good at producing waste and we do it 365/24, so it is a no-brainer that we should look at the value we can extract from it. Energy, and the co-products of energy, are among those products.
Bioenergy and biofuels are an integral part of a circular economy, as they are produced from the residues of matter which cannot otherwise be reused or recycled. They potentially ensure that all residual organic matter from communities and manufacturing is reused.
Biogas technologies can be used to reduce animal effluent run-off to waterways, and agricultural residues can be used to improve the operating performance of anaerobic digester equipment.
Plan instructional units using innovative approaches
Plan effective instructional units using innovative approaches such as theme-based or project-based language learning that help K-12 students develop language proficiency and interculturality.
Teach interpretive reading and listening skills
Develop well-scaffolded instructional activities for supporting the growth of K-12 students' proficiency in interpretive reading and listening.
Teach presentational writing skills
Develop activities for supporting students throughout the writing process and helping them attend to issues such as the purpose for writing and the intended audience.
Assess language and culture learning
Develop and adapt assessments, including tests as well as rubrics, checklists, and other alternative forms of assessment, to assess student performance and select appropriate strategies for error correction and feedback to improve performance.
Integrate technology to support student learning
Integrate current technologies appropriate to language education to support student learning, develop language proficiency and cultural competence, improve professional productivity, and facilitate professional growth, making accommodations for differential access to technology.
Adapt instruction for diverse learners
Plan for adapting instruction and appropriately integrating technology to meet the individual learning needs of all students (including students with disabilities, culturally diverse learners, heritage speakers, native speakers, and gifted students) in the mainstream language classroom.
Compare and contrast program models
Compare and contrast the features of various language program models that are used in different settings for different purposes, student populations, and age groups.
Become a professionally-engaged world language educator
Participate professionally in the school community, in professional organizations and professional development activities, and engage in professional dialogue and reflection with colleagues to improve personal teaching skills and world language programs.
Recently, I argued, controversially, that one type of “othering” is what distinguishes anthropology from the other sciences concerned with humanity (from neurology to sociology). Anthropology, as disciplined practice, is here to tell “us” that human beings are always potentially “other” to any generalization about them. And a subset of “us” (anthropologists) has to keep finding ways to do this more carefully. To approach this argument from a different point of view (bias), let’s consider the classical tension brought out by Boas (and many others before and since) between, on the one hand, the “psychic unity of mankind” (all humans have the same brain bequeathed by evolution), and, on the other hand, the irreducible local and historical make of actual human conditions. This is not the case of the “universal” VS. the “particular” but rather the universality of the particularity of the human (and possibly all life). For humanity, for example, one can safely say that all human groups develop languages (most of) the people caught by them understand, and, most significant, all these languages can be learned by all humans—though it can be difficult for most.
Language is what I generally mention when teaching the fundamental concerns of anthropology, before moving on to other matters, say sex/gender or food production/agriculture (the raw into the cooked). Boas, in his general introduction (1938), mentioned various oddities in etiquette. He also mentioned repeatedly that all breaches to the expected in some group raise emotional responses among the people caught by the assembly. In my own graduate introduction into anthropology, in the late 1960s, I was taught all this through the questions raised by the multiplicity of marriage and descent regulations and practices—something that has all but disappeared from such introduction, as far as I can tell.
I’ll try today to start with something else that is a human universal: music. I will intersperse this with comments about “kinship,” hopefully to trigger some interest among current students to get back to something that remains as “real” as ever (though it may now be labeled “gender” and “class reproduction”).
I explore music because students, earlier this Fall, used music to critique Saussure and his obvious emphasis on the verbal rather than on other media of communication. I have heard many times that music can express emotions much better than any discursive effort that relies mostly on the verbal (though some would point at poetry as an argument against this). The students, perhaps unwittingly, may have been channeling Merleau-Ponty on experience and the “primacy of perception.”
But, to play anthropologists challenging “our” common sense,…
Music is certainly one candidate for attempts to differentiate homo sapiens sapiens from other apes (but perhaps not from the Neanderthals who appear to have used flutes). All apes vocalize. Whether any sing may depend on how one defines singing. The question is the same as whether various calls made by all apes classify as language. Interestingly, recent writing about this broadening of what is to count as language or music echoes the reluctance to make humans special. Paradoxically perhaps, the best evidence against “speciesism” is the work of sociobiologists, for example that of Sarah Hrdy, an anthropologist who writes about “Mother nature” (1999). In that book, she carefully reviews all we know about the higher apes of Africa, and particularly the females with children, without making a distinction between the species whether the original research is “ethological” or “ethnographic.” About all anthropologists I know find this approach very hard to accept and it is not taught as possible proper anthropology.
So, eventually, the initial stance regarding humans making music is that not only do they sing (talk) but that they do so in many different ways that keep changing even as they trigger strong emotions. To the extent that this is an hypothesis about a universal, one then has to look for the evidence (including evidence that some humans do not sing much differently from other apes–but this is very dangerous as it might bring us back to the pre-Boasian anthropology of human evolution).
So, I am with Boas and will postulate the “psychic unity of mankind” on the matter of music. As Boas insisted, one body of evidence has to be archaeological: there are signs that humans made music from at least 60,000 years ago, based on the musical instruments that have survived. One may imagine that singing is much older, but it does not leave any trace until, many millennia later, it became possible to translate singing into visual marks and notations. The other body of evidence for the universality of music is ethnographic: all human groups that have been reported to Euro-Americans do make music; they make music even in the worst of circumstances; and the music is recognizable by all as music—even when it sounds unpleasant to many.
Given the small number of basic musical instruments (wind, string, percussion) beside the voice the mythical Martian, particularly if leaning towards sociobiology and Darwinian evolutionism, arriving on earth, might not expect the multiplicity of musical genres, as well as the extent to which the musical genres of people A may grate on the ears of people B (or why some of B might adopt some genres from A even when B’s parents object). Boas, a real “other” among the Inuit, Kwakiutl and others, took the alternate route. He marveled at the multiplicity and asks us to do the same, and then to generalize. Many of his students and contemporaries in Europe also marveled at the multiplicity of kinship vocabularies and other practices. Many who followed him led the discipline into all sorts of blind alleys. But anthropologists, I am convinced, should continue to move towards the other in order to critique or justify any universal statement about humanity.
To make all this more concrete perhaps, I am linking below various examples of what human beings keep producing, musically. This sample tells more about my haphazard exploration of various YouTube rabbit holes than about anything else. I am not a musicologist but I am fascinated both by the multiplicity of the musics human can made, as well as sometimes surprised at what I “like” and what I do not… You will notice that I link here four pieces recorded in China. My idea here is that listening will lead you to ask such questions as: Are the pieces from China “Chinese”? What are your emotional reactions to any of the pieces? What are you reacting to? What do you imagine others (including the original performers) might have felt?
Boas, Franz 1938 The mind of primitive man. New York: The Free Press
Hrdy, Sarah 1999 Mother nature. New York: Ballantine Books.
San Francisco -- Although polar bears and seals have become the poster children for vanishing sea ice in the Arctic, they have thrived for a long time. The bears eat the seals, but what do the seals eat? Maybe fish, although in many parts of the Arctic fish are few in number. Even then, what do the fish eat? Researchers have not had a way to fully investigate the food web, but now they do: A new robot, called Nereid, that can cruise on its own just underneath the ice for kilometers at a time and return with data and video to a ship at the ice’s edge.
The Nereid Under Ice vehicle, built and operated by a consortium led by the Woods Hole Oceanographic Institute, completed four dives during its first Arctic mission in July. Three leading researchers from the mission described the dives yesterday during a press conference here at the American Geophysical Union fall meeting in San Francisco. My colleague Rich Monastersky at Nature described the dives in an article he posted yesterday afternoon (Scientific American is part of the Nature Publishing Group).
The researchers also released some intriguing scientific results from the dives. The most surprising is the realization that there is far more life floating around just under the Arctic sea ice, and that there is a much more complex food web, than scientists had thought. Before Nereid, scientists had a difficult time trying to probe or even see undisturbed life under the ice. But Nereid showed extensive communities of dark algae clinging to the underside of the ice. It, in turn, was being consumed by all sorts of critters, many tiny but some larger.
I caught up with Antje Boetius, a marine biologist at the Helmholtz Center for Polar and Marine Research in Bremerhaven, Germany, and chief scientist of the expedition, later on at the meeting. I wanted to know what she and her colleagues saw, and which life forms she thinks are eating which other life forms. The paragraphs below describe what she told me. Have a look, then watch the video, here, which the robot took during its dives, and you should be able to see the various life forms Boetius describes.
Because the ice is thinning from global warming, more sunlight is getting through to the underside, and a form of algae known as diatoms is growing, sometimes in spots and sometimes in large collections. In the video, these appear like dark splotches on the ice. Some of the diatoms attach to the ice, but others attach to those diatoms and float alongside. The plankton produce a kind of slime that helps them all stick together. Some also trap oxygen bubbles, which the diatoms produce, to help the extended mass float right up against the underside of the ice, allowing the community to get the maximum amount of sunlight for photosynthesis.
Tiny crustaceans called copepods float freely in the water and eat the algae. They are a main component of what looks like the “dust” you can see floating around in the water in the video. But because nutrients in the water dwindle or even disappear during the dark months of winter, when the algae’s photosynthesis basically stops, the copepods create and store lipids inside their bodies, which they live off of during those dark months.
Now for the really cool critters: ctenophores and larvaceans. These are bigger living blobs that look somewhat like small jelly fish. In the video, starting about halfway through, you can see them hovering and flapping their ghostly bodies. They eat the copepods.
But what eats the ctenophores in the under-ice region? “That we don’t know,” Boetius told me. “We saw no fish.” In warmer waters tiny fish such as anchovies and larger ones such as cod would eat these gelatinous animals, and seals and polar bears would eat the fish. Boetius is eager to return for future missions to try to complete the food web, which is obviously critical to the survival of all these animals large and small, and which in turn could affect other food webs that link the Arctic region to warmer regions at lower latitudes. Ultimately, Boetius said, “learning more will tell us what ice loss means for life on earth.”
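One way to picture what Boetius describes is as a directed who-eats-whom graph with a missing top link. As a minimal sketch, the snippet below encodes only the relationships reported in this article; the labels are simplified, and the predator of the gelatinous animals is deliberately left absent because no one saw it.

```python
# Directed "eats" relationships described in the article; keys are the eaters.
under_ice_food_web = {
    "ice algae (diatoms)": [],                   # primary producers under the ice
    "copepods": ["ice algae (diatoms)"],         # graze on the algae
    "ctenophores and larvaceans": ["copepods"],  # eat the copepods
    # Open question from the dives: nothing was observed eating the ctenophores.
}

def prey_chain(species: str, web: dict) -> list:
    """Follow the first prey link downward from a species to the producers."""
    chain = [species]
    while web.get(species):
        species = web[species][0]
        chain.append(species)
    return chain

print(" -> ".join(prey_chain("ctenophores and larvaceans", under_ice_food_web)))
```

Filling in the missing predator, and linking it onward to fish, seals and polar bears, is exactly the gap future missions hope to close.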
Photo Nereid mission courtesy of Woods Hole Oceanographic Institution
Claude Dubord & Sons, Inc.
Things the homeowner should know.
What you should know in order to identify and maintain your sewage system.
Your onsite sewage disposal system can provide trouble free service for many years if maintained properly.
ON-SITE SEWAGE DISPOSAL SYSTEMS INFORMATION
On-site waste water disposal systems such as septic systems or cesspools provide for the treatment and disposal of household waste water. Cesspools and septic systems have been known, with PROPER MAINTENANCE, to perform efficiently for many years.
Your on-site disposal system is just as important to you as your furnace. A new system of any type, or repairs to the old one, is very costly. Costs vary with the type of failure; proper maintenance is considerably less expensive.
A septic system has two major components: a septic tank and a leaching field. In the septic tank the lighter solids and grease (scum) in the wastewater float at the top and are captured and heavier solids (sludge) settle to the bottom. The effluent from the septic tank is distributed through the leaching field where it is treated by the biological organism present as it percolates into the soil. In a cesspool the walls of the pit are porous and serve the same function as a leaching field.
INDICATIONS OF A FAILING SYSTEM
- Mushy soil or standing water above or near the septic tank, cesspool or leaching field.
- Foul odor from the leaching area, nearby streams, etc.
- Any backup of sinks, toilets, or floor drains not caused by blockage of internal pipes within the house.
- Slow flushing toilets.
- Especially tall green grass above or near the leaching area, septic tank or cesspool.
The approximate composition of household sewage is 40% toilet, 15% laundry, 30% bathing, 10% kitchen and 5% miscellaneous.
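Those composition figures lend themselves to a quick estimate of where a household's wastewater comes from. The sketch below applies them to an assumed total daily flow; the 300-gallon figure is an invented example, not a number from this pamphlet.

```python
# Approximate composition of household sewage, from the pamphlet above.
SEWAGE_SHARES = {
    "toilet": 0.40,
    "bathing": 0.30,
    "laundry": 0.15,
    "kitchen": 0.10,
    "miscellaneous": 0.05,
}

def daily_breakdown(total_gallons_per_day: float) -> dict:
    """Split an assumed total daily wastewater flow by source."""
    return {source: share * total_gallons_per_day
            for source, share in SEWAGE_SHARES.items()}

# Hypothetical example: a household producing 300 gallons of wastewater per day.
for source, gallons in daily_breakdown(300).items():
    print(f"{source:>13}: {gallons:5.1f} gal/day")
```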
HOW TO LOCATE YOUR SEPTIC TANK OR CESSPOOL
Inspecting your disposal system yourself each year can cut costs, provided you follow the correct inspection procedure. If help is needed, contact your Board of Health. If you don’t know the location of your septic tank:
A. Contact your Board of Health to see if they have a plan on file.
B. If your plan is not on file:
a. Look for a concrete or metal manhole cover. It is often located in an area of tall green grass, where there is a depression, where the grass does not grow, or where there is a rapid melting of snow.
b. If ground surface inspection is of no help, locate the building sewer (main house drain) in the basement and follow its direction out of the house.
c. Record the location and give a copy to your local Board of Health.
HOW YOU CAN INSPECT YOUR SEPTIC TANK OR CESSPOOL
- Remove cover or covers.
- With a rod or a stick, measure the scum and sludge layers. If together they are more than 1/3 the volume of the septic tank or cesspool, it should be pumped out; take care to ensure that sludge is removed from the bottom. (A rough worked example of this 1/3 rule appears just after this list.)
- Be sure that both the inlet and outlet tees are in place and free of any solids.
- For assistance on any cesspool or septic tank problem, call your local Board of Health. Their Health Agent is available to help you.
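For illustration only, here is a minimal sketch of the 1/3 rule from step 2 above. It treats straight-down stick measurements as stand-ins for volume, which is only a fair simplification if the tank walls are roughly vertical; all readings in the example are hypothetical.

```python
def needs_pumping(scum_in: float, sludge_in: float, liquid_depth_in: float) -> bool:
    """Apply the 1/3 rule: pump when scum plus sludge exceed a third of the liquid depth.

    Depths are stick measurements in inches; depth fractions approximate
    volume fractions only for tanks with roughly vertical walls.
    """
    return (scum_in + sludge_in) > liquid_depth_in / 3

# Hypothetical readings: 6" of scum and 12" of sludge in a tank with 48" of liquid depth.
print(needs_pumping(scum_in=6, sludge_in=12, liquid_depth_in=48))  # True -> time to pump
```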
SOME REASONS WHY YOUR SYSTEM CAN FAIL (AND POSSIBLE REMEDIES)
POOR LOCATION for your leaching area. Soil is not pervious enough, the water table is high, or there is inadequate percolation of liquids through the soil. (Conserve water)
EXCESSIVE SOLIDS and grease in the cesspool; or, if there is a septic tank, an overflow of solids into the leaching area. (Pump the tank out more frequently)
POOR INSTALLATION: Drain pipes and distribution pipes not properly graded, or septic tank is not level. (Rebuild system)
DESIGNED too small for the present loading. (add additional leaching field area)
DRAIN PIPES may become clogged with solids, or roots may grow into the leaching area. (Use root killer, pump tank out more frequently)
GREASE CARRYOVER: Some septic tank additives cause grease to be carried over into the leaching field where it can clog the pores. (Do not use septic system additives.)
REASONABLE STEPS TO TAKE TO PREVENT SYSTEM FAILURES
DO NOT use garbage disposals, as they are a leading factor of clogged systems.
DO NOT put solids or sanitary napkins, paper towels, grease, hair, oil (including cooking oil), colored toilet paper, tissues, or coffee grounds down the drain.
INSPECT on-site systems annually. Do not wait until you have a problem.
PUMP OUT your septic tank or cesspool approximately every year.
CONSERVE WATER: excess water can create problems, install water saving devices wherever possible.
DO NOT put additives into your system. Medicines, paint, paint thinner, disinfectants, pesticides and acids will only kill the bacteria which are needed to decompose the organic matter.
DO NOT plant shrubs or trees with deep roots near your leaching area.
DO NOT allow heavy equipment to drive over the leaching area.
AVOID extreme peak flows by spacing out laundry loads, bathing and dish washing.
The only way to maintain your cesspool is to pump it out annually.
This information has been prepared and distributed by the Massachusetts Department of Environmental Quality Engineering. Portions have been reprinted from the Old Colony Planning Council.
Our integrated curriculum combines grammar, vocabulary, reading, writing, listening, and speaking instruction. Small classes (maximum 12 students) ensure that students receive a great amount of personal attention from our teachers, all of whom are trained in our Interactive Teaching Method. This engages students in the learning process to guarantee that they quickly develop skills and confidence.
●● Provides an integrated, general English curriculum to help students communicate effectively. The course improves students’ English skills and develops their confidence so that they can continue to use and improve their English even after they leave the school. The course also improves intercultural awareness and interpersonal skills. Each unit presents a wide variety of conversational activities: discussion, listening comprehension, using idioms and role-playing, as well as reading, vocabulary, grammar, and writing practice.
●● Includes the STAGE 1 Course with additional lessons providing more opportunities to focus on specific areas such as pronunciation, vocabulary development, writing skills, and idiomatic expressions, as well as discussion of current events.
●● Includes the STAGE 1 Course with additional lessons that develop the skills needed to communicate effectively in a business environment. These include making introductions, emailing, negotiating, writing business letters, and other topics of interest to the specific group.
●● Private lessons are a very effective method of language learning as the focus is entirely on the individual’s specific needs. Students may enroll in private lessons only, or they can add them to any other course.
In my last blog post, I shared the note-taking strategies for reading our book club texts more closely and choices for taking reading notes. In this blog post, I’ll outline the discussion structures/strategies for each book club meeting and scaffolding I provided for meetings and refinding notes for each of my classes. Whether you are working with experienced book club veterans or students who have never participated in book clubs, you may find the information in today’s post helpful.
Book Club Meeting 1: Friday, January 17 (First Steps)
For our first book club meeting, we reviewed our discussion manners and etiquette we have practiced all year. I encouraged students to do more than simply read their notes and to build conversation off each other’s noticings.
While some groups were able to interact in a spontaneous and meaningful way with their reading notes as a springboard for actual conversation, I had at least 2-3 groups in each class that struggled to do more than simply read their notes “round-robin” style, even though I had specifically stated that merely reading notes was NOT the same as discussing. I was not entirely surprised, since very few of my students had experience in an actual student book club in their previous Language Arts classes. I think much of the literature I’ve read on student book clubs over the years—including some recent new book titles—glosses over this challenge. We have plenty of ideas on how to organize book clubs, ideas for discussion and reading notes, routines, and manners… but not so much on how to actually help students play off and build on each other’s ideas in their discussions. Even awesome discussion cards like these didn’t seem to help students generate discussion in our first meeting. I think in the future, I will video a group having an exemplary discussion and show it to the class in advance to help them see and hear effective and meaningful discussion about a book. I suspect this challenge might be more common in middle school than high school, but I could see some high schoolers struggling in this area, too.
Students turned in their reading notes as well as a self-assessment they completed at the end of the book club discussions. I felt most were pretty spot-on in evaluating what they did well as well as some areas for improvement for the next meeting. Reading notes were weighted as a summative assessment as was the performance assessment of the book club meeting/discussion. We also completed a short reflection survey and sticky note activity.
Book Club Meeting 2: Friday, January 24–Common Points of Discussion
When we returned to school on Tuesday, January 21, students could continue to create whatever kind of reading notes they wanted, but they also needed to take notes in these common areas to help build conversation for Book Club Meeting 2. I provided a checklist to help students make sure they recorded every detail they needed for each kind of note.
In addition, I offered a note-taking template (we named it Option 5) for those who felt they needed some support to take better notes or who may have been overwhelmed by the more open options from the previous week. As a bonus, I offered students an extra credit option that could give them additional discussion material.
We also completed a Quickwrite on external and internal conflict; again, I provided a drafting template and scaffolding with a complete model complete with color coding to help students. Each student received his/her own drafting template as well as a neon pouch with the color coded model to use in class.
As you can tell, it was an intense few days in class leading up to meeting 2! I do feel like the “Common Core” set of notes helped improve discussions, especially in my groups with mixed book titles around a certain theme or genre. Students shared they felt this common core set of notes/talking point helped them as well. Overall, I felt the book club discussions were stronger, but a few groups still struggled to do more than merely read their reading notes and prep work and actually DISCUSS ideas/questions.
Once again, we completed a self-assessment; students completed this self-assessment (very similar to the one from week 1) and a self-evaluation of their notes. These items, along with their notes and any extra credit work, were submitted via a folder so that I could easily collect all work. We also did another sticky note reflection task; in some classes, this activity was a warm-up; in others, this was a post-meeting activity.
In evaluating the notes, I felt many students struggled in Week 2 to be complete in their notes. While they could identify examples of each kind of note, many lacked textual evidence and commentary/explaining sentences to articulate their analysis. Even with the checklist that clearly outlined the requirements for each kind of note, many students seemed to overlook these details. For students whose notes were really lacking both depth and breadth, I required them to do the note-taking template for Week 3 to help them craft more complete notes and to help them be better prepared for the final book club meeting.
Book Club Meeting 3: Tweaks for More Robust Discussions, January 31
For the final week of prep work in class, we continued with the same five note-taking options though those who really struggled to complete notes were required to do the Week 3 note-taking template. For those using Options 1-4, they received reminders about completeness of notes and a note-taking checklist.
In addition, students had class time each day that week to work on the Book/Head/Heart brochure for wherever they were in their reading; this tool could be used in the final book club meeting as well. Though students responded enthusiastically to this task for the most part, many ignored the instructions to compose two sentences for each question prompt. I was also disappointed many did not incorporate any color or artwork into their work; however, I did have some exemplary responses. I created this rubric to assess their work.
For the third book club meeting, I decided to scaffold the large group discussions with pair or trio discussions since many of my students tend to do better conversation work in a smaller setting. When students arrived, they saw their partner or trio assignments projected on the board and got with that person or people.
After reviewing discussion manners, we then started with a common discussion point in our partner/trio talk:
We then moved to our notes for discussion; students could discuss these items in any order, but they had 15 minutes to engage in partner talk and go as in-depth as they could with their thinking and conversation.
We then moved back into our regular whole group book club groups and engaged in conversation for about 8-10 minutes.
An overwhelming number of students indicated they enjoyed this modification; survey results showed most students enjoyed both their small group and regular group conversations.
In addition, the post book club meeting #3 survey revealed 68.3% would give their partner talk an A; 30.2% rated their partner talk quality as a B. In contrast, only 54% rated their regular book club group discussion talk as A quality vs. 44.4% rating their book club talk a B. Students turned in all work for the week via their folder (easy way to keep everything together!) and completed the post book club survey I created in Google Forms to provide some overall feedback.
Reflections and Takeaways
Overall, I am very pleased with the design of our three-week book club unit. This is the first time I’ve established the reading schedules for students, and while it was a lot of work to do so for 15 different titles, I think it paid off by pushing my students to stay on track with the books. Three weeks seems just right—not too little time, but not too much time, either. I also felt providing three days of class time to work on book club activities and reading was just right as well. All the students loved their books, and many have asked for read-alikes or the option to read another one of the book club titles for choice reading. The only negative remarks I saw about any of our book choices were that many felt the middle of March Forward, Girl was a little dry/boring. Below is a slideshow of our book choices:
In addition, I am also happy with the student growth in the quality of the notes. There was balance in the note taking options; I am glad I provided multiple choices, and the requirements were enough to push student thinking without overwhelming them. In addition, the fifth option–the graphic organizer for taking notes in Weeks 2 and 3—provided scaffolding and structure for those who needed it or those who needed something less-open ended.
Supporting high quality conversations was definitely the most challenging part. This year’s students don’t have much experience in student driven discussions and even with the many discussion structures we’ve used in class, many struggle to engage in high quality student talk beyond partner talk or a small group of three. I’ll continue to contemplate ways to help students generate more organic conversation and to be responsive to each other’s ideas/questions/thinking, but for this year’s students, partner talk or trios are definitely a huge WIN and success for them.
I’m worried many of my students still seem to have difficulty identifying and analyzing theme (even when provided a list of theme topics) as well as social issues (again, I provided a list to support them and their thinking). It’s hard to really know if this is where they are as learners right now as 8th graders and may not be able to think that abstractly, or if this is a true gap in understanding. However, an overwhelming majority felt the Notice and Note Signposts (we used fiction since we read literary nonfiction and memoir) were especially helpful and among the favorite kinds of annotations/reading notes to think about and compose.
In my next blog post, I’ll share what we did to wrap up our book club work and bring things to a close with some cross-pollination of ideas and final assessment, including a converSTATION activity this past Friday (February 7) that was a major success.
Your next cup of coffee could be grown in a lab
Brazil, the world’s largest coffee producer, was hit by a historic frost in July 2021. Temperatures in coffee fields dropped below zero and the beans became encased in ice. The cold snap came right after the worst drought the country had seen in almost a century, which had already weakened the coffee trees. As a result, the price of coffee has shot to a seven-year high in anticipation of a poor harvest next year.
As a tropical crop that dislikes temperature variations and only grows in a narrow belt around the equator, coffee is extremely vulnerable to climate change. It is also contributing to it, because demand for coffee keeps rising worldwide, making it a key driver of deforestation. Add to the mix disease and pests, which have wiped out crops in many coffee growing regions, and it’s easy to see why people are searching for alternative ways of growing coffee.
One is coming from a lab near Helsinki, where coffee has been successfully grown and brewed using cellular agriculture. “We started with a leaf,” says Heiko Rischer of the VTT Technical Research Centre of Finland, a state-owned, non-profit technology company. The process involves sterilising a coffee plant leaf to get rid of unwanted contaminants and placing it on a base of nutrients, such as minerals and sugars, to stimulate cell growth. Once that is achieved, the cells are moved to a bioreactor, a temperature controlled container with a liquid suspension in which the culture can grow further. As more and more biomass is produced, it is transferred to progressively larger bioreactors until it’s ready to be harvested; the process takes roughly two weeks.
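To get an intuitive feel for why such a scale-up can take about two weeks, here is a toy model of exponential cell growth moved through progressively larger bioreactors. Every number in it (starting biomass, doubling time, vessel sizes, harvest density) is an invented assumption for illustration; none comes from VTT.

```python
# Toy model of bioreactor scale-up driven by exponential growth (all values illustrative).
DOUBLING_TIME_H = 24.0           # assumed doubling time of the plant-cell culture, in hours
VESSELS_L = [0.25, 2.0, 20.0]    # assumed ladder of progressively larger bioreactors
MAX_DENSITY_G_PER_L = 200.0      # assumed wet-cell density at which a vessel is "full"

biomass_g, hours = 5.0, 0.0      # start from a few grams of leaf-derived cells
for vessel_l in VESSELS_L:
    capacity_g = vessel_l * MAX_DENSITY_G_PER_L
    while biomass_g < capacity_g:
        biomass_g *= 2 ** (1 / DOUBLING_TIME_H)   # grow one hour at a time
        hours += 1
    print(f"{vessel_l:5.2f} L vessel full after {hours / 24:4.1f} days ({biomass_g:,.0f} g)")
```

Under these assumptions the final vessel fills after roughly ten days, in the same ballpark as the two weeks Rischer describes.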
“The powder we end up with is a very different material from coffee beans, and roasting is a bit more tricky — that’s an art in itself and we are by no means professional roasters,” says Rischer. However, a tasting by a human panel gave encouraging results: “This is close to coffee. Not exactly the same, and not what you would expect from a high grade coffee, but it resembles it very much and the different roasting levels actually gave different flavours,” says Rischer. An instrument-based analysis returned a similar verdict, showing “significant overlaps” with the flavour profile of conventional coffee.
There’s room for improvement: “The beauty of this kind of technology is that the composition of the final product, for example caffeine content, can be steered quite a lot by adjusting certain conditions,” says Rischer. That might mean changing parameters such as the amount of oxygen or the mixing speed in the bioreactor, or adding chemical triggers called elicitors that induce the formation of a desired compound. The next step for VTT is partnering with companies willing to invest in this kind of cellular agriculture, which they are testing on a range of plant species, including some endangered wild Nordic berries.
While there won’t be a shortage of tasters — Finland is the world leader in coffee consumption per capita — regulations could pose a few hurdles: “In Europe, we would need to go through an approval process just to let people try this. Even internally we had issues, because for this kind of experimental material we need the approval of our ethical committee, which we got in this case,” says Rischer, adding that in the most optimistic scenario, the coffee could be commercially ready in about four years.
If you can’t wait that long, there’s a synthetic coffee you can buy today, but it does away with the coffee plant entirely. Made by Seattle-based Atomo Coffee, it’s a “molecular cold brew” sold in a can that comes in two flavours. It’s made from upcycled plant waste products: mostly date seeds, with some chicory roots and grape skins. These undergo a chemical process and are mixed with dietary fibre, flavourings and caffeine, creating a drink that produces 93 per cent less carbon emissions and uses 94 per cent less water than conventional coffee, according to Atomo. “This is molecularly and organoleptically analogous to conventional coffee — it is coffee,” says Atomo co-founder Jarret Stopforth.
Atomo debuted in late September and quickly sold out, despite a hefty price tag of $60 for each bundle of eight cans: “We are only able to do limited scale right now. We realise it seems like a high price, but this is the first launch into the market; we will be able to match the cost of premium conventional coffee as we grow and scale,” says Stopforth. It will take a couple of months for Atomo to be able to offer more stock, with sales initially limited to the US online market.
Another US-based company, San Francisco startup Compound Foods, is a year away from launching its own “beanless” coffee, made with a process that has similarities with Atomo’s. “We started by asking ourselves the question of what coffee is,” says founder Maricel Saenz, a native of coffee-producing Costa Rica. “It’s a plant, but ultimately the product that you consume is made of chemical compounds that have been brewed through water.” She then looked for those key compounds elsewhere in nature, much like Impossible Foods did to create its famous burgers based on heme, a molecule that “makes meat taste like meat.”
LS/0057 - FRENCH LITERATURE
Academic Year 2021/2022
FABIO VASARRI (Tit.)
Degree programme: [32/19] Languages and Cultures for Linguistic Mediation; curriculum: [19/00 - Ord. 2011] common track; credits: 6; hours: 30.
Knowledge of the main lines of evolution of French literature; identification of the main literary genres and movements by studying relevant texts and authors and setting them in their historical and cultural contexts; a critical approach to studying literary texts.
basic knowledge of literature, written comprehension of short literary texts in French.
20th-century French novel, from Proust to Oulipo.
Classroom lessons, mainly in French, including discussion of the subjects covered.
Due to Covid-19 emergency, lessons might be held in streaming and might be recorded in order to be available online.
Verification of learning
Oral exam in Italian and French, during which the student should demonstrate:
- knowledge of the selected works and the ability to place them in their genres and historical and literary contexts
- knowledge of the main contents of the course, especially the main literary movements concerned
- the use of appropriate terminology.
a) Three of the following novels, in French or in translation; alternative choices will have to be approved by the teacher :
Proust, Un amour de Swann; Gide, Les caves du Vatican; Breton, Nadja; Céline, Voyage au bout de la nuit ; Sartre, La Nausée; Camus, L’Étranger; Duras, Moderato cantabile; Butor, La modification; Sarraute, Tropismes; Robbe-Grillet, La jalousie; Queneau, Les fleurs bleues; Yourcenar, Mémoires d’Hadrien; Perec, Les choses; Tournier, Vendredi ou les limbes du Pacifique
b) One of the following handbooks: Il romanzo francese del Novecento, ed. by Sandra Teroni, Roma-Bari, Laterza, 2008; Dominique Rabaté, Le roman français depuis 1900, Paris, PUF, 1998; Bruno Blanckeman, Le roman depuis la Révolution française, Paris, PUF, 2011 (only the sections on the 20th century).
Further information will be given during the lessons. | <urn:uuid:7d2def38-e2ae-4377-af5a-780b2c8cdbb5> | CC-MAIN-2021-49 | https://unica.it/unica/en/ateneo_s07_ss01_sss02.page?mu=Guide/PaginaADErogata.do?ad_er_id=2021*N0*N0*S1*38188*19503&ANNO_ACCADEMICO=2021&mostra_percorsi=S | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358443.87/warc/CC-MAIN-20211128013650-20211128043650-00471.warc.gz | en | 0.746791 | 621 | 2.828125 | 3 |
Fact 1: More than 3 million Americans suffer from glaucoma. This number is expected to grow with the changing demographics of the U.S.
Fact 2: Glaucoma is the #2 cause of blindness in the U.S. and #1 among Hispanics. African Americans are 15 times more likely to be visually impaired than whites. Glaucoma accounts for 9-12% of all cases of blindness in the U.S.
Fact 3: It is estimated that 50% of those suffering from glaucoma are unaware they have the disease and therefore remain undiagnosed and untreated.
Fact 4: Glaucoma is generally asymptomatic until late in its course, when people realize their world is dark or dimming. However, the vision already lost cannot, with today's treatments, be restored.
Fact 5: With treatment, progression of the disease can be slowed, saving the patient's remaining vision.
Fact 6: The mechanisms of glaucoma are mostly still unknown, resulting in limited therapy options.
Fact 7: Glaucoma specialists have specific training for treating glaucoma, both medically and surgically, and most often focus on the cases too difficult for the general ophthalmologist to control.
Fact 8: Glaucoma is a complicated and life-long disease, and access to sub-specialists is crucial if vision is to be preserved for the lifetime of the patient.
Fact 9: Medical treatment costs for glaucoma totaled $5.8 billion in 2013, or approximately $2,170 per patient.
Fact 10: In order to prevent permanent vision loss, glaucoma needs to be diagnosed early and treated appropriately.
Fact 11: It is imperative that patients everywhere have access to expert subspecialty care for advanced cases, difficult cases, and diagnostic dilemmas.
Fact 12: There has been a significant reduction in dollars allocated for glaucoma research. In fact, no large-scale clinical trial in glaucoma is currently funded, despite numerous open questions and needs. There is so much about the disease that is unknown.
The records of Aberdeen’s Police Commissioners paint a grim picture of life in the cramped, stifling, diseased streets of the Victorian city.
In the 18th Century, the population of Aberdeen was roughly 25,000. By 1840, the industrialisation and urban movement of the 19th Century saw the city’s population triple, but people still lived in the old narrow streets. This resulted in cramped, dingy slums rife with sickness. Poverty grew and the city’s elites found that they could employ the poor for a pittance.
In all this, the city’s Police Commissioners – a public health and urban planning body, rather than what we would think of as a modern “police” service – had their work cut out for them.
In their records we can gain a sharp sense of the inequality and poverty that shaped life in Victorian Aberdeen. Streets were filthy, buildings collapsed, disease spread and water was dirty. Nowhere is this clearer than in the city’s sole surviving Prostitution Return, carried out as part of a monthly headcount by the Police Commissioners in January 1855. These were usually destroyed later, as they were working documents rather than items intended to remain as evidence of central decisions of the Commissioners. Only one survived, kept by a Victorian official as a strange curio.
The numbers are sobering, a clear indictment of the indifference of Victorian society. The return notes some 500 women between the ages of 15 and 44 engaged in sex work within the city, as well as 36 public houses where women were encouraged by their procurers to bring men. On top of that, there were 36 known brothels operating within Aberdeen, mostly concentrated on Guestrow, Broad Street, Gallowgate, and Shuttle Lane, catering primarily to sailors, students, clerks and advocates.
That the Prostitution Return for 1855 survived at all is a fluke: it is a window into an otherwise unspoken aspect of Victorian life. By taking the names and information from this volume and comparing them with other records from the period, we can shed some light on the city’s Victorian secrets. | <urn:uuid:0ff3878c-8747-452f-a50f-b5fc31870f1f> | CC-MAIN-2021-49 | https://www.pressandjournal.co.uk/fp/past-times/3632568/a-window-into-victorian-era-in-silver-city/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358443.87/warc/CC-MAIN-20211128013650-20211128043650-00471.warc.gz | en | 0.973997 | 437 | 2.546875 | 3 |
Law professor and journalist Kevin Noble Maillard makes his debut as children’s author with Fry Bread: A Native American Family Story, a picture book celebrating Native culture and history through the tradition of baking and eating fry bread. A member of the Mekusukey band of the Seminole Nation, the author draws on his own childhood memories and current practice of making and eating this time-honored food, which began as a means of survival and has become a symbol of the resilience and endurance of Native peoples in America. Due from Roaring Brook on October 22, Fry Bread is illustrated by Pura Belpré Award winner and Caldecott Honoree Juana Martinez-Neal, whose portrayal of a jovial contemporary family, in concert with Maillard’s evocative verse, bridges the traditional and modern and highlights both the bonds and diversity among Native peoples. PW spoke with the author and the illustrator about the making of their book.
Kevin, what initially sparked your interest in writing a picture book?
When my oldest child was born in 2012, I wanted to buy a diverse selection of books for him. We are a multiracial family, blending African, Native American, and Asian heritages, and I began looking for children’s books that reflected that. But I had a difficult time finding them. Most books on Native culture were either about Pocahontas or Thanksgiving and there were virtually no stories featuring Native children in everyday situations—like a story about a girl and her cat or about playing outdoors on a snowy day. And so I decided to write my own.
Why did you gravitate toward fry bread as the focus?
Making fry bread was an important part of my childhood, since this tradition is so central to Native families. My mother is from the Seminole nation, Oklahoma, and I grew up making fry bread with my aunts, who made it all the time. My mother didn’t make fry bread, so after the elderly aunts died there was no one to make it. And I thought to myself, “This tradition is going to die out in our family unless someone carries it on,” so I decided to take the reins and began making fry bread—which is quite a laborious process, taking five to six hours.
What is the historical significance of fry bread?
It started off as survival food. In contrast to the amicable relations taught at school and celebrated in American homes every Thanksgiving, the vast majority of relations between Indian nations and the American government have been marked by war, genocide, and conflict. When the government removed Native people from their ancestral lands—originally everyone in my tribe was from Florida and was forced to move to Oklahoma—government representatives would come in with commodities like flour, sugar, salt, and yeast. People had to make do with what little they had, and from these simple ingredients they made fry bread. It was a food that had its beginnings due to deprivation and the absence of food they were used to. And now fry bread has become a food central to the lives of most Native families—and something very celebratory.
Once you decided to write about fry bread, did the story come easily to you?
Not right away. I started by writing a rhythm-y board book that I could read to my kids at bedtime, and after Connie Hsu at Roaring Brook saw my very cute but horrible first draft, she suggested I write in a more lyrical, abstract style. I followed her advice and, after many edits, Fry Bread came to be.
Juana, what was it about Kevin’s manuscript that appealed to you as an illustrator?
The very first thing that appealed to me was that it was sent to me by Connie Hsu! I had met Connie in 2012, when she was at Little, Brown, and I liked her immediately. I’d been hoping to work with her since then, but it took a number of years for her to find what she thought was the right manuscript for me.
Did you agree with Connie’s assessment that Fry Bread was right for you?
Once I read the manuscript there was no doubt in my mind that I must work on this book—I loved it! I am from Peru, where we face a similar dilemma as Native Americans do about identity, and what it means. In Peru, we too deal with stereotyping. By default, when people hear I am from Peru, they immediately assume I live in the Andes, wear traditional clothing, and have a llama. Well, growing up in Peru, I lived near the ocean, didn’t have a llama, and never even saw Machu Picchu until I was much older.
And, as in the U.S., native Peruvians have many different appearances. There are people who emigrated from many places, including Eastern Europe, China, and other areas in Asia. At the same time, there are Indigenous people who are often mistreated and disrespected. I was not raised that way—in fact, both my dad and grandfather were artists who painted Indigenous people from Peru. Even though I am not Native American, I related emotionally to Fry Bread very deeply.
What inspired your depiction of Fry Bread’s diverse characters?
I had seen pictures of Kevin’s family, and was very interested in the multiple layers of his heritage. He is Native American and also black, which to me was fantastic. As I did my research, I discovered that this is a heritage that is not common—and is definitely not often seen in books. I wanted to include this in the illustrations, to honor Kevin and his family. And as I started drawing, I found myself thinking I needed another character—and then another and another—to tell the story, until the book had a huge cast of family members with a range of skin tones and hair types. So, when it came time to paint the pictures, it was an endless amount of work!
Kevin, what was your initial reaction to Juana’s visualization of your story?
I first saw Juana’s sketches in a PDF that I opened on my phone while riding on the New York City subway. I had no preset notion of what the art was going to look like, and when I saw the multiracial Native characters she had created, I suddenly realized I was crying—and I am not a crier!
And when Juana added color to the art, it was just breathtaking. I really had an intense reaction to seeing the images and realizing how they represented experiences from my whole life. I was especially moved by her addition of a wall featuring an alphabetical list of Native American tribes, which is expanded on the endpapers. It is powerful to me to think that Native kids can point to the names of their tribes and realize they are included in this book. What a validation and affirmation of their identity!
Fry Bread has received four starred reviews, including one from PW. Juana, how do you feel about the enthusiastic response to the book—and the praise your artwork has garnered?
It is fantastic! Being Peruvian and not native to this country, it was so important to me that everything I did was done right—and at times I was so worried about it that I became short of breath while I drew. Finally, I got to the point that I told myself, “You’ve done your job and now you just have to step back and let it go.” So, it’s amazingly gratifying to see this response from reviewers and know that it was well worth all the stress!
And what’s your reaction to your book’s early critical success, Kevin?
When I think that I started out to write a little board book for my own kids, and then to see our project receive this kind of attention is crazy, amazing, and very rewarding. I hope that Fry Bread will bring the contemporary Native family into the public eye, and that Native children will see themselves reflected on its pages. I also hope this book will introduce all readers to Native culture and traditions—and will inspire other Native writers to share their own stories.
Fry Bread: A Native American Family Story by Kevin Noble Maillard, illus. by Juana Martinez-Neal. Roaring Brook, $18.99 Oct. 22 ISBN 978-1-62672-746-5 | <urn:uuid:829ce68e-3f81-44ed-a1f2-d9e336aa9006> | CC-MAIN-2021-49 | https://www.publishersweekly.com/pw/by-topic/childrens/childrens-authors/article/81498-q-a-with-kevin-noble-maillard-and-juana-martinez-neal.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358443.87/warc/CC-MAIN-20211128013650-20211128043650-00471.warc.gz | en | 0.98677 | 1,734 | 2.84375 | 3 |
Skulls to the Living, Bread to the Dead: The Day of the Dead in Mexico and Beyond
Wiley, 8 January 2007 - 232 pages
Each October, as the Day of the Dead draws near, Mexican markets overflow with decorated breads, fanciful paper cutouts, and whimsical toy skulls and skeletons. To honor deceased relatives, Mexicans decorate graves and erect home altars. Drawing on a rich array of historical and ethnographic evidence, this volume reveals the origin and changing character of this celebrated holiday. It explores the emergence of the Day of the Dead as a symbol of Mexican and Mexican-American national identity.
Skulls to the Living, Bread to the Dead poses a serious challenge to the widespread stereotype of the morbid Mexican, unafraid of death, and obsessed with dying. In fact, the Day of the Dead, as shown here, is a powerful affirmation of life and creativity. Beautifully illustrated, this book is essential for anyone interested in Mexican culture, art, and folklore, as well as contemporary globalization and identity formation.
Arnqvist, G., T.M. Jones, and M.A. Elgar (2003) Reversal of sex roles in nuptial feeding. Nature 424: 387.
1) In many species (including some insects, birds, mammals, etc), males provide their mates with food or other resources. In doing so, they help females provide for young, thus enhancing their own reproductive success (they can produce more, healthier offspring). In addition, they are likely to be able to mate with more females and/or better females, because females often carefully select only the best mates. In the case of Zeus bugs, could females that provide resources to males reap similar benefits? What would those benefits be?
2) Explain how the scientists tested their hypothesis that females secrete food for males. What is the sample size used in the first experiment? Break it down by treatment category. Hint: “n” refers to the sample size.
3) Explain how the scientists tested their hypothesis that females only produce glandular secretions when ridden by a male. What was their sample size for this experiment?
4) Explain how the scientists tested their hypothesis that male lifespan increases when they are kept with females. How many days did starved males live on average when they were kept individually with a female? How many days did fed males live on average when they were kept with a female? What conclusion can they make from these data?
5) What are the overall conclusions that they draw from this set of experiments?
Over the last 50 years, the populations of most countries in Latin America and the Caribbean have been aging steadily, and if this trend continues, one in five people in the western hemisphere will be over 65 years of age, said an analysis by the Center for Strategic and International Studies (CSIS).
In the analysis “Addressing an Aging Population through Digital Transformation in the Western Hemisphere”, the CSIS said that only four per cent of the western hemisphere was over 65 years of age about 50 years ago. Now this has doubled, growing more rapidly than Africa, the Middle East, and South Asia. In the next decade, almost 12 percent of the population will be over the age of 65, and one in five people in the Western Hemisphere will be over 65 by 2050, the report said.
The CSIS attributes the region's aging population to increased life expectancy coupled with declining birth rates. Aging is the sign of a healthy society, but countries must support healthy and dignified lives for older people, strengthen care infrastructure, and prepare the workforce for this demographic shift, they said.
The analysis also found that life expectancy in the region has risen considerably. Fifty years ago, the life expectancy in the region was about 60, with Haiti and Bolivia in just the mid-forties. The present life expectancy is close to 75 years and is projected to hit 80 in the next 20 years, the CSIS said.
Cuba is the oldest country in the region, with a life expectancy slightly higher than that of the United States. By 2040, Cuba is projected to have a proportion of the population over 65 surpassing the current proportion in Japan, and almost one in three Cubans will be over 65.
The analysis notes that Haiti is the youngest country in the region. Unlike other countries, the proportion of the population over the age of 65 has fluctuated in the last 50 years and is not consistently increasing. Only one in ten Haitians are projected to be over the age of 65 by 2050.
Fertility rate in the region was higher than the global average 50 years ago. However, this has come down now. A fertility rate of around 2.1 children per woman of childbearing age is considered the “replacement level,” meaning that a population replaces itself each generation (not including other factors such as migration). Since 2015, the region has been below replacement levels. Combined with a growing older population, a decline in the number of young people means that there are fewer people who will participate in the workforce, deposit into pension and other social insurance systems, and care for those older people, the authors maintained.
In 1970, Honduras had the highest fertility rate in the region, but it will be below replacement levels in the next 10 years. Barbados has already had a fertility rate below replacement levels for the past several decades (since 1980). Bolivia’s birthrate is the second highest in the region and will not be below replacement levels until 2050.
- It is high time that countries begin preparing now for an aging population.
- Prioritize strengthening healthcare and aging care infrastructure to better support an older population.
- Increased care for chronic illnesses, including specialized doctors and treatments, along with long-term care either from family members or professionals.
- Increase spending on healthcare, which now stands at only 1 percent of gross domestic product (GDP), lower than in other aging regions such as Europe and Africa, and significantly lower than in Japan, which has the world's oldest population and spends 11 percent of its GDP on healthcare.
- Prioritize providing a high quality of life for older adults, which will mean creating a society that older adults can easily navigate. This includes increasing livability and accessibility in both urban and rural contexts, providing safe and convenient transportation, and prioritizing social and civic engagement.
- Countries will also need to modernize and reform their pension systems to be prepared for a greater demand for government support for the older population.
- Create smart cities, expand accessible public transportation, and facilitate continuing education for older adults.
- Digital initiatives such as telemedicine and data-driven healthcare can help healthcare workers better care for patients with chronic illness, which is more common in old age.
- Digital solutions such as wearable robotic devices and apps that track symptoms and doctor’s appointments can help caregivers better support older adults.
- Encourage the silver economy.
The authors of the analysis are Daniel F. Runde (senior vice president, director of the Project on Prosperity and Development, and holds the William A. Schreyer Chair in Global Analysis at CSIS), Linnea Sandin (consultant and the former associate director and associate fellow with the Americas Program at the Center for Strategic and International Studies) and Arianna Kohan (program coordinator with the Americas Program at the CSIS). | <urn:uuid:ad26203f-7d13-4630-8f8f-e052406ff4ec> | CC-MAIN-2021-49 | https://indianf.com/9093-2-western/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00151.warc.gz | en | 0.946808 | 1,014 | 3.078125 | 3 |
Lately, with the integration of work from home and remote learning, a new buzzword has been going around the internet: ergonomic. Generally, the content associated with this word involves avoiding certain pains that may develop from improper posture.
What Is Ergonomics?
According to the Chartered Institute of Ergonomics & Human Factors, “Ergonomics is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system.” It also involves the optimization of designs and objects to improve a person’s well-being.
Ergonomics utilizes several areas of science to make life a little more comfortable. Its application is everywhere. Although ergonomics is not always evident, the world should thank it for making lives easier.
Every time product packaging makes opening a little easier, that’s ergonomics doing its magic. When a product is easier to grip, that’s also ergonomics. Today, there are also ergonomic keyboards that follow the natural slant of a person’s hands while typing.
How Can Ergonomics Help with Body Pain?
Most adults spend most of their day working at a desk, or at the dining room table, or at a table in a coffee shop. Sometimes, they tend to slouch. As a result, they suffer from pains in their back, neck, or wrists.
For some, the pain becomes severe enough that they have to undergo adult scoliosis treatment to alleviate it. Some people also develop carpal tunnel syndrome.
In the workplace, investing in ergonomic items can already contribute to an employee’s well-being. These items help them fix their posture and prevent them from sustaining harmful positions for extended lengths of time.
The Proper Posture
Even at such a young age, parents and teachers have emphasized the importance of good posture. Good posture makes people look confident, and it just looks better than slouching. However, on a more important note, good posture saves the body from discomfort. It relieves the body of back pain and neck pain, among other things.
When the back pain starts saying ‘hello’ as people get older, they wish they’d followed the ‘good posture’ advice. It’s not too late, though. Adults can start making a habit of having good posture at work or home while they spend hours finishing their tasks for the day.
This is how good posture should look like while sitting down:
- Elbows should be at a 90-degree angle.
- Wrists should be flat and not craning above the edge of the keyboard.
- Knees should also be at a 90-degree angle.
- Torso and thighs should form an angle of at least 90 degrees and at most 100 degrees.
- The back should be straight up to the neck.
- The feet should be flat on the floor. If the feet can’t reach the floor, don’t let them hang.
How Your Workspace Helps with Posture
Maintaining good posture is difficult when your surroundings are built for you to slouch. It’s important to set up your workspace so that everything is positioned and designed to help you with your posture.
Here are a few tweaks and items to add to your desk for a more ergonomic workplace:
- Standing desk. An adjustable standing desk is nifty because you can make sure your elbows are at a 90-degree angle even when you’re sitting down. It’s also convenient because you won’t need to transfer from one desk to another when you want to work while standing up.
- Laptop stand or a monitor mount. Your screen or monitor should be at eye level to avoid looking down or slouching while you’re working. You can elevate your laptop using a laptop stand or office supplies such as books, binders, or a thick pile of bond paper. If you’re using a monitor, a desk mount makes the height adjustable without drilling holes into the wall.
- Keyboard. An elevated laptop is difficult to type on. A separate keyboard would be helpful to keep the elbow resting on the table and keeping the 90-degree angle.
- Wrist rest. To avoid craning the wrist to reach the keyboard, a wrist rest keeps the wrist flat.
- Footrest. If your feet are hanging from the chair, place your feet on a footrest. This ensures that your knees are at a 90-degree angle.
- Small pillow for the lower back. While this is optional, a cushion increases your comfort while taking care of your lower back.
Having an ergonomic workspace saves you from a lot of body pain. While you’re at work, keep an eye on your posture. Remember to keep your back straight and your screen at eye level. If you need to buy extra equipment, think of them as an investment for your well-being. | <urn:uuid:89888acb-1dd7-4b93-bcf5-b7cd4c74b9d2> | CC-MAIN-2021-49 | https://www.healthyhighways.com/avoid-back-pain-with-an-ergonomic-workplace/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00151.warc.gz | en | 0.931556 | 1,038 | 3.078125 | 3 |
Author: Diego 'Flameeyes' Pettenò
In the past there were many different libraries, such as publib, that tried to provide alternatives to the functions that are missing in the main C library. Unfortunately, handling compatibility libraries proved to be difficult. The additional libraries would require additional tests when running configuration scripts prior to compilation, and add dependencies for non-glibc systems.
As the number of new functions provided by the glibc increased, the GNU project started looking at the requirements for portability of programs on operating systems based on different libraries, and eventually created the GNU Portability Library (Gnulib) project.
Normally, a library is code that is compiled as a shared or static file, and then linked into the final executable. Gnulib is a source code library, more similar to a student’s collection of notes than a usual compiled library. Gnulib can’t be compiled separately; the code in it is intended to be copied into the projects using it.
Two requirements limit Gnulib use, one technical and the other legal. First, the software you use it with must also use GNU Autotools, as Gnulib provides tests for replacements of functions and headers written in the M4 language, ready for usage with GNU Autoconf.
Second, it has to be licensed under the GNU General Public License (GPL) or the GNU Lesser General Public License (LGPL), as the code inside Gnulib is mostly (but not entirely) released under the GPL itself. Some of it is also released under the LGPL, and some of it is available as public domain software.
If you’re working on software released under other licenses, such as the BSD license or Apple Public Source License (APSL), it’s better to avoid the use of functions that are not available in a library licensed with more open terms. For example, you could take the implementations present in a BSD-licensed C library to replace missing functions in the current library, whichever license the software is using. Alternatively, you could find replacement functions in other BSD-licensed software or create a “cleanroom” implementation without copying code from GPLed software, leaving the external interface the same but using different code.
It’s usually easy to re-implement functions or just copy missing functions from another project when they are not available through another C library, especially when they are simple functions that consist of less than 10 lines of code. Unfortunately, many projects depend firmly on GNU extensions, and won’t build with replacement functions, or the code is already so complex that adding cases to maintain manually is an extra encumbrance for developers.
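To make this concrete, here is roughly what such a replacement looks like: a minimal sketch of a strndup() fallback, not Gnulib's actual code. Note that it leans on strnlen(), which is exactly the kind of dependency gnulib-tool resolves automatically, as described below:

    #include <stdlib.h>
    #include <string.h>

    /* Minimal strndup() fallback: duplicate at most n bytes of s into a
       freshly allocated, NUL-terminated buffer; returns NULL if the
       allocation fails. */
    char *strndup(const char *s, size_t n)
    {
        size_t len = strnlen(s, n);    /* strnlen() is itself missing on some systems */
        char *copy = malloc(len + 1);
        if (copy == NULL)
            return NULL;
        memcpy(copy, s, len);
        copy[len] = '\0';
        return copy;
    }

In a real build, a fallback like this is only compiled on systems where the configure checks find the function missing; everywhere else the C library's own version is used.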
What Gnulib provides is not only the source code of the missing functions, but an entire framework to allow a project to depend on GNU extensions, while retaining portability with non-GNU based systems.
The core of the framework is the gnulib-tool script, which is the automated tool for extracting and manipulating source code from Gnulib. Using gnulib-tool, you can see the list of available modules (gnulib-tool --list) or test them (one by one, or all together, using the --megatest option), but more importantly you can automatically maintain the replacement functions for a source tree.
A practical example should help explain the concept. Let’s say that there’s a foofreak package that uses the strndup() function (not available on BSD systems, for instance) and the timegm() function (not available on Solaris). To make the source portable, a developer can run gnulib-tool --import strndup timegm from the top-level directory of the source code, and the script will copy (in the default directories) the source code and the M4 autoconf tests for strndup(), timegm(), and their dependencies — for example, strndup() depends on strnlen().
After running, gnulib-tool tells you to make a few changes to your code to allow the replacement to be checked and used when needed. It requires the Makefile in lib/ to be generated from the configure script, so it has to be added to AC_OUTPUT or AC_CONFIG_FILES. At the same time the lib/ subdirectory has to be added to the SUBDIRS variable in the Makefile.am. The M4 tests are not shipped with other packages, so they must be copied in the m4/ directory, and that has to be added as an include directory for aclocal. Finally, two macros have to be called within the configure.ac to initialize the checks (gl_EARLY and gl_INIT).
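Put together, those changes might look like this for the hypothetical foofreak package. This is only a sketch using the default directories and macro prefix; the src/ layout, the program name, and the static libgnu.a are assumptions for the example:

    # configure.ac (excerpt)
    AC_PROG_CC
    gl_EARLY                # must run early, right after the compiler checks
    gl_INIT                 # runs the tests for the imported modules
    AC_CONFIG_FILES([Makefile lib/Makefile src/Makefile])
    AC_OUTPUT

    # Makefile.am (excerpt)
    ACLOCAL_AMFLAGS = -I m4   # pick up the copied M4 tests
    SUBDIRS = lib src         # lib/ holds the gnulib replacements

    # src/Makefile.am (excerpt)
    bin_PROGRAMS = foofreak
    foofreak_LDADD = ../lib/libgnu.a   # link against the auxiliary library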
You can specify the names of the subdirectories and the prefix of the macros by running gnulib-tool with the --source-base, --m4-base, and --macro-prefix parameters, respectively. It’s also important to note that the replacement functions are built in an auxiliary library called libgnu (by default, but the name can be overridden by using the --lib parameter), so the parts of the software using those functions have to be linked against this library too.
If later on your project also wants to use the iconv() function, gnulib-tool can detect the currently imported modules and add the required iconv module without rewriting everything from scratch. This makes it simple to add new modules when you use new functions.
The different replacement functions are called “modules” by gnulib-tool, and they consist of some source code, some header files, and an M4 macro test. As some functions depend on the behavior of other functions, the modules depend on one another, so adding a single module can generate quite a few additional checks and replacements, which make sure that the behavior is safe.
As some modules are licensed under the GPL, while others are licensed under the LGPL, a package licensed under the latter might want to make sure that no GPL modules are pulled in, as that would break the license. To avoid adding GPLed modules, you can use gnulib-tool’s --lgpl option, which forces the use of LGPL modules.
You can also use alternative code to provide a replacement function instead of using the Gnulib modules, and to avoid problems with dependencies. Gnulib-tool has an --avoid option that prevents specified Gnulib modules from being pulled in.
Following the previous example, if foofreak already contains a strnlen() function, used when the system library doesn’t provide one, it would be possible to use that, instead of importing the strnlen module from Gnulib, by issuing the command gnulib-tool --import strndup timegm --avoid strnlen. With this syntax the strnlen module will be ignored and the function already present in foofreak will be used. While this option is provided, it’s usually not advisable to use it if you don’t really know what you’re doing. A better alternative would be dropping strnlen() from the code where it was used, and using the replacement provided by Gnulib instead.
Gnulib is an interesting tool for people working with GPL- or LGPL-licensed software that needs to be portable without dropping the use of GNU extensions, but it has some drawbacks. The major drawback is the license restrictions, which require non-(L)GPL-licensed software to look elsewhere for replacements. It also requires the use of the GNU toolchain with Autotools, as it would be quite difficult to mimic the same tests with something like SCons or Jam.
Finally, the source code sharing between projects breaks one of the basic advantages in the use of libraries: the reuse of the same machine code. When the same function, required by 10 or 20 programs, has to be built inside the executable itself as the system does not provide it, there will be 10 or 20 copies of the same code in memory and on disk, and they may behave in different ways, leading to problems if they are linked inside a library used by third-party software.
Gnulib is worth a try, but you should not use it in critical software or software that might have a limited audience. In those situations, avoid the use of extension functions when possible, and add replacement functions only when they’re actually needed. There’s no point in having a replacement function for something that works on 90% of modern systems and breaks only on obsolete or obscure operating systems or C libraries, especially if the software is written to be run on modern machines.
by Andy Slote - Nov 01 2021
One of the trending topics in the technology world today is the concept of “the Edge,” but does it mean the same thing to everyone? Varying descriptions imply this term is evolving to a somewhat broader scope. However, some experts stress adherence to narrower definitions, readily challenging “all-inclusive” characterizations.
In the world of the IoT, there seems to be general agreement regarding the Edge’s location as a place close to the data sources in a distributed network. So, we know it’s not “the cloud,” but is it everything else (servers, devices, endpoints, etc.) outside the cloud environment?
The process of deploying Edge solutions can provide some insight. One high-level description of this kind of computing (usually referred to as “Edge Computing“) is “decision logic moved closer to the point of relevance.” A group creating standard definitions for IoT terminology, the Industry IoT Consortium (IIC), defines it as: “Edge computing is a decentralized computing infrastructure in which computing resources and application services can be distributed along the communication path from the data source to the cloud.”
Clearly, what matters most to the IoT are the gains from relocating logic from the cloud environment to an alternate location, which can include benefits such as lower latency for time-critical decisions, reduced bandwidth use and cloud costs, continued operation when connectivity is intermittent, and keeping sensitive data closer to its source.
Which devices are edge-worthy is where the debate often centers. But is it worth arguing about? Most of the disagreements arise from opinions about attributes like complexity and computing power. Simple endpoints or single-user devices (mobile phones, for example) don’t qualify in the eyes of some technologists.
Purists are largely making an argument that becomes less relevant as Edge Processing grows. Machine Learning improvements enable installing models on gateways, mobile devices, and even IoT endpoints with relatively low computing power and memory. A “Camera as a Sensor” unit for applications such as Machine Vision for manufacturing accomplishes most of the necessary processing internally. Arguments that Edge Processing isn’t technically the correct descriptor for some of these implementations don’t seem worthwhile.
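As a toy illustration of decision logic moved closer to the point of relevance, consider an endpoint that evaluates readings locally and contacts the cloud only when something is worth reporting. The sensor and upload routines below are invented stand-ins for whatever SDK a real device would provide:

    #include <stdio.h>

    /* Invented stand-ins for a real device SDK. */
    static double read_temperature_c(void) { return 21.5; /* pretend sample */ }
    static void   upload_alert(double r)   { printf("alert: %.1f C\n", r); }

    #define ALERT_THRESHOLD_C 80.0

    /* Edge processing in miniature: the decision is made on the device,
       so routine readings never cross the network at all. */
    int main(void)
    {
        double t = read_temperature_c();
        if (t > ALERT_THRESHOLD_C)
            upload_alert(t);           /* only anomalies go upstream */
        return 0;
    }

Shrinking that same pattern down to running a trained model locally is what lets a "Camera as a Sensor" unit ship decisions rather than raw video.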
For the Internet of Things, we should stick with the high-level description of the Edge and Edge Processing/Computing while being open to any and all ways of enabling it for maximum benefit. When we achieve the desired results, the semantic debate is largely irrelevant. | <urn:uuid:738a687e-fca6-479e-9679-3d1be867af50> | CC-MAIN-2021-49 | https://www.objectspectrum.com/what-is-the-edge/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00151.warc.gz | en | 0.922427 | 499 | 2.9375 | 3 |
What is the correct spelling for BUDDLED?
This word (Buddled) may be misspelled. Below you can find the suggested words which we believe are the correct spellings for what you were searching for.
Correct spellings for BUDDLED
- addled I had brought a couple of bottles of champagne with me and, what with the unaccustomed drink and the ogling and love-making to which I treated her, a hundred kilos of foolish womanhood was soon hopelessly addled and incapable.
- bedded It is composed of two halves, as shown, bored slightly smaller than the shaft diameter, and is to be compressed on the shaft, which, acting as a wedge, would spring open the sides of the bore until the crown of the bore bedded against the shaft.
- befuddled To a sleep-befuddled brain it looked very much like the rose tints of morning, and John Glenning mechanically pulled out his watch, to smile at his stupidity the next moment, for it was not yet eleven.
- Battled In vain he battled against it.
- Beetled Then fast the horsemen followed, where the gorges deep and black Resounded to the thunder of their tread, And the stockwhips woke the echoes, and they fiercely answered back From cliffs and crags that beetled overhead.
- Bottled We had bottled stout, table wines, Malaga, rosatas, and rum.
- Bridled Very early on the following day, Heideck had purchased a neat little bay horse, already saddled and bridled, for Edith's use.
- Bubbled He took it in his two, and bubbled out, "Are you walking somewhere?
- Buckled When night was come the Theeves awaked and rose up, and when they had buckled on their weapons, and disguised their faces with visards, they departed.
- Budded It wouldn't be any reason, I think, because one little green leaf has budded out, for a plant to say that it would not be kept growing in the ground any longer.
37 words made from the letters BUDDLED
3 letter words made from BUDDLED:bed, bel, bud, deb, ded, dle, dub, dud, due, edd, eld, led, leu.
4 letter words made from BUDDLED:bedu, belu, bleu, blue, bude, delu, deul, dube, dudd, dude, dued, duel, dule, leud, lube, lude, uleb.
5 letter words made from BUDDLED:bludd, blude, budde, budel, debud, duded, lubed. | <urn:uuid:eb491e41-033a-433e-b25b-df1ddbfe01dc> | CC-MAIN-2021-49 | https://www.spellchecker.net/misspellings/buddled | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00151.warc.gz | en | 0.955909 | 725 | 2.515625 | 3 |
Most of the infectious diseases that emerge as human epidemics originated in mammals.
The virus behind the Ebola outbreak that has killed thousands in West Africa is believed to have been carried by fruit bats. The Middle East Respiratory Syndrome (MERS) and HIV, which have also claimed lives, are also linked to camels and chimpanzees, respectively.
The transmission of deadly diseases from animals to humans is a serious concern, but researchers know very little about the patterns of such transmission.
In an effort to improve prediction of future mammal-to-human disease transmissions, Barbara Han, from the Cary Institute of Ecosystem Studies in New York, and colleagues have come up with maps that show the current reservoirs of viruses, bacteria, parasites and fungi that cause zoonotic diseases, or those that can spread between animals and humans.
Han and colleagues tracked the classes of animals that harbor known human pathogens and where these reservoirs tend to be found, and reflected this in maps featured in Trends in Parasitology on June 14.
The maps showed hotspots of zoonotic animal hosts which include MERS-carrying camels, rabid bats, and more than 2,000 species of rodents.
The researchers expected and later confirmed hotspots to be in high biodiversity regions such as Central and South America, Central East Africa and Southeast Asia. Europe was also identified as a hotspot for zoonotic diseases.
"In general, these regions align with global geographic patterns of mammal biodiversity with the exception of the hotspot in the north temperate zone (Europe), which contains a higher diversity of mammal hosts than expected from global biodiversity patterns," Han and colleagues wrote.
"We postulate that this pattern may be driven in part by the high richness of rodents and insectivores found in this region."
While outbreaks of diseases associated with pathogens that come from non-human hosts are not inherently predictable, the maps reveal understudied patterns. Knowing the hotspots and being able to study the diseases that animals carry may help researchers prepare for potential transmission of diseases from animals to humans.
Researchers noted the importance of shifting strategy from "putting out fires" to being preemptive: that is, knowing where disease-carrying animals are, what they are carrying, and how they are distributed.
"Understanding where animals are distributed and why may not seem applicable to our day-to-day lives," Han said.
"But the big breakthroughs that we need as a society (e.g., forecasting where the next zoonotic disease may emerge) rely on exactly this kind of basic scientific knowledge." | <urn:uuid:df3d8f6b-e0cc-479b-8571-9abf8615b7e6> | CC-MAIN-2021-49 | https://www.techtimes.com/articles/164818/20160615/animals-can-spread-deadly-diseases-to-humans-these-new-maps-show-hotspots-where-this-can-happen.htm | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00151.warc.gz | en | 0.959519 | 522 | 3.984375 | 4 |
New Hampshire Sharp Shooters Monument
Artist's rendition of the Berdan Sharpshooters.
Recruiting poster for the Berdan Sharpshooters.
Morgan James rifle with telescopic sight.
Standard-issue Sharps rifle for Berdan's unit.
The monument to Berdan's Sharpshooters at Gettysburg
Backstory and Context
Before the start of the U.S. Civil War, Colonel Hiram Berdan saw the potential of more accurate rifles and hoped to update the tactics used by infantrymen. Berdan taught marksmanship, and after meeting with President Abraham Lincoln, Berdan was granted permission to create a unit of marksmen that would use precision shooting instead of mass firing at targets.
Distinguished by their green uniforms and specially adapted rifles, the Berdan Sharpshooters were some of the most effective precision shooters in the Civil War. In order to be selected for the unit, a recruit was required to place ten shots into a ten-inch target at 200 yards from either the kneeling or standing position, which ensured that only the most elite marksmen became part of the unit. Competitive target shooting was popular with upper-class men of the era, and the recruitment trials became quite popular events.
Units such as the Berdan Sharpshooters were not well known, and opposing troops sometimes found it difficult to tell which group of soldiers they were facing. Sharpshooters would engage targets of importance such as officers and artillery crews, but the tactics used by the expert shooters were not typical of the day, and the profession was not considered as honorable as that of the typical infantryman charging into battle. The sharpshooters would protect flanks, provide information to commanders on enemy troop movements and positions, and sometimes deploy prior to a major engagement to harass enemy troops.
There were five units of sharpshooters in the Army of the Potomac, but Berdan's Sharpshooters were the most famous. Colonel Hiram Berdan created the unit and its members carried .52 caliber Christian Sharps rifles. These weapons were breech-loading rifles with double-set triggers, far superior to the standard-issue rifles. Many members of Berdan's unit also owned telescopic sights.
Most of the standard-issue rifles of the Civil War were muzzleloaders, while the breech-loading Sharps rifle used a self-contained cartridge that increased the rate of fire. Between the training, the skill, and the superior rifles, Berdan's men were able to hit targets beyond the range of the average rifle, which created fear among the Confederate troops.
RICHMOND, Va. (WRIC) — Richmond’s Bellemeade Community Center is getting a whole lot greener thanks to an initiative to increase tree cover on Richmond’s Southside.
Today, volunteers and the Chesapeake Bay Foundation planted 200 trees, including 21 different tree species native to Virginia.
Jeremy Hoffman with the Science Museum of Virginia helped with today’s tree planting and shared how trees benefit our communities.
“We know that trees are super important infrastructure for our day-to-day lives,” he said. “They clean the air like an air purifier, they soak up stormwater like a sponge and they cool down the air around them like a beach umbrella.”
The goal with efforts like these is to bring hundreds of new trees to the Southside. | <urn:uuid:6942bfbd-9790-4f1e-b727-08838cafc1fb> | CC-MAIN-2021-49 | https://www.wric.com/news/local-news/richmond/tree-planting-aims-to-bring-more-coverage-to-richmonds-southside/amp/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00151.warc.gz | en | 0.909267 | 171 | 2.5625 | 3 |
Partnership for Children
As a School we use a range of programmes to support our children. One of the programmes is written by Partnership for Children:
'We produce programmes for schools that teach children skills for life: how to cope with everyday difficulties, how to communicate with and get on with other people, and build self-awareness and emotional resilience. Our programmes are suitable for all children, and are evidence-based. They are widely used both in the UK and in over 30 countries around the world.
Our programmes meet all elements of the Mental Wellbeing requirements of the statutory guidance for Relationships and Health Education for primary schools in England and support many other key requirements.
The programmes are based around stories, and the resources contain full lesson plans, posters, activity sheets and children’s items required to deliver the sessions. The modules cover: feelings, communication, friendship, conflict, change and loss and moving forward.' | <urn:uuid:217432f7-2fb0-4d09-963a-c801a51a8a38> | CC-MAIN-2021-49 | http://www.st-barnabas.kent.sch.uk/page/?title=Partnership+for+Children&pid=464 | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.948007 | 188 | 2.875 | 3 |
|Victoria Peak (太平山 | 扯旗山)|
|Elevation||552 m (1,811 ft) (Hong Kong Principal Datum)|
|Prominence||552 m (1,811 ft)|
|Native name||太平山 (Chinese)|
|Location||Central and Western District, Hong Kong Island, Hong Kong|
Victoria Peak is a hill on the western half of Hong Kong Island. It is also known as Mount Austin, and locally as The Peak. With an elevation of 552 metres (1,811 ft), it is the highest hill on Hong Kong Island, though only the 29th-highest point in Hong Kong as a whole (Tai Mo Shan, at 957 metres (3,140 ft), is the territory's highest). Mount Austin is the main summit, and it is the point labelled The Peak or Victoria Peak on most maps. In a broad sense, Victoria Peak also includes Victoria Gap, Mount Kellett and Mount Gough, which are considered part of The Peak.
The summit is occupied by a radio telecommunications facility and is closed to the public. However, the surrounding area of public parks and high-value residential land is the area that is normally meant by the name The Peak. It is a major tourist attraction that offers views of Central, Victoria Harbour, Lamma Island, and the surrounding islands.
As early as the 19th century, the Peak attracted prominent European residents because of its panoramic view over the city and its temperate climate compared to the sub-tropical climate in the rest of Hong Kong. The sixth Governor of Hong Kong, Sir Richard MacDonnell, had a summer residence built on the Peak circa 1868. Those that built houses named them whimsically, such as The Eyrie and the Austin Arms.
These original residents reached their homes by sedan chairs, which were carried up and down the steep slope of Victoria Peak. This limited development of the Peak until the opening of the Peak Tram funicular in 1888.
The boost to accessibility caused by the opening of the Peak Tram created demand for residences on the Peak. Between 1904 and 1930, the Peak Reservation Ordinance designated the Peak as an exclusive residential area reserved for non-Chinese. The authorities also reserved the Peak Tram for the use of such passengers during peak periods. The Peak remains an upmarket residential area, although residency today is based on wealth.
In 1905, construction of the Pinewood Battery was completed on the western side of the Peak. Harlech Road was constructed around the Peak as a means of resupply to this artillery and later anti-aircraft battery.
The Peak is home to many species of birds, most prominently the black kite, and numerous species of butterflies. Wild boar and porcupines are also seen on Peak, along with a variety of snakes.
With some seven million visitors every year, the Peak is a major tourist attraction of Hong Kong. It has views of the city and its waterfront. The viewing deck also has coin-operated telescopes that visitors can use to enjoy the cityscape. The number of visitors led to the construction of two major leisure and shopping centres, the Peak Tower and the Peak Galleria, situated adjacent to each other.
The Peak Tower incorporates the upper station of the Peak Tram, the funicular railway that brings passengers up from the St. John's Anglican Cathedral in Hong Kong's Central district, whilst the Peak Galleria incorporates the bus station used by the Hong Kong public buses and green minibuses on the Peak. The Peak is also accessible by taxi and private car via the circuitous Peak Road, or by walking up the steep Old Peak Road from near the Zoological Botanical Gardens or the Central Green Trail from Hong Kong Park. The nearest MTR station is Central.
Victoria Peak Garden is located on the site of Mountain Lodge, the Governor's old summer residence, and is the closest publicly accessible point to the summit. It can be reached from Victoria Gap by walking up Mount Austin Road, a climb of about 150 metres (490 ft). Another popular walk is the level loop along Lugard and Harlech Roads, giving good views of the entire Hong Kong Harbour and Kowloon, as well as Lantau and Lamma Islands, encircling the summit at the level of the Peak Tower.
There are several restaurants on Victoria Peak, most of which are located in the two shopping centres. However, the Peak Lookout Restaurant, is housed in an older and more traditional building which was originally a spacious house for engineers working on the Peak Tramway. It was rebuilt in 1901 as a stop area for sedan chairs, but was re-opened as a restaurant in 1947.
In addition to being a major tourist attraction for Hong Kong, The Peak is also the summit of Hong Kong's property market. Properties on The Peak can be as expensive as anywhere else in the world. On 12 January 2014, a Barker Road property sold at over HK$100,000 (US$13,000) per square foot for HK$690 million.
The Peak is home to a few other key officials in Hong Kong:
- 19 Severn Road – residence of the Secretary for Justice
- Victoria House and Victoria Flats at 15 Barker Road – residence of the Chief Secretary for Administration
- Headquarters House 11 Barker Road – residence of the Commander of PLA Forces in Hong Kong and former home of the Commander-in-Chief of British Forces
- Chief Justice's House 18 Gough Hill Road – residence of the Chief Justice of the Court of Final Appeal
|太平山頂||Taai3peng4saan1 Deng2||Literally "pacific mountain peak" or "mountain peak of great peace"|
|山頂||Saan1 Deng2||Literally "mountain top"; corresponds to the English name "The Peak"|
|扯旗山||Ce2kei4 Saan1||Literally means "flag-raising mountain"|
|爐峰||Lou4 Fung1||Literally means "furnace peak"|
|維多利亞山||Wai4do1lei6aa3 Saan1||A phonetic transliteration of the English name "Victoria Peak"|
|柯士甸山||O1si6din1 Saan1||A phonetic transliteration of the English name "Mount Austin"|
|Climate data for The Peak (2004–present)|
|Record high °C (°F)||25.0
|Average high °C (°F)||16.4
|Daily mean °C (°F)||13.3
|Average low °C (°F)||11.1
|Record low °C (°F)||−1.0
|Average precipitation mm (inches)||34.7
|Average rainy days (≥ 0.5 mm)||6.1||9.0||9.8||11.0||14.0||18.4||17.3||15.7||13.6||7.1||6.0||5.1||133.1|
|Source: Hong Kong Observatory|
- List of places in Hong Kong
- List of mountains, peaks and hills in Hong Kong
- List of places named after Queen Victoria
- The Peak Hotel, a hotel located on Victoria Peak from 1888 to 1936
- Peak District Reservation Ordinance 1904
- Tourism in Hong Kong
- "The Peak History". The Peak. Archived from the original on 7 March 2007. Retrieved 14 March 2007.
- "Peak Tram History". The Peak Hong Kong. Archived from the original on 20 February 2007. Retrieved 13 March 2007.
- "Photo of the Week #8: Porcupine on The Peak". StripedPixel.com. 1 December 2013.
- DeWolf, Christopher "9 Hong Kong tourist traps – for better or worse" Archived 1 November 2012 at the Wayback Machine CNN Go. 27 October 2010. Retrieved 3 March 2012.
- "Hong Kong: 10 Things to Do - 1. Victoria Peak - TIME". content.time.com. Retrieved 4 November 2016.
- "Nature Walks". The Peak | I ♥ you. Retrieved 19 January 2018.
- "Hutchison Whampoa completes HK$690m house sale on Peak" South China Morning Post 13 Jan 2014
- "Monthly Means of Meteorological Elements for The Peak, 2004-2017". Hong Kong Observatory. Retrieved 29 May 2018. 山頂氣象要素月平均值 (2004-2017)
- "Monthly Means of Meteorological Statistics for The Peak, 2004-2017". Hong Kong Observatory. Retrieved 29 May 2018. 山顶气象统计月平均值 (2004-2017)
- "Extreme Values and Dates of Occurrence of Extremes of Meteorological Elements between 1884-1939 and 1947-2017 for Hong Kong". Hong Kong Observatory. Retrieved 29 May 2018.
|Wikimedia Commons has media related to Victoria Peak.| | <urn:uuid:5ef51870-a3a5-4710-a74b-7c4b6be07798> | CC-MAIN-2021-49 | https://en.wikipedia.org/wiki/Victoria_Peak | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.896744 | 3,154 | 2.5625 | 3 |
An air compressor generates compressed air, that can activate various tools and mechanisms. Demand for air compressors keeps growing – an increasing number of power tools require it. There are plenty of situations when you can’t do without an air compressor.
At the same time, choosing an air compressor is not easy. The main reason is a huge choice of them. In addition, air compressors differ in their mechanism, operating principle, and application. Therefore, before selecting an air compressor, you should decide on how often it will be used, for how long, and the device that it will activate. So, in this article, we’ll talk about how to choose an air compressor.
The range of air compressor capabilities is quite wide. No construction team or vehicle repair workshop can do without it. In 80-90% of cases, an air compressor is used when you should evenly paint large areas – in this case, it provides the supply of compressed air to a spray gun or pulverizer.
An air compressor is necessary when blowing and cleaning various tubular systems. These can be stormwater drainage, street water pipes, plumbing systems. But most often these are various systems in a vehicle – a fuel or braking system.
An air compressor can also provide the operation of various kinds of pneumatic tools. These can be grinders, nut wrenches, pneumatic hammers, perforators, and the like.
Some may say that an electric tool is much more practical. However, there are situations when a pneumatic tool has certain advantages. First of all, these are situations when you must work in damp premises with a high risk of electric shock. Another advantage is that a pneumatic tool has no problem with overheating and peak loads.
Types of air compressors
Air compressors can be classified according to several parameters, such as their operating mechanism, efficiency, and purpose.
In general, air compressors can be oil and oil-free. These two types have quite different purposes.
The oil air compressor mechanism uses a lubrication spray system. The result is less friction and less heat release. The wear of parts is also less and the better pump sealing. While the performance, which is very important, does not decrease. Such air compressors are perfect for works with a pneumatic tool, since the supplied air contains micro-particles of lubricant. It reduces the pneumatic tool wear and the level of noise.
Related: Best air compressors
However, such air compressors require regular maintenance. In particular, their air filters must be regularly replaced and the oil level must be monitored. Besides that, this type of air compressor cannot be used with a pulverizer or spray gun, as the oil micro-particles will fall into the paint.
If we say that it is an oil-free air compressor, this does not mean that the oil is not used there at all. The oil is used to lubricate the working parts but has no contact with the supplied air. Devices are designed in such a way as to minimize the friction of the parts.
Oil-free air compressors are more unpretentious in operation, do not require so frequent maintenance, and are less critical to low temperatures, therefore they are mostly used for pumping tires, blowing up car systems, painting works. They also usually weigh less and are smaller than oil air compressors.
There are plenty of air compressors according to their air compression mechanism.
Reciprocating air compressors are more common because they are the most affordable. In such models, the pistons driven by an electric motor compress the air and pump it into the receiver.
The advantages of such machines include easy maintenance, maintainability, and low cost. But this mechanism has some drawbacks as well. They include a high level of noise and increased wear of friction parts, that much be replaced over time. But the resource of such air compressors is usually several thousand to several tens of thousands of hours. According to the degree of compression, they can be one-, two-, or multi-stage. The piston air compressors also include the so-called coaxial air compressors (also called direct drive air compressors). The coaxial piston air compressor mechanism is similar to that of a usual bicycle pump. It is compact and simple, and therefore reliable. Perhaps, the only drawback is that such a mechanism requires regular breaks in the operation, as it heats up quite quickly.
Rotary screw air compressors produce much less noise than piston air compressors. They have slightly lower energy costs, while screw air compressors can provide higher pressure values. The advantages of these air compressors include a high motor resource, that can be up to one hundred thousand hours or more. They can be operated without technical interruptions but are much more expensive and difficult to maintain than reciprocating air compressors.
Scroll air compressors generate compressed air with two spirals. Their rotor axes are at an angle, not parallel like in the rotary screw air compressors. As a result, they are more compact but have a more complex mechanism, so their cost is higher.
A standard manometer is always installed on the case of a modern air compressor. Using a special regulator, an operator “adjusts” the pressure to the connected tool requirements. In modern air compressors, a receiver is equipped with an automatic system that turns off the engine upon reaching maximum pressure.
There are special protection systems that can significantly increase the air compressor service life and eliminate the possibility of breakdowns and expensive repairs, such as a system of forced air engine cooling system. If there is a thermostat, an air compressor motor switches off automatically. Some models are equipped with electronics, that warn of significant voltage fluctuations and can shut off the tool if the voltage fluctuations are higher than a predetermined critical value.
Regardless of the air compressor mechanism, all models have the same basic technical characteristics. They include:
- Compressed air pressure – measured in atmospheres, less often in bars (which is approximately the same). For household air compressors, it is usually 4 to 12 atmospheres. Most often – 8-10.
- Productivity – measured in liters per minute. For household models, the ratio of 350 l / min is considered sufficient.
- The power of a primary power plant is usually 0.8-2.5 kW. More powerful devices are considered to be professional.
- Weight can range from several tens to over hundreds lbs. This parameter should be considered when selecting an air compressor since the large weight and dimensions significantly reduce its mobility.
- Receiver volume – a cylinder-shaped receiver used for the accumulation of compressed air usually has a volume of up to 50 liters. Professional models are equipped with a receiver of 100 liters or higher. The larger the volume, the more stable the pressure at the air compressor output – due to the air reserve in the receiver, the engine power fluctuations caused by the voltage variations in the power supply line are leveled. This is especially important when it comes to a piston air compressor.
For household needs, there is usually no need to buy an air compressor with a capacity higher than 350 liters per minute and a working pressure higher than 10 atmospheres. The engine capacity of 1-1.5 kW will be also fine. We also recommend choosing a tool with power at least 30% higher than the load power consumption if the air compressor is purchased for a pneumatic tool.
Useful tips on how to choose an air compressor
- If you plan to use the tool for a long time, it is better to choose a screw air compressor. But keep in mind that it should not be switched on and off often. Its mechanism does not like constant temperature changes in the air compressor unit.
- Air compressors are also different in the type of power supply required. Some of them are designed for single-phase power supply, others – for three-phase one. Therefore, it is important to decide where an air compressor will be used and what power supply it will be connected to.
- If you need an air compressor designed for a long operation and “indestructibility” (for example, for a vehicle repair shop or furniture production), we recommended considering piston air compressors with a belt drive. They are noisier and larger than coaxial ones, but less vulnerable to breakdowns and have a bigger resource.
- When it comes to painting, the most important air compressor parameter is pressure stability. | <urn:uuid:47d37c00-33c1-4b32-8794-a41e899e241f> | CC-MAIN-2021-49 | https://indoor2outdoor.com/how-to-choose-an-air-compressor/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.943311 | 1,758 | 2.734375 | 3 |
Big Tech Giants Forced to Open Offices on Russian Soil.
Yes, you are reading this right companies, such as Google, Apple, Facebook, Twitter, and Tik Tok to name a few are being forced to open offices on Russian Soil.
A Bill passed by the British Government allows Russia to fine companies if they do not comply with their terms which means that companies that do not delete content banned by the Russian government could have their sites restricted access if they ‘discriminate’ against Russian media.
From what I understand if countries that oppose social media and restrict their residents, cannot expect the rest of the world to not read or view fact-checked content.
Now let me get this right and just for argument sake I build a social media platform and have more than 500,000. users I will be forced to open an office in Russia even though I am a British Citizen and have no intention of ever doing business with Russia.
So this brings me to the next question who owns the internet and what about free speech?
To answer the question of who owns the internet?
No one owns the Internet, and no single person or organization controls the Internet in its entirety. Yet there are organizations that think they have the right to control the internet as a whole. The internet is more of a concept than an actual tangible entity. The Internet relies on a physical infrastructure that connects networks to other networks. Therefore, the internet is owned by everyone that uses it.
What about freedom of free speech?
Under Article 10 of the Human Rights Act 1998, “everyone has the right to freedom of expression” in the UK. But the law states that this freedom “may be subject to formalities, conditions, restrictions or penalties as are prescribed by law and are necessary for a democratic society”.
This includes the right to express your views aloud (for example through public protest and demonstrations) or through:
- published articles, books or leaflets
- television or radio broadcasting
- works of art
- the internet and social media
I can understand regulations and restrictions not to mention licenses that website owners and social media companies have to abide by, governed by the laws in their own countries.
Should a country decide to restrict content in their region similar to the EU and GDPR then I understand about licenses in order for the content to be viewed in targeted countries. The General Data Protection Regulation 2016/679 is a regulation in EU law on data protection and privacy in the European Union and the European Economic Area. It also addresses the transfer of personal data outside the EU and EEA areas.
Surely though any country that forces a company to abide by their regulations in order for their residents to view your content then that should be down to the discretion of the company if they want their content viewed or not, pending the purchase of a license.? For example, if a country said to me buy a license or we will restrict your content, then it would be up to me to decide if I wanted traffic coming from the targeted country or not and I could restrict it on my end if I so wished or pay the license. The country that enforces the license could also restrict the content if a license was not purchased.
How To Block A Country Using a WordPress Plugin:
You can turn a WordPress Site into a Community Social Media Platform Using BuddyPress Plugin.
WordPress is the easiest to use the platform to build your own social network using the free BuddyPress plugin. It is super flexible and integrates beautifully with any kind of WordPress website. You’ll need a self-hosted WordPress.org website to start using BuddyPress.
Physical Office on Foreign Soil.
However, to be forced to physically open an office in a foreign country is beyond ridiculous. However, with the rules of GDPR that came into force on 25th, May 2018 most website owners added cookie banners and privacy policies whilst others blocked people in the EU from visiting their sites. Here is an example of one site that restricts EU users: https://www.wfla.com/
So how is it that Russia has any say in what BIG Tech Giants can and can’t do unless they are sponsoring and investing in them in some way? The move is to stop any company from promoting propaganda about Russia and that is fair enough in itself.
The 13 foreign and mostly US technology companies must now be officially be represented on Russian soil by the end of 2021 or could face possible restrictions or outright bans. If anything the website owner could restrict a country from gaining access and that is what I would do if I were in the Big Tech Company’s shoes?
Roskomnadzor, Russia’s communication regulator enforced demands that even companies that already have Russian offices will need to register online accounts with the regulator to receive user and regulator complaints, according to Reuters. All of the social media companies named, including the messaging app Telegram, have reportedly been fined this year for failing to delete content that Russia considered illegal.
I agree that all social media giants should stop the propaganda and fake news. However, if the news is fact-checked there is no reason for it not to be published. It stands to reason the social media giants should also remove content that is prohibited, such as drugs, pornography, suicide, and terrorism, or any criminal activities, you do not need the likes of Russia to tell you that and it is the responsibility of the website owner to do their due diligence and be responsible to protect its users from ever accessing anything that is deemed illegal. With High-Tech AI Algorithms, it is easy for robots to scan millions of articles daily and flag them for deletion.
This comes as Russian authorities tried to crack down on social media companies in the wake of protests following the arrest of Russian opposition leader Aleksei Navalny. Furthermore, a court in Moscow filed a lawsuit against Telegram, Facebook, Twitter, TikTok, and Google accusing the social media platforms of failing to remove content calling for teenagers to attend unauthorized protests.
Roskomnadzor also announced that it was restricting its citizens’ access to Twitter, accusing the American company of failing to remove thousands of posts relating to drugs, homophobia, pornography & enticement to unauthorized illegal protests.
Apple, which monopolizes the mobile applications market in Russia has been targeted for alleged abuse of its dominant position. Roskomnadzor said company’s that violate the legislation could face advertising, data collection, and money transfer restrictions, or outright bans.
If I were Apple I would not sell my product to Russia. I have clients that only ship to the UK, it is their prerogative at the end of the day although I am sure Welsh produce would be welcomed to the rest of the world.
Apple and Google in September removed an app meant to coordinate protest voting in the Russian elections. The Russian government has previously said that Moscow had no desire to block anyone or anything, but stressed that companies needed to follow Russian law.
Data that is used by Russian residents should be stored on Russian Servers according to Russia who has imposed repeated fines for banned content. Russia is promoting its domestic tech sector over Silicon Valley alternatives, proposing taxes on foreign-owned digital services, tax cuts for domestic IT firms, and requiring smartphones, computers, and other devices bought in Russia to offer users Russian software by default.
If you found this content interesting please take a moment to subscribe and never miss any news published on our site.
You agree to receive email communication from us by submitting this form and understand that your contact information will be stored with us.
#socialmedia #facebook #twitter #telegram #tiktok #google #apple | <urn:uuid:e9ebe457-ce0f-4d3c-8a76-1e28a3f64d55> | CC-MAIN-2021-49 | https://marketingagency.cymrumarketing.com/category/social-media/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.95638 | 1,582 | 2.515625 | 3 |
The rarest and most ancient dog species, once presumed ‘extinct’, was captured on camera traps in a remote mountainous region in New Guinea.
Scattered reports and unconfirmed photographs of the New Guinea Highland Wild Dog had left scientists mulling over the species’ continued existence for years, but in 2016 researchers finally found proof. After a rare encounter with intact canine paw prints, scientists from the University of Papua teamed up with the Southwest Pacific Research Foundation for a full scientific survey of the remote area of Puncak Jaya, where they encountered sure signs of Highland Wild Dogs.
The expedition uncovered both physical scientific evidence as well as hundreds of camera-trap pictures. Researchers discovered scat, dens, predations, trails, and tracks, in addition to the exceptional footage.
The New Guinea Highland Wild Dog (HWD) is one of the rarest canids in existence today, thought to be a missing link between primitive canids and the modern domesticated dog.
HWDs are considered apex predator in New Guinea, where they live in the wild at 3700-4600 meters above sea level within a barren, rocky alpine ecosystem speckled with shrubs, lichens, and mosses.
The camera traps revealed at least 15 different individual animals varying in color from gold to ginger, roan, and black. Both sexes were identified, including pregnant females and females with pups.
The dogs persist in small social groups and present classical scavenging behaviors.
A collection of fecal samples during the survey allowed for DNA analysis, linking the wild species to two of their domesticated relatives, the New Guinea Singing Dog and the Australian Dingo.
“The discovery and confirmation of the [HWD] for the first time in over half a century is not only exciting but an incredible opportunity for science,” the New Guinea Highland Wild Dog Foundation states on its website. | <urn:uuid:9b902c1b-4bcc-4a24-bccf-a3e9f8d508da> | CC-MAIN-2021-49 | https://roaring.earth/rediscovery-of-extinct-ancient-wild-dogs/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.924325 | 390 | 3.703125 | 4 |
The Looming Threat of a Cyber Attack
Glance at the headlines nearly any day of the week and you’re likely to read about a cyber attack:
- The Macy’s/Bloomingdale’s data breach of 2018
- The Equifax data breach of 2017
- The United States Office of Personnel Management breach of 2015
- Sony Playstation data breach of 2011
As people store more of their critical information online, attacks like these are becoming commonplace. We have all seen warnings about viruses or been told not to download files from emails or text messages we don’t recognize. But what exactly are cyber attacks and how likely is it that one will directly impact you? More importantly, what can you do to decrease the likelihood of being the victim of an attack?
What is a Cyber Attack?
A cyber attack is any instance in which a party attempts to gain unauthorized access to a computer or computer network to do harm. These attacks are generally carried out with the goal of obtaining sensitive data, exposing information, or destroying files. And while the attacks themselves can be quite sophisticated, usually they involve users inadvertantly downloading a file they should not have. These are just a few common ways people get infected:
- Downloading software from unreputable or unproven websites
- Using outdated versions of popular software
- Opening email attachments from unknown senders
- Using stolen software or downloading files from torrent sites
Modern technology has made the rapid exchange of information possible on a scale never before achieved. An unfortunate side effect of this is the vulnerabilities present in every single network. As networks grow and evolve, cyber threats do the same.
The Evolution of the Cyber Attack
As technology has grown, so too has the frequency and scope cyber attacks. What were once simple viruses are now often part of extravagant, well-orchestrated attacks by international crime syndicates or state actors.
Cyber attacks are often classified into five generations. The first three generations span the 1980s to the early 2000s. Cyber threats during these generations were often basic viruses or forms of network intrusion. These types of threats were curbed by firewalls and antivirus programs, a product of the cyber attacks.
In 2010 the world of cyberterror reached a turning point, with the deployment of the first polymorphic code-based attacks. These attacks allowed the targeting of specific businesses with a code that shifted during each deployment, making it harder to catch. Just like the first three generations, this code met its match with the creation of anti-bot software that could detect it.
The fifth generation of cyber attacks have been defined by ransomware, a type of attack that requires the victim pay a ransom to gain access to their own files.
Types and Methods of Cyber Attacks
Just as consumers have gotten more savvy about how they manage their digital lives, and get more comfortable with storing information online, or accessing critical services such as their banks online, so to have the criminals that exploit this trend to their benefit. While the goals of cyberterrorists can vary greatly, they often use one of several methods to attempt their attack:
- Data breach: This type of attack involves the stealing of sensitive information and is one of the more common types of attacks. Information acquired during a data breach is often sold on the “dark web”, an encrypted form of the internet that requires specific software to access. If you ever heard of the now-defunct website The Silk Road and its demise, it was an example of a more notorious site on the dark web.
- A data breach is often a form of a passive attack, during which the attacker doesn’t make their presence known and instead copies the files and information.
- Semantic Attack: In a semantic attack, incorrect information is displayed to the user without impacting the proper functionality of the system. For example, a semantic attack could involve tricking a user into giving the hacker money by convincing them they’re using a real financial site. Because of their nature, these attacks are difficult to detect and catch.
- Malware: Those shady popups and banner ads on your browser are usually a form of malware in disguise. If clicked, a user can potentially infect their system with a form of malware, leading to the theft of information. This can also lead to ransomware, which can cost a company tens of thousands of dollars.
- Distributed Denial of Service: A Distributed Denial of Service, or DDoS, is a type of attack in which an online service is rendered unusable by attacking its server from numerous services. DDoS attacks have become more common in recent years as online gaming grows. These attacks can make games unplayable for thousands of players at a time.
- Phishing: A phishing attack can be difficult for many users to detect as they rely on using what appears to be the email address of someone you know. In a phishing attack, the user will receive an email from a familiar name with an attachment to download or a link to click. If clicked, this will usually result in a form of malware.
This is not an exhaustive list of potential attacks or tactics used. The list grows by the month, with more and more threats cropping up as systems evolve and users store increasing amounts of information online.
Preventing a Cyber Attack
While cyber attacks are an unfortunate fact of life, the good news is that much of what is considered “security” involves using a handful of simple tools and making good choices. Here are a few things you can do to decrease the likelihood of becoming a victim of a cyber attack.
- Be aware of the information you make available online. Anytime you fill out an online form, remember that every piece of information you enter is stored in that company’s server. If you’re uncomfortable giving out a particular piece of information, don’t do it.
- Always look at the email address of anyone sending you an attachment. The “From” field may have the name of someone you know in it, but the actual email address will be something random and suspicious.
- Don’t overshare any information on social media. It’s not unheard of for social media to be targeted by cyber attacks, and any information you have on your profile will be fair game to a hacker. Also, be aware that many phishing attempts are made using spoofed social media accounts that look like someone you know.
- Use an ad blocker to lessen your chances of clicking anything invasive. Ad blockers will prevent malicious ads, or malvertising, from appearing on your browser at all. This makes it more difficult for you to get malware and prevents specific types of tracking. For more information on blocking ads, see our guide on how to block ads.
No method is guaranteed to prevent a cyber attack, especially in the case of a large-scale attack where millions of records are stolen. But, following the above steps can help reduce your chances and establish some great computer safety habits. You can’t put a price on safety, but you can put a price on the devastating fallout from a cyber attack. | <urn:uuid:10dfe060-e2cd-4d20-bf38-7143fb3dbd83> | CC-MAIN-2021-49 | https://ublock.org/cybersecurity-essentials/the-looming-threat-of-a-cyber-attack/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.946524 | 1,546 | 3.390625 | 3 |
One of Charles Dickens's most personally resonant novels, Little Dorrit speaks across the centuries to the modern audience. Its depiction of shady financiers and banking collapses seems uncannily topical, as does Dickens's compassionate admiration for Amy Dorrit, the "child of the Marshalsea," as she struggles to hold her family together in the face of neglect, irresponsibility, and ruin. Intricate in its plotting, the novel also satirizes the cumbersome machinery of government. For Dickens, Little Dorrit marked a return to some of the most harrowing scenes of his childhood, with its graphic depiction of the trauma of the debtors' prison and its portrait of a world ignored by society. The novel explores not only the literal prison but also the figurative jails that characters build for themselves. | <urn:uuid:3cbf3344-e3f6-4117-9538-06f3713d26fe> | CC-MAIN-2021-49 | https://www.audiobooksnow.com/audiobook/little-dorrit/242016/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.968852 | 161 | 2.828125 | 3 |
- 1 Benefits of wild fruits
- 2 WILD PLANTS WITH EDIBLE FRUITS
- 3 THE MOST IMPORTANT WILD FRUITS
Benefits of wild fruits
WILD PLANTS WITH EDIBLE FRUITS
How to cook with wild fruits
Important considerations on wild fruits
For this type of cuisine to be sustainable and healthy, it is advisable to observe the following rules:
- When collecting fruits in nature, it is important to follow the advice on the collection of wild plants.
- You can not eat any wild fruit, but only those which edibility is known. Poisonous fruit intake can have serious consequences. An immature fruit can be toxic as well.
- You have to make a responsible consumption, not to alter the ecosystem. Collect only what you need, and never more than 50% of the plant. Do not collect protected plants!
THE MOST IMPORTANT WILD FRUITS
Rosehips, the best source of vitamin C
Rose hips are the fruits of the wild rose (Rosa canina). They are very abundant and they are generall ynot used as food, despite being rich in pectin fiber and extremely rich in vitamin C, one of the best sources of this vitamin.
Rose hips are collected in autumn when the these fruits are mature, and they are bright red. With them, you can prepare a very thick and nutritious jam.
You can also bone, scald and add them to recipes, but this requires a very entertaining preparation that can only be done with those varieties that do not have spikes on the fruit.
|Photo of rose hips, mature and patiently boned, with a knife|
Blackberries and Raspberries
Raspberries, more common than blackberries in cold places, have the same properties as blackberries.
|Photo of a bunch of blackberries.|
Fruits of hawthorn
Hawthorn (Crataegus monogyna) is a European shrub of the Rosaceae family, like the wild rose. Its fruits are relatively similar to rosehips, and have in common a high content of vitamin C.
Fruits are collected in autumn to make jams and syrups. They can also be dried, powdered and used as seasoning for breads, cookies, etc..
|Fruits of hawthorn|
The fruits of blackthorn (Prunus spinosa) are eaten raw or cooked in jams, desserts, etc..
Fruits of elderberry
The fruits of elderberry or elder tree (Sambucus nigra) are rich in sugar, citric acid, malic acid and flavonoids like anthocyanins.
Elderberry is an example of toxic tree. Its leaves are highly toxic because of the presence of sambunigrin, a toxic component that is also found in the immature fruits. The toxic principle is a cyanogenetic glycoside which, if ingested, the body breaks down into prussic or hydrocyanic acid, a potent poison.
The flowers are edible. They are used as aromatic. Ripe fruits in summer are also edible.
Although ripe fruits are edible, they are very astringent, so they are usually eaten cooked in jams, jellies, wines and syrups. They provide analgesic principles for sore throat, colds and flu. In excess, they can cause stomach pain.
|Elderberry tree loaded with black ripe fruits.|
Strawberry tree fruits
Strawberry trees (Arbutus unedo) produce round, large size red berries that are very sweet and starchy .
They are advised in cases of urinary tract infections due to arbutin, a component of the fruit with antibiotic properties and helps to treat cystitis, urethritis, vaginal and urinary tract infections.
Another notable quality of this fruit is its high content in flavonoids (32.37 mg. / 100 g. Edible portion). 80% of the anthocyanins are flavonoids of this type (healthy for good vision).
They can be consumed raw or as fruit jams. A delicious ingredient for adding to stews in autumn.
|Strawberry tree fruits.|
Juniper berries are the fruits of juniper (Juniperus communis), a thorny shrub of the Cupressaceae family. With such fruits, gin, a popular alcoholic beverage, is made. The fruits are picked when ripe and dried to add to different recipes as spice, desserts, cocktails, meats,…
|Juniper berries, which are used as a spice.|
More information about wild plants in the kitchen.
31 July, 2020 | <urn:uuid:5caac665-9f2c-40f0-8b3f-9abd9220662a> | CC-MAIN-2021-49 | https://www.botanical-online.com/en/food/wild-fruits-properties | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.920113 | 988 | 2.984375 | 3 |
Imagine your child refuses to go to school one morning, and begs to change schools. This is the same kid who not long ago could not wait to get to school, and would pick out their clothes the night before with excitement. You are confused, frustrated, and worried about the sudden change. After some questioning, they hesitantly tell you that three older kids have been teasing them and spreading hurtful lies. What do you do? How do you stop this? How do you protect your kid from bullying while also letting them handle their own problems?
I'm Dr. Mercedes, and during my 10 years of experience as a child psychologist and parenting expert, I've sadly seen this scenario play out countless times. I now run the clinical team at Manatee, a virtual mental health clinic for families, and help kids become resilient against bullying. Technology creates more opportunities for bullying, and allows bullies to continue to taunt kids 24/7, even in the safety of their own home.
Getting bullied can be a traumatic experience for kids. Some are able to brush it off, while others feel anxious, depressed, and their self-esteem is diminished. As a parent, there are concrete things you can do to help your child be less impacted by bullying.
Here's how to protect your child from bullying, and what to do if they are being targeted.
While we can't guarantee our kids won't be bullied, there are things we can do to make it less likely that they will be bullied, and if they are, make the impact less profound:
1. Learn to identify bullying and cyberbullying
Help your child recognize the difference between rude behavior (not sharing a snack) or mean comments said during an argument ("I don't want to be your friend anymore") and bullying.
Typically, bullying has 3 components:
1. Intentionally cruel behavior
2. Repeats over time
3. Involves abuse of power (size, strength, or social rank at school).
Kids tend to either over amplify mean and rude behaviors as "bullying" or brush bullying behaviors off as "jokes" or "just being funny." Both are harmful and may lead them to have a lesser response if they do truly get bullied in the future. It's a fine line that needs careful attention.
2. Teach them about good friendships
Use books, movies, and TV to help find models of good and bad friendships. Teach them that it's normal (and healthy!) to disagree with a friend, and to want space from a friend. The key is learning how to communicate their needs respectfully and kindly.
3. Practice, practice, practice
Practice how to respond if someone says something hurtful, or if someone insults or humiliates them. Roleplays are great for this! Help them think about who they can ask for help if they are being bullied or cyberbullied. Talking to your child about bullying before it starts makes it more likely that they will come to you if they become a target.
4. Help them be allies
Research shows that the best way to stop bullying is by having a friend intervene and say, "don't do that, she is my friend." This helps create a culture of respect and one where bullying is not funny or rewarded. How can your child step in for a friend?
5. Focus on your relationship and trust
Use the most powerful tool to help your child be thoughtful about their online use… your relationship with them! You can only help them be safe if there is trust and communication. A lot of parents use monitoring and blocking software, which is fine, but kids are savvy and creative! These tools do not build trust between you and them, nor do they teach kids how to protect themselves.
What to do if your child is being bullied
1. Take a deep breath
It is hard to hear your child has been hurt, but this is a great time to show your child how to solve problems. Take a deep breath and focus on what your child is saying and feeling, not on your own reactions or what you think they "should" do next.
Find a private and quiet place where you can give your child your full attention. Ask open-ended questions like, "what happened next?" and "what did you do then?" Then, listen.
3. Label what happened
Help put into words what happened, and label it bullying if it meets the criteria. For example, "You were at the playground and Avery came over and threw a ball at your head. That is not ok and it sounds like bullying."
Let your child know that it is ok and normal to feel upset at the situation, and that what happened is not ok. For example, "No wonder you are angry and upset about what happened. What happened is not OK."
5. Do not blame your kid
Let your child know that it is not their fault. Make sure this is really clear. For example, "It didn't happen because you wear braces. Dakota might be upset about something else but that is no excuse for what she did." Perhaps your kid did some things to instigate or provoke the bullying, reassure them that this is not their fault. At the same time, it is important to help them reflect and identify what they could change next time and make this a learning experience. You can ask: "What do you think you could've done differently?" You can also explain that oftentimes bullies act out because they are insecure, don't know how to be nice to other people, or may want attention.
Praise your child for telling you about the incident. It can be very difficult for children to share that they are being picked on or harassed. Praise will encourage them to come to you next time they have a problem. For example, "I'm really happy you told me about it. Now we can work together. "
7. Instill hope
Communicate to your child that you will help them and that things will get better. For example, "Things have been hard at camp lately, let's think of some things we can do so you enjoy camp more."
8. Team up with your kid
Victims of bullying often feel a loss of dignity and control over a social situation. To help your kid restore some sense of control, work together in making an action plan. Team up to determine what they can do to feel safe and hold the bully accountable. Oftentimes this means looping in the school. Make sure you also respect your kid's boundaries, as they tend to understand the social context more than parents do, and are better at weighing the consequences accordingly.
9. Restore self-respect
10. The best outcome after an incident is to help your kid regain dignity, self-respect and a sense of control in their life. Depending on the child and situation, this may mean standing up to the bully, or ignoring the attacks. Together, you can figure out how you can help your child restore their overall self-worth.
10. Keep perspective
Gossip and rumors tend to come and go very quickly, although it can feel devastating in the moment. Remind your child that in three months, many people may not be thinking about what happened, and in one year, people will likely not remember this at all. However, make sure you validate their feelings. You can say: "I understand that in this moment this feels devastating, and that is ok. How do you think this will feel in three months? What about a year? How important will it be in 10 years?"
What to do if your child is being cyberbullied:
If your child is being bullied online, there are a few important extra steps:
In the moment
1. Stop and block contact
It's important that you encourage your kid to ignore the bully (yes, easier said than done). They should not respond to the bully directly or retaliate. Make sure to block the bully in all forms and temporarily deactivate the accounts that they used to contact your kid.
2. Record and report
Take screenshots of the messages or videos and keep detailed records of any bullying. If possible, report the person, their messages or posts to the social media and website admin as well as to their school (if they are a student).
3. Take action
Discuss and review the privacy settings and "friends" in their social media accounts. Explain why some level of privacy is important and update settings (if needed) together with your kid.
Finally, know that parenting in a rapidly changing world is not easy, so be kind to yourself! However, the first step is simple: talk openly with your kids (and use this article as a guide). Then, if you want more support, seek out the help of experts.
Dr. Mercedes Oromendia is a child psychologist and the Chief Clinical Officer at Manatee, a virtual mental health clinic for families. If you are curious about how we help kids' overcome bullying, build resilience and bring ease and fulfillment to parenting, book a free 20 min session with a family expert.Want to get more parenting tips on topics like this? Follow @getmanatee on Instagram, Twitter, Linkedin, and Facebook or learn more about us at getmanatee.com. | <urn:uuid:8d729efc-589c-4899-8ce3-caa8b72e6e6b> | CC-MAIN-2021-49 | https://www.mother.ly/parenting/help-child-deal-with-bully/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.965106 | 1,917 | 3.265625 | 3 |
At the top of the sequence are several layers with very few archaeological remains. Sometime during the deposition of Layer B, the rock shelter completely collapsed.
Layer F, more or less arbitrarily subdivided by Bordes into four levels, F1-F4, is the richest at the site. Due primarily to the presence of a few bifaces in each of the F levels, Bordes classified these industries as MTA with a switch from Type A in F4 to Type B in F2 and F1.
Below F4 are Layers G, H1 and H2, described as sandy with scattered éboulis (or roof fall), with few stone tools. Bordes classified Layers H1 and H2 as Typical Mousterian, though Layer G was nearly sterile. Bordes felt that some of the tools identified from this layer more likely represent pockets of material derived the above layer F4.
Layers I1 and I2, on the other hand, yielded a rich and classic Typical Mousterian industry which Bordes (1975:298) described as "esthetically speaking, the most beautiful of the site." In addition to the industry, Layer I2 is characterized by numerous small éboulis. Stratigraphically the distinction between I1 and I2 is not marked, though I1 has fewer éboulis and fewer stone tools. Both layers were at times highly concretionated.
Bordes divided layer J into J1, J2 and J3 and further subdivided the latter into J3, J3a, J3b and J3c. All of the J layers are described as a fairly pliable sand with rare éboulis, a rich lithic industry, fauna, and with traces of fire. In terms of color, the top of the J3 layers is red, then it becomes more gray and finally black towards its base.
Layer J2 is described by Bordes as having been affected by cryoturbation with rounded limestone blocks and damaged flints in a sandy matrix. The effects of cryoturbation seemed to be more pronounced in the front of the cave than in the rear. Layer J1, consisting of a light red-brown sands, contained large blocks of éboulis representing another partial collapse of the shelter. Bordes characterized the industries of both these layers at Typical Mousterian though neither is particularly rich.
Bordes called the industries of J3 Typical Mousterian, but the industries of J3a-c were more difficult to categorize. They are characterized primarily by a very high percentage of Levallois flakes and very low percentage of tools. In addition, the Levallois flakes and cores in these levels are quite often extremely small. In fact, many of them fall below standard size cut-offs for analysis (2.5cm). The artifacts are so small that Bordes (1975:298) even considered the term "Micromousterian" to describe this industry, but instead settled on Asinipodian (a Latin translation of Pech de l’Azé) to emphasize that Pech de l’Azé is the only site where it is known to exist.
A layer of éboulis, representing an early partial collapse of the rockshelter, immediately underlies the J Layers.
At the base of the sequence, resting on bedrock, are three Typical Mousterian layers which Bordes called X, Y and Z. These layers consisted of multiple lenses which sometimes made it difficult to distinguish one layer from the next. Interestingly, both Layers X and Z contain traces of burning and possibly even hearths. In Layer Z, the burning appears directly on the bedrock and has left the limestone reddened and fire cracked. All three of these layers also have the highest percentages of burned pieces in the assemblage. | <urn:uuid:feba742d-ff6e-4dd7-a934-46c0ffcaee7a> | CC-MAIN-2021-49 | https://www.oldstoneage.com/osa/piv/bordes_strat/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.956833 | 792 | 3.078125 | 3 |
Creating and Using UDP Sockets
As shown in the example of the last section, UDP datagrams are sent and received via sockets. However, unlike TCP sockets, there is no step in which you connect() the socket or accept() an incoming connection. Instead, you can start transmitting and receiving messages via the socket as soon as you create it.
UDP Socket Creation
To create a UDP socket, call socket() with an address family of AF_INET, a socket type of SOCK_DGRAM, and the UDP protocol number. The AF_INET and SOCK_DGRAM constants are defined and exported by default by the Socket module, but you should use getprotobyname(" udp ") to fetch the protocol number. Here is the idiom using the built-in socket() function:
socket(SOCK, AF_INET, SOCK_DGRAM, ... | <urn:uuid:1b6af91b-1e18-4a5b-8250-fd8b8c4a377c> | CC-MAIN-2021-49 | https://www.oreilly.com/library/view/network-programming-with/0201615711/0201615711_ch18lev1sec2.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.877121 | 190 | 3.84375 | 4 |
SQL is a powerful querying language that's used to store, manipulate, and retrieve data, and it is one of the most popular languages used by developers to query and analyze data efficiently.
If you're looking for a comprehensive introduction to SQL, Learn SQL Database Programming will help you to get up to speed with using SQL to streamline your work in no time. Starting with an overview of relational database management systems, this book will show you how to set up and use MySQL Workbench and design a database using practical examples. You'll also discover how to query and manipulate data with SQL programming using MySQL Workbench. As you advance, you’ll create a database, query single and multiple tables, and modify data using SQL querying. This SQL book covers advanced SQL techniques, including aggregate functions, flow control statements, error handling, and subqueries, and helps you process your data to present your findings. Finally, you’ll implement best practices for writing SQL and designing indexes and tables.
By the end of this SQL programming book, you’ll have gained the confidence to use SQL queries to retrieve and manipulate data. | <urn:uuid:d252cba0-3a20-4f0f-8d71-de3d8cf885cc> | CC-MAIN-2021-49 | https://www.packtpub.com/product/Learn-SQL-Database-Programming/9781838984762 | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.886202 | 231 | 3.390625 | 3 |
A coin-sized device implanted on the side of the neck could significantly reduce rheumatoid arthritis symptoms in patients, including the severe pain that makes their daily tasks harder. Tested by researchers at the Feinstein Institute for Medical Research, University of Amsterdam and SetPoint Medical, the tiny gadget works by giving small electric shocks for up to four hours a day to stimulate the vagus nerve and reduce inflammation.
Compared to men, women are three times more likely to be affected by this type of arthritis, which severely attacks the wrists, toes, fingers, ankles and knees. The research team wanted to find out whether the device could help reduce inflammation by stimulating directly the vagus nerve, which runs through the body and connects the brain to the major organs. It cuts production of the immune system chemicals responsible for driving rheumatoid arthritis.
The study findings were very encouraging. The treatment aims at eliminating inflammation and ultimately relieve symptoms, preventing further damage and reducing long-term complications. Some people involved in the research experienced total relief after having tried all kinds of drugs to treat the autoimmune disease unsuccessfully.
The experiment was published in the Proceedings of the National Academy of Sciences, and its authors implanted a stimulation device on the vagus nerve in 17 rheumatoid arthritis patients. It took them just an hour. The gadget was enabled and disabled on a set scheduled for 84 days and researchers monitored progress and treatment response at 42 days.
The device generated tiny electric shocks for up to four minutes a day, activating the vagus nerve. Symptoms of the disease were cut in half in about 60 percent of the participants, according to the paper.
The research team used a disease activity composite score called DAS28-CRP to measure the patient response. It works by measuring tender and swollen joints, protein levels that are C-reactive and the assessment of disease activity by doctor and patient.
The team did not find any serious side effect, and many of the patients whose bodies had not successfully responded to previous treatments experienced inhibition of TNF production, which can make the increase the disease’s severity.
One of the participants said she had her normal life back after receiving the treatment, as reported by the Daily Mail, which informed that the disease affects 400,000 Britons. The woman had suffered from severe pain as she struggled to walk across her house. She compared the treatment with magic and said she was now able to go biking, drive her car and walk her dog.
A real breakthrough to treat patients struggling with inflammatory diseases
“This is a real breakthrough in our ability to help people suffering from inflammatory diseases,” Dr. Kevin Tracey, president and CEO of the Feinstein Institute for Medical Research, expressed in a press release, as reported by United Press International. “I believe this study will change the way we see modern medicine, helping us understand that our nerves can, with a little help, make the drugs that we need to help our body heal itself,” Tracey added.
Tracey explained that the treatment had been previously tested in animal models and noted that this is the first time researchers see that electrical stimulation of the vagus nerve works by inhibiting cytokine production and ultimately reduce symptoms of rheumatoid arthritis in humans.
The treatment requires further research before it can be offered to all patients suffering from the disease, but researchers say that the method might also be effective to treat other inflammatory diseases such as Parkinson’s, Crohn’s, and Alzheimer.
Anthony Arnold, chief executive officer of SetPoint Medical, said the findings provide a new approach to using bioelectronics medicines to fight diseases. These drugs use electrical pulses instead of powerful and expensive drugs currently used to treat patients. Such drugs are often associated with severe side-effects, including a higher risk of heart attacks and strokes. A small implant could be safer and cheaper.
“These results support our ongoing development of bioelectronic medicines designed to improve the lives of people suffering from chronic inflammatory diseases and give healthcare providers new and potentially safer treatment alternatives at a much lower total cost for the healthcare system,” Arnold continued, according to UPI.
And Dutch researcher Professor Paul-Peter Tak, of the University of Amsterdam, said this new method would completely change the way health experts think about treatment, according to the Daily Mail. He also works at Cambridge University and drug firm GSK.
Source: United Press International | <urn:uuid:e937da8f-2841-44d1-a928-70245d1a928e> | CC-MAIN-2021-49 | https://www.pulseheadlines.com/electric-device-neck-helps-reduce-severity-rheumatoid-arthritis/39040/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.956515 | 898 | 2.515625 | 3 |
What’s the big deal?
The laws that govern music are outdated; they're nearly 100 years old. When they were written in the early 20th century, digital service providers (DSPs) such as Spotify, SoundCloud, and Apple Music, and other music streaming technologies, did not exist. Why are laws so archaic, outdated, and inadequate to the modern-day music industry and its technologies still in effect?
The music industry is no longer a physical distribution industry; therefore, laws that were written and passed to govern physical music distribution are obsolete and shouldn't be governing digital service providers' platforms and their technologies.
The failure to update these antiquated laws has caused countless headaches and barriers for songwriters working in such a creative industry by greatly reducing their publishing royalties. For the music industry and those involved to continue to succeed, it's imperative that Congress pass the Music Modernization Act 2017.
ASCAP, one of the three performing rights organizations in the United States, has launched a songwriter petition that urges Congress to pass the bipartisan bill. ASCAP states,
Not all royalties are treated equal
There are two types of royalties: one for the sound recording (the master) and one for the publishing. The master recording is generally owned by the label and often has a clear path as to how the royalties are paid: pay the label its chunk and the label will distribute everyone's cut. The publisher side of the royalties (the songwriters'), however, isn't as cut and dried. The publisher royalties go to the songwriter(s) for the composition (the melodies and rhythms within the track). The publishing royalties are collected by the performing rights organizations, which are ASCAP, BMI, and SESAC in the United States.
The digital service providers' unclear handling of publishing has produced a lackluster and unequal royalty rate for songwriters. This has screwed songwriters over big time. If a track earns $100k on the master, the songwriter may or may not earn $10k from the publisher royalty, only 10% of what the master is earning. The Music Modernization Act 2017 would revolutionize how songwriters and master recording owners are compensated for both the master and the publisher royalties.
What significance will this bill have for music copyright law?
By updating and improving the way that mechanical and performance royalties are calculated, the DSPs would know exactly who to pay royalties to (especially on the publishing side). Currently, these DSPs often don't know who to pay due to lack of information; therefore, songwriters are losing out on potentially a lot of money from their music.
The Music Modernization Act 2017 intends to disrupt the current lack of transparency and information by creating an entity that's funded by digital music companies. This entity would pay out proper mechanical royalties for interactive streaming. Furthermore, songwriters' odds of winning higher royalty rates in Copyright Royalty Board proceedings would increase.
Lastly, this bipartisan effort would reform the inept "system that determines performance royalty rates for ASCAP and BMI by allowing rate courts to review evidence into the valuation of how songwriters are compensated as well as giving them improvements to the rate court process."
Who supports this bill, and how can I help?
The Music Modernization Act 2017 is currently supported by the National Music Publishers' Association (NMPA), the Nashville Songwriters Association International (NSAI), Songwriters of North America (SONA), the American Society of Composers, Authors and Publishers (ASCAP), Broadcast Music Inc. (BMI), the Church Music Publishers' Association (CMPA), the Production Music Association (PMA), MusicAnswers, the Music Publishers Association (MPA), the Council of Music Creators (CMC), the Society of Composers and Lyricists (SCL), and the Association of Independent Music Publishers (AIMP). The Digital Media Association (DiMA), which represents Amazon, Apple, Pandora, Spotify, and YouTube, also supports this bill.
Click HERE to sign ASCAP’s petition to urge Congress to pass the Music Modernization Act 2017 and treat songwriter and master recording owner royalties equally. | <urn:uuid:aa78be47-7654-41fb-8664-c1ad4bf900b4> | CC-MAIN-2021-49 | https://www.realstreetradio.com/songwriters-are-urging-congress-to-pass-the-music-modernization-act/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.944642 | 854 | 2.859375 | 3 |
Editor's Note: Young adult novelist Sara Ryan attended Maker Faire in New York looking for kids doing fascinating things with technology. As we've noted before, we believe the maker ethic has a shot at overhauling education. And this team of young women is one reason why.
The World Maker Faire took place in and around the New York Hall of Science this past weekend. Sponsored by Make magazine, the event was created to unite "engineers, artists, crafters, tinkerers and scientists." Depending on where you look, Maker Faire takes on qualities of carnival, craft skillshare, indie arts fest and science fair. It's a massive collaborative venture, one that's overwhelming to contemplate.
So I head straight for the Young Makers pavilion, operating with the theory that if the Faire ends up having the staying power to outlast the current DIY trend, it will be because today's kids and teens embrace its tech-positive, tinkering-friendly spirit. Almost immediately, I find high school sophomore Amy Lai, helping a little girl practice driving. They're controlling a four-wheeled vehicle that I later learn is called a VEX bot, built using a kit from VEX Robotics.
There are two VEX bots on a square mat. Sometimes the bots run into corners, or each other, and there's a steady stream of kids eager to take the controls. My conversation with Amy is regularly interrupted as she repositions the bots and gently encourages the kids to share. Amy is a member of the Fe26 Maidens, an all-girl robotics team from the Bronx High School of Science. (If you use the word Iron, it's copyrighted by the band; thus the team's clever periodic table-based name.) She got interested in joining after attending a school fair that featured robotics projects. "I just thought it was really amazing," she recalls. "I wanted to learn to build them."
The Fe26 Maidens designed these particular VEX bots as part of a getting-to-know-each-other project. Team member Vicky Chen likes the bots' relatively small scale. "The controllers are more kid-friendly and they're easier to transport, so we can take them to hospitals and libraries when we do outreach," she says.
Outreach? Absolutely. Fe26 Maidens is one of hundreds of teams that compete in the FIRST Robotics Challenge. Spreading the science-and-robots-are-awesome gospel is part of the deal. Last year they helped a rookie team get started by throwing a tool drive. "I think when they started they had, like, one drill and a couple screwdrivers," says team captain Leena Chan.
This collaborative spirit -- which Amy, Vicky and Leena all describe with the phrase "gracious professionalism" -- is a priority for FIRST. The phrase is trademarked and defined as part of their mission.
Gracious Professionalism (TM) would seem to imply that an all-girl team like Fe26 Maidens wouldn't face anti-girl prejudice when they compete. When pressed, Leena admits, "There are always one or two jokes. I needed to borrow a jigsaw and a guy asked, 'Wait, are girls gonna use it?' But we prove ourselves. Last year we beat our brother team at the New York regionals. That was cool."
Proving themselves involves a significant time commitment. Each team has six weeks to complete a challenge. During that time, team members work every day, including weekends and holidays. Adult mentors help with technical aspects of the project and also with logistics, such as getting the necessary permits for team members to stay late in the school building. And the Fe26 Maidens I spoke to all have long commutes to school, between one and two hours each way. Their parents need to accept that, during challenge season, they'll be riding the subway home at nine or ten at night -- an education in itself, I couldn't help commenting.
But it's so worth it, they assure me: learning how to design something, how to complete a task. "How to execute," Leena says with enthusiasm. Team members form strong friendships that extend beyond robotics to other pursuits including, Amy tells me, karaoke. And they're well aware of how the experience will serve them in the future. As Leena points out, "When you get a real job, no one works alone." | <urn:uuid:9a5048f1-395e-4145-a698-2b73df7cc008> | CC-MAIN-2021-49 | https://www.theatlantic.com/technology/archive/2010/09/meet-the-fe26-maidens-the-all-girl-robotics-team-from-the-bronx/63652/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.963985 | 909 | 2.609375 | 3 |
The Mumford procedure, also known as distal clavicle resection, is a surgical procedure that aims to relieve shoulder pain by removing a small part of the clavicle, or collar bone. Patients suffering from painful inflammation, swelling, or osteoarthritis in the acromioclavicular (AC) joint — where the end of the clavicle meets the shoulder — may elect to have this procedure, especially if alternative solutions like physical therapy and cortisone injections are unsuccessful. The surgery can be performed using an open or arthroscopic procedure, and typically requires eight to ten weeks of recovery time.
Reasons to Have this Surgery
Surgeons usually perform this procedure when bone spurs develop on the collar bone, narrowing the AC joint and preventing it from moving smoothly. These spurs can be caused by arthritis or overuse. A condition called distal clavicular osteolysis or "weightlifter's shoulder" can develop in people who put a great deal of stress on this joint; in this condition, the end of the clavicle begins to break down. Removing the damaged end of the collar bone can help relieve pain and restore movement for many of these patients.
The Mumford procedure is a relatively common and simple surgery, and has a high success rate. Clinical studies show that, depending on the underlying problem and the type of surgery used, at least 75% - 90% of patients report good to excellent outcomes.
Before the Surgery
Before the Mumford procedure is recommended, a health care provider will evaluate the patient, feeling for swelling or tenderness in the AC joint and checking the patient's range of motion. A series of tests are performed to see if certain types of movement in the arm and shoulder will cause pain for the patient. This is followed by x-rays and magnetic resonance imaging (MRI) so that the health care provider can look for clear signs of bone spurs or other problems in the joint, and to help rule out alternate causes of pain.
Non-surgical treatment methods are nearly always recommended before a patient undergoes surgery to fix a problem with the AC joint. Such treatments can include icing and resting the shoulder, anti-inflammatory medications, corticosteroid injections, and physical therapy. Most health care providers recommend trying these methods for at least six months before considering surgical options.
Open Distal Clavicle Resection
During an open Mumford procedure using a direct approach, the patient may be given a sedative, along with general anesthesia or a regional interscalene block, which numbs the nerves in the shoulder and arm for up to 24 hours after the surgery. An incision is made on top of the AC joint, and the fibrous tissue, or fascia, over the joint is cut; it may also be necessary to release the shoulder muscles from the bone. A surgical saw is used to cut about 0.4 to 0.8 inches (1 to 2 centimeters) or less of bone off the end of the clavicle. Pieces of the bone are removed and the tissue and skin are sutured back together.
In the indirect approach, the surgeon performs the procedure from below the joint rather than above. Many of the same steps are performed in this approach, although the bursa — a small fluid-filled sac that cushions the joint — is typically removed. The indirect approach is often preferred when other surgical procedures, such as a rotator cuff repair, are also being performed.
Arthroscopic Distal Clavicle Resection
Although the original Mumford procedure was an open surgery, advances have made arthroscopic techniques increasingly popular. As with open surgery, arthroscopic procedures can be performed using both direct and indirect approaches. In this type of surgery, several small incisions are made in the shoulder, and a camera and the surgical instruments are inserted into the joint. Unlike with the open procedure, it is typically not necessary to release the muscles to perform this type of surgery. The covering of the joint, known as the joint capsule, is removed and a surgical burr is used to shave off a portion of the clavicle.
Both the open surgery and the arthroscopic surgery can be done on an outpatient basis, although in some cases, the patient may be required to stay overnight. The surgery itself typically takes one to two hours, and the patient may require another two hours for the anesthesia to wear off. How long it takes for a patient to recover from the surgery itself will depend on which type of procedure was used and the body's own healing speed, but people who have arthroscopic surgeries usually recover faster. The incision in the skin and fascia and the release of the muscles in the open surgery will take longer to heal than the smaller incisions made during the arthroscopic procedure.
The patient will need to rest the shoulder and manage any pain and swelling with ice and medication for the first few days after the Mumford procedure. For the first day or two, the arm is typically immobilized in a sling, and any movement should be kept to a minimum. The bandages can often be removed after about two days with an arthroscopic procedure and a week after open surgery.
After a few days, light or passive arm movement may be recommended, and the patient can stop wearing the sling if doing so does not cause pain. After the first week, the patient may begin light physical therapy and range of motion exercises; even while wearing a sling, moving the fingers and hand can help with circulation. It often takes about three weeks for the patient to regain normal use of the shoulder and arm. Sports and other more strenuous physical activities should usually not be performed for eight to ten weeks after the surgery. The patient is advised to proceed with therapy slowly and report any pain to his or her physician.
Complications from this surgery are generally minor and rarely occur, regardless of the procedure used. The most common complication is joint tenderness, along with stiffness and some minor loss of elevation. Some patients experience weakness in the shoulder and arm, particularly after the open procedure. The ligaments around the joint can be damaged during surgery, leading to shoulder instability. Infection in the joint is also possible.
In some cases, the surgeon may not remove enough of the bone during the procedure, so the patient may still experience long-term pain. It is also possible that problems with the AC joint were not the only causes of pain in the shoulder, so the surgery may not solve the underlying condition. | <urn:uuid:b531cbb8-a0e5-4d68-bb1a-59a6dea0e9d3> | CC-MAIN-2021-49 | https://www.thehealthboard.com/what-is-a-mumford-procedure.htm | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.94912 | 1,344 | 2.59375 | 3 |
The strength of the magnetic fields here on Earth, on the Sun, in interplanetary space, on stars in our galaxy (the Milky Way; some of them anyway), in the interstellar medium (ISM) of our galaxy, and in the ISM of other spiral galaxies (some of them anyway) has been measured. But there have been no measurements of the strength of magnetic fields in the space between galaxies and between clusters of galaxies (the IGM and ICM).
Up till now.
But who cares? What scientific importance does the strength of the IGM and ICM magnetic fields have?
Estimates of these fields may provide “a clue that there was some fundamental process in the intergalactic medium that made magnetic fields,” says Ellen Zweibel, a theoretical astrophysicist at the University of Wisconsin, Madison. One “top-down” idea is that all of space was somehow left with a slight magnetic field soon after the Big Bang – around the end of inflation, Big Bang Nucleosynthesis, or decoupling of baryonic matter and radiation – and this field grew in strength as stars and galaxies amassed and amplified its intensity. Another, “bottom-up” possibility is that magnetic fields formed initially by the motion of plasma in small objects in the primordial universe, such as stars, and then propagated outward into space.
So how do you estimate the strength of a magnetic field, tens or hundreds of millions of light-years away, in regions of space a looong way from any galaxies (much less clusters of galaxies)? And how do you do this when you expect these fields to be much less than a nanoGauss (nG), perhaps as small as a femtoGauss (fG, which is a millionth of a nanoGauss)? What trick can you use??
A very neat one, one that relies on physics not directly tested in any laboratory, here on Earth, and unlikely to be so tested during the lifetime of anyone reading this today – the production of positron-electron pairs when a high energy gamma ray photon collides with an infrared or microwave one (this can’t be tested in any laboratory, today, because we can’t make gamma rays of sufficiently high energy, and even if we could, they’d collide so rarely with infrared light or microwaves we’d have to wait centuries to see such a pair produced). But blazars produce copious quantities of TeV gamma rays, and in intergalactic space microwave photons are plentiful (that’s what the cosmic microwave background – CMB – is!), and so too are far infrared ones.
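To get a feel for the energies involved, here is a back-of-the-envelope estimate (mine, not a figure from the papers): for a head-on collision, pair production requires the two photon energies to satisfy

$$E_\gamma \, E_{\mathrm{bg}} \gtrsim (m_e c^2)^2 \approx (0.511\ \mathrm{MeV})^2$$

so a 1 TeV gamma ray needs a background photon of at least a few tenths of an eV (far-infrared light), while pair production on the roughly 10^-3 eV photons of the CMB only becomes possible at a few hundred TeV.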
Having been produced, the positron and electron will interact with the CMB, local magnetic fields, other electrons and positrons, etc. (the details are rather messy, but were basically worked out some time ago), with the net result that observations of distant, bright sources of TeV gamma rays can set lower limits on the strength of the IGM and ICM magnetic fields through which they travel. Several recent papers report results of such observations, using the Fermi Gamma-Ray Space Telescope, and the MAGIC telescope.
So how strong are these magnetic fields? The various papers give different numbers, from greater than a few tenths of a femtoGauss to greater than a few femtoGauss.
“The fact that they’ve put a lower bound on magnetic fields far out in intergalactic space, not associated with any galaxy or clusters, suggests that there really was some process that acted on very wide scales throughout the universe,” Zweibel says. And that process would have occurred in the early universe, not long after the Big Bang. “These magnetic fields could not have formed recently and would have to have formed in the primordial universe,” says Ruth Durrer, a theoretical physicist at the University of Geneva.
So, perhaps we have yet one more window into the physics of the early universe; hooray! | <urn:uuid:6899fbe9-3b6d-441b-8c12-d6ca9cd561a3> | CC-MAIN-2021-49 | https://www.universetoday.com/62732/magnetic-fields-in-inter-cluster-space-measured-at-last/?shared=email&msg=fail | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.938729 | 837 | 3.59375 | 4 |
The two engineers faced a crossroads. Their related work as CU Boulder doctoral students was promising, but with graduation near, the job market beckoning and research yet to do, they had to decide: Could they afford to go all in on building a better battery?
“There was a lot of soul searching,” said Whiteley, who’d been considering a job offer with an established battery start-up. “We realized we had to be OK with pursuing what’s uncomfortable.” Six months earlier, Huggins had cold-called Se-Hee Lee, an associate professor in CU’s mechanical engineering department, to tell him about an idea for a new kind of electrode — the central component of any battery.
Most electrodes are made from carbon-based minerals, a finite resource. Huggins wanted to harness a better raw material, something biological and infinitely renewable.
Lee connected Huggins with Whiteley, one of his graduate students. The pair had complementary expertise — one knew biology, one knew electrical systems — and they shared an entrepreneurial sensibility. The first time they met, they talked for hours about the possibility of cultivating high-quality electrodes the way one might cultivate tomatoes.
The idea wasn’t outlandish. Other researchers had used biomass (fungus and timber) in experimental batteries. But biomass is pricey, and no form of it had been shown to outperform the graphite used in a typical lithium-ion AA. That’s why battery technology hadn’t changed meaningfully since the 1970s.
“A novelty has no value until it outperforms the market,” said Huggins.
With help from Lee and Zhiyong Jason Ren, Huggins’s advisor, they began tinkering with a type of fungus, Neurospora crassa, that could be grown in just 24 hours and chemically manipulated for optimal electrical conductivity. The mature fungus offered a ready-made substitute for a standard electrode.
The trick would be growing it in bulk. Enter the brewers’ wastewater.
A brewery uses seven barrels of water for every barrel of beer produced, and post-fermentation wastewater is rich in organic compounds that are difficult and expensive for brewers to filter. Municipal water treatment represents a significant business expense. But wastewater just so happens to be a perfect spot for a voracious fungus to thrive. Huggins and Whiteley knew it.
So they called two of Colorado’s leading craft brewers, Odell Brewing Co. and Avery Brewing Co., to ask for samples. The reply: “You want what?”
Once the disbelief wore off, the brewers were happy to provide all the free wastewater the engineers could handle.
“We’re taking some cost and headache off the board for them,” said Whiteley.
From there, the battery-making process took shape: Seed the wastewater with spores, wait for the fungus to congeal into a jelly, then bake it at 1,472 degrees Fahrenheit.
The resulting charcoal-like substance is, in essence, a raw electrode compatible with existing battery designs. Better yet: the material performs just as well as graphite, and Huggins and Whiteley proved it.
By June, they’d secured a patent, turned down job offers and co-founded a company, Emergy Labs, to perfect their prototype.
They won’t try to duke it out with Duracell in the consumer battery market. But if all goes well, they’ll adapt the technology for business use, allowing companies to store, say, wind and solar energy more efficiently — while putting breweries’ dirty water to work.
An eco-friendly win-win for beer lovers and energy consumers alike? We'll drink to that.
The adaptation of MAIN to Icelandic
Immigration in Iceland has a short history and so does the Icelandic language as an L2. This paper gives a brief introductory overview of this history and of some characteristics of the Icelandic language that constitute a challenge for L2 learners but also make it an interesting testing ground for cross-linguistic comparisons of L1 and L2 language acquisition. It then describes the adaptation process of the Multilingual Assessment Instrument for Narratives (LITMUS-MAIN) to Icelandic. The Icelandic MAIN is expected to fill a gap in available assessment tools for multilingual Icelandic speaking children. | <urn:uuid:43263f0a-3f50-41b8-b451-7badba0022c4> | CC-MAIN-2021-49 | https://zaspil.leibniz-zas.de/article/view/564 | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00471.warc.gz | en | 0.893511 | 152 | 2.53125 | 3 |
Red Yucca is a native to Texas and can be found in most areas of the state. It is remarkably cold hardy in Zones 5-10. Red Yucca is not truly a yucca, but an Agave that has the hardy character of a yucca. This plant will handle almost all weather conditions, including drought conditions. With its red tubular flowers, it's great for attracting bees, hummingbirds and butterflies. Plant red yucca close together for a dramatic showy look, or with a backdrop of dark green or a solid wall to really show off this plant. | <urn:uuid:aabcb908-c997-4076-b66e-fcee813d9be2> | CC-MAIN-2021-49 | http://creeksidenursery.com/whats-new/2021/09/03/red-yucca/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.937556 | 123 | 2.515625 | 3 |
In the now lost ManuscriptCulture (aka chirographic culture), knowledge was primarily transferred through manual handwriting stored on some medium, such as papyrus, parchment, or paper, in some form, such as a roll or codex. Critical dimensional elements of this culture are:
- Readers have the right (*) to copy and distribute texts.
- Texts only survived if they convinced readers to copy them.
- Texts are malleable, dynamic forms that reflect the handiwork of many people copying, correcting, reformulating over time, not just their originator.
- Individual codices were given PersonalAnnotations, many of which were copied as well.
- (*) The distribution of knowledge, given by God, was seen as a natural right, and so the question of readers having a 'copyright' was not really raised, especially since they had no other way to get access to the material.
Secondary elements of this culture related to the particulars of manuscript production:
- The cost of manufacture made TextAsObjects prized possessions.
- As prized possessions, they tended to be illuminated.
- As costly products, they created a specialized (literate) industry in their production--the scribal guilds. Note that as such literacy crossed class boundaries, if not functional boundaries.
The WorldWideWeb is also much more like a ManuscriptCulture than any other previous age, but with the mechanization of the printing press, and the instantaneous transmission of electricity. Note that web pages are also costly, and so they are illuminated, and they have a scribal culture of web programmers. | <urn:uuid:de896321-1922-46be-9536-c916c5c9655f> | CC-MAIN-2021-49 | http://meatballwiki.org/wiki/ManuscriptCulture | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.954375 | 354 | 3.765625 | 4 |
You know, clay is one of the most maligned substances in the garden. Of the three mineral particles that make up soil, clay particles are the smallest, sand the largest, and silt intermediate in size. Because the particles are so fine, they pack tightly together with little or no airspace between them, which makes it hard for moisture and air to penetrate. That is why clay soil drains poorly, stays soggy after rain, is prone to waterlogging in winter and cracking in summer, and is late to warm up in spring (water heats more slowly than mineral matter).

Clay soil is easy to identify. When wet, it feels sticky and rolls like plasticine between the fingers; when dry, it sets almost rock hard, forming clods that will not easily crumble in your hands. This compaction makes it difficult to plant, shovel or till, and working the soil while it is completely wet or completely dry only makes things worse.

Clay is not all bad, though. It holds onto nutrients and water far better than sandy soil, is fairly rich in potash (though often deficient in phosphates and organic matter), and provides a more stable environment for plant roots than many other soil types. Rather than fighting it, dig in organic material to open up the structure, take care when watering, and choose plants that thrive in heavy ground. Handled this way, clay soil can be your garden's greatest asset rather than your worst nightmare.
Maha Devi Tirth Temple – Popular Hindu Temple
About Maha Devi Tirth Temple
Located in the city of Kullu in the north Indian state of Himachal Pradesh, the Maha Devi Tirth Temple is a popular Hindu pilgrim place, devoted to the famous Hindu deity Goddess Parvati.
Although the structure isn't as ancient as other popular holy shrines of the country, it is still one of the busiest pilgrimage places, bustling with devotees from all over the country around the clock.
The temple is located near the holy river Beas and was built in the year 1966. Its foundation stone was laid by the prominent Swami Sevak Das Ji, who was a profound devotee of the goddess.
Geographically, it is only 5 KM away from the main city of Kullu, and the temple complex houses a number of other shrines significant in Hindu culture. Another important aspect of this pilgrim place is a center housing handicraft goods related to Maha Devi Tirth, which pilgrims take a keen interest in buying as tokens of remembrance of the pilgrimage.
The Maha Devi Tirth Temple is regarded by the local people of Kullu as their own Vaishno Devi Temple, a reference to the real Vaishno Devi Temple in the state of Jammu & Kashmir, one of the most famous temples of India.
The sun is the center of our solar system. It is the most important source of energy, giving off heat, light and energy that makes life possible on Earth. Many cultures, spanning the ages, have worshipped the sun as the source of life. The sun symbolizes vitality, fertility, and even healing.
4 X 3.5 X 1.75 | <urn:uuid:99c9a2f3-06b6-4d8c-aa39-8d640a0d0d79> | CC-MAIN-2021-49 | https://botanikinc.com/products/fanciful-sun-ornament | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.940956 | 72 | 2.53125 | 3 |
Currying is a transformation process which converts a function with multiple arguments into a chain of nested functions, each with a single parameter. In F#, function declarations are curried by default, so you don't usually need to curry functions yourself. But even though this is an automatic transformation process, it is helpful to understand why functions are used in this way.
The following example shows functions to sum two values. The first function has two parameters and the second function has only one parameter, a tuple with the needed values.
//create functions
let sum1(x,y) = x+y
let sum2 x y = x+y

//use functions
let x = sum1(2,3)
let y = sum2 2 3
let values = (2,3)
let z = sum1 values
Except for the syntax, there are no visible differences between these two functions. Therefore we want to take a deeper look at the curried functions. Let's start by looking at the function declarations:
val sum1 : int * int -> int
val sum2 : int -> int -> int
The first function takes a tuple and returns an integer. The second function instead takes an integer and returns a function which takes another integer and returns an integer. Therefore the function sum2, with its two parameters, is transformed by default into a chain of single-parameter functions.
The following source code shows this sum function again together with its explicitly curried version. The last line shows that it is possible to call the function with a single parameter and then call the returned function with the second parameter.
let sum x y = x+y
let sum_curried = fun x -> fun y -> x+y

let x = sum 2 3
let y = sum_curried 2 3
let z = (sum 2) 3
Partial application of arguments
Curried functions offer an easy way to create partial functions. I want to demonstrate this with the next example. Let's say we want to implement a function which increases a given value by adding 3. Of course, we can use our already implemented sum function for this. We know that the curried version of this function is called with one parameter and will return another function which needs the second parameter as input. Therefore it is possible to create the add function by applying the sum function to the value 3. The following source code shows an according example.
let sum x y = x+y
let add3 = sum 3
let x = add3 2
Partial application of arguments can be done for functions that take tupled arguments as well. But in this case an explicit lambda function is needed. The following example shows the implementation needed for the tupled function.
let sum(x,y) = x+y
let add3 = fun x -> sum(x,3)
let x = add3 2
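If you need this pattern more often, you can avoid writing the lambda each time with a small generic helper that converts a tupled function into its curried form. The helper name curry below is my own choice, not a built-in F# function; this is just a minimal sketch:

//generic helper: converts a tupled function into a curried one
let curry f x y = f (x, y)

let sum(x,y) = x+y
let add3 = curry sum 3
let x = add3 2   //x = 5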
Pipe operator and partial applications
Next, I want to show you another example which uses curried functions. Within this use case we want to implement a function which doubles all values of a list. The following source code shows a possible implementation. The function double doubles a single value, and the function doubleAll maps the double function over the list values.
let double x = 2*x
let doubleAll values = List.map double values

let originList = [1;2;3]
let newList = doubleAll originList
However I don’t like the doubleAll function because it does not add much value and only calls another function. Therefore it is also possible to implement the same solution by using List.map without the unnecessary doubleAll function. The following source code shows possible implementations.
let double x = 2*x
let originList = [1;2;3]

let newListA = List.map double originList
let newListB = (List.map double) originList
let newListC = originList |> List.map double
As you can see, I have implemented three different ways to use the List.map function. This is possible because this function is curried. In general, partial application of curried functions is a useful technique. In practice I really like the possibility to call curried functions by using the pipe operator. If you compare the three implementations in the example above, you will see that the third version, which uses the pipe operator, is the one which is easiest to read and understand.
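The pipe operator becomes even more valuable when several partially applied functions are chained into a processing pipeline. The following example is my own addition to illustrate this; it keeps the odd values of a list, doubles them and sums the result:

let values = [1;2;3;4;5]

let result =
    values
    |> List.filter (fun x -> x % 2 = 1)  //keep odd values: [1;3;5]
    |> List.map (fun x -> 2*x)           //double them: [2;6;10]
    |> List.sum                          //sum them up: 18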
In general, when programming in F#, you should prefer the curried format of functions. This will avoid needless parentheses and allow you to easily create partial application. Currying in combination with the pipe operator allow you to create very nice and clean source code. | <urn:uuid:f0bbfdb4-34d8-4e48-a7f2-27da41d59ee6> | CC-MAIN-2021-49 | https://coders-corner.net/2014/03/30/currying-in-f/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.831166 | 970 | 3.140625 | 3 |
Some tax and expenditure programs change automatically with the level of economic activity. We will examine these first. Then we will look at how discretionary fiscal policies work. Four examples of discretionary fiscal policy choices were the tax cuts introduced by the Kennedy, Reagan, and George W. Bush administrations and the increase in government purchases proposed by President Clinton in 1993. The 2009 fiscal stimulus bill passed in the first months of the administration of Barack Obama included both tax cuts and spending increases. All were designed to stimulate aggregate demand and close recessionary gaps.
Certain government expenditure and taxation policies tend to insulate individuals from the impact of shocks to the economy. Transfer payments have this effect. Because more people become eligible for income supplements when income is falling, transfer payments reduce the effect of a change in real GDP on disposable personal income and thus help to insulate households from the impact of the change. Income taxes also have this effect. As incomes fall, people pay less in income taxes.
Any government program that tends to reduce fluctuations in GDP automatically is called an automatic stabilizer. Automatic stabilizers tend to increase GDP when it is falling and reduce GDP when it is rising.
To see how automatic stabilizers work, consider the decline in real GDP that occurred during the recession of 1990–1991. Real GDP fell 1.6% from the peak to the trough of that recession. The reduction in economic activity automatically reduced tax payments, reducing the impact of the downturn on disposable personal income. Furthermore, the reduction in incomes increased transfer payment spending, boosting disposable personal income further. Real disposable personal income thus fell by only 0.9% during that recession, a much smaller percentage than the reduction in real GDP. Rising transfer payments and falling tax collections helped cushion households from the impact of the recession and kept real GDP from falling as much as it would have otherwise.
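A stylized way to see the mechanism (the tax rate and dollar figures here are illustrative, not drawn from the data above): with a proportional income tax at rate t and transfers held fixed, a change in real GDP translates into a smaller change in disposable personal income,

$$\Delta Y_d = (1 - t)\,\Delta Y$$

so with t = 0.3, a $100 billion drop in GDP lowers disposable personal income by only $70 billion; transfer payments that rise as incomes fall shrink the change further still.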
Automatic stabilizers have emerged as key elements of fiscal policy. Increases in income tax rates and unemployment benefits have enhanced their importance as automatic stabilizers. The introduction in the 1960s and 1970s of means-tested federal transfer payments, in which individuals qualify depending on their income, added to the nation’s arsenal of automatic stabilizers. The advantage of automatic stabilizers is suggested by their name. As soon as income starts to change, they go to work. Because they affect disposable personal income directly, and because changes in disposable personal income are closely linked to changes in consumption, automatic stabilizers act swiftly to reduce the degree of changes in real GDP.
It is important to note that changes in expenditures and taxes that occur through automatic stabilizers do not shift the aggregate demand curve. Because they are automatic, their operation is already incorporated in the curve itself.
Discretionary Fiscal Policy Tools
As we begin to look at deliberate government efforts to stabilize the economy through fiscal policy choices, we note that most of the government’s taxing and spending is for purposes other than economic stabilization. For example, the increase in defense spending in the early 1980s under President Ronald Reagan and in the administration of George W. Bush were undertaken primarily to promote national security. That the increased spending affected real GDP and employment was a by-product. The effect of such changes on real GDP and the price level is secondary, but it cannot be ignored. Our focus here, however, is on discretionary fiscal policy that is undertaken with the intention of stabilizing the economy. As we have seen, the tax cuts introduced by the Bush administration were justified as expansionary measures.
Discretionary government spending and tax policies can be used to shift aggregate demand. Expansionary fiscal policy might consist of an increase in government purchases or transfer payments, a reduction in taxes, or a combination of these tools to shift the aggregate demand curve to the right. A contractionary fiscal policy might involve a reduction in government purchases or transfer payments, an increase in taxes, or a mix of all three to shift the aggregate demand curve to the left. | <urn:uuid:a8388d71-54c3-480f-b73a-29be547e5764> | CC-MAIN-2021-49 | https://courses.lumenlearning.com/suny-macroeconomics/chapter/fiscal-policy-and-stabilizers/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.955609 | 799 | 3.484375 | 3 |
In late 2014 CUSP began an investigation into the possibility of using native wood-rotting mushrooms to digest wood chips that are a particularly troublesome by-product of forest mitigation/restoration treatments. This study, “Fungal degradation of the woody by-products of forest management activities,” is entering its fourth year. Multiple partners have participated in the project and dozens of volunteers have donated their time, sweat, and care to get us here.
As one would expect, wood-rotting mushrooms are very good at rotting wood. Millions of years of adaptation have led to their ability to break down cellulose, one of the two main components of wood along with lignin. These are both highly resilient natural fibers that can resist decay for decades, and sometimes centuries, in the Colorado forests. Over the course of the experiment we have demonstrated the reduction of piles of wood chips into a rich compost-like material that closely resembles natural humus, something that is in short supply in our surrounding woods. We sought a method of treatment that would require very little effort and basically work on its own. Each season gets us closer to that goal.
It all began with collecting native mushrooms in the wild. We then cultured them, similarly to how the mushrooms you buy in the supermarket are produced: we excise living tissue under sterile conditions and place it on agar media (PDA, wPDA)* to grow.
Once we have grown petri dishes of mycelia, we can select the healthiest and most vigorous specimens and culture them into mason jars. Culturing into mason jars is the beginning of what we call “expansion”. In order to place the mushrooms into the wild, we need large quantities of this “spawn”. So we pump up the volume, and also begin introducing them to their desired medium: wood chips.
We use wood chips from the site we are planning to inoculate the mushrooms into so that they are perfectly prepared (trained, as we call it) to thrive once we move them to the wood chip beds.
The spawn is expanded two more times: first into 6-pound bags of wood chips, and then each of those bags is used to initiate another 5 or so bags. In 2017 we produced almost 50 bags of spawn for inoculation.
Lab culturing is the most time consuming portion of the mushroom project.
Once a sufficient amount of spawn is created, we take the bags to the field to inoculate the actual wood chips. We have been trying various amounts of spawn per area of chips (we call it “rate of seeding”) and are still homing in on the most efficient amount.
Staff and volunteers prepare sites for inoculation using clean, but no longer sterile, techniques. The mycelial blocks that grow in the bags have sufficient mass to protect themselves from the native fungi and bacteria. Once placed in this manner, the mushrooms thrive and need no more help from us.
To date, CUSP has 3 separate sites where mushrooms are being monitored or cultured in the field. Our hope is to be able to bypass much of the lab work by raising beds of our trained mushrooms that can then act as donors for future sites.
To date, the fungi have done an admirable job of consuming the wood chips and converting them to compost. The nutrient composition of the compost is very good, as oyster mushrooms use bacteria to actually increase the available organic nitrogen, something that is very scarce in these ecosystems. They do this in a manner somewhat like legumes, except that the nitrogen-fixing bacteria grow on the mycelium instead of the roots.
For complete information, please read the following reports.
Links to Annual reports:
* PDA: Potato Dextrose Agar; wPDA: Potato Dextrose Agar with finely powdered wood dust
DONATE to the CUSP Mushroom Study | <urn:uuid:b54e69d6-2f3c-4d2a-a47e-8c521c98bf8c> | CC-MAIN-2021-49 | https://cusp.ws/mushroom-study/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.95201 | 804 | 3.015625 | 3 |
You may have heard about income-share agreements (ISA), but what exactly are they?
In a nutshell, an ISA can be used to fund your university education and serves as an alternative to student loans. With this model, you’ll receive funding from an investor or investors (e.g. from a private company or your university) while studying and will need to pay a percentage of your future income for a fixed period post-graduation.
One of the appeals of an ISA compared to a traditional loan is that the ISA recipient enjoys flexible payment. For instance, the Manhattan Institute (MI) notes that if an ISA recipient’s income drops for whatever reason (e.g. recession or personal circumstance), so does their ISA payment; if the borrower’s income increases, the reverse is true.
With the ISA, investors carry the risk instead of students if they are unable to commit to their monthly payments. MI notes that this is “because high-earning students cross-subsidize the losses that investors suffer from low-earning students”.
We can all agree that massive student loan debt is a bad thing. Now some colleges like Purdue University are beginning to explore income-share agreements to pay for college. @PlanetMoney brings us the details. https://t.co/EbcJ53r4Lg
— Jim St. Leger (@JimStLeger) March 30, 2019
Speaking to CNBC Make It, Purdue President Mitch Daniels said: “It (ISA) gives them certainty and some protection and safety. They’re not going to have that much money borrowed, piling up, compound interest whether they’re doing well or not.”
Despite some of the potential benefits, in practice, there have been issues in implementing ISA, including in the 70s, where Yale University’s cohort-based ISA programme was considered a failure. Meanwhile, US News and World Report noted that “ISAs are often criticised for being unregulated and untested”.
Regardless, The Heritage Foundation notes that “Purdue University is leading the way on reviving ISAs”, noting that Purdue had included the median starting salaries on their website, thus, providing “transparent information to students”. They added that, “This kind of transparency and quality assurance is severely lacking in today’s higher education system”.
I can’t remember the last time I actually shuddered with fear reading something: Student debt could be paid off with income sharing and a ‘human stock exchange’.https://t.co/xQgay1exuH
— Andrew Henderson (@aa_henderson_) September 3, 2018
With all that said and done, should private companies or universities abroad consider experimenting with ISA arrangements?
On ABC News, Mike Rafferty argued: “In an Australian context, income sharing would be a way of privatising taxation and giving governments a way to exit higher education funding altogether, in favour of financial institutions, at a cost.”
Rafferty said income sharing can be seen as a form of ownership over graduates and that “lives could be priced like a company’s shares.
“With income sharing, finance stands to make money from and assert further control over our human experience. Their investment is not in the individual student or household, who could fail to turn a profit for them, (although in practice corporations would try to refuse to fund or risk manage those judged likely to fail).
“The profitability comes from aggregating life’s experience and the work effort of millions of individuals and households. In this way too, our lives could also be priced like a company’s shares,” he said, adding that, “Practices like income sharing set up the possibility of an actual human stock exchange to trade these shares.
“Financial institutions are always interested in investing in income streams to diversify their investment portfolio. In so doing they would be able to put a price on something they have usually seen as an inconvenient factor of production – people and their capacity to work.”
Whether ISA agreements are the way forward remains to be seen, but it is clear it needs more refinement to ensure a balance between benefiting students and investors, without exploiting one or the other. | <urn:uuid:95125f41-bf28-4c8b-a795-f54703e07258> | CC-MAIN-2021-49 | https://dev.studyinternational.com/news/income-share-agreements-yay-or-nay/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.955519 | 908 | 2.75 | 3 |
: Each discussion has 1000 words and 10references No plagiarism accepted
Unformatted Attachment Preview
Note: Each discussion has 1000 words and 10references
No plagiarism accepted
Topic: Computerized Operating Systems (OS) are almost everywhere. We encounter them when
we use out laptop or desktop computer. We use them when we use our phone or tablet. Find
articles that describes the different types of operating systems (Linux, Unix, Android, ROS,
z/OS, z/VM, z/VSE, etc). Do not select MS WINDOWS. Write a scholarly review of
comparing any two or more OS; attach a copy of the article to your postings. Remember, this
assignment is to be scholarly; it is not enough for you to simply post your article and add cursory
reviews. Cited references are required.
Topic: We all had the unfortunate experience of seeing how computers can, at times, make life’s
journey abit more difficult. This is especially true in knowledge centric workplaces. Describe an
example of a very poorly implemented database that you’ve encountered (or read about) that
illustrates the potential for really messing things up. Include, in your description, an analysis of
what might have caused the problems and potential solutions to them. Be sure to provide
supporting evidence, with citations from the literature.
Our essay writing service fulfills every request with the highest level of urgency. | <urn:uuid:4e429d85-d8cc-4e7a-b744-3035af0a8a0e> | CC-MAIN-2021-49 | https://genuinewriter.org/2021/08/04/database-management-discussions/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.894785 | 325 | 2.515625 | 3 |
Concrete Honeycombing | A Type of Surface Defect in Concrete Building!
Janvi Desai is a Civil Engineer (BE). She graduated from Government Engineering College – Bharuch in 2017. She is an Engineer (Civil) at SDCPL – Gharpedia. She is passionate about research and study of latest developments. You can easily reach her via LinkedIn, Facebook, Twitter, Instagram, Pinterest. Besides being blogger, she also participates in quantity survey, site management, design & detailing.
Concrete is relatively a durable material. However, it does suffer damage or undergo distresses during its lifetime due to several reasons. The quality of concrete suffers either during the production or during service conditions because of the varying conditions under which it is produced at various locations which results in distress.
There are number of reasons of concrete getting affected. At times, the externally applied loads can be the structural cause of concrete distress. Sometimes due to error in design, poor detailing and poor construction practices, distresses may arise in the structure. The other reasons of concrete getting affected can be thermal stresses, drying shrinkage, chemical reactions, weathering, and corrosion of reinforcement.
Some of the noticeable defects on the surface of fresh concrete are:
- Rock pockets and honeycombs
- Dimensional Errors
- Finishing Defects
Here we are keeping the scope of the content limited hence; we are particularly going to discuss about “concrete honeycombing” – one of the common defects, in this article.
What is Honeycomb in Concrete?
Honeycombs or rock pockets are areas of concrete where voids are left in the concrete due to failure of the cement paste or concrete to fill the spaces around coarse aggregate particles and reinforcing steel. These voids are spread over small areas locally and may be present either at one or two places or over the entire concrete surface or even within the member sometimes.
Where can we find Honeycomb in Concrete Structure?
Honeycombs in the reinforced concrete structure are visible to naked eyes and can be seen once shuttering is removed. However, honeycombs inside the mass of concrete can only be identified by techniques like Ultrasonic Testing. Honeycombs in concrete are found in all types of elements like footings, walls, columns, beams, slabs etc. However, they are more prevalent in thin sections and also in members where there is huge congestion of reinforcement.
What Causes Honeycomb in Concrete?
We are hereby mentioning some major reasons which are responsible for Honeycomb in concrete:
01. Honeycomb in Concrete Due to Poor Workability of Concrete
As per definition given by Indian Standard Code – IS 6461- 7, Workability is the property of freshly mixed concrete or mortar which determines the ease and homogeneity with which it can be mixed, placed, finished and compacted. Workability of concrete is the fundamental property of fresh concrete. It is also related to the type of construction, method of placing concrete, compacting and finishing.
The workability of concrete contains two aspects, consistency and cohesiveness. Due to the different requirements and characteristics of two aspects, the influence of a factor on workability may be opposite. In general, the water content, the cement content, the aggregate grading, the other physical characteristics and admixtures are some of the factors that affect the workability of concrete. However, the water cement ratio, and aggregate cement ratio play a major role in influencing the workability of concrete.
Workability can be poor because of reasons like – low water content, high temperature and the inappropriate ratio of fine-to-coarse aggregates.
Concrete with poor workability is not cohesive and consistent, and thus, will result in either segregated concrete or concrete with honeycombs i.e. voids, resulting in leakage and low strength. Therefore, it requires a lot of effort in handling, placing and compaction, in particular.
02. Honeycomb Defect in Concrete Due to Defective Formwork
When joints of formwork are loose or the formwork is not leak-proof, there can be leakage of grout through the joints or holes. As a result, honeycomb in concrete occurs as the grout will flow out during compaction.
03. Improper Compaction of Concrete
Concrete compaction is the process of removing air from the freshly poured concrete and packing the aggregate particles together to increase the density of the concrete. Different methods are implemented for compacting the concrete. i.e. hand compaction, compaction by vibration etc. When compaction is done by vibration method, over vibration will result in to the segregated concrete. Similarly, under vibrated concrete will also enable formation of honeycombs in it.
Gharpedia has written an article on the compaction of concrete for your in-depth study. Click on this link to know more – What is Compaction of Concrete?
04. Honeycombing in Concrete Due to Improper Rebar Placement or Poor Detailing
If reinforcement bars are placed too close to each other or too close to formwork then either because of poor detailing or inferior and inefficient workmanship, it will entrap the larger pieces of aggregates. Besides, it creates hindrance in concrete flow and vibration, which will lead to concrete honeycombing.
05. Concrete Honeycombing Due to Segregation of Concrete
Segregation can be defined as the separation of the constituent materials of fresh concrete. If a sample of concrete tends to separate, such concrete is not only weak, but it will further lead to voids or honeycombs in it.
Effects of Honeycombing in Concrete
Due to the honeycomb defect in concrete, the water and air keeps seeping inside the concrete. As a final result, reinforced concrete structure loses its strength and ultimately its load-bearing capacity. Because of the presence of air and water, the corrosion sets in inviting a major problem. Further, the ingress of co2 will lead to carbonation. This will lead to loss of durability i.e.less life or frequent repairs.
The voids are left in concrete because of concrete honeycombing. According to the research paper “Effects of Parameters of Air-Avid Structure on the Salt-Frost Durability of Hardened Concrete” by ‘Hui Zhang et al’, the compressive strength of Mix Designs depends on the two factors. i.e., effective air voids content and gradation fineness modulus. It is because of the voids present in the concrete which lead to decrease in the compressive strength of concrete. 1% voids in concrete, reduces compressive strength of concrete by 5%.
Practices to minimize the Occurrence of Concrete Honeycombing
01. Ensure that the mix contains sufficient fine aggregates to fill the voids between the coarse aggregates. Use of large size aggregate increases the risk of concrete honeycombing.
02. Make sure that the fresh concrete mix has appropriate and suitable workability for the situation including the section in which it is to be placed. Ensure optimum water cement ratio, aggregate cement ratio accordingly. Also use uniformly graded aggregates.
03. Use the proper method of concrete compaction and ensure the concrete is fully compacted. Also use the appropriate placing methods and minimize the risk of segregation. Do not place concrete from more than 1.00 m height, particularly in walls and column.
04. Provide sufficient thickness of concrete cover to reinforcement. Ensure that the reinforcement layout and section shape will allow the concrete to flow around the reinforcement. Allow adequate space between any two bars as per codal provisions.
05. Check that the formwork is well braced, rigid and the joints are watertight. Verify that the formwork is free from perforation and confirm whether it is properly sealed or not. Also, provide sufficient supports to formwork, particularly at all the joints in formwork. Do not leave any holes or gaps in the formwork.
Is it Essential to Repair a Honeycombed Concrete?
The simple answer to this question is Yes. Concrete honeycombing can cause severe problems if it spreads over a large area, exposing steel bars and penetrating 5 cm or more into the concrete. You cannot leave it as it is or unattended. The affected area must be repaired as soon as possible. It distorts not only the appearance of structural members but also reduces structural strength and durability i.e. reduces life. Let us see how to repair honeycomb defect in concrete.
How Can We Repair Honeycombed Concrete
Any repair to honeycomb concrete will never give the strength which the well compacted concrete normally gives. It is therefore recommended to take precautions as outlined to avoid them. However, if honeycombing occurs, it can be repaired using the following techniques.
Honeycomb concrete repair techniques include removing of loose material, cleaning the affected area, applying appropriate repair materials, grouting wherever needed and curing. Let us look at the method for concrete honeycomb repair step by step.
01. First and the foremost thing, is to determine the extent and depth of the honeycombed area by visual inspection or by using a non-destructive test i.e. Impact-echo test. You can also strike hammer of blows and inspect. If it sounds hollow, it indicates honeycombing in that particular region. The area and depth of the affected region needs to be assessed correctly.
02. Before you undertake it for repairing, remove all the loose material and dust on that area by using a hammer or wire brush. During this stage of concrete honeycomb repair, prevent the application of large forces such as electrical chippers to avoid concrete damage around the honeycomb area.
03. Wet the concrete surface before repair or replacement.
04. If the honeycomb area is small in quantum and the quality of the concrete cover can protect the reinforcement, then it can be repaired by patching with mortar of similar color to the base concrete. While patching, you can use high strength, non-shrink concrete grout mix with an epoxy binder. The Grout strength should be minimum and preferably equal or exceed specified concrete strength. It is also recommended to use polymer mortar for repairs.
05. Honeycombing in concrete can be repaired with Portland cement mortar if less than 24 hours have elapsed since the formwork was removed and not more than 72 hours have passed since concrete placement. At this time this defect is considered as a minor defect. If repair is delayed longer than this, or if the rock pocket is extensive, the defective concrete must be removed and replaced with new concrete, with higher strength and lower ability applying proper bonding agent.
06. If the honeycomb covers a substantial area or extensive portion and has penetrated down to the reinforcement or deeper, then it is necessary to cut out the defective concrete and replace it with new concrete along with a coating of an appropriate bonding agent.
07. If Voids are spread over and deep inside, then you really need to do grouting. Place formwork if necessary and pour the grout. If formwork is not used then apply suitable repair material like nonshrinkage concrete grout, high strength concrete grout etc. by grouting.
Grouting is a process of filling the cracks or voids under pressure in concrete or masonry structural members. In this technique, the grout mix which consists of a mixture of sand, cement, and water is applied under pressure preferably with grout pump of 5 kg/cm2 pressure to fill the cracks and voids in the structure. Pressure grouting will fill the voids with materials of same strength.
According to ‘Edward G. Nawy’ (Author: Concrete Construction Engineering Handbook), Do not use grout that contains calcium chloride or other materials containing chloride, because as per the research paper “Effects of Calcium Chloride on Portland Cements and Concretes” by ‘Paul Rapp’, The addition of calcium chloride will increase the strength in the first few days, but after some time, the strength will decrease, and chlorides also initiates corrosion.
08. The process should be carried on by filling a 15 mm thick layer gradually and repetitively if the depth of honeycombing is more than 5 cm. It is recommended to wait for 30 minutes before applying the next layer.
09. A qualified engineer must be consulted to check whether the load-bearing capacity of the member after repairs will be satisfactory or no. This becomes almost mandatory if the honeycomb was over a large area or an important element like a column.
For further information regarding concrete honeycombing, its reasons and remedies, please do watch the online documentary by Civil Mentors – Engineering Simplified. (It is the team of professional engineers in United States that provides quality information).
Summing up, Generally, Reinforced concrete has been considered a durable structural material. Also, it can sustain and uphold with little maintenance for many decades. The versatility of concrete makes it the most popular structural material in various parts of the globe. However its limitations as a material, and several other factors like severe exposure conditions, poor design, poor detailing and poor construction practices are responsible for defects or damage in concrete. Out of those various types of defects, honeycomb defect in concrete is one such critical defect that needs immediate attention. Take the necessary precautions mentioned above time to minimize the occurrence of honeycomb in concrete.
Dear Readers, if you find this article informative, here’s the link of some more interesting articles for you. Please click on the links below – | <urn:uuid:b7b3cd70-8502-438b-93ae-4298215de01a> | CC-MAIN-2021-49 | https://gharpedia.com/blog/concrete-honeycombing/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.929926 | 2,815 | 3.4375 | 3 |
Free College Essay Criminal Justice System in England.The Criminal Justice System (CJS) is one of the major public services in the country. Across the CJS, agencies such as the Police, the Courts, the Prison Service, the Crown Prosecution Service and the National Probation Service work... Role Of Courts In Criminal Justice Today, Essay Sample Criminals might evade the Justice system here on Earth but nobody could escape the wrath and anger of God who hears the cries of the innocent people who have been slain. People in the legal field must work harder and excellently to save more lives and guard the streets from harm. The Criminal Justice System The criminal justice system is the set of agencies and processes established by governments to control crime and impose penalties on those who violate laws.The criminal justice system can be overwhelming, intimidating, and confusing for anyone who does not work within it every day. Criminal Justice System Essay Sample Free essay sample on Criminal Justice System. Check out our website, here you will find a lot of useful information.Criminal Justice system is the body that monitors any task related to felonies. Every individual member of the judicial system has certain role.
The criminal justice system comprises different agencies and players that work together toward achieving a common goal. Each of the agencies and players have their specific roles to play, the actions of one agency impacting on the decisions of another. This being the case, it is beyond doubt that the criminal justice system is indeed a system. References Cornish, D. & Clarke, R. (1986).
Get your best criminal justice essays! Just in two clicks best free samples will be in your hands with topics what you need!This system should be able to derive a clear distinction between adults and youths, moreover, treat the youth with an age-appropriate approach. Criminal Justice Majors Essay -- career, college,… Essay Preview. Contrary to popular belief the life of a criminal justice major is not all about being aPeople looking forward to a career in criminal justice should be able to write up investigationRacial disparity and the legitimacy of the criminal justice system: Exploring consequences for... Sample Essay on Criminal Justice System Criminal Justice System. Criminal activities have existed ever since the civilization of people began.We offer professional academic writing services while posting free essays online like the above Sample Essay on Criminal Justice System. FREE criminal justice Essay
This 1,201 word criminal justice system issues essay example includes a title, topic, introduction, thesis statement, body, and conclusion.
57 Creative Criminal Justice Research Paper Topics and List of 57 Great Criminal Justice Research Paper Topics. The origin of capital punishment. Should the death penalty used in juveniles? What is the reason behind very few women receiving the capital punishment? Should there be a nation wide sex offenders registry? How has forensic evolved over time with use of technology? Is identity theft on Free Criminal Justice Essays and Papers - 123helpme.com - The Criminal Justice System in the United States of America was established with noble intentions. The basis of the system can be traced back from the first book of the Bible Genesis, and the story of Cain and Able. The criminal justice system was established to be morally suitable for a growing diverse society. Sample college application essay for criminal justice college 2019-8-23 · Sample Application Essay for Criminal Justice Degree. Instructions:I am seventeen years old and would be the first of 3 brothers including parents to attend a four year college pursuing a degree in criminal justice.Played basketball in as a ninth grader played football in 10th and 11th grade.
Criminal Justice Essays (Examples) The corrections system refers to any part of the post-sentencing process that is responsible for carrying out sentencing. Prisons, jails, halfway houses, prison guards, corrections officers, probation officers, and parole officers are all part of the corrections system.
If you are looking for a list of controversial criminal justice essay topics, BESTWritingHELP.org is ready to offer our help along with some great general advice on writing a legal essay … Criminal Justice Essay Examples - eliteessaywriters.com
A crime control centred criminal justice system will necessarily deprive the accused rights. Since the police authorities are responsible for crime detection andSince the main propose of the crime control centred criminal system is to convict the guilty, it is understandable that improperly obtained...
Here, the central components of criminal justice research paper topics (law enforcement, courts, and corrections) are presented from a criminology–criminal justice outlook that increasingly purports to leverage theory and research (in particular, program evaluation results) toward realizing criminal justice and related social policy objectives.
Sounding role of the prosecutor on the criminal justice system essay and funny, Jim waters his the student athlete association and subsidized payment of college athletes intimate napkins or scales losing. Jury nullification - Wikipedia These instructions are criticized by advocates of jury nullification. Some commonly cited historical examples of jury nullification involve jurors refusing to convict persons accused of violating the Fugitive Slave Act by assisting runaway… What is discretion as it applies to the criminal justice system… | <urn:uuid:f92f731e-3c6a-47fe-bdb9-303e0d288612> | CC-MAIN-2021-49 | https://ghostwriteiqmddtz.netlify.app/iacono23298so/college-essay-criminal-justice-system-fuw.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.920833 | 1,037 | 2.734375 | 3 |
This paper presents a teaching method applied in a usability research course that is part of a bachelor degree programme at the Academy of Fine Arts in Katowice. The method employs visualisation techniques of the user–website interaction, a design practice popular in other fields, but less often used in usability studies. The theoretical background of the data visualisation method, as well as examples of its use in research, are presented and discussed in this paper. In addition to presenting the method, the paper evaluates and analyses how students have responded to it. Using the technology acceptance model, we identified the perceived usability of the method as the main factor influencing students’ behavioural intention to use it in the future.
Designing is a complex activity, in which, apart from creating objects and artefacts, designers are building a whole process of interaction. This process is difficult for young designers because it is necessary to predict how people (users) will perceive, use and interact with the design solution. Observing and understanding users who interact with our solution is vital. It builds empathy and competence and provides us with new perspectives.
That is why user research, with real users, is an important part of designer training. Design students at the Academy of Fine Arts in Katowice take a Usability Research course, the goal of which is to acquaint them with methods in the field of digital product design. Conducting their own research helps them understand real users' behaviour in the process of interaction with the design solution. It facilitates understanding of the role research plays in making informed decisions when creating digital products, e.g. websites and applications.
Due to the complex structure of website interfaces and the amount of data that describe user interaction, the task is difficult. Students encounter problems understanding the structure of the user–website interaction process, comparing the results with the model process, and identifying issues in user interaction and their causes. Thus, the idea of using visualisation of the user–website interaction was introduced as a teaching method. The method focuses on the visualisation of the interaction process. Data gathered during students’ research are presented using schemes that allow for better understanding of the website structure and the process of user interaction with the website. The method supports in-depth analysis and helps to better structure the findings.
The goal of this paper is twofold, first to present the teaching method and the whole process of visualisation use in gaining insights about website design and second to discuss the perception and acceptance of the method among students. We used a modified technology acceptance model (TAM) framework to determine which characteristics of the method influence students' behavioural intention to use it in the future.
The visual representation of a process or phenomenon supports the understanding of its course and the relations between its components and also often becomes a tool that supports decision-making. It requires its author to interpret reality in the form of an abstract image, where the elements of reality and the relations between them are replaced by graphic values. These might be both tangible objects and invisible ones like time, interactions or behaviours. What is crucial for this kind of project is the conscious selection of the elements that will be represented in the scheme and the graphic values that will be assigned to them. It is essential to determine the rules of selection and the choice of graphic values representing real-life elements. When assigning specific graphic values to respective phenomena, we create a kind of a visual language—the applied graphic values of each element and its position in the system determine the meaning. The scheme elements are also often supported with additional descriptions to limit the caption's content. These decisions will influence how well the main features of the elements and their relations to other ones will be expressed and highlighted.
It is particularly important to select graphic values that will facilitate the understanding process of a reader and to make the process of learning the scheme's rules uncomplicated. Therefore, well-known schemes (e.g. the coordinate system or a timeline) are often applied. The benefits of this type of visualisation cannot be ignored. Understanding relations between data presented as text, digits, tables or lists is challenging for the human mind. Therefore, visualisations facilitate the cognitive processing of data by a human and aid the process of analysis.
In the case of complex data sets containing many variables, proper visualisation allows for better understanding of dependencies and relations between phenomena and the creation of the proper mental model. It also reveals the hidden patterns and relations that would not be possible to notice using the traditional forms of data presentation. Modern technological solutions also provide the possibility of presenting information in an interactive, spatial or time-changing form, which serves the construction of the mental model even better.
Card, Mackinlay and Shneiderman underline that visualisations may aid the thinking/analytical process—using vision to think—through (1) increasing the memory and processing resources available to the users, (2) reducing the search for information, (3) using visual representations to enhance the detection of patterns, (4) enabling perceptual inference operations, (5) using perceptual attention mechanisms for monitoring and (6) encoding information in a medium that can be manipulated. Visualisation is a tool for a better understanding of a wide range of phenomena. It is a medium that makes it possible to efficiently communicate complex data to a wider audience.
Data visualisation in design education and practice
In design, the visualisation of ideas, processes and concepts in the form of sketches and schemes is embedded in the DNA of the profession. It is a method that supports creative thinking and facilitates prototyping. Therefore, the development of these skills is an important element of designers' education. This is a particular habit and a manner that should be developed and practised in order to become a natural work tool. Visualisation is a tool that might support designers in many stages of the design process; however, scientific publications on using visualisation in the research phase to present data are scarce.
Alberto Cairo, the information designer and educator, writes about the significant role of data visualisation, indicating that this is an important tool for extending and acquiring knowledge. But for him, it is also an object around which a discussion can take place. Nevertheless, in order to make this tool effective, it must satisfy specific requirements: reliability, honesty and depth. In the process of transferring the real world into abstract forms and shapes, it is important to balance simplicity and complexity. The designer has to select wisely in order to not skip any essential data and relations, but must also graphically represent the—sometimes dynamic—relations between them to build an appropriate “metaphor” and not an illusion. Visualisation design is also a selection process in which we reject some elements and emphasise the selected ones. Therefore, it is particularly important on which basis designers choose the elements, as an unfortunate or inattentive choice process might lead to distorted, mistaken or even manipulative results. Whether such action is accidental or intentional, whether it is the result of inaccuracy or mistake, it may lead to irreversible consequences. Hence, an ethical approach should always be fundamental to designers' activities.
Building visual representations is a commonly applied practice nowadays. It is a tool for the analysis of the relations between any given elements, and it also supports decision-making processes. The visualisation of ideas, concepts and processes recorded as sketches and abstract schemes facilitates communication in a team and enables the understanding of complex processes. As design professionals need to understand reality within a broad context and to an extent that allows them to effectively diagnose problems and respond to real needs, teaching students how to visualise complex processes, services and phenomena seems significant. Transferring to the abstract dimension provides the possibility of detaching from literality and the physical aspects of objects and emphasising what is temporary and elusive and, at the same time, important and crucial for design implementation. This is an analytical type of exercise, where importance is given to structure—its elements and the links that connect the elements. And this way of thinking can be easily applied to other design activities; in the international study of basic design education, the development of this type of competencies (syntactical analysis) was pointed out by teachers of basic design and design project courses as one of the most important skills.
Krzysztof Lenk, a visual information designer and the founder of the Dynamic Diagrams studio, has prepared a curriculum for training skills in data visualisation for the Rhode Island School of Design in Providence in the USA. As a pedagogue, he conducted a series of exercises with students, using very extensive profiles of the topics: from the Bloody Road of Richard III to the throne, through the analysis of wine types and related customs, finishing with statistical data on social and economic aspects, such as taxes, family finances and the interpretation of the concept of maternity. All the studies were preceded by thorough analysis, supported by the step-by-step construction of the visual solution—from sketches depicting a concept to detailed schemes made in graphic programmes.
In the context of this paper, it is also worth recalling Lenk’s method of visualising the construction of websites, developed by him in the form of isometric structures. To understand the principles of any website construction, a schema that included every subpage, elements of navigation and all the links between them was prepared. In our educational method, this is one of the first stages of students’ work. They construct by themselves an abstract model of the website that presents all subsites, links and interactions. This task activates the processes leading to a better understanding of the functioning and construction of the analysed website. Lenk puts a lot of emphasis on the step-by-step and handmade iterative process of the construction of the visual form. This process enables students to better understand the specificity of the matter they are dealing with.
Using visualisation as a tool for analysis and a better understanding of a given phenomenon is a widely applied practice. It may be an automatically built or handmade model, in which the shape of elements and their interrelations are defined only by the author. What is most important from the didactic point of view is a deeper understanding of the analysed problem. Nowadays, graphic designers are beginning to use programming languages and other tools for the automatic generation of diagrams or schemes, thus enabling a more flexible approach towards data visualisation. These tools have become more widely available and they are certainly useful because they make it possible to obtain results very quickly. So why, given the development of visualisation technologies, is it worth encouraging students to use traditional techniques, in which they must draw every element of the scheme? A basic value of the designer's intellectual work is a thorough understanding of the analysed issue in the ideation phase. In this case, the issue is the problem or phenomenon that needs to be transformed into an abstract image readable by others whose knowledge is not as detailed as the designer's. This kind of thorough study requires tools that allow for flexible and quick prototyping in the exploration of various data interpretations, help to place every element consciously and, most of all, help to capture the whole concept in a scheme.
Programming tools become indispensable with large quantities of data, but they are often used after the completed ideation phase. The project Peak Spotting, implemented by Nand Studio for Deutsche Bahn, the German railway, is an example of this kind of approach. This is a highly complex design of a set of visual tools aiding the management of a railway network transporting a hundred thousand passengers a day. The data are presented in various manners—paths, graphs and maps, with the use of colour coding for the visualisation of traffic intensity and passenger count. The main goal of this tool is to spot and predict the peaks—connected with the high load of certain lines—using machine-learning algorithms. Presenting this extensive quantity of data in visual form allows for fast and responsible decision-making. While presenting the process of working on this solution, Thiel emphasised the importance of the ideation stage, which was conducted in the form of drawings that also became a communication platform with the client.
Lemons, Carberry, Swan, Jarvin and Rogers, in the article The benefits of model building in teaching engineering design, present a didactic methodology in which constructing a visual model supports the generation and assessment of designed concepts. The proposed model allows engineering students to track and anticipate the possible behaviours of the users of the proposed design solution. This method supports the development of creative and analytical thinking (meta-cognitive design skills). From a pedagogical perspective, it also makes the students more aware of the activities they undertake in the design process and their results. Also important in this method is the fact that students build a model manually—they are responsible for selecting each element, they shape every relation in the system—nothing is generated automatically.
A similar approach has been presented in a paper by Ranscombe, Bissett-Johnson, Mathias, Eisenbart and Hicks comparing the use of sketches, cardboard models and LEGO sets to facilitate students' fluency in idea generation. Although using LEGO was an example of a slightly different visualisation method than the ones described above, the incorporation of LEGO blocks into the idea-generating phase instead of high-fidelity CAD sketches resulted in the generation of a larger number of ideas in a more collaborative way. The authors also discussed the hindrances of visualisation methods based on using sophisticated CAD software.
In usability research, which is part of human–computer interaction design (HCID), teaching students how to visualise, by drawing or by using artefacts, the behavioural context and the dynamic interaction between user and technology is one of the most important tasks of educators.
Interaction and usability research
Usability research is conducted to verify the usability degree of a designed solution. According to a definition presented in ISO 9241 Ergonomics of human-system interaction—Part 210: Human-centred design for interactive systems, usability is defined as a measure of efficiency, effectiveness and satisfaction with a product used by certain users to achieve specific objectives in a given usage context. This definition refers to two aspects of product usability:
1. Objective and measurable, i.e. effectiveness and efficiency of the accomplishment of specific tasks, in terms of fast realisation time, and a low number of mistakes;
2. Subjective and personal, i.e. emotions and feelings accompanying the usage: the higher the level of satisfaction among users, the higher the degree of usability (a minimal computational sketch of all three measures follows this list).
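To make the distinction concrete, the sketch below shows one possible way of computing the three measures from session records. It is a minimal illustration in Python; the field names and the 1–5 satisfaction scale are our own assumptions, not part of the standard or of any particular testing tool.

```python
# Minimal sketch (assumed field names): computing the three ISO 9241
# usability measures from hypothetical session records.
from statistics import mean

sessions = [
    {"completed": True,  "time_s": 74,  "errors": 1, "satisfaction_1to5": 4},
    {"completed": True,  "time_s": 112, "errors": 3, "satisfaction_1to5": 3},
    {"completed": False, "time_s": 180, "errors": 5, "satisfaction_1to5": 2},
]

effectiveness = mean(s["completed"] for s in sessions)              # share of completed tasks
efficiency = mean(s["time_s"] for s in sessions if s["completed"])  # mean time of successful runs
satisfaction = mean(s["satisfaction_1to5"] for s in sessions)       # subjective rating

print(f"effectiveness: {effectiveness:.0%}, efficiency: {efficiency:.0f} s, "
      f"satisfaction: {satisfaction:.1f}/5")
```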
Usability tests are conducted at various stages of the design process (from sketches and lo-fi prototypes to production-ready solutions) in order to verify the design usability level. They make it possible not only to detect design errors but sometimes also lead to new directions in the development and design.
A usability test procedure is based on observing users as they perform various tasks with a product. Usability testing is often supported by interviews or questionnaires. The effectiveness of a usability test is determined by the choice of tasks. Also, the reliability of results is affected more by the content of tasks than by the number of respondents. With five properly selected respondents, 85% of interface errors may be detected. With such small groups, it is especially important to accurately measure, analyse and interpret data. On the other hand, direct contact with users allows for deeper and more detailed analysis of users' behaviour.
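The often-quoted five-user figure follows from the problem-discovery model of Nielsen and Landauer; assuming the commonly cited per-participant detection probability of about 0.31, five users uncover roughly 85% of problems:

```latex
P(n) = 1 - (1 - \lambda)^{n}, \qquad P(5) = 1 - (1 - 0.31)^{5} \approx 0.84
```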
Usability testing is based on the observation of user behaviour, and the various aspects of the interaction between user and product are recorded. As a result, we obtain both quantitative, ready-to-use data (e.g. task realisation time, level of completion, number of errors, mouse movement data [i.e. number of clicks], selected interface elements) and qualitative, subjective data (i.e. the user's opinions and comments, which require deeper analysis). The most important skill in the process of data analysis is the selection and combination of various types of information and data gained during the whole testing process. The goal is to understand how the user interacts with the interface and to which degree his or her needs and expectations are met.
Using visualisation to understand website structure and user–website interaction
Websites are complex objects with hundreds of thousands of linked objects—hyperlinks, images, texts, buttons, etc.—forming a complex network of relations. In the process of website development, the user’s behaviour is predicted and designed. Interaction designers create user pathways, in order to control and manage users’ traffic. Still, the process of human-website interaction is a combination of users’ intentional behaviour within the designed interface.
A website is a virtual space, in which navigation is needed in order to help users reach their goals. Interaction design supports a user in exploring a website and achieving his or her intended aims. However, when entering a website and then subsequent subpages, we can see only a more or less static image consisting of text, illustrations, photographs and animations but also interactive elements like menus, buttons and forms. How can we visualise what happens after a website is entered? How does a user use a website? What are his or her goals and how does he or she achieve them? What tools should be used to objectively assess the usability and ease of use of a website?
This challenge was faced by Kahn, Lenk and Kaczmarek, who devised a tool: an isometric representation of website structure and content, as well as of the ways it communicates and interacts with the user. The designer starts to build the visualisation from scratch for a given website, considering its structure and specificity and adjusting the format of presentation. Purposefully designing the map to fit a specific format enables viewing the page structure as a whole with a single look and assessing its most important features and relations. Although there is software that generates this type of scheme automatically, the authors claim that it does not allow for viewing the structure as a whole. Another reason to do it manually is that an automatically generated page map presents all the elements in an identical, programmed manner, without respecting data hierarchy and specificity.
Visualisation of user–website interaction—structure and elements of teaching method
This paper presents a teaching method based on visualisation of user–website interaction. Students in the Usability Research course conduct a website audit and usability test while using advanced, customised visualisation techniques to present its results.
The method involves two stages:
Stage A—usability audit;
Stage B—usability tests with users.
Analysis and evaluation of website functions, navigation, layout and information architecture (IA) in Stage A is combined with findings from Stage B, which consists of a usability test with users. This enables students to identify website errors and problems and then to propose adequate solutions to the diagnosed challenges. The findings from both stages contribute to the final recommendations. A dual-method approach, from an expert perspective (heuristic analysis) and a researcher perspective (work with users), allows students to compare results achieved in different ways and fill in the gaps. It helps students to see what problems in the website functioning were missed during expert analysis but were clearly visible during research (i.e. tests with users). The important part is the comparison of the results, as it helps students learn how two different approaches allow them to see the wider context of user–website interaction. The crucial role in this comparison is played by visualising the website's elements and user interactions. The instructions give students free choice of graphical representations and elements. Students decide by themselves which means to use, e.g. lines, fonts or glyphs. By having flexibility in data representation and not sticking to one standard, the visualisations varied but were better fitted to the specific functions and goals of the analysed websites. Students test different solutions and choose the most relevant data representation for both Stages A and B.
Because both stages are complementary, in Stage A the students build their assumptions about the usability level of the website and then verify them in Stage B. This experience provokes double-loop learning, where we look at the same situation but with new data and a questioning approach. It is a very important moment for gaining the competence to challenge one's own assumptions, to avoid forming premature conclusions and to learn not to cling too strongly to one's own beliefs and opinions.
The objective of the first stage is the analysis of the website functions, navigation, layout and information architecture (IA) in the context of its goals. Students design their own way of visualising analysis results. But the visualisation itself allows for a deeper understanding of patterns, relations and structure. Students present user interface elements and structure and analyse their features, i.e. graphics, typography, colours and shape. Visualisation of IA and user scenarios also help identify the advantages and challenges of the website. Below we present examples of visualisations prepared by students during Stage A:
different ways of presenting information architecture (Fig. 1). Visualisations 1.1 and 1.2 focus on levels of IA and how they influence user interaction; number 1.3 is focused more on presenting specific areas of IA, where users can perform particular tasks (a minimal graph-drawing sketch of such an IA follows this list);
combined elements and functions (Fig. 2) and layout variations (Fig. 3). Students were analysing the available functions in relation to a specific layout; it helped them to evaluate the consistency of the website design and to distinguish the elements of a visual system;
visual features—graphics, colours and typography (Fig. 4). Detailed analysis of website elements (e.g. colours, typography, etc.) helps to identify problems. In this example, we can see that the choice of colours and the number of fonts impaired the legibility and readability of the website but also indicated that the visual system lacked consistency;
interaction types (Fig. 5). Identification and analysis of both the number and characteristics of possible interactions with the website allow for further evaluation of the level of website usability and potential problems.
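For readers who prefer a computational counterpart to the hand-drawn IA schemes, the sketch below draws a small, invented page structure as a layered directed graph. It is only an illustration in Python with networkx; the page names and the layout rule are our assumptions and do not reproduce any student's diagram.

```python
# Minimal sketch (hypothetical page names): drawing a website's
# information architecture as a layered directed graph.
import matplotlib.pyplot as plt
import networkx as nx

ia = nx.DiGraph()
ia.add_edges_from([
    ("home", "catalogue"), ("home", "about"), ("home", "contact"),
    ("catalogue", "product"), ("product", "cart"), ("cart", "checkout"),
])

# Position each node by its depth from the home page, so the drawing
# resembles a levelled IA diagram.
depth = nx.shortest_path_length(ia, "home")
taken = {}  # how many nodes are already placed on each level
pos = {}
for node, d in depth.items():
    pos[node] = (taken.get(d, 0), -d)
    taken[d] = taken.get(d, 0) + 1

nx.draw_networkx(ia, pos, node_color="lightgrey", arrows=True)
plt.axis("off")
plt.show()
```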
Visualisations help to detect errors and usability problems. All the Stage A findings and insights determine research questions and hypotheses tested in Stage B.
Stage B consists of usability testing with users; the educational goal is to advance students’ skills necessary to test and understand user behaviour. Again, the combination of analysis and visualisation techniques led to better insights. Students base their visualisations on the timeline, IA and layout.
Research questions concerning selected areas of the website are based on the results of Stage A. Then, hypotheses to be tested with users are created and scenarios of specific test and tasks are built. Stage B comprises the following activities:
Building the research scenarios to verify hypotheses (recommended tools: questionnaires, interviews, observation and usability tests);
Tests and data collection (user behaviour data include recordings of both user and screen, his or her comments, performance time, number of errors, etc.; one possible event-log structure is sketched after this list);
Results analysis—on user and task level, as well as cross-analysis;
Summary—the verification of the hypotheses and recommendations.
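To make the data-collection step more tangible, the sketch below shows one possible flat event log from which all three pathway visualisations described later can be derived. The field names, timestamps and the hesitation heuristic are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch (illustrative fields): a flat event log for one test
# session, from which timeline, IA and layout pathways can be derived.
events = [
    {"t": 0.0,  "user": "P1", "type": "page",    "target": "home"},
    {"t": 4.2,  "user": "P1", "type": "click",   "target": "menu:catalogue"},
    {"t": 9.8,  "user": "P1", "type": "scroll",  "target": "catalogue"},
    {"t": 21.5, "user": "P1", "type": "click",   "target": "product:1234"},
    {"t": 30.1, "user": "P1", "type": "comment", "target": "unclear label"},
]

# Long gaps between consecutive events often indicate hesitation.
gaps = [(b["t"] - a["t"], a["target"]) for a, b in zip(events, events[1:])]
print(max(gaps))  # the longest pause and the element it followed
```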
An innovative element of this course is the way the user pathway is visualised. Students prepare the graphic representation of the user pathway using three methods of presenting data, based on the timeline, IA and layout. Afterward, the elements used in the visualisations are summed up (see Table 1) and discussed with students. Each way of presenting the user pathway employs the visualisation of different elements/aspects of user behaviour. As a result, students can see differences between pathways; some of them then decide to analyse the pathway further and even create hybrid solutions going beyond the established categories (presented in chapter 4.3 Alternative ways of presenting user-interaction).
The fulfilment of this type of task—visualisation of data gathered in usability tests—requires building a visual code that makes it possible to record the various types of behaviour and compare them, e.g. their frequency, duration and relations. The transformation of observed behaviour into data and then their visualisation requires students to properly choose graphic values, establish the importance and hierarchy of elements, and accurately select and combine various data types collected during tests. In particular, this applies to elements affecting the effectiveness of tasks realised by users, such as time, number of errors, types of problems and user satisfaction. An important part of this activity is also linking data resulting from observation with users' opinions and comments. Students complete the visualisation of users' behaviour with this additional data. Frequently, comparing opinions with behaviours makes it possible to diagnose the reasons for failures and problems. Often enough, users are not fully aware of the encountered problems. Therefore, it is worth comparing the results observed by a researcher during tests and opinions declared by users. They may be totally different or supplementary.
The three ways of presenting a user pathway, based on the timeline, IA and layout, allow for mapping various aspects of the interface use. This makes it possible to differentiate the types of user interaction with the interface, distinguish their interrelations and finally transform them into conclusions and design recommendations. Students choose which type of visualisation would best align with the types of behaviour they analyse. While preparing visualisations, it is important to select the presented data and arrange them in a proper hierarchy—so that the most important elements are most noticeable but can still be related to other data.
The following aspects and elements of interaction are analysed using three ways of presenting the user pathway:
In the timeline, the following are analysed: frequency, duration, relation, user activities (clicks, cursor movements, scrolls) and changes in the interface (see Fig. 6). Collected data allow for comparing task performance between users. Students can see the similarities by comparing the route of the line, the number of stages to finish the task and its length. The duration of the performed task indicates the difficulty level—if the task takes a long time, it is probably hard to accomplish. They can also analyse which users spend the most time to finish the task and how this reflects their experience level (new vs. regular user) or personal characteristics (e.g. age, profession). Moreover, it helps to detect problems and challenges in the interaction. For example, intervals between subsequent clicks/actions are often indicators of time necessary to make a decision—but when this gap is very long, it often signals difficulties. By highlighting the moments of writing or scrolling, often associated with searching or overviewing the content, students can exclude the activities that may extend the process of task fulfilment (a minimal plotting sketch of such timelines follows this list);
In the visual representation of information architecture (designed during Stage A, which includes interface elements and possible interlinks), users' strategies of task performance are analysed. Figure 7 shows the results of this part of the analysis. By analysing the shape of the lines mapping the users' pathways, students can easily assess the similarities and differences between them. This is also a good tool for comparing users' behaviour with the desired behaviour pattern—the shortest way to task fulfilment;
In the website interface scheme, the clicked elements of the website are analysed (see Fig. 8). In the presented example, by drawing a user's pathway on the simplified interface layout, students can see where the user was looking for the proper buttons and where he or she clicked incorrectly. In this visualisation method, the locations of longer periods of inactivity in the interactions, indicating moments of hesitation and thinking, may be marked as well. As shown in Fig. 8, the analysis of the user interaction revealed problems with the choice of proper categories. The reason was both the unclear segregation of categories and the insufficiently differentiated design of categories and subcategories in the interface layout.
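The following sketch illustrates the first of these representations: per-user click timelines in which long gaps stand out visually. It is a minimal Python/matplotlib example; the participants and timings are invented for illustration.

```python
# Minimal sketch (invented timings): per-user click timelines; long
# horizontal gaps between points suggest hesitation or a problem.
import matplotlib.pyplot as plt

timelines = {
    "P1": [0, 4, 10, 22, 30, 41],  # seconds at which clicks occurred
    "P2": [0, 3, 5, 9, 52, 60],    # the 9 -> 52 gap hints at a difficulty
}

fig, ax = plt.subplots()
for row, (user, ts) in enumerate(timelines.items()):
    ax.plot(ts, [row] * len(ts), linewidth=0.5)
    ax.scatter(ts, [row] * len(ts))

ax.set_yticks(range(len(timelines)))
ax.set_yticklabels(list(timelines))
ax.set_xlabel("time [s]")
plt.show()
```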
Alternative ways of presenting user-interaction
Students start the data analysis by preparing a user-interaction visualisation separately for each respondent. They then map data for all the users on one scheme and reduce unnecessary and insignificant information to spot and compare differences and similarities. This identifies users encountering similar problems, choosing the same interface elements or performing differently, thus distinguishing various types of users. In order to present the differences more adequately, students prepare schemes using only selected elements of the user-interaction process.
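A computational analogue of this aggregation step is counting how often each transition between interface elements occurs across all users, so that shared routes and detours stand out before anything is drawn. The sketch below is a minimal Python illustration with invented paths.

```python
# Minimal sketch (invented paths): counting transitions between pages
# across all users, so shared routes and detours stand out.
from collections import Counter

paths = [
    ["home", "catalogue", "product", "cart", "checkout"],
    ["home", "search", "product", "cart", "checkout"],
    ["home", "catalogue", "product", "catalogue", "product", "cart"],
]

transitions = Counter(
    (a, b) for path in paths for a, b in zip(path, path[1:])
)
for (a, b), n in transitions.most_common(3):
    print(f"{a} -> {b}: {n} occurrence(s)")
```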
One such selective scheme is shown in Fig. 9, where the analysis focuses on cursor movements (left) and clicks (right) using the simplified interface layout. This graphic interpretation of the users' behaviour enables fast and efficient identification of the most frequently clicked locations. Also, the line of the cursor's movements shows the search process and indicates click locations.
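A minimal way to reproduce such an overlay programmatically is to plot the cursor trace as a line and the clicks as markers in screen coordinates. The sketch below uses invented coordinates; inverting the y-axis mimics screen space.

```python
# Minimal sketch (invented coordinates): a cursor trace with clicks
# overlaid on screen space; the y-axis is inverted to match the screen.
import matplotlib.pyplot as plt

cursor = [(0.1, 0.9), (0.4, 0.85), (0.42, 0.5), (0.7, 0.45), (0.72, 0.2)]
clicks = [(0.42, 0.5), (0.72, 0.2)]

xs, ys = zip(*cursor)
plt.plot(xs, ys, linewidth=1, label="cursor path")
cx, cy = zip(*clicks)
plt.scatter(cx, cy, marker="x", s=80, label="clicks")
plt.gca().invert_yaxis()  # screen coordinates grow downwards
plt.legend()
plt.show()
```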
Another visualisation (Fig. 10) of users' behaviour is based on the time and the desired user pathway. This enables observation of both the sequence of events and the task performance time. Additional information is also visible, e.g. moves back to previous stages and skipped stages. We can easily observe, by analysing the line shape, that the user's pathway does not always correspond with the designed sequence of steps for a process—in this case, the ordering of a product. If the course of the line changes (goes from the left to the right side and then back), it indicates that the user is not following the website guidance and needs to go back and repeat part of the process. In the example, user no. 2 reached the order step and was very close to the end of the process, but something made him go back to the first step and start the task once again. The analysis of the scheme and of users' comments allows for the identification of incomprehensible, unnecessary and incorrect steps undertaken by users and their consequences. An essential detail in this scheme is that the length of the line does not refer to time duration. The line shows which steps the user has passed or skipped. The time information is provided additionally as a numerical value. This decision was a thoughtful choice of the student, as the main issue was to compare the designed pathway (steps in a specific order) and user performance among several people.
As the data analysis process continues, students have to carefully select the data that will help them identify various types of users. By eliminating detailed descriptions of interaction and aggregating data from different users and various tasks, patterns of user behaviour become more visible (i.e. typical errors and strategies). Figure 11 presents the process of purchasing a product in two groups of users, aged over and under 55. It shows differences in the number of problems encountered by older and younger users and identifies critical elements of the purchasing process that were difficult for users. In this diagram, the student presents four types of behaviour at each stage of task performance (e.g. completed task, omission, roaming and wandering, uncompleted task and resignation).
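A sketch of how such an aggregation could be computed from coded observations, assuming each observation was coded as (age group, stage, behaviour type); the data values below are invented:

```python
import pandas as pd

# Invented coded observations: one row per user per stage.
obs = pd.DataFrame({
    "age_group": ["under 55", "under 55", "over 55", "over 55", "over 55"],
    "stage":     ["search",   "payment",  "search",  "payment", "payment"],
    "behaviour": ["completed", "completed", "roaming", "resignation", "completed"],
})

# Count behaviour types at each stage, split by age group (cf. Fig. 11).
summary = obs.pivot_table(index=["stage", "behaviour"],
                          columns="age_group",
                          aggfunc="size",
                          fill_value=0)
print(summary)
```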
As shown in Fig. 12, the visualisation was used to answer the question of whether the experience level of the user (new or regular) determines the choice of product search strategy. The various users' strategies are visible, as well as their differences according to product type. Furthermore, the inefficiency of banners was confirmed. The conclusions can be easily spotted in the presented example, where the data are presented in a specific order:
– the colour indicates the type of user;
– rows in the table refer to the search strategy;
– columns in the table refer to the type of task and the products users were looking for.
In each intersection in the table, the number of coloured dots refers to the number of attempts made to look for a specific product with a specific type of search. For example, during the usability tests, all users, both new ones and regulars, used a search engine to order photograph prints (blue and orange dots). However, no one used advertisements or banners, even though they were considered one of the possible ways to achieve the task goal (white dots).
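The dot matrix in Fig. 12 is essentially a three-way count: search strategy by product type, split by user type. A hedged sketch of the same aggregation in code (the attempt records are invented):

```python
import pandas as pd

# Invented attempt records: one row per search attempt in the tests.
attempts = pd.DataFrame({
    "user_type": ["new", "regular", "new", "regular", "new"],
    "strategy":  ["search engine", "search engine", "menu", "menu", "search engine"],
    "product":   ["photo prints", "photo prints", "photo book", "photo prints", "mug"],
})

# Rows = strategy, columns = product, one block per user type
# (the colour dimension in Fig. 12).
table = pd.crosstab([attempts["user_type"], attempts["strategy"]],
                    attempts["product"])
print(table)
# A zero row for "advertisements/banners" would correspond to the white dots.
```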
Data analysis and synthesis allow for clear visualisation of differences that are hard to spot by observation alone. Another example (Fig. 13) presents a comparison of the IA of the search engines of two public transport portals (Transport for London and the Polish metropolitan bus service in Silesia). The two portals, which look completely different, were visualised in the scheme using the same set of rules and elements designed by students. Using this procedure, both structures were compared, and the differences were easy to spot, allowing for analysis of the IA in terms of depth, interconnections and building blocks.
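Once two information architectures are encoded with the same building blocks, simple structural measures such as depth and node count can be compared directly. A minimal sketch, with the two structures reduced to invented nested dictionaries (not the portals' real IA):

```python
# Invented, heavily reduced IA fragments of two transport portals.
portal_a = {"home": {"journey planner": {"results": {"route detail": {}}},
                     "tickets": {}}}
portal_b = {"home": {"timetables": {"line": {"stop": {"departures": {}}}}}}

def depth(tree):
    """Depth of a nested-dict IA: levels from root to the deepest leaf."""
    if not tree:
        return 0
    return 1 + max(depth(child) for child in tree.values())

def node_count(tree):
    """Total number of nodes (pages/sections) in the IA."""
    return sum(1 + node_count(child) for child in tree.values())

for name, ia in [("portal A", portal_a), ("portal B", portal_b)]:
    print(name, "depth:", depth(ia), "nodes:", node_count(ia))
# portal A depth: 4 nodes: 5
# portal B depth: 5 nodes: 5
```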
The teaching method used in the Usability Research course assumes that, during the process of designing the visualisations, students have to transform the results of observation into data and devise a set of rules both for assigning graphic values to them and for highlighting their intercorrelations. This is a process in which they go from reality to an abstract record, which makes it possible to see what is unnoticeable through observation alone. Building schemes from scratch, without using any dedicated software, helps students recognise the building blocks of interaction, various ways of representing it graphically and the possibilities of combining data gathered in research. Finally, thanks to the whole process, students better understand how users interact with websites and which factors affect the success or failure of this interaction. In human–computer interaction (HCI) design, students must draw from multiple knowledge domains; building knowledge using structured processes helps them understand and balance form, function, human needs and social context in future projects. Our method will also help students in the future, as they have learned how to present research results and how to use the knowledge gained in research to justify necessary changes in design.
Evaluation of the method
In this paper, in addition to the presentation of the method itself, the perception and acceptance of the method among students is also discussed. Although there is no commonly accepted definition of a successful educational method, we can point to several factors associated with teaching-method success. First, the method should enable students to achieve their goals; second, it should be accepted by students, who should willingly use it; and, finally, the method should be used in professional or everyday activity. In the case of our method, we were interested in how it is perceived by students and whether there is an intention to use it in the future. The evaluation was conducted for three consecutive years in a repeated-measurement scheme: immediately after finishing the course and one year later. The study was approved by the Research Ethics Committee at the University of Silesia in Katowice, Poland. Participation in the study was voluntary, students were not given any credits for taking part, and data collection started after the course finished. We used an online survey to gather student responses. Informed consent was obtained, as students entered the survey after being provided with information about the goal of the study as well as the anonymity, privacy and confidentiality procedures.
We had an 83% response rate: of the total of 36 students enrolled in two editions of the course, 30 took part in the survey. The survey consisted of an informed consent form, basic demographic questions, several open-ended questions and the TAM questionnaire.
We asked students several open-ended questions to better understand what they think about the method. For the first question, "Would you use the method in the future?", most students (21 of the 30) answered positively, eight with some concerns and only one negatively. Then we asked, "Which element of the method was most important for effective assessment of the website?". In answer to this question, 14 students indicated the visualisation of the user pathway, while for 13 students the timeline representation specifically was most effective. For several students, the observation of user behaviour was also a source of valuable information about user–website interaction. Finally, we asked students, "Which elements of the method brought the most unexpected results?", and again students indicated the user pathway visualisation. This was quite surprising, as students experienced many difficulties during the task of user pathway visualisation: it took the longest time and required the most support from the teachers. It was also hard for students to switch from the illustrative use of graphical material to abstract visualisations.
Students' behavioural intention to use the method
In order to understand how students perceive the method and how they intend to use it in the future, we decided to use the technology acceptance model (TAM; Davis 1989) as a framework. The technology acceptance model is one of the most popular models in IS research. Based on the theory of reasoned action, TAM explains that the behavioural intention (BI) to use a system is the result of perceived usefulness (PU) and perceived ease of use (PEU); see Fig. 14.
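In structural-equation form, the two TAM relations evaluated here can be written as follows (a standard textbook formulation, not quoted from the original):

```latex
\mathrm{PU} = \beta_{1}\,\mathrm{PEU} + \varepsilon_{1}, \qquad
\mathrm{BI} = \beta_{2}\,\mathrm{PU} + \beta_{3}\,\mathrm{PEU} + \varepsilon_{2}
```

where the beta terms are path coefficients and the epsilon terms are errors; the variance-explained figures for BI reported below correspond to the second equation.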
Pedagogy lacks models for measuring the acceptance of teaching methods. Thus, we decided to adapt TAM for this purpose. The justification was that TAM allows for structured control of two critical factors influencing acceptance: ease of use and usefulness. These elements seem to be compatible with students' experiences, as they reported both positive and negative aspects of using the method.
We used the original TAM questionnaire for the study, but replaced the word "technology" with "the method". Responses could range from 1 (strongly disagree) to 7 (strongly agree).
The survey was conducted for three consecutive years using an online questionnaire and repeated measurement. The first measurement took place immediately after the class finished (1st measurement wave); one year later, we asked students to take part in another survey (2nd wave) in order to check how the perception and acceptance of the method changed over time.
In the first measurement wave, a total of 30 students (87% women) took part in the study, and in the second measurement wave 24 people (87.5% women) repeated the survey. The student sample was predominantly female; such gender imbalance is characteristic of design degrees in Poland (FSP ING 2019). The average age was 22 (M = 21.93, SD = 1.01). Table 2 presents the results of the survey for the first and second waves.
In both waves, perceived ease of use scored lower than perceived usefulness. After one year, the most visible change was a lower behavioural intention to use the method, while scores for perceived usefulness and perceived ease of use were rather stable. In the next step, we decided to check whether perceived usefulness and perceived ease of use predict the behavioural intention to use the method in the future. To assess the influence of the method's characteristics (perceived ease of use and usefulness) on students' intention to use the method, we employed partial least squares (PLS) modelling using SmartPLS 3.0 (Ringle et al. 2015). PLS is a recommended statistical method in the case of small samples (Hair et al. 2011). Separate models were created for both measurement waves (i.e. 1st wave, right after the class, and 2nd wave, one year after).
The results confirmed that perceived usefulness predicts almost 44% of the variance in behavioural intention. The results for the 2nd-wave measurement were even more explicit, with 62% of the variance in behavioural intention predicted by perceived usefulness. The role of perceived ease of use was non-significant in the first measurement, but it significantly influenced perceived usefulness in the 2nd measurement. Figures 15 and 16 present the model with path coefficients for both measurement waves.
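The study used SmartPLS for the path modelling. As a rough, hedged approximation, the same structural paths can be illustrated with ordinary least squares on standardized construct scores; the data below are invented, and OLS is a simplification of PLS-SEM, not the procedure used in the study:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Invented standardized construct scores for 30 respondents.
peu = rng.normal(size=30)
pu = 0.5 * peu + rng.normal(scale=0.8, size=30)   # PEU -> PU
bi = 0.7 * pu + rng.normal(scale=0.6, size=30)    # PU -> BI

# Path 1: PEU -> PU
m1 = LinearRegression().fit(peu.reshape(-1, 1), pu)
# Path 2: PU and PEU -> BI
X = np.column_stack([pu, peu])
m2 = LinearRegression().fit(X, bi)

print("PEU -> PU coefficient:", round(m1.coef_[0], 2))
print("PU, PEU -> BI coefficients:", np.round(m2.coef_, 2))
print("R^2 for BI:", round(m2.score(X, bi), 2))  # cf. the 44% / 62% figures
```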
The statistical properties of both models were sufficient; Table 3 presents AVE coefficients for both measurement waves.
The behavioural intention to use the method is mostly determined by perceived usefulness. Perceived usefulness, in turn, is influenced by perceived ease of use, which does not directly determine behavioural intention. The effect whereby perceived usefulness captures the whole influence on behavioural intention, significantly lowering the influence of perceived ease of use, has been confirmed in meta-analytic studies (King and He 2006). However, in our study the coefficient is negative, which needs to be reconfirmed in further studies using a bigger sample. This finding might also be related to growing experience in using the method: as students become more skilful in using the method, they perceive it to be more useful. For future research, it could also be interesting to investigate the further professional development of students in the context of intentional use of the competences gained during the course.
Discussion and conclusion
For students, understanding the role and the process of visualisation is a difficult and important task. Being accustomed to literality makes their first attempts inadequate and overly focused on the physical features of objects instead of on the rules that constitute the basis of the process being analysed. When students start analysing websites in presentations, they mainly use screenshots on which they comment or underline the elements that make up a website. When they present the process of buying goods in an online store, they compile a series of screenshots corresponding to the respective stages of the process. To start visualising, they need to notice relations, structures, patterns and rules, and then express this knowledge in abstract forms. Simplification is an important part of this process, as it builds the awareness that the physical representation is no longer important. Students then focus on activities: they need to outline occurrences and relations. Among the most valuable results of visualisation are those that support the understanding of the specificity and characteristics of the visualised processes.
A task each teacher faces is to build teaching methods and tools that will facilitate the success of their students. The teaching method we have discussed in this paper refers to a very narrow field: usability research. In the process of learning how to visualise user behaviour, IA and pathways, students become sensitive to specific details, notice their value and relations, and finally are able to arrange them in a context. Combining research and design is an important characteristic of the curriculum of the Faculty of Design, as nowadays designers are essential partners in the research phase of both digital and physical product development. Designers should know the basic elements of a scientific approach to be equal and essential partners in interdisciplinary research teams. Combining practice and theory in usability research educational programmes is an important aspect of designers' education, as the HCI domain requires multidisciplinarity and drawing the best practices from various disciplines.
Our evaluation of the perception of the method indicates that it is appreciated by students. They perceive it as a useful method, a fact that affects their intention to use it. At the same time, their perception of the method changes over time: one year after completing the course, the perceived ease of use of the method is related to its assessed usefulness, and it explains the intention to apply it to a greater degree. This confirms the assumption that students view this method as a tool that may be applied in other design activities. Our evaluation of the method also provides more general conclusions related to the understanding and acceptance of teaching methods. The results demonstrate that there is no direct influence of the perceived ease of use of the method on the intention to act; nevertheless, the perceived usefulness turned out to be significant for students. Due to the small sample size, the results should be treated as descriptive for our sample; further research is necessary to confirm such hypotheses for the population. This means that showing students the practical possibilities of using research methods, and the benefits that may be obtained in this way, is crucial for acceptance and application in further work. Competencies developed in the context of usability research for a given website may be transferred to other design challenges. Skills embracing the planning and execution of research, the analysis of complex processes and their adequate visualisation, the use of specific software, as well as experience in presenting results to various audiences, are universal competencies that allow designers to develop in any field.
Nowadays, design professionals need to combine skills of analysis and synthesis to be able to find answers and solutions for both today's and tomorrow's questions. Through its thorough data analysis and transformation, our method also supports reflective practice, which is pointed out as an important aspect of designers' education. However, it is also necessary to anticipate future challenges and develop universal competencies that may be useful in various contexts and will also distinguish design alumni in the job market. That is why we should focus on raising awareness among students of the method's usefulness and its wide range of applications.
We use the presented method in an educational context. However, we also see potential in simplifying and standardising the method for use in the professional market. Based on our experience to date with the method, we can now suggest the best ways of visualising various aspects of website structure, as well as usability-related challenges. The method's most significant advantage is its emphasis on structure and the clarity of visual representation, which support a better assessment of usability. A structured sequence of evaluation of different aspects of website usability also has the potential to become a basis for the development of heuristics and then of automated or even machine-learning-based tools for analysing websites and mobile applications. There are several tools on the market which provide UX researchers and designers with automatically generated data. However, an important aspect of our method was teaching students how to choose and adjust the means of graphical visualisation. Therefore, the question of how to preserve this aspect and combine it with automated analysis needs to be answered in order to develop the method further without losing its fundamentals.
Availability of data and material
The data that support the findings of this study are available from the corresponding author upon request.
Boucharenc, C.G.: Research on basic design education: an international survey. Int. J. Technol. Des. Educ. (2006). https://doi.org/10.1007/s10798-005-2110-8
Cairo, A.: The Truthful Art: Data, Charts, and Maps for Communication. New Riders (2016). https://www.oreilly.com/library/view/the-truthful-art/9780133440492/
Card, S.K., Mackinlay, J.D., Shneiderman, B.: Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann (1999). https://dl.acm.org/citation.cfm?id=300679
Cash, P., Stanković, T., Štorga, M.: Using visual information analysis to explore complex patterns in the activity of designers. Des. Stud. 35(1), 1–28 (2014). https://doi.org/10.1016/j.destud.2013.06.001
Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–340 (1989). https://doi.org/10.2307/249008
Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User acceptance of computer technology: a comparison of two theoretical models. Manage. Sci. 35(8), 982–1003 (1989)
Faiola, A., Matei, S.A.: Enhancing human–computer interaction design education: teaching affordance design for emerging mobile devices. Int. J. Technol. Des. Educ. 20(3), 239–254 (2010). https://doi.org/10.1007/s10798-008-9082-4
FSP ING: Jak widzą swoją przyszłość studenci kierunków artystycznych? [How do students of artistic degrees see their future?]. Research report, ING Polish Art Foundation (2019). https://ingart.pl/files/edb5303f/artysta_-_zawodowiec_2019_-_raport_z_badania.pdf
Garrett, J.J.: The Elements of User Experience: User-Centered Design for the Web and Beyond. Pearson Education (2010)
Hair, J.F., Ringle, C.M., Sarstedt, M.: PLS-SEM: indeed a silver bullet. J. Market. Theory Pract. 19(2), 139–152 (2011). https://doi.org/10.2753/MTP1069-6679190202
International Organization for Standardization: Ergonomics of human–system interaction. Part 210: Human-centred design for interactive systems (ISO Standard No. 9241-210) (2019). https://www.iso.org/standard/77520.html
Kahn, P., Lenk, K., Kaczmarek, P.: Applications of isometric projection for visualizing web sites. Inf. Des. J. 10(3), 221–228 (2001). https://doi.org/10.1075/idj.10.3.03kah
Kerren, A., Stasko, J.T., Fekete, J.-D., North, C. (eds.): Information Visualization. LNCS, vol. 4950. Springer, Berlin, Heidelberg (2008). https://doi.org/10.1007/978-3-540-70956-5
King, W.R., He, J.: A meta-analysis of the technology acceptance model. Inf. Manage. 43(6), 740–755 (2006). https://doi.org/10.1016/j.im.2006.05.003
Lemons, G., Carberry, A., Swan, C., Jarvin, L., Rogers, C.: The benefits of model building in teaching engineering design. Des. Stud. 31(3), 288–309 (2010). https://doi.org/10.1016/j.destud.2010.02.001
Lenk, K., Satalecka, E.: Podaj dalej. Design, nauczanie, życie [Pass It On: Design, Teaching, Life]. Karakter, Kraków (2018)
Lousberg, L., Rooij, R., Jansen, S., van Dooren, E., Heintz, J., van der Zaag, E.: Reflection in design education. Int. J. Technol. Des. Educ. (2019). https://doi.org/10.1007/s10798-019-09532-6
Purchase, H.C., Andrienko, N., Jankun-Kelly, T.J., Ward, M.: Theoretical foundations of information visualization. In: Information Visualization, pp. 46–64. Springer, Berlin, Heidelberg (2008). https://doi.org/10.1007/978-3-540-70956-5_3
Ranscombe, C., Bissett-Johnson, K., Mathias, D., Eisenbart, B., Hicks, B.: Designing with LEGO: exploring low fidelity visualization as a trigger for student behavior change toward idea fluency. Int. J. Technol. Des. Educ. (2019). https://doi.org/10.1007/s10798-019-09502-y
Ringle, C.M., Wende, S., Becker, J.-M.: SmartPLS 3.0. Hamburg (2015). http://www.smartpls.de
Rubin, J., Chisnell, D.: Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Wiley (2008)
Thiel, S.: Designing for trust and transparency. In: 6th International Design Conference Agrafa '19, Katowice (2019)
Tullis, T., Albert, B.: Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. Elsevier (2013)
Ware, C.: Information Visualization: Perception for Design. Morgan Kaufmann (2012)
Więckowska, M.: Badania w projektowaniu [Research in Design]. Academy of Fine Arts in Kraków (2014)
Conflicts of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Więckowska, M., Rudnicka, P. Visualising user–website interaction: description and evaluation of a teaching method. Univ Access Inf Soc (2021). https://doi.org/10.1007/s10209-021-00839-7
Keywords: Interaction design; Design methodology; Human–computer interaction
Three questions: Who makes the rules for classroom behavior? How much should 5-, 6-, 7-, and 8-year-olds get to decide, or should the teacher just lay down the rules? And does it make any difference in the long run?
In my 41 years of reporting, I must have visited thousands of elementary school classrooms, and I would be willing to bet that virtually every one of them displayed–usually near the door–a poster listing the rules for student behavior.
Often the posters were store-bought, glossy and laminated, and perhaps distributed by the school district. No editing possible, and no thought required. Just follow orders! Here’s an example:
I can imagine teachers reading the rules aloud to the children on the first day of class and only referring to them whenever things got loud or rowdy.
“Now, children, remember Rule 4. No calling out unless I call on you.”
There are other variations on canned classroom rules, available for purchase. This one uses a variety of flashy graphics to make the poster visually appealing, but the rules are being imposed from on high, which makes me think there’s little reason for children to adopt them.
I am partial to teachers and classrooms where the children spend some time deciding what the rules should be, figuring out what sort of classroom they want to spend their year in. I watched that process more than a few times. First, the teacher asks her students for help.
Children, let’s make some rules for our classroom. What do you think is important?
Or she might lead the conversation in certain directions:
What if someone knows the answer to a question? Should they just yell it out, or should they raise their hand and wait to be called on?
If one of you has to use the bathroom, should you just get up and walk out of class? Or should we have a signal? And what sort of signal should we use?
It should not surprise you to learn that, in the end, the kids come up with pretty much the same set of reasonable rules: Listen, Be Respectful, Raise Your Hand, and so forth. But there’s a difference, because these are their rules.
This poster is my personal favorite. It’s from a classroom in Hampstead Hill Academy, a public elementary school in Baltimore, Maryland (and shared by Principal Matthew Hornbeck). You’ll have to zoom in to see the details, which include what to do when working in groups: ‘Best Foot Forward,’ ‘Hands on Desk,’ and ‘Sit Big.’ And there are some things not to do, such as ‘Slouch‘ and ‘Touch Others.’
Another homemade one, entitled ‘Rules of the Jungle,’ makes me chuckle. I can picture the teacher and the children poking fun at themselves while creating a structure to ensure that their classroom really does not become a jungle.
The words–Kind, Safe, Respectful–can be found in the store-bought posters; however, the children created the art work and made the poster. It’s theirs; they own it.
The flip side, the draconian opposite that gives children no say in the process, can be found in charter schools that subscribe to the ‘no excuses’ approach. The poster child is Eva Moskowitz and her Success Academies, a chain of charter schools in New York City. A few years ago on my blog I published Success Academies’ draconian list of offenses that can lead to suspension, about 65 of them in all. Here are three that can get a child as young as five a suspension that can last as long as five days: “Slouching/failing to be in ‘Ready to Succeed’ position” more than once, “Getting out of one’s seat without permission at any point during the school day,” and “Making noise in the hallways, in the auditorium, or any general building space without permission.” Her code includes a catch-all, vague offense that all of us are guilty of at times, “Being off-task.” You can find the entire list here.
(Side note: the federal penitentiary that I taught in had fewer rules.)
Does being able to help decide, when you are young, the rules that govern you determine what sort of person you become? Schools are famously undemocratic, so could a little bit of democracy make a difference? Too many schools, school districts, and states treat children as objects–usually scores on some state test–and children absorb that lesson.
Fast forward to adulthood: Why do many adults just fall in line and do pretty much what they are told to do? I am convinced that undemocratic schools–that quench curiosity and punish skepticism–are partially responsible for the mess we are in, with millions of American adults accepting without skepticism or questioning the lies and distortions of Donald Trump, Fox News, Alex Jones, Breitbart, and some wild-eyed lefties as well.
Because I agree with Aristotle that “We are what we repeatedly do,” I’m convinced that what happens in elementary school makes a huge difference in personality formation and character development.
Students should have more control (‘agency’ is the popular term) over what they are learning, and inviting them to help make their classroom’s rules is both a good idea and a good start.
As always, your comments and reactions are welcome. | <urn:uuid:5c585dc3-fd7d-427b-b4c6-c35908c13e24> | CC-MAIN-2021-49 | https://themerrowreport.com/2019/03/06/i-was-just-following-orders/?replytocom=166321 | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.967595 | 1,165 | 2.859375 | 3 |
Polity Books for UPSC 2021: Polity Syllabus for UPSC and The Recommended Book List
Polity is an integral part of the UPSC civil services exam. Whether it is for the UPSC prelims or the UPSC mains exam, Indian polity is a vital part of the UPSC syllabus. Even in the 2017 UPSC prelims exam, more than 20 questions were asked from the polity section, more than from any other subject. In this article, you will read about the syllabus of Indian polity for the UPSC exam. The Indian polity and governance syllabus for IAS prelims is very precise, but it encompasses a lot of important concepts and issues to understand.
Polity Syllabus for UPSC
- Indian Constitution
- Functions and responsibilities of the Union and the States, issues, and challenges pertaining to the federal structure, devolution of powers and finances up to local levels and challenges therein
- Comparison of the Indian constitutional scheme with that of other countries
- Separation of powers between various organs dispute redressal mechanisms and institutions
- Parliament and State Legislatures
- Appointment to various Constitutional posts, powers, functions, and responsibilities of various Constitutional Bodies
- Structure, organization, and functioning of the Executive and the Judiciary Ministries and Departments of the Government; pressure groups and formal/informal associations and their role in the Polity
- Salient features of the Representation of People’s Act
- Government policies and interventions for development in various sectors and issues arising out of their design and implementation
- Statutory, regulatory, and various quasi-judicial bodies
Continuation of the Syllabus
- Welfare schemes for vulnerable sections of the population by the Centre and States and the performance of these schemes; mechanisms, laws, institutions, and Bodies constituted for the protection and betterment of these vulnerable sections
- Health, Education, Human Resources
- Development processes and the development industry the role of NGOs, SHGs, various groups and associations, donors, charities, institutional and other stakeholders
- Issues relating to the development and management of Social Sector/Services relating to
- Important aspects of governance, transparency, and accountability, e-governance- applications, models, successes, limitations, and potential; citizens charters, transparency & accountability and institutional and other measures
- Issues relating to poverty and hunger
- Role of civil services in a democracy
Recommended Polity Books for UPSC
The best books apart from the obvious and compulsory NCERTs are:
| No. | Book | Author |
|---|---|---|
| 1 | Indian Polity | M. Laxmikant |
| 2 | Introduction to the Constitution of India | Durga Das Basu |
| 3 | Our Constitution | Subhash C. Kashyap |
| 4 | Our Parliament | Subhash Kashyap |
| 5 | Indian Polity | Spectrum |
Laxmikant Book for UPSC Polity
It would hardly be an exaggeration to say that nobody becomes an IAS officer without studying M. Laxmikant's book. The biggest strengths of this book are the authenticity of its content and the conciseness of its topics. It is essential for both the prelims and the mains exams. Indian Polity by M. Laxmikant is one of the most popular and comprehensive books on the subject and has been a consistent bestseller for many years.
Introduction to the Constitution of India – Durga Das Basu
This is also one of the best books available on Indian Polity. It is recommended by most IAS toppers and helps aspirants develop very good analytical skills.
Our Constitution by Subhash C. Kashyap
The book, ‘Our Constitution’ written by Subhash C. Kashyap is an ideal book for the introduction of the Indian Constitution. The easy language and writing method of this book will clear all the doubts of the readers related to the Indian Constitution.
Our Parliament by Subhash Kashyap
This book is authored by Mr. Subhash Kashyap, former Secretary-General of the Lok Sabha Secretariat of India. He has done thorough research and has a personal understanding of the functioning of the Indian government. The book is a complete narration of the evolution of the Indian Parliament from its beginnings to its present form.
Indian Polity by Spectrum
Another good book on Indian Polity is from Spectrum Publications. The first edition of this book was released in 2013 and the 6th edition was published in 2018. The book covers recent reports of the Law Commission and discusses various issues pertaining to the working of the Indian Constitution.
Before you start preparing for the Polity and Governance section of UPSC Prelims, you may feel overwhelmed by the vastness of the syllabus. But a proper analysis of the syllabus and proper planning can definitely help you get through all the topics in these sections.
Data on any medium can be located using either sequential or random access. Sequential access is normally associated with tape, either in a computer tape drive or on video tape.
With sequential access, in order to get from the beginning (first address) of the tape to any other point (higher address) the intervening tape must be read. In other words, if you're at the beginning of a movie on video tape and want to see the ending credits you first have to fast forward through the entire movie.
Random access allows you to skip directly to the point you want to read. For example, if you were at the beginning of a movie on DVD and wanted to see the credits you could skip directly to them without the player reading the rest of the movie.
You can think of sequential access as a simple list. Imagine you have a list of 500 items spread over several sheets of paper, but you don't know how many items are on each sheet. In order to find the page with the 375th item, you'd have to start from the beginning and count entries until you got to 375.
If you take the same list and organize it so there are 50 items on each page, you can access each item randomly. Since you know there are 50 items on each page, you can skip directly to page 8 without even looking at the first 7 pages.
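A minimal sketch of this page arithmetic in code; the item names and page size are just the values from the analogy above, not from any particular storage format:

```python
# Sequential vs. random access to a 500-item list.

items = [f"item-{i}" for i in range(1, 501)]  # 500 items
PAGE_SIZE = 50  # items per "page"

def sequential_find(items, target_index):
    """Read every entry until we reach the one we want (like tape)."""
    reads = 0
    for i, item in enumerate(items):
        reads += 1
        if i == target_index:
            return item, reads
    raise IndexError(target_index)

def random_find(items, target_index):
    """Jump straight to the entry (like a DVD or hard drive)."""
    return items[target_index], 1  # position computed directly, one read

# The 375th item (index 374) lives on page 374 // PAGE_SIZE + 1 = 8.
print(sequential_find(items, 374))  # ('item-375', 375) -> 375 reads
print(random_find(items, 374))      # ('item-375', 1)   -> 1 read
```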
Since most consumer formats are stored on optical disc (CD/DVD), hard drive, or flash memory of some type, we mostly deal with random access. However, most digital camcorders are still tape based, and tape always uses sequential access.
Although professional editing has moved almost exclusively to random access media like hard drives, the footage is normally stored and transported on tape, making it sequential access. Film also uses sequential access. | <urn:uuid:a2670c5e-5c8b-4f64-b68e-b957a358e17b> | CC-MAIN-2021-49 | https://www.afterdawn.com/glossary/term.cfm/random_access | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.958614 | 363 | 3.6875 | 4 |
When Youth Feel Unsafe: Brief Insights on the Cognitive and Academic Effects of Exposure to Violence
Recent acts of violence in America's schools are the cause of nationwide concern. Yet even in the absence of school shootings, many young people do not feel safe in their schools and communities or are exposed to violence directly or indirectly.
Although youth are, on average, feeling safer and being exposed to less violence, data suggest that overall reductions are not evenly distributed across the country. Many cities are experiencing rising rates of violent crime. The trends in these cities make clear that violence and fear of violence remain an everyday reality for too many young people.
The safety of America's young people is essential given what we know about the negative impact that feeling unsafe and being exposed to violence have on young people's development and ability to succeed in school.
This Point of View brief from the Center for Promise provides research insights on a particularly timely topic impacting young people in America: violence in schools. Learn more about this topic by reading the Center's Barriers to Success and Barriers to Wellness reports.
The 5 Promises
The 5 Promises represent conditions children need to achieve adult success. The collective work of the Alliance involves keeping these promises to America’s youth. This article relates to the promises highlighted below: | <urn:uuid:07803763-fd12-4ecf-9d99-a97c895e8167> | CC-MAIN-2021-49 | https://www.americaspromise.org/resource/when-youth-feel-unsafe | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.943362 | 272 | 2.953125 | 3 |
Peach Tree Creek Battlefield
Located in the heart of metropolitan Atlanta, nearly all the Peach Tree Creek battlefield has been lost to development. Georgia Historical Commission markers throughout the urban landscape point out key battle sites, and a few public parks preserve portions over which the Confederates advanced. Tanyard Creek Park near the center of the battlefield contains several memorial markers. The Atlanta History Center houses the famous Atlanta Cyclorama and the Western & Atlantic Railroad locomotive Texas. Visit the Smith Family Farm on the museum grounds, a rural plantation house built in the 1840s, with a smokehouse, slave gardens & costumed interpreters. Many notable Confederates are buried in Oakland Cemetery. | <urn:uuid:b73ce4bb-39fa-4aef-8677-cfe52352caee> | CC-MAIN-2021-49 | https://www.battlefields.org/visit/battlefields/peach-tree-creek-battlefield | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.917487 | 134 | 2.828125 | 3 |
Published: 12th April 2018
According to an international study, Yoga classes in school may help kids fight stress, anxiety
Researchers targeted third grade because it is a crucial time of transition for elementary students, when academic expectations increase
Participating in yoga and mindfulness activities at school helps relieve stress and anxiety in young children, improving their wellbeing and emotional health, a study has found.
Researchers from Tulane University in the US worked with a public school to add mindfulness and yoga to the school's existing empathy-based programming for students needing supplementary support.
Third graders who were screened for symptoms of anxiety at the beginning of the school year were randomly assigned to two groups.
A control group of 32 students received care as usual, which included counselling and other activities led by a school social worker.
The intervention group of 20 students participated in small group yoga/mindfulness activities for eight weeks using a Yoga Ed curriculum.
Students attended the small group activities at the beginning of the school day. The sessions included breathing exercises, guided relaxation and several traditional yoga poses appropriate for children.
The study, published in the journal Psychology Research and Behavior Management, evaluated each group's health-related quality of life before and after the intervention, using two widely recognised research tools.
"The intervention improved psychosocial and emotional quality of life scores for students, as compared to their peers who received standard care," said Alessandra Bazzano, associate professor at Tulane University.
"We also heard from teachers about the benefits of using yoga in the classroom, and they reported using yoga more often each week, and throughout each day in class, following the professional development component of intervention," Bazzano said.
"Our initial work found that many kids expressed anxious feelings in third grade as the classroom work becomes more developmentally complex," Bazzano said.
"Even younger children are experiencing a lot of stress and anxiety, especially around test time," she said. | <urn:uuid:0ef3cb1c-b62d-4fc4-aaff-b1b77cf70a57> | CC-MAIN-2021-49 | https://www.edexlive.com/news/2018/apr/12/according-to-an-international-study-yoga-classes-in-school-may-help-kids-fight-stress-anxiety-2461.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00151.warc.gz | en | 0.973178 | 394 | 3.03125 | 3 |
The doors to the European Space Agency (ESA) have been wide opened: everything they’ve ever released is now open access.
We knew something was up because, for the past few weeks, ESA has been uploading more and more of its archive to the open access site. Now, the trove of images and videos has been released under the Creative Commons Attribution-ShareAlike 3.0 IGO (Intergovernmental Organisation) licence. This means that you can use whatever you want, share it and adapt it as you wish for all purposes, even commercially, while crediting ESA as the author.
“This evolution in opening access to ESA’s images, information and knowledge is an important element of our goal to inform, innovate, interact and inspire in the Space 4.0 landscape,” said Jan Woerner, ESA Director General. “It logically follows the free and open data policies we have already established and accounts for the increasing interest of the general public, giving more insight to the taxpayers in the member states who fund the Agency,” he added.
But this isn’t just about images and videos, their recent releases under open access policies also include data which can be used by scientists, professionals, and even students. Among many other things, you can now freely access:
- Images and data from Earth observation missions (Envisat, Earth Explorer, European Remote Sensing missions, Copernicus; example here).
- ESA/Hubble images and videos
- The entire ESA Planetary Science Archive Data (PSA). The PSA is the central European repository for all scientific and engineering data returned by ESA’s planetary missions. You can see pretty much everything ESA has ever done: ExoMars 2016, Giotto, Huygens, Mars Express, Rosetta, SMART-1, Venus Express, and many more.
- Sounds from Space: ESA’s official SoundCloud channel hosts a multitude of sounds and so-called sonifications from Space, including the famous ‘singing comet’, a track that has been reused and remixed thousands of times by composers and music makers worldwide.
- 3D Models of a comet.
It’s a great step the ESA has taken, one which follows NASA’s similar decision. Yes, everything that NASA has is also open access — and this truly is tremendous. Now more than ever, we need access to information and data, and now more than ever NASA and ESA have more data on their hands than they can analyze. By making it available to everyone, they are not only helping researchers, students, and the media; they are also helping advance science. The trend toward open-access data is something we applaud, and it can lead to important discoveries. The trove of data is now open for everyone to access — congrats, ESA!
Let’s begin at the beginning… molecules with unpaired electrons are unstable. They want to react with other entities to form pairs. These “free radicals,” or “oxidants,” are formed in many ways, such as by UV radiation or as intermediate steps in the breakdown of more complex molecules in chemical reactions, for example when our mitochondria convert food into energy.
Thus, anti-oxidants are good. They bond with free radicals so free radicals can be flushed out of your system before they can bond with something that was supposed to be part of your natural functioning, in turn causing abnormal cell growth, which is cancer.
The components in plants that are responsible for secondary functions (such as colorful pigments or scents meant to communicate with pollinators and predators, or hormones that regulate cell metabolism) tend to contain lots of antioxidants. Tea in particular contains many types of antioxidants, some of which also trigger enzymes that influence the way DNA is transcribed into RNA, for example by:
- Stimulating protective genes.
- Inhibiting tumor cell proliferation.
- Blocking the blood vessels that feed tumors.
- Triggering other anticancer and detoxification mechanisms.
So why do we still say tea “may” help fight cancer rather than that it “does” help fight cancer? Because it is all very complicated, we don’t understand it completely, and good science operates on very high standards. For one thing, some studies don’t match up with others, which can happen because of different procedures and criteria. (Which types of teas were tested, and in what ways? What types of cancer? Etc.) For another, many studies are animal studies rather than human, and are therefore “inconclusive.”
- It is abundantly clear that tea does several things that help maintain healthy cells in general, so in this way, tea does help to fight cancer.
- Evidence is growing that there are also additional mechanisms which fight or inhibit specific cancers, though they are not yet fully understood or proven. | <urn:uuid:3496de05-bb88-469c-965c-a7b9ae204c5f> | CC-MAIN-2021-49 | http://bruetta.com/tea-help-prevent-cancer/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00471.warc.gz | en | 0.954132 | 425 | 3.0625 | 3 |
VATICAN CITY (CNS) — Although he served as pope for less than five years, Blessed John XXIII left one of the most lasting legacies in the Catholic Church’s history by convening the Second Vatican Council.
A plump, elderly, smiling Italian of peasant origins, the future pope had an illustrious career as a papal diplomat in Bulgaria, Turkey and postwar France.
He became pope amid the dismantling of colonialism, the rise of the Cold War and on the cusp of a technological transformation unlike anything the world had seen since the Industrial Revolution.
Citing the Holy Spirit as his source of inspiration, he called the Second Vatican Council to help the church confront the rapid changes and mounting challenges unfolding in the world — and, by inviting non-Catholics to the council, to work toward Christian unity.
As pope from 1958 to 1963, Blessed John launched an extensive renewal of the church when he convoked the council, which set in motion major reforms with regard to the church and its structure, the liturgy, ecumenism, social communication and Eastern churches.
After the initial session’s close in 1962, he set up a committee to direct council activities during the nine-month recess. Subsequent sessions — the final one ended in December 1965 — produced documents on the role of bishops, priestly formation, religious life, Christian education, the laity and interreligious dialogue.
He produced a number of historic encyclicals, including “Mater et Magistra” on Christian social doctrine and “Pacem in Terris,” issued in 1963 at the height of the Cold War, on the need for global peace and justice.
He established the Pontifical Commission for the Revision of the Code of Canon Law, which oversaw the updating of the general law of the church after the Second Vatican Council, culminating in publication of the new code in 1983.
Before he was elected pope, he served as a Vatican diplomat. His work in Bulgaria and Turkey put the future pope in close contact with many Christians who were not in full communion with the Catholic Church and inspired him to dedicate so much effort as pope to try to recover the unity lost over the centuries. It was Blessed John who, as pope in 1960, created the Vatican’s office for promoting Christian unity.
With his humility, gentleness and active courage, he reached out like the Good Shepherd to the marginalized and the world, visiting the imprisoned and the sick, and welcoming people from every nation and faith.
He visited many parishes in Rome, especially in the city’s growing suburbs. His contact with the people and his open display of personal warmth, sensitivity and fatherly kindness earned him the nickname, “the Good Pope.”
Blessed John brought a humble yet charismatic, personal style to papacy. He placed great importance on his modest upbringing in a village about 25 miles northeast of Milan, saying: “I come from the country, from poverty” that he said was “happy and blessed poverty — not cursed, not endured.”
Born in Sotto il Monte, Italy, in 1881, Angelo Giuseppe Roncalli was one of 13 children in a family of sharecroppers. He entered the minor seminary at the age of 11 and was sent to Rome to study at the age of 19.
He was ordained to the priesthood in 1904 and, after several years as secretary to the bishop of Bergamo, he was called to the Vatican. In 1925 he began serving as a Vatican diplomat, first posted to Bulgaria, then to Greece and Turkey and, finally, to France. He was named a cardinal and patriarch of Venice in 1953.
After more than five years as patriarch of Venice, then-Cardinal Roncalli was elected pope Oct. 28, 1958.
He died of cancer June 3, 1963.
Blessed John was beatified in 2000, by Blessed John Paul II, with whom he will be canonized April 27.
Carefully following a gluten-free diet might not protect people with celiac disease from exposure to potentially harmful amounts of gluten, new findings suggest.
“Individuals who are on a gluten free diet are consuming more gluten than we actually imagined. It’s not uncommon for them to be consuming on average a couple of hundred milligrams a day,” Dr. Jack A. Syage, CEO of ImmunogenX in Newport Beach, California, and the study’s lead author, told Reuters Health in a telephone interview.
In people with celiac disease, consuming even microscopic amounts of the gluten protein in wheat, rye or barley triggers an autoimmune response that harms the lining of the small intestine. Left untreated, celiac disease can lead to a host of long-term ill effects including anemia, osteoporosis and fertility problems.
Hidden gluten is ubiquitous in medications, food additives, seasonings, sauces, lipsticks and lip balms, fried foods and many other sources.
Dr. Syage and his team quantified gluten exposure by analyzing amounts of gluten excreted in stool and urine in people with celiac disease who were following a gluten-free diet but still experiencing moderate to severe symptoms.
As reported in the American Journal of Clinical Nutrition, they estimated that these adults were still being exposed to an average of 150 to 400 milligrams (or less than two one-hundredths of an ounce) of gluten a day.
Up to 10 mg of gluten per day is generally considered safe for people with celiac disease, according to the University of Chicago Celiac Disease Center.
The study wasn't designed to identify the sources of accidental gluten exposure.
Estimates of gluten exposure in the new study are indirect, and based on several unproven assumptions, said Dr. Carlo Catassi, head of pediatrics at the Universita Politecnica Delle Marche in Ancona, Italy, who studies celiac disease but did not participate in the new research.
“The risk of gluten contamination in the diet of treated celiacs is very well known,” Dr. Catassi said, adding that the new study’s estimate was surprisingly high. “Should these data be confirmed by direct evidence of a frequent high gluten contamination, further treatments beyond the gluten-free diet would certainly be an option.”
He added: “The data of this study suggest a ‘pessimistic’ view about the possibility to maintain a correct gluten-free diet that is not justified in my opinion, until further studies directly measuring the amount of gluten contamination will be available.”
Still, the authors conclude, the data suggest “that individuals on a gluten-free diet cannot avoid accidental gluten intrusions and these small amounts are sufficient to trigger severe symptomatic responses.”
Celiac disease patients who are still having symptoms should re-evaluate their diets under the guidance of a clinician or dietician, they suggest.
© 2021 Thomson/Reuters. All rights reserved. | <urn:uuid:9008349d-3d44-412b-a66b-8037029034d3> | CC-MAIN-2021-49 | https://cloudflarepoc.newsmax.com/health/health-news/gluten-free-diets-gluten-levels/2018/03/22/id/850137/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00471.warc.gz | en | 0.948415 | 625 | 3.09375 | 3 |
I love this quote from Maria Montessori and could not agree more.
All cultures throughout our history have one thing in common… music. It is the world’s universal language. From children to old folk, sound and music have an everlasting presence. It is never too early to introduce a child to music. For a child, musical education is crucial for development and so much more than just a form of art.
For babies, listening to music before bedtime can be soothing, relieve stress and improve sleep habits! Nursery rhymes, for example, are one of the best-proven ways to help a child with speech recognition!
The piano is a great starter instrument for toddlers (along with other percussion instruments) when introduced as part of a routine. Some of the benefits are:
- Exercises motor skills
- Enhances counting and other math concepts
- Reinforces the alphabet and reading
- Helps parents bond with their children through shared activities
- Helps to distinguish the right hand from the left hand
- Increases brain development and neural activity
- Builds patience and commitment
- Improves memory and IQ
- Builds multi-tasking skills
- Improves focus
- Improves hand-eye coordination
The list goes on and on, but I think the point is clear. Kids enjoy what we give them, and many families already have pianos and keyboards at home. If used properly, the piano can be the greatest teaching “toy” in your home. That’s why the Key’ndergarten curriculum focuses on introducing music just like a spoken language. With practice, anyone can understand what the music being played is saying.
In some circumstances, children find themselves in an environment that requires them to learn more than one language. Children may learn two languages at the same time, for example when a child learns and speaks one language at home and begins to learn a second language in school. The situation in which one learns a second language alongside one's native language is referred to as Dual Language Learning (Cook, 2016). Dual language learning needs attention, as a teacher has to understand how a child is acquiring and responding to the two languages.
Dual language learning is important as it helps children grow and develop across all domains (Cook, 2016). However, there are circumstances in which dual language learning is not easy. In this scenario, four-year-old Sheila has just moved to an English-speaking country from Bosnia, and she does not speak a word of English. It is challenging to get her to learn English because she does not socialize; instead, she sits by herself and waits for her mother to pick her up. As a teacher, to increase Sheila's participation in class, which will get her to learn English, I will implement three DLL strategies: planning activities that are not language dependent, facilitating problem-solving to support her emotionally, and cultivating a caring and respectful learning environment.
Planning activities that do not require language to communicate is essential. These include gestures, games, drawing, writing, and using props (Cohen, 2014). Some students would like to communicate, but they do not have a way to do it. Sheila might be spending time alone because she feels left out during discussions. Therefore, as a teacher, I am going to encourage her to use gestures when communicating with classmates. She can also draw an animal she is describing or write down its name in her language, and I will translate it for the other students. This will help Sheila make friends with classmates, who will in turn help her learn English. One student teaching another is the easiest way, as they understand each other personally. Therefore, using non-language-dependent activities to help Sheila make friends is an excellent strategy that will help her learn English.
The second DLL strategy that I would employ as a teacher is facilitating problem-solving to support her emotionally (Cohen, 2014). Sheila is emotionally challenged, which is why she cries and throws herself on the floor when her mother leaves her at school in the morning: she is upset about being left in a new environment with strangers. I will create problem-solving lessons in which she gets to solve problems, such as providing a solution when two friends are fighting. She will do this by using gestures, the little English she has learned, or her native language, which I will translate for the students. She can solve the problem in whatever language she feels comfortable in. Helping solve problems will help her grow emotionally and discover that she can relate to people other than her mother. This will motivate Sheila to learn English so she can help solve more problems in class in a language that everyone understands.
Working on cultivating a caring and respectful learning environment is another strategy that will help Sheila learn English. One way is to have all students learn each other's names and their meanings. Sheila can tell the class the meaning of her name in her native language, and another student will tell her what it means in English. I would also create a fun environment by giving Sheila matching games on the computer, where she matches words in her native language with their corresponding English words and scores points.
Sheila should not be referred for evaluation at this time. She is still learning, and if an evaluation of her language were done now, she would feel that she is not good enough, or that her native language is not accepted. Sheila should be allowed to reach a point where she appreciates her native language but understands the importance of learning English, as it is the primary language used in the school. At that point she can be evaluated, as she will be comfortable and fluent.
White locates Rhodesia’s independence in the era of decolonization in Africa, a time of great intellectual ferment in ideas about race, citizenship, and freedom. She shows that racists and reactionaries were just as concerned with questions of sovereignty and legitimacy as African nationalists were and took special care to design voter qualifications that could preserve their version of legal statecraft. Examining how the Rhodesian state managed its own governance and electoral politics, she casts an oblique and revealing light by which to rethink the narratives of decolonization.
Publisher: University of Chicago Press
Product dimensions: 5.90(w) x 9.10(h) x 1.00(d)
Read an Excerpt
Rhodesian Independence and African Decolonization
By Luise White
The University of Chicago Press. Copyright © 2015 The University of Chicago. All rights reserved.
"The last good white man left": Rhodesia, Rhonasia, and the Decolonization of British Africa
In November 1965, when almost all British territories in Africa had been granted their independence, Rhodesia's white minority government made a Unilateral Declaration of Independence from Britain. This was UDI, an acronym that would serve both to mark the event and to describe the period of Rhodesian independence, which lasted until 1980. Or 1979: Rhodesia, which had been Southern Rhodesia until 1964, became Zimbabwe-Rhodesia in 1979, with a no less independent government with an elected African head of state. It became majority-ruled independent Zimbabwe in 1980. These four names have been collapsed into two—Rhodesia and Zimbabwe—and have produced some discursive flourishes that have generated two histories, a before-and-after that literally makes the past a prologue, an exception to the natural order that was decolonization, an interruption that slowed down the history of what should have happened. These two histories have been routinely deployed as an example of the well-wrought story of a colony becoming an independent nation. In the story of Rhodesia becoming Zimbabwe it is one of the success of guerrilla struggle and the valiant triumph of universal rights. But it is a story that even at its best leaves out the peculiarities of local politics and difference, not to mention one or two of the country's names. The result is a history in which Rhodesia was the racist anomaly, an eager if secondhand imitation of South Africa's apartheid. Rhodesia—and Southern Rhodesia, and Zimbabwe-Rhodesia—hardly merits any analysis beyond the racism of its renegade independence. It is an example or an occurrence; it has no specific history.
The Country No One Can Name
At the level of popular and academic history and at the level of diplomacy, Rhodesia was the place that no one could even name: for years after the country took its own independence Britain referred to it as Southern Rhodesia while much of the historiography refers to "colonial Zimbabwe," although Rhodesia had never really been a formal colony in the way that Kenya or Gabon had been. On one hand, not calling Rhodesia Rhodesia was a way to show how illegitimate it really was. On the other, calling Rhodesia colonial Zimbabwe served—as did talk of the decolonization of Algeria—to change its history, to return clumsy governance and messy episodes to the normal, linear story of colony to nation.
The two names make for a definitive break. This is the literature of The Past Is Another Country, the title of a journalist's account of the events and negotiations that ended white rule; it is in a genre of crossing a threshold, from oppression to freedom. Authors and activists announced, with pride or sadness depending on the circumstance, that they would never return to Rhodesia, only to arrive in Zimbabwe at the start of the next chapter. Indeed, once Rhodesia became Zimbabwe it became a commonplace for history texts to begin with a before-and-after list of place names. Obviously a before-and-after list cannot do justice to all the names of the country, but I have dispensed with such a list altogether because of my misgivings about the whole before-and-after enterprise. Place names and how they change are important, to be sure, but all too often the then-and-now lists that show that African names have replaced those chosen by settlers (Harare for Salisbury, for example, but see Mutare for Umtali, or Kadoma for Gatooma) suggest that a new name represents a resolution, a wrong that has now been righted.
Almost as common as the list of place names is a chronology or time line in the front matter of a text. A few begin the country's history with the building of Great Zimbabwe, but most start the chronology with European contact in 1509 or with the first white settlers in 1890. Even the most African nationalist books have chronologies that begin in 1890. Chronologies may have been required by publishers who thought Central Africa too remote for many readers, but the ways these chronologies begin suggest a desire to historicize the land—even if it cannot be named—and establish a claim to territory whoever the population is and whenever it got there. Rhodesia's territory was never really contested, but who lived on it, and where they lived, was. One critical argument of this book is that the idea of place—as in "patria" and "locus"—was in flux for much of the 1960s and 1970s. Independent Rhodesia was likened to Britain at its best, or to Britain in the 1940s, or to Sparta, or to the European nations handed to Hitler by Neville Chamberlin, or to Hungary in 1956. In the rhetoric of its independence, what located Rhodesians firmly in Africa was not its African population, but its white one. Party hacks called upon the genealogy of brave pioneers who had tamed the land bare-handed. "The Rhodesian was the last good white man left," recalled P. K. van der Byl, a post–World War II immigrant who was to hold several portfolios in the Rhodesian cabinet.
But good white men or bad ones, the fact was that the white population of Southern Rhodesia (and Rhodesia) was small, never amounting to as much as 5 percent of the total population of the country. There were fewer than 34,000 Europeans in Southern Rhodesia in 1921 and the numbers gradually increased to about 85,000 in 1946. Within six years the population was almost 140,000; it peaked at 277,000 in 1961. These figures are misleading, however, as they do not show how peripatetic the white population was: of the seven hundred original pioneers who arrived in 1890, only fifteen lived in the country in 1924. Many came and went because of changing opportunities in regional industries, particularly mining, while others used South Africa, Northern Rhodesia, or Britain as a base from which to launch new careers. These were the men called "Good Time Charlies" in the press and "rainbow boys" in parliament. This pattern intensified as the population trebled: there were almost equal numbers of white immigrants and white emigrants for most of the early 1960s. It was only during the boom years of 1966–1971 that white immigration exceeded white emigration by significant amounts. After that, more whites left the country than came to live there.
This book, then, is a history of Rhodesia's independence and its place—clumsy governance and messy episodes and all—in what was everywhere else postcolonial Africa. Caroline Elkins has argued that after 1945, white settlers in Africa dug in against the colonial retreat and claimed a popular sovereignty for themselves alone, insisting that they constituted a people who had the legitimacy to trump empire and to make claims equivalent to those independent nations could make. This book argues something quite different, that white settlers and white residents and whites who were just passing through utilized a hodgepodge of institutions and laws and practices in Rhodesia to maintain what they refused to call white rule but instead relabeled as responsible government by civilized people. They did not claim to be "the people" worthy of sovereignty but instead proclaimed membership in an empire, or the West, or an anticommunism that had no national boundaries. I am writing against, for want of a better term, the story of Rhodesia becoming Zimbabwe, which is itself a version of colonies becoming nation-states: I am writing a history of how Rhodesia and its several names disrupt that narrative and show how awkward and uneven it was.
Southern Rhodesia: A Short History
Southern Rhodesia was founded as a chartered colony of the British South Africa Company in 1890. It was Cecil Rhodes's attempt to find more gold and to create a buffer against the Dutch in South Africa. Thirty years later a rebellion had been vanquished, South Africa was British, the gold mines were not wholly successful, and a chartered colony did not fit easily with the imperial world after the Treaty of Versailles. Indeed, Jan Smuts, an architect of the mandate system of the League of Nations, wanted Southern Rhodesia to join the Union of South Africa, but a white electorate of less than fifteen thousand, fearing an influx of white Afrikaaners, rejected this in 1922. In 1923, the British government expropriated the British South Africa Company, which then ceased to administer both Northern and Southern Rhodesia. The company maintained some rights in Northern Rhodesia but Southern Rhodesia was annexed to the crown as a colony but would have responsible government. This had a specific and limited meaning in 1923: Britain had the right to make laws for Southern Rhodesia but the colony could legislate its own internal affairs so long as these did not affect African land rights and political rights, such as they were. The assembly was elected, and the cabinet was chosen from ministers all of whom were appointed by the governor, including the prime minister. In a very short time the Southern Rhodesian government presented all draft legislation to Britain and amended or abandoned them if the United Kingdom objected, thus making the limits of responsible government barely visible.
This version of responsible government did not give Rhodesia dominion status, as many white politicians were to insist forty years later. Dominion status was itself very indistinct: it was an ambiguous term used to convey that some self-governing states—Canada, New Zealand, Australia, and the Union of South Africa after 1910—were to some degree subordinate to Britain. At its most clear-cut it marked a space between internal self-government and full independence, and as such it proved a useful procedural route to the independence of the Indian subcontinent. It was South Africa, and the union Southern Rhodesia rejected, that had dominion status with responsible government. Southern Rhodesia had responsible government without dominion status.
Throughout the 1920s, commercial agriculture expanded. Even as Britain regarded Southern Rhodesia's African policies as more progressive than those of South Africa, the Land Apportionment Act of 1930—the cornerstone of settler society, wrote Victor Machingaidze—evicted thousands of Africans from their farms to guarantee that land was available to new white farmers. A few years later the government of Southern Rhodesia created Native Purchase Areas, to compensate Africans not necessarily for their loss of land, but for their loss of the right to purchase land anywhere in the country. The scheme never managed to settle the fifty thousand African farmers Rhodesian officials both hoped and feared would create a propertied African middle class, but the ten thousand Purchase Area farmers who took advantage of the scheme occupied a unique space in how Rhodesians imagined African politics, as chapters 6, 9, and 11 show. This pattern, of openings for white immigrants yet to come that closed down opportunities for Africans, was to be repeated for years, especially after World War II, when the white population grew rapidly as commercial agriculture became once again profitable.
In 1951, Southern Rhodesia introduced the Native Land Husbandry Act (NLHA). Funded by the World Bank, it marked a significant shift in thinking about Africans, as Jocelyn Alexander has argued: Africans were no longer communal tribesmen, but rational actors operating within an impersonal market. Each man was a yeoman farmer. There would be fewer but more productive farms in the reserves; rural Africans should not be intermittent farmers, nor could they lay claim to land they had not worked for years. Urban Africans were to live in townships and rely exclusively on their wages: The actual implementation of the act was slow, however, and gave chiefs considerable latitude about how to protect their own land and cattle while safeguarding their rights over land redistribution. African opposition to the act was intense. The Southern Rhodesia African National Congress (SRANC) and its successor, the National Democratic Party (NDP), made land, not voting, the center of their political platforms, and their actions in towns and countryside brought about a range of repressive legislation that was to shape the history of Rhodesia. A state of emergency was declared in early 1959. Many leaders of the SRANC were detained. In prison they founded the NDP and continued to direct party affairs, as we shall see in chapter 3. The Law and Order (Maintenance) Act of 1960 strengthened not only executive power but that of security forces just as trade unionists struggled with—and sometimes against—nationalists while political parties sent youth leagues into townships to rally support, often with great violence and always with easy accusations that rivals had collaborated with the regime. By the time the NLHA was withdrawn in the early 1960s the rapidly overcrowded reserves were once more under the authority of chiefs and headmen.
Even as the government planned for a new kind of African farmer, officials in London, Salisbury, and Lusaka planned ways for new white immigrants to live in Africa. This was the Central African Federation, established in 1953. Federations within the British Empire had been promoted first in the 1880s, a way to secure British greatness in a way that made the nation global and British nationality the basis for political organization. There was one population—Anglo-Saxon—that could bring stability to a chaotic colonial world. As the idea of an imperial federation waned in Britain, the idea of merging of Southern and Northern Rhodesia in some way took shape in the 1920s. There were fantastic ideas with fantastic maps. The Central African Federation created in the early 1950s was in large part a product of pressure exerted by mining companies on the Colonial Office, which itself grappled with how the new white immigrants would live in Africa: in old colonies or in new kinds of states. In the end the Colonial Office agreed to a federation of Southern and Northern Rhodesia so long as it included Nyasaland. The widespread joke in Southern Rhodesia was that the federation was a "marriage of convenience" between a wealthy bride (copper-rich Northern Rhodesia) and a hardworking husband (the not-so-coded reference to white settlers in Southern Rhodesia). Impoverished Nyasaland was accepted "as the unavoidable mother-in-law in the matrimonial home." But many whites in Southern Rhodesia were elated: the goal of some kind of merging with Northern Rhodesia, a tobacco farmer said, was that "the copper mines could pay for the development of the whole area ... the same as the coal mines did in Britain and gold in South Africa." What was to distress many whites in Southern Rhodesia was the degree to which the question of Southern Rhodesia's status was mooted by the creation of the federation.
The structure of the federation disrupted the familiar hierarchies of colonial rule, in which the Colonial Office appointed a governor to oversee the territory and to act according to the reports of district and provincial officials. In the Central African Federation, Southern Rhodesia came under the supervision of the Commonwealth Relations Office (CRO); the federal government which oversaw the administration of Northern Rhodesia and Nyasaland was located in Salisbury while the two territories were nominally under the control of the Colonial Office. The federal government and those of the two northern territories were often in conflict; the Colonial Office usually did not support the federal government, about which officials in Salisbury complained loudly. This made Southern Rhodesia's status as a British possession even more ambiguous than it had been, and it made party politics layered. The federation introduced a tiered political system of federal and territorial assemblies and political parties that operated at both levels. It also required a determination of who, if anyone, would elect representatives to these bodies. There were federal electoral commissions and territorial franchise commissions. As chapter 2 shows, these did not promote universal adult suffrage but instead worked out convoluted imaginaries by which each territory would have something called nonracial politics that eventually would lead to parity between black and white in legislative bodies. The franchise commission in Southern Rhodesia, where voter qualifications had not changed since the 1930s, did not seek to expand or to limit African voting so much as it sought to hone it, to make sure that voter qualifications met African experiences, and to make sure Africans did not vote based on appeals to their emotions, emotions all too often triggered by talk of race. It is easy enough to read these commissions as a last gasp of white supremacy, but that would flatten the very active debates about race and politics that they engendered. There was a range of multiracial organizations, as we will see in the next chapter, and a social world, best chronicled by Philip Mason, of Salisbury dinners in which white people argued about the ideas of Burke and Mill and who should be allowed to vote.
Excerpted from Unpopular Sovereignty by Luise White. Copyright © 2015 The University of Chicago. Excerpted by permission of The University of Chicago Press.
Table of Contents
Acknowledgments
A Note on Sources
Place Names, Party Names, and Currency
1 “The last good white man left”: Rhodesia, Rhonasia, and the Decolonization of British Africa
2 “Racial representation of the worst type”: The 1957 Franchise Commission, Citizenship, and the Problem of Polygyny
3 “European opinion and African capacities”: The Life and Times of the 1961 Constitution
4 “A rebellion by a population the size of Portsmouth”: The Status of Rhodesia’s Independence, 1965–1969
5 “A James Bond would be truly at home”: Sanctions and Sanctions Busters
6 “Politics as we know the term”: Tribes, Chiefs, and the 1969 Constitution
7 “Other peoples’ sons”: Conscription, Citizenship, and Families, 1970–1980
8 “Why come now and ask us for our opinion?”: The 1972 Pearce Commission and the African National Council
9 “Your vote means peace”: The Making and the Unmaking of the Internal Settlement, 1975–1979
10 “Lancaster House was redundant”: Constitutions, Citizens, and the Frontline Presidents
11 “Adequate and acceptable”: The 1980 Election and the Idea of Decolonization
12 “People such as ourselves”: Rhodesia, Rhonasia, and the History of Zimbabwe
What People are Saying About This
“White’s Unpopular Sovereignty is a groundbreaking contribution to studies of decolonization. She places the seemingly anomalous history of Rhodesian independence within the decolonization of the rest of Africa. This is combined with a reanimation of the history of the ‘high politics’ of late colonialism by incisive accounts of the effects of various franchise commissions and experiments at constitution writing. The result is one of the most decisive challenges to linear versions of decolonization: of Rhodesia-into-Zimbabwe, to be sure, but also, more broadly, of colonies into nation-states. Written with characteristic brilliance, verve, and wit, Unpopular Sovereignty will become indispensable reading for scholars of colonialism and of the postcolonial world.”
“Set in the late-colonial context of decolonization in Africa, this masterful book demonstrates that sovereignty does not flow in a linear fashion and according to preordained coordinates; and, that its predicates and foundations—political autonomy and self-government, on the one hand, and political identity and subjectivity, on the other—abide time and space in unpredictable ways. Relating the arguments to contemporary Zimbabwe, White demonstrates once and for all that the nature of sovereign power or associated political processes and outcomes are better understood through the manners in which shifting terrains of global, regional, and local alliances shaped the interests and the terms of the quest for power for protagonists—white minorities and so-called native populations alike. This is a truly impressive intervention in the historiography (and theory) of decolonization in Zimbabwe that holds significant insights for accounts of postcolonial sovereignty everywhere. Simply wonderful and a joy to read.”
“This is a thorough, comprehensive, and well-researched book that will be the essential starting point for the reconsideration of Zimbabwe’s recent history and historiography. A sharply acute and very readable study that resets the foundations for the understanding of Rhodesia’s Unilateral Declaration of Independence in 1965, it sets the events surrounding and following UDI in the context of African decolonisation and in their international context. With fascinating accounts of the constitutional machinations and the regime of economic sanctions and its failures, it is unrivalled as a rich resource for the period based on a very wide range of sources.” | <urn:uuid:61fcf26b-9807-45fb-90a9-471dd814e8f8> | CC-MAIN-2021-49 | https://valsec.barnesandnoble.com/w/unpopular-sovereignty-luise-white/1119942554;jsessionid=5B2D7F685FDF55A87688BD7D3C532256.prodny_store01-va18?ean=9780226235059 | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00471.warc.gz | en | 0.964739 | 4,560 | 3.046875 | 3 |
1.1.4 Early Help / Early Support / Early Intervention
1. Introduction – What is Early Help?
Providing early help is more effective in promoting the welfare of children than reacting later. Early help means providing support as soon as a problem emerges, at any point in a child's life, from the foundation years through to the teenage years. Early help can also prevent further problems arising; for example, if it is provided as part of a support plan where a child has returned home to their family from care, or in families where there are emerging parental mental health issues or drug and alcohol misuse.
Effective early help relies upon local organisations and agencies working together to:
- Identify children and families who would benefit from early help;
- Undertake an assessment of the need for early help;
- Provide targeted early help services to address the assessed needs of a child and their family which focuses on activity to improve the outcomes for the child.
Local organisations and agencies should have in place effective ways to identify emerging problems and potential unmet needs of individual children and families. Local authorities should work with organisations and agencies to develop joined-up early help services based on a clear understanding of local needs. This requires all practitioners, including those in universal services and those providing services to adults with children, to understand their role in identifying emerging problems and to share information with other practitioners to support early identification and assessment.
Practitioners should, in particular, be alert to the potential need for early help for a child who:
- Is disabled and has specific additional needs;
- Has special educational needs (whether or not they have a statutory Education, Health and Care Plan);
- Is a young carer;
- Is showing signs of being drawn into anti-social or criminal behaviour, including gang involvement and association with organised crime groups;
- Is frequently missing/goes missing from care or from home;
- Is at risk of modern slavery, trafficking or exploitation;
- Is at risk of being radicalised or exploited;
- Is in a family circumstance presenting challenges for the child, such as drug and alcohol misuse, adult mental health issues and domestic abuse;
- Is misusing drugs or alcohol themselves;
- Has returned home to their family from care;
- Is a privately fostered child.
Effective assessment of the need for early help.
Children and families may need support from a wide range of local organisations and agencies. Where a child and family would benefit from co-ordinated support from more than one organisation or agency (e.g. education, health, housing, police) there should be an inter-agency assessment. These Early Help Assessments should be evidence-based, be clear about the action to be taken and services to be provided and identify what help the child and family require to prevent needs escalating to a point where intervention would be needed through a statutory assessment under the Children Act 1989.
A lead practitioner should undertake the assessment, provide help to the child and family, act as an advocate on their behalf and co-ordinate the delivery of support services. A GP, family support worker, school nurse, teacher, health visitor and/or special educational needs co-ordinator could undertake the lead practitioner role. Decisions about who should be the lead practitioner should be taken on a case-by-case basis and should be informed by the child and their family.
For an Early Help Assessment to be effective:
- It should be undertaken with the agreement of the child and their parents or carers, involving the child and family as well as all the practitioners who are working with them. It should take account of the child's wishes and feelings wherever possible, their age, family circumstances and the wider community context in which they are living;
- Practitioners should be able to discuss concerns they may have about a child and family with a social worker in the local authority. Local authority children's social care should set out the process for how this will happen.
2. Assessing Children and Families with Additional Needs
Each area has its own local process for assessing and supporting children and families with additional needs. Please see below:
- Bradford – Prevention and Early Help;
- Calderdale – Early Intervention;
- Kirklees – Early Support;
- Leeds – Early Help;
- Wakefield - Early Help.
REMEMBER - The Early Help Assessment is not for when there is concern that a child may have been harmed or may be at risk of harm. In these circumstances, a referral should be made to Children's Social Care Services (see Referrals Procedure). | <urn:uuid:e8fbba29-417d-492b-a9a6-7ec9ca46b629> | CC-MAIN-2021-49 | https://westyorkscb.proceduresonline.com/p_com_ass_fram.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00471.warc.gz | en | 0.960676 | 938 | 3.765625 | 4 |
Approximately a decade following the Syncom initiative, Hughes perceived that the major virtues of spacecraft spin stabilization would probably not be sufficient to continue Hughes' dominance of the geosynchronous ComSat marketplace over the long term. Consequently, an IR&D program was established to design and test a body stabilized ComSat "bus," variations of which were later offered in proposals for Intelsat V as well as for several government customers during the mid to late 1970's. As it turned out, through a series of innovative design and marketplace initiatives, the reign of Hughes' spin stabilized ComSats extended through the end of the 20th century, before being eclipsed by advanced body stabilized design technology.
Why Not Just Keep On Spinning?
The primary communication performance characteristics of a "high altitude" ComSat are "Effective Isotropic Radiated Power" or EIRP (antenna gain times transmitted power, directed toward the coverage area) and receiver "sensitivity" (antenna gain divided by receiver noise temperature, or G/T). The most serious deficiency of an "all-spinning" ComSat design is the absence of a stable element for the mounting of highly directional (high gain) earth-oriented transmit/receive antenna beams.[1]
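As a point of reference (this sketch is ours, not from the original article), the two figures of merit are conventionally written as:

```latex
% EIRP: transmitted RF power times antenna gain toward the coverage area
\mathrm{EIRP} = P_t \, G_t
% In the decibel form commonly used in link budgets:
\mathrm{EIRP}\,[\mathrm{dBW}] = 10\log_{10}\!\big(P_t\,[\mathrm{W}]\big) + G_t\,[\mathrm{dBi}]
% Receive figure of merit: receive antenna gain over system noise temperature
G/T\,[\mathrm{dB/K}] = G_r\,[\mathrm{dBi}] - 10\log_{10}\!\big(T_{\mathrm{sys}}\,[\mathrm{K}]\big)
```

Higher EIRP means a stronger signal delivered to the coverage area; higher G/T means a more sensitive receive chain.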
Additionally, the solar panel "prime power" available for powering communication transmitters and the power management of spacecraft "housekeeping" functions (telemetry, command, sensors, etc.) is derived from solar cells mounted on the vehicle's cylindrical spinning "drum" rotor. Relative to a flat, sun-oriented panel, the cylindrical array's "geometric" efficiency is about 32%. For a conventional spin stabilized design, the length (and, consequently, the power output) of this cylindrical panel is limited by the requirement for passive dynamic stability: the inertial properties of the spacecraft must be "disk shaped," with the spin axis moment of inertia larger than that of either transverse axis.
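The roughly 32% figure follows from simple geometry; here is a minimal derivation (our illustration, assuming a drum of radius r and height h spinning with the sun line normal to the spin axis):

```latex
% Instantaneous projected (sunlit) area of the cylinder:
A_{\mathrm{proj}} = 2\,r\,h
% Total cell area wrapped around the drum:
A_{\mathrm{cell}} = 2\pi\,r\,h
% Geometric efficiency relative to a flat, sun-pointing panel of equal cell area:
\eta_{\mathrm{geo}} = \frac{A_{\mathrm{proj}}}{A_{\mathrm{cell}}}
                    = \frac{2\,r\,h}{2\pi\,r\,h} = \frac{1}{\pi} \approx 0.318
```

Averaging the cosine illumination factor seen by each cell over one rotation gives the same 1/π, which is the source of the "about 32%" figure quoted above.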
Both these antenna gain and prime power technical issues were largely mitigated by major design breakthroughs during the 1960’s and 1970’s. However, the spinning solar array “geometric inefficiency” could not be avoided. This fundamental prime power limitation prompted Hughes’ 1972 IR&D investment in a body stabilized S/C design program.
HS 361 IR&D Program
In 1972, Hughes executives recognized that future payload requirements would drive spacecraft design and push the limits of prime power because of the drum-shaped solar array's geometric inefficiency, as well as the diameter and length constraints inherent in available launch vehicle fairing envelopes. While Hughes' spin stabilized ComSats would continue to satisfy nearly all near-term mission requirements, it was understood that body stabilized spacecraft offered power advantages through the incorporation of large deployable, flat, sun-oriented solar arrays. Several satellites developed by our traditional competitors, as well as the ATS-F and the Canadian Technology Satellite (CTS), had all demonstrated their capabilities in the early 70's. Hughes' challenge was to be ready to meet the growing future mission requirements of our commercial and government customers.
The decision to meet these future customer requirements with either a spin or body stabilized control system opened up new technical horizons. While many of Hughes' existing technologies and capabilities were directly applicable to the implementation of body stabilized spacecraft designs, hands-on experience with body stabilized attitude control systems was limited to the days of the 1960's Hughes Surveyor ("lunar soft lander") spacecraft. The fundamental body stabilized design requirements were, in many areas, very different and not uniformly supported by Hughes' prior design/manufacturing base. Establishing an internal hardware design/manufacturing capability with respect to spacecraft body stabilization was judged as crucial to positioning Hughes for future new business. The primary competition that lay ahead in 1975 was for the follow-on to Intelsat IV. As a result, a working group was set up in Dr. Leo Stoolman's Systems organization, and by the fall of 1972 the HS 361 Internal Research & Development (IR&D) project was put in place. Will Turk was named to lead the "HS 361" IR&D development program.
The key elements of the IR&D project were to:
a) Conduct tradeoffs and analyses for an Atlas Centaur-class spacecraft design to gain insight into various nuances of the design, including the unique transition from a spin stabilized transfer orbit into a body stabilized geosynchronous orbit deployment. Thor-Delta class missions were also considered during the later stages of the project.
b) Investigate alternative bus configurations to maximize payload performance while incorporating the most efficient power, thermal, and propulsion systems (including an option for the incorporation of ion propulsion); evaluate deployable solar panel arrays and configurations, including the Flexible Roll Up Solar Array (FRUSA), which was in development at Hughes; and test its deployment dynamics for this mission.
c) Demonstrate substantive proof of concept of the attitude control system through a full-up demonstration of the control system's acquisition and pointing control, and build/test a full-scale engineering model of the primary body stabilized bus.
To these ends, the project achieved all of these goals: by late 1974 an HS 361 body stabilized spacecraft design and test program had been completed, incorporating an Intelsat V-class payload based on early, postulated Intelsat communication design/performance requirements.
The 3800 pound HS 361 design incorporated a high speed momentum wheel, gimbaled about two axes, to control/stabilize spacecraft attitude to within 0.2 degrees in pitch and roll and 0.5 degrees in yaw. Earth and sun sensors provided attitude sensing, along with inertial gyroscopes. Magnetic torque rods were employed to unload the wheel momentum accumulated from firing the propulsion jets and from external (primarily solar) torques. The design incorporated a solid propellant apogee kick motor (AKM) to place the spacecraft into GEO following separation from the Centaur in a geosynchronous transfer orbit. Hydrazine thrusters were used to limit nutation divergence of the spin stabilized spacecraft during the transfer orbit and AKM burn phases, and to implement GEO N/S-E/W station-keeping. Prime power was derived from a sun-oriented Flexible Roll Up Solar Array (FRUSA), which was selected for minimal solar panel weight. The FRUSA was partially deployed to provide power during the transfer orbit and subsequently stowed prior to the spacecraft's erection/deployment in GEO. A maneuver to orient the spacecraft was performed and the momentum wheel was brought up to speed to stabilize/control the HS 361 in GEO. At this point the FRUSA solar array was fully deployed to provide up to 1 KW of power, followed by deployment of the communication antennas. Ni-Cd batteries provided power during solar eclipses. The thermal control system incorporated mirrored thermal surfaces on the North and South faces of the spacecraft to reject internally generated heat loads (primarily from the communication payload's high power transmitters).
Multiple models were built for engineering and testing purposes, while several others were created to demonstrate the key features of our design. A 1/3-scale model of the HS 361 was developed to show the design features and to demonstrate the various spacecraft states. The extension of the solar array during the transfer orbit, as well as the on-orbit fully deployed solar panel and antennas, could be shown with this single model, which was used for both internal and customer presentations.
The engineering model depicted the proposed structural elements containing the AKM thrust tube, the propulsion system, the thermal control mirrors, and the fully integrated communications payload.
While there are no pictures of the Attitude Control Lab, which was built specifically to demonstrate our ability to design and build the HS 361 control system, this critical technology was incorporated in our design, and the real-time demonstration of spacecraft attitude control was a key achievement of the IR&D project.
Late 1970’s Proposals
The HS 361 project enabled Hughes to position itself for several future commercial and government bids during the mid to late 1970's. The first opportunity, the Intelsat V RFP, came along in 1975. The proposed Intelsat V body stabilized design was largely derived from the HS 361 development project. However, the Intelsat V design requirements led Proposal Manager Warren Nichols and Steve Pilcher (leading the technical design) to select several design and component alternates. The bus shape was modified and the tanks were positioned for improved inertial properties. A solid, hinge-deployable, hard-backed solar panel was selected to meet higher power requirements.
In fact, Hughes offered two spacecraft designs in response to the Intelsat V RFP: a spin stabilized configuration based on the dual spin Intelsat IV A, as well as a body stabilized design. Although Hughes was not awarded the Intelsat V contract (nor any of the subsequent 1970's body stabilized proposal opportunities), Intelsat confirmed that both of the Hughes S/C offerings were in full compliance with Intelsat V's technical requirements.
Another Decade of “Spinners”
The incorporation of "despun," earth-oriented antennas and the "Gyrostat" dual spin configuration dramatically improved the communication performance of the Hughes family of spin stabilized spacecraft designs while retaining the major advantages of spin stabilization.[2] These critical design initiatives enabled spin stabilized designs to maintain a competitive marketplace advantage throughout the 1970's.
With the advent of NASA's Space Transportation System (STS), or Space Shuttle, initiative (1972), another opportunity for a major improvement in the communication performance and cost-effectiveness of spin stabilized ComSats became available. To attract satellite users and manufacturers to adopt the Shuttle as their launcher of choice, in the mid 1970's NASA offered a very attractive price for a ride to low earth orbit (LEO). This price was a small fraction of the cost of launching on an expendable launch vehicle, but left the GEO S/C customer responsible for orbital transfer of their STS payload from LEO to GEO. As it turned out, implementation of the required orbital transfer maneuvers was a great design match for the Hughes spin stabilized "integral propulsion" designs. Additionally, the Shuttle's payload bay envelope was much wider (15 feet in diameter) and longer (60 feet) than any available expendable launch vehicle's.
Within a few months of NASA's Shuttle launch pricing "formula" announcement, a spacecraft design effort led by Dr. Harold Rosen produced a spin stabilized "wide body" design optimized for, and launchable only on, the Shuttle. This spacecraft design was dubbed "Syncom IV" and incorporated the spin stabilized integral propulsion (a large solid rocket motor, or SRM, augmented by a liquid propulsion system) capable of the requisite orbital transfer from LEO to GEO. Since the Shuttle had not yet launched, the Syncom IV design was followed by the HS 376 spin stabilized configuration, which was compatible with launch on either the Shuttle or the Delta expendable L/V. These S/C designs for Shuttle launch came to fruition years ahead of Hughes' competitors' and were unique in their ability to effect orbital transfer from LEO to GEO without the requirement for a separate upper stage. The Syncom IV "wide body" design concept was incorporated in the design of the "Leasat" ComSat for communication service leases to the US Navy. This Shuttle optimized "wide body" configuration was also purchased by the USAF and dubbed the "Multi Mission Bus" (MMB), which supported classified government communication payloads.[3]
During the first half of the 1980's, four Shuttle optimized wide body "Leasat" ComSats supporting US Navy communications, as well as numerous HS 376 domestic ComSats, were launched into GEO by the Shuttle. Hughes enjoyed a major cost-effective competitive marketplace advantage employing Shuttle launches until the disastrous loss of the Shuttle Challenger and its crew in January of 1986. This watershed event prompted the government to preclude the additional STS missions which would have been necessary to support the growing demand for the launch of unmanned spacecraft.[4] So, unfortunately, the Shuttle "party was over"!
Within weeks of the Challenger disaster, Hughes management realized that Hughes' family of spin stabilized spacecraft was at a significant cost/performance disadvantage with launches restricted to expendable L/V's. Consequently, the Space and Communications Group (S&CG) embarked on an ambitious design program to develop a superior, state-of-the-art body stabilized spacecraft, which became the HS 601 series.
This expenditure (in excess of $100 M) of internal funds to execute the major HS 601 design effort was approved by Hughes' new owner, General Motors, and the effort was led by Ron Symmes with close design support from Dr. Rosen and Steve Pilcher. The HS 601 body stabilized spacecraft design was derived in part from S&CG's 1970's IR&D and proposal efforts, but was primarily aimed at "leap-frogging" the competition by incorporating the very latest in proven, high performance S/C "bus" space technology, such as the NiH batteries and bi-propellant propulsion adopted from the USAF's MMB program. Most notably, since most available expendable launchers offered a lower cost ride into LEO (vs. GEO transfer orbit), the HS 601 design incorporated the Shuttle launched design legacy, as an option, for the transfer from LEO to GEO employing spin stabilized integral propulsion.[5] The first HS 601, Australia's Optus B1, was launched on the Chinese Long March 2E in August of 1992 from Xichang, China.
With the introduction of the very successful HS 601 and HS 702 body stabilized spacecraft "product lines," the era of Hughes' ground-breaking spin stabilized ComSats effectively ended with the expiration of the 20th century.
1. See “Slicing the Bologna” subsection “Focusing” for the dramatic communication performance enhancements available employing “despun”, earth oriented spacecraft antennas.
2. See “Slicing the Bologna” subsection “Syncom” for the primary attributes of spacecraft spin stabilization.
3. The USAF's "Multi-Mission Bus" introduced NiH battery technology to Hughes' spacecraft, implementing major improvements in battery efficiency, capacity and life. Also incorporated for the first time was mono-methyl hydrazine and nitrogen tetroxide (MMH/N2O4) higher-Isp liquid bi-propellant propulsion.
4. The government granted two waivers to this prohibition based on the unavailability of expendable launch vehicles capable of accommodating Hughes' "wide body" S/C configuration. STS launch of the fifth Leasat took place on 1/9/1990, preceded by the classified launch of the USAF's MMB in the late 1980's.
5. A more complete description of S&CG’s history with respect to their body stabilized design initiatives is presented in “Slicing the Bologna” under subsection “HS 601/702”. | <urn:uuid:6a41d2ab-6902-4606-9644-ff6ab4feddb1> | CC-MAIN-2021-49 | https://www.hughesscgheritage.com/body-stabilized-comsat-developments-dick-cr-johnson-and-will-turk/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00471.warc.gz | en | 0.940218 | 3,182 | 2.96875 | 3 |
WIPO Demonstrates 3D Printing: Making The Impossible Possible
25/04/2013 by Catherine Saez, Intellectual Property Watch
Experts in the field of three dimensional (3D) printing, invited by the World Intellectual Property Organization, today tried to demystify this technology, which has been much talked about but is still not very well understood. Seen by some as a futuristic technology, 3D printing can achieve amazing results but also has technical limits and is not yet expected to lead to a manufacturing revolution.
The hallway of the new WIPO building hosted an uncommon display of surprising objects on 25 April: a foldable seat made of plastic that, once folded, vaguely resembles an umbrella; a pair of improbable antique-lace high-heeled plastic shoes; and a collection of equally strange items, all of which were made with 3D printing technology. A "3D printer" was also busy manufacturing a set of plastic objects.
[Photo: Human white matter fibers, first accurate model of the wiring of the human brain. Credit: Catherine Saez, IP-Watch]
But 3D printing is not as frivolous as it seems, as many applications can be found in areas such as biology and medical treatments, and the serious-minded Ecole Polytechnique Fédérale de Lausanne (EPFL) will launch 3D print operations in July 2013 to help researchers with prototypes and models for use in the school's 80 research laboratories.
WIPO Director General Francis Gurry, chairing the panel discussion organised by WIPO, said 3D printing has potentially enormous implications for manufacturing capacities throughout the world. One of the reasons for the patent system is that technologies are disclosed, Gurry said. In the case of 3D printing, the first patent filing was in 1971 and the first patent was granted in 1977, so the technology is now in the public domain, he said.
For Carla van Steenbergen, legal counsel at Materialise, a company specialised in 3D printing, the correct terminology to describe this technology is "additive manufacturing," as it is only loosely related to printing. It really is not as simple as pressing the button of a printer, all panellists said. It gives the possibility to manufacture a piece of hardware from digital information, from a computer file containing a design, van Steenbergen explained. The object is then built layer by layer.
Olivier Olmo, director of operations at EPFL, said several different technologies are available on the market and some are easily found on the internet, where people can buy or build their machine. But there is a core difference between professional machines and machines that can be accessed or built by the public in terms of quality and possibilities.
[Photo: 3D printing in action. Credit: Catherine Saez, IP-Watch]
The main difference between professional machines and the ones accessible by the public is the complexity of the pre-processing of the data, which can prove very complicated and not easy to use, he said. The possibilities offered by machines found on the internet are limited to small objects, such as glasses or handles. At EPFL, he said, this technology offers new opportunities to all the laboratories for research purposes and the design of prototypes, although there is a limit to the technology.
Gurry asked what brought the "media hype" around this technology. Van Steenbergen said additive manufacturing applies to many areas, such as the medical field, like custom implants for hip replacement surgeries. Surgeons will be able to manufacture a custom-made implant from the magnetic resonance imaging of their patients, taking their age into account, van Steenbergen said. "It makes the impossible become possible," she said. Some unique and complex needs can only be addressed through additive manufacturing, she added.
Simon Jones, partner in DLA Piper's Intellectual Property & Technology Group, concurred and said the technology has many applications, such as human bones, clothing, pharmaceutical products, and pieces of furniture. One of the benefits of the technology is that products can be manufactured anywhere, without the need to move them halfway around the world, he said. It also allows small-scale production, he said, with societal and ecological benefits, he added.
[Photo: First 3D-printed "Couture" shoes by Naim Josefi, made in polyamide by Materialise. Credit: Catherine Saez, IP-Watch]
Several speakers said problems could occur concerning the quality control of the products, and as the technology becomes more widespread there will be a need for regulatory standards and measures in all areas, in particular the medical sector. Intellectual property might also become an issue as the technology spreads, and illegal copying might be hard to avoid, Jones said. Erik Ziegler, one of the co-founders of Prevue+Medical, a start-up specialised in 3D printing in medical applications, said that currently the US Food and Drug Administration does not have any regulation on products made by 3D printing.
Additive manufacturing has huge possibilities and is game-changing, van Steenbergen said, as "we are doing things that we could not have done 20 or 30 years ago." But even if 3D printing can appear to be a miracle technology, van Steenbergen said, classical manufacturing appears to be here to stay, as 3D printing at this stage cannot represent an alternative. Some limitations stem from the fact that some materials cannot be used with this technology, said Olmo. Some 10 or 20 years down the road, 3D printing might be a revolution in terms of technology available to everybody. Then IP protection and regulation will be necessary, he said.
According to van Steenbergen, Materialise is "flooded" by demands from people wanting to apply 3D printing to their products, but, she said, the applications and use of the technology have to be for the right purposes.
What is a research bibliography?
A bibliography is a list of works on a subject or by an author that were used or consulted in writing a research paper, book, or article. It can also be referred to as a list of works cited, and it is usually found at the end of a book, article, or research paper. Each entry typically begins with the author's name and the title of the publication.
What are the parts of bibliography?
In general, a bibliography should include:
- the authors' names
- the titles of the works
- the names and locations of the companies that published your copies of the sources
- the dates your copies were published
- the page numbers of your sources (if they are part of multi-source volumes)
How many parts are included in an annotation of a bibliography?
What are the two types of bibliography?
Types of bibliographies: Your bibliography should include every work that you cite in your text, as well as works that were important to your thinking, even if you did not mention them in your text. Common types include the selected bibliography, the single-author bibliography, and the annotated bibliography.
What are the three different formats for writing a bibliography?
The three most common types of citation styles used in academic writing are MLA, APA, and Chicago. However, these are not the only referencing styles available.
What does bibliography mean?
noun, plural bib·li·og·ra·phies. a complete or selective list of works compiled upon some common principle, as authorship, subject, place of publication, or printer. a list of source materials that are used or consulted in the preparation of a work or that are referred to in the text. | <urn:uuid:d8cf1bdb-a467-4ee4-a114-38c5a1536b0c> | CC-MAIN-2021-49 | https://www.leonieclaire.com/the-best-writing-tips/what-is-a-research-bibliography/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00471.warc.gz | en | 0.952744 | 347 | 3.40625 | 3 |
HM15 Presenter: Tammie Quest, MD
Summation:
Heroics: a set of medical actions that attempt to prolong life with a low likelihood of success.
Palliative care: an approach to care provided to patients and families suffering from serious and/or life-limiting illness, focused on the physical, spiritual, psychological and social aspects of distress.
Hospice care: intense palliative care provided when the patient has a terminal illness with a prognosis of 6 months or less if the disease runs its usual course.
We underutilize palliative and hospice care in the US: fewer than 50% of all persons receive hospice care at the end of life; of those who receive hospice care, more than half receive it for less than 20 days; and 1 in 5 patients die in an ICU. Palliative care can and should co-exist with life-prolonging care following the diagnosis of serious illness.
Common therapies/interventions to be contemplated and discussed with patients at end of life: CPR, mechanical ventilation, central venous/arterial access, renal replacement therapy, surgical procedures, valve therapies, ventricular assist devices, continuous infusions, IV fluids, supplemental oxygen, artificial nutrition, antimicrobials, blood products, cancer-directed therapy, antithrombotics, anticoagulation.
Practical Elements of Palliative Care: pain and symptom management, advance care planning, communication/goals of care, truth-telling, social support, spiritual support, psychological support, risk/burden assessment of treatments.
Key Points/HM Takeaways:
1-Palliative Care Bedside Talking Points-
- Cardiac arrest is the moment of death; very few people survive an attempt at reversing death
- If you are one of the few who survive to discharge, you may do well, but few survive to discharge
- Antibiotics DO improve survival, antibiotics DO NOT improve comfort
- No evidence to show that dying from pneumonia, or other infection, is painful
- Allowing natural death includes permitting the body to shut itself down through natural mechanisms, including infection
- Dialysis may extend life, but there will be progressive functional decline
2-Goals of Care define what therapies are indicated. Balance prolongation of life with illness experience.
Julianna Lindsey is a hospitalist and physician leader based in the Dallas-Fort Worth Metroplex. Her focus is patient safety/quality and physician leadership. She is a member of TeamHospitalist. | <urn:uuid:37fe3337-0d7d-4493-91e6-e50f6f87da3a> | CC-MAIN-2021-49 | https://community.the-hospitalist.org/hospitalist/article/122456/hospice-palliative-medicine/palliative-care-and-last-minute-heroics | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363229.84/warc/CC-MAIN-20211206012231-20211206042231-00151.warc.gz | en | 0.915172 | 526 | 2.65625 | 3 |
This village consists of scattered homesteads and about 112 households. Their current water sources are swamps and ponds. Typhoid, diarrhea, and cholera are common. The cost of treatment for typhoid is about $30 per person, or 15 days' wages. The cost of lost work and lost days at school is hard to quantify, but it is significant. Girls typically end their education early, and their economic well-being is affected for the rest of their lives.
The people of Kyamayanja Village had no borehole available to them. They drew water from a nearby river and several ponds. The nearest borehole was in a village 5 kilometers away, but few people would travel to get that water not only because of distance but also because the people in that village would charge them a great deal of money to use the well. While the government had been promising a borehole for the last 10 years, this borehole from Ingomar Living Waters was the first in Kyamayanja. | <urn:uuid:5581da65-08a1-41c0-8a25-2ff7a4ffc8ee> | CC-MAIN-2021-49 | https://ingomarlivingwaters.org/projects/projects2021/2021-024/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363229.84/warc/CC-MAIN-20211206012231-20211206042231-00151.warc.gz | en | 0.986415 | 205 | 2.984375 | 3 |
Science and Research Behind Class IV Laser Therapy
Studies & Case Reports of Class IV Laser Therapy Treatment
If you have had success with laser therapy, please share your case study with us.
Comparisons of Light Therapy
Lasers are classified into four classes and two subclasses based on their wavelength and maximum output power. The classifications categorize lasers according to their ability to cause damage in exposed people, from class I (no hazard during normal use) to class IV (severe hazard for eyes and skin). The classes are I, II, IIa, IIIa, IIIb, and IV.
Class III vs Class IV Lasers
The main differences between class III and class IV lasers are the maximum power output and the depth of penetration. Class III lasers have insufficient power and penetration capabilities to provide the long-lasting healing and pain-relieving treatments of class IV therapeutic lasers.
LED Light Vs Laser Light
LED, or light-emitting diode, light is different from laser light in its coherency, that is, how light wavelengths move in relation to each other. LED light consists of incoherent waveforms, so the light from an LED device is released in many directions. Laser light, however, consists of coherent waveforms, meaning that all the light waves move parallel to each other, which creates a strong, tight beam of light.
In certain circumstances you can use a special right that means you can refuse to hand over documents to the court or answer certain questions even if those documents or questions are relevant to the case. This is called the “doctrine of privilege”.
There are two main types of privilege:
- Private privilege
- Public interest privilege
You have a "privilege against self-incrimination". This means that you can refuse to answer questions or hand over documents that may implicate you in criminal proceedings.
You may have a "legal professional privilege". This means that a legal advisor and their client cannot be forced to reveal communications between them.
In addition, if any communication is made when legal action is being considered, then that communication is privileged even if it is not a communication between lawyer and client.
Priests are allowed to refuse to answer questions relating to what was said in the confessional. This is known as the "sacerdotal privilege". Similarly, communications with a counsellor may also be privileged.
Public interest privilege
Public interest privilege allows evidence to be withheld where disclosing it would be contrary to the public interest, for example, where it could endanger State security. It is for the courts to weigh the public interest in keeping the information confidential against the interests of justice in having it disclosed.

You should get legal advice for more detailed information on this.
Once the heat of the summer has left and the cool crispness of early fall is just around the corner, it is time to consider resurrecting that dry and damaged lawn. Reseeding a lawn in the fall allows the grass to set roots over the winter, sprout vivaciously in the spring, and prepare itself for the unavoidable sweltering heat of the summer to come.
Step 1: Rake the yard to remove as much dead grass and thatch as possible. Consider renting a gas powered dethatcher to really pull up the dead grass and expose fresh soil for the soon to be sown seed.
Step 2: Loosen the soil. If the ground has not been aerated for several years, rent a core aerator and aerate the lawn. Otherwise, a good strong rake and a lot of hard work should do the job.
Step 3: Choose the right grass seed for the project. Fescues do well in areas with hot summers and cold winters. Reduce watering costs by choosing a mix that is Turfgrass Water Conservation Alliance Approved. TWCA approved seed requires 40% less water than traditional grass.
Step 4: Prepare the soil by spreading a low nitrogen fertilizer formulated specifically for starting grass or by spreading a thin layer of cotton burr or other mild compost over the entire yard.
Step 5: Sow the seed. Use a broadcast spreader or a verticutter/overseeder to spread seed onto the lawn at the appropriate rate. Fill the seed hopper with a small amount of seed and watch as it is dispersed to gauge whether the seed is being dropped at the correct rate. Adjust the hopper as necessary and continue on.
Step 6: Fescue seed needs to have soil on three sides of it to germinate. For best results, use a verticutter/overseeder to “plant” the seed by running the machine over the yard after the seed has been sown. Otherwise, hand rake the area after seeding to ensure the seed is pushed down into the soil.
Step 7: Once the seed is planted, reduce foot traffic on the lawn and lightly water the seeded area twice daily. Keep the top half inch of soil moist for the next 3 – 4 weeks. Be careful not to over water. As long as nighttime temperatures remain above 50 degrees Fahrenheit grass should begin to germinate within 2 – 3 weeks. | <urn:uuid:787f61aa-6d63-45f3-843f-d104f40ed26f> | CC-MAIN-2021-49 | https://www.cottinshardware.com/fall-seeding/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363229.84/warc/CC-MAIN-20211206012231-20211206042231-00151.warc.gz | en | 0.930583 | 495 | 2.8125 | 3 |
Our sense of hearing helps us to socialize, learn new things, and communicate. Imagine if you suddenly couldn't hear. It can be debilitating, and you may feel left out. That is how some people experience their daily lives with hearing loss.
Hearing loss is a very common condition that affects 1 in 6 Australians and millions of others across the globe. Its effects can range from mild to severe and may be either temporary or permanent.
Some common signs of hearing loss include turning up the volume when listening to the TV, asking others to repeat their speech during a conversation, and experiencing ringing or buzzing in your ears.
Not all hearing loss is the same and is typically categorized into 3 main types; conductive, sensorineural and mixed hearing loss. Read on to learn more about the different types of hearing loss and how we can help you.
- Different Types of Hearing Loss:
- Sensorineural Hearing Loss
- Conductive Hearing Loss
- Mixed Hearing Loss
- Final Thoughts:
Different Types of Hearing Loss:
There are 3 main categories of hearing loss, known as conductive, sensorineural and mixed. Conductive hearing loss is usually temporary and is due to a physical obstruction restricting sound waves from travelling sufficiently through the outer or middle ear. Sensorineural hearing loss is permanent and results from damage occurring in the inner ear and/or auditory nerve. Mixed hearing loss, as the name suggests, is a mixture of both conductive and sensorineural. Your best option for treating any type of hearing loss is to seek treatment as early as possible and to have regular hearing assessments to monitor your auditory health.
Sensorineural Hearing Loss
Sensorineural hearing loss is the most common type of hearing loss. This form of auditory impairment is permanent. It impacts the pathways from your inner ear to your brain. This occurs when the inner ear structures are damaged, such as the auditory nerves or the hair cells (known as stereocilia) which are located in the cochlea. The function of the inner ear is to convert the sound waves into electrical signals, which are then transferred to the brain for processing.
Symptoms of Sensorineural Hearing Loss
- Difficulty understanding certain voices, such as children’s and/or female voices.
- Trouble hearing speech or sounds when background noise is present.
- People sound like they are constantly mumbling when talking to you.
- Unable to hear certain sounds, such as clock ticking or alarms.
- The need to turn the TV or radio up louder than other people.
- You have difficulty understanding speech clearly in groups of people or meetings.
- Buzzing or ringing sounds in your ears (commonly known as tinnitus).
Common Causes of Sensorineural Hearing Loss
Sensorineural hearing loss can result from a number of different causes, but the two most common are age-related hearing loss (known as presbycusis), which occurs through the natural aging process, and noise-induced hearing loss, typically caused by prolonged exposure to loud noise through work or recreational activities.
Although sensorineural hearing loss typically occurs in the later stages of life there are also many people who are affected from an early age and even from birth. This type of hearing loss is typically due to genetic conditions or as a result of infections passed from the mother to the foetus inside the womb (Rubella, herpes, toxoplasmosis).
The most common causes are:
- Aging (Presbycusis)
- Long-term exposure to loud noises
- Head trauma or injury that damages delicate structures of the inner ear
Less common causes are:
- Meniere’s disease
- Tumours (acoustic neuroma)
- Infections or viruses
- Side effects of medications
Treatment Options for Sensorineural Hearing Loss
Unfortunately, there is no current medical or surgical procedure that can repair the hair-like sensory cells that become damaged in the inner ear. But there are a number of devices available that can help improve hearing depending on the severity and type of hearing loss:
- Hearing aid(s)
- Cochlear implants
- Bone-Anchored hearing aid (BAHA)
- Hybrid cochlear implants
- Assistive Listening Devices
Conductive Hearing Loss
Conductive hearing loss occurs in the outer and/or middle ear. An obstruction or damage in this area restricts the sound waves from travelling to the inner ear. Soft sounds can be hard to hear, while loud sounds are muffled. This type of hearing loss is typically temporary, as when the obstruction causing the restriction of sound is treated, its function will resume to normal.
A build-up of earwax or fluid in the middle ear is a common cause of conductive hearing loss. This type is usually temporary and easily treated, although if an infection is left untreated, it can sometimes become permanent if damage has occurred to the eardrum.
Symptoms of Conductive Hearing Loss
- Experiencing muffled hearing (often caused by fluid in the ears).
- Hearing your own voice differently (echo, drummy)
- A foul odour coming from the ear canal.
- Tenderness or ear pain.
- Ear drainage
- Decreased ability to hear.
- Sudden loss of hearing.
- Fullness or blocked sensation in your ears.
Common Causes of Conductive Hearing Loss
This type of auditory impairment can be caused by the following:
- Excessive build-up of earwax
- Perforated eardrums
- Otitis externa (infection of the external ear)
- Otitis media (infection in the middle ear)
- Otosclerosis (abnormal bone growth in the ear)
- A foreign object stuck in the ear canal.
- A benign tumour (Cholesteatoma)
Treatment Options for Conductive Hearing Loss
Medical treatment usually restores conductive hearing loss, but it may become permanent if medical treatment isn't sought and lasting damage occurs.
- Ototopical drops (used to treat infections in the outer ear)
- Antibiotic tablets (used to treat middle ear infections)
- Cerumen Drops (used to soften and remove excessive earwax)
- Hearing aid (if the conductive loss is irreparable)
- Bone-anchored implant (if the conductive loss is irreparable and the use of a traditional hearing aid is not suitable)
Learn More: Hearing Loss: Causes, Diagnosis and Treatment
Mixed Hearing Loss
Mixed hearing loss occurs when conductive and sensorineural hearing loss are both present. The sensorineural impairment is typically permanent, while the conductive component is typically temporary. For example, mixed hearing loss may occur in people with presbycusis who also have an ear infection.
Symptoms of Mixed Hearing Loss
The symptoms of mixed hearing loss can be a combination of sensorineural and conductive symptoms. It can reduce hearing in either one ear (unilateral hearing loss) or both (bilateral hearing loss) and is typically identifiable by an increased difficulty hearing speech or sound in the affected ear(s).
Common Causes of Mixed Hearing Loss
The causes of mixed hearing loss are a mixture of the conductive and sensorineural causes previously discussed.
- Head trauma (Sensorineural or Conductive)
- Age-related hearing loss (Sensorineural)
- Earwax impaction (Conductive)
- Gene factors (Sensorineural or Conductive)
- Certain medications (Sensorineural or Conductive)
- Infection in the outer or middle ear (Conductive)
- Frequent exposure to loud noises (Sensorineural)
Treatment Options for Mixed Hearing Loss
People experiencing mixed hearing loss need to have the cause of their condition assessed by a hearing care professional to determine the right course of action. The first step is to address the conductive component of the loss with either surgical procedures or other medical treatments, depending on the cause. Once the conductive component is addressed, we can look at the sensorineural aspect and determine the best course of rehabilitation for the individual, whether through hearing aids, cochlear implants, bone-anchored hearing aids, or assistive listening devices.
Final Thoughts:
As we have discussed, there are 3 main types of hearing loss, which can occur from a range of different causes and conditions. In order to identify and seek adequate treatment for your specific type of loss, a hearing assessment by an experienced hearing care professional is required.
It is important to be advised that if you notice a sudden hearing loss, your best option is to seek treatment immediately.
Many patients who undergo a hearing test are surprised to find that they have some level of hearing loss. Hearing loss is not always noticeable and generally occurs gradually over a period of time. Some frequencies can remain within the normal range while others significantly decline. Annual hearing checks are recommended in order to monitor any decline in hearing year on year and should be incorporated into your annual health checklist.
Independent Hearing specializes in hearing loss prevention, detection and rehabilitation. We offer FREE consultations and are fully accredited with the government’s office of hearing service program to provide subsidized services to pension and DVA cardholders. Contact us today on 08 8004 0077! | <urn:uuid:a703fcb3-0f10-47c4-84a4-487fd14e0322> | CC-MAIN-2021-49 | https://www.ihearing.com.au/different-types-of-hearing-loss/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363229.84/warc/CC-MAIN-20211206012231-20211206042231-00151.warc.gz | en | 0.934748 | 1,967 | 3.65625 | 4 |
While technology continues to march us forward, good ol’ steam remains critical to production in many industries including chemical processing, energy generation, food & beverage, building heat, and pharmaceuticals to name just a few. The process for making steam may not have changed much over the years, but like almost all other components of production the cost to produce it keeps going up. When costs are going up, the focus on efficiency is greater, creating a greater emphasis on accurate flow data.
There are several technologies available for measuring steam flows, but in our experience we mostly see turbine meters, vortex meters and differential pressure (DP) across an orifice plate in the field. DP is the legacy technology for steam flows and is far and away the most common in older installations. All three technologies can measure both flow and temperature to calculate mass flow. With all the pressure variations that can occur in steam flows, simple volumetric measurement is not sufficient for accurate steam flow measurement.
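To make the pressure dependence concrete, below is a minimal sketch of the mass-flow calculation a flow computer performs. It is illustrative only: the density table holds a few rounded saturated-steam values, and the function names are our own, not any vendor's API.

```python
# Minimal sketch: converting a volumetric steam-flow reading to mass flow.
# Densities are approximate saturated-steam values (kg/m^3) at a few absolute
# pressures (bar); a real flow computer uses full steam tables plus live
# pressure/temperature inputs. All names here are illustrative.

SATURATED_DENSITY = {  # pressure (bar absolute) -> density (kg/m^3)
    1.0: 0.59,
    2.0: 1.13,
    5.0: 2.67,
    10.0: 5.15,
}

def density_at(pressure_bar: float) -> float:
    """Linearly interpolate saturated-steam density from the small table."""
    points = sorted(SATURATED_DENSITY.items())
    if pressure_bar <= points[0][0]:
        return points[0][1]
    if pressure_bar >= points[-1][0]:
        return points[-1][1]
    for (p_lo, d_lo), (p_hi, d_hi) in zip(points, points[1:]):
        if p_lo <= pressure_bar <= p_hi:
            frac = (pressure_bar - p_lo) / (p_hi - p_lo)
            return d_lo + frac * (d_hi - d_lo)

def mass_flow_kg_h(volumetric_m3_h: float, pressure_bar: float) -> float:
    """Mass flow = density * volumetric flow."""
    return density_at(pressure_bar) * volumetric_m3_h

# The same volumetric reading implies very different mass flows as line
# pressure changes -- which is why volumetric measurement alone is not enough.
for p in (2.0, 5.0, 10.0):
    print(f"{p:4.1f} bar: 100 m3/h of steam = {mass_flow_kg_h(100, p):6.1f} kg/h")
```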
While all three technologies have proven effective over time, vortex technology has become the ideal choice for measuring steam flows. Here is a simple overview of each technology’s pluses and minuses:
From an up-front cost perspective, turbine meters are the least expensive option, with DP being the most expensive (once you add a temperature sensor to calculate mass flow, plus installation costs), while vortex falls roughly in the middle. However, when you start including all the maintenance costs associated with DP and turbine technologies, the total cost of ownership for vortex holds relatively steady while DP and turbine costs rise significantly over time. As dirt and scale build up in steam lines, turbines and DP impulse lines begin to clog while orifice plates begin to degrade, reducing both accuracy and life expectancy. Steam line blowdowns take their toll on turbines and require the isolation of the DP transmitter and cleaning of impulse lines. With no moving parts or impulse lines, vortex meters require minimal maintenance.
From a turndown perspective (turndown is the ratio of the largest to the smallest flow a meter can measure accurately, so a 30:1 meter sized for a maximum of 300 kg/h can resolve flows down to about 10 kg/h), turbine meters typically have the widest turndown range, with mass flow accuracy of +/-2.0%. Vortex meters also have a very wide turndown range of roughly 30:1, with similar mass flow accuracy of +/-2% and better accuracy of +/-1.0% for volumetric flow. DP with an orifice plate typically has the narrowest turndown, in the range of 10:1, with similar accuracy to vortex. It is worth noting, though, that while their accuracy slips significantly, both turbine and DP technologies can still measure in low-flow conditions. For flow rates below a vortex meter's calibrated range, readings become erratic.
Vortex flow technology has been around for a long time, but it was not until Sierra Instruments introduced the multivariable mass vortex flowmeter in 1997 that vortex became a reliable option for steam flows. As the demand for production efficiency increases and older technologies come due for replacement, in our opinion, vortex technology has become the best option for steam flow measurements because of its higher accuracy, better reliability, lower installation and maintenance costs and lowest total cost of ownership over the life of the meter.
Sierra’s InnovaMass 240i Inline and 241i Insertion Vortex Meters (https://www.sierrainstruments.com/products/steam)
Sierra InnovaMass® 240i Inline Meter
Sierra’s next generation inline vortex meter is ideal for saturated and superheated steam, gas and liquid applications. Able to measure both mass and volumetric flow rates, this one meter is multivariable for 5 measurements. | <urn:uuid:89de2588-ef42-43a6-a666-94c23f2b4587> | CC-MAIN-2021-49 | https://www.jobeandcompany.com/steam-measurement/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363229.84/warc/CC-MAIN-20211206012231-20211206042231-00151.warc.gz | en | 0.942023 | 723 | 2.859375 | 3 |
What is Psychology Q&A
What is the difference between a psychologist and a psychiatrist?
Psychologists and psychiatrists both provide treatment to individuals with psychological and behavioural issues. Psychology is both a profession and an independent scientific discipline, while psychiatry is a specialization within the field of medicine. A psychologist will have a doctorate degree and a psychiatrist will have a medical degree. Psychologists and Psychological Associates are trained in the assessment and treatment of both behavioural and mental conditions. Psychologists help people control and change their behaviour as a primary method of treating problems, whereas psychiatrists prescribe medication as a primary means of changing people's behaviour. Both psychologists and psychiatrists assume that complex emotional problems are likely to be the result of biopsychosocial causes.
How does a Psychologist go about determining a diagnosis?
The standard procedure is for the psychologist or psychological associate to conduct several interviews with the patient; this usually includes having the patient complete one or more psychological tests, along with gathering other sources of collateral data. A diagnosis is then made from the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition.
How long are the sessions and how many treatment sessions will I have to participate in?
The length of the sessions may vary, but sessions are typically 50 to 60 minutes. How many assessment and treatment sessions you will need depends upon several variables, including the nature and severity of the problem, the treatment goals selected, and the approach of the psychologist or psychological associate.
Will the information discussed with my psychologist be confidential?
Clients have the right to confidentiality and privacy. There are only a few circumstances in which psychologists might need to disclose your private information, and these circumstances will be reviewed with you before you initiate treatment so that you are aware of the limits of confidentiality. For example, some exceptions to confidentiality include, but are not limited to, suspected abuse of a child, or a client expressing a serious intention to harm themselves or another. Another circumstance in which your information must be shared is a court order.
What is the difference between a psychologist and a psychological associate? (taken from CPO)
The difference is in how they are trained. Both have completed an undergraduate degree and have gone on to complete a graduate degree in psychology. Psychological Associates have completed a master's-level degree in psychology (e.g. M.A., M.Sc., M.Ps., M.Ed.), followed by four years of experience working in the scope of practice of psychology. Psychologists have completed a doctoral-level degree in psychology (Ph.D., Psy.D., Ed.D., D.Psy.), which typically includes a one-year internship. Both Psychologists and Psychological Associates have then completed at least one additional year of formal supervised experience approved by the College and passed the three examinations required by the College. The profession of psychology in Ontario has a single scope of practice. There is no distinction made in the legislation or in the regulations between Psychologists and Psychological Associates with respect to scope of practice or with respect to controlled/authorized acts.
YOGA INTELLIGENCE PROGRAM
FOR EARLY EDUCATION
The Values we teach in our Yoga Intelligence Program for Early Education have passed the test of time. Throughout the world, regardless of religious and cultural differences, these values are central to how well we function as individuals and communities.
RESPONSIBILITY: Understanding that the actions we take can impact our own wellbeing, the wellbeing of others, and even the environment.
CIVILITY: Learning that no individual exists in isolation leads to the cultivation of personal behaviours that extend beyond each individual, benefiting not only ourselves but also the communities and ecosystems in which we live.
COMPASSION: Recognising our ability to empathise with, and aid, those less fortunate than ourselves.
RESPECT: We can learn from the wisdom and experience of others.
INTEGRITY: Learning the importance of aligning actions and words with your values. This often requires truthfulness and consistency in our actions.
RESILIENCE: The ability to cope with setbacks or adversity.
GRATITUDE: Our ability to appreciate and be thankful for what we have, or are given.
COURAGE: The ability to overcome our fears with grace and not be intimidated by the unknown.
HUMILITY: Learning self-worth, without the need to openly display it to others.
January 2009 - A line of fruit trees was planted along the garden fence with hopes of a small espalier orchard. For the most part, the undertaking has been successful. Some trees thrived more than others. The first nectarine tree was pulled out in early 2012 and replaced. Its replacement is thriving. Another tree that was planted in shaky ground was the O'Henry Peach. Growth was only evident at the ends of the limbs and not very robust.
O'Henry Peach - July 21, 2010
Even though it produced a few peaches, it was pretty measly. When starting a young fruit tree, fruit should be plucked to encourage plant growth rather than fruit production. I can't do that, and that's one of my weaknesses. Soil was amended and fertilizer was applied along with essential irrigation to ensure a healthy tree. All the trees receive at least 3 applications of dormant spray during the winter too. Still, it was kinda measly. The trunk appeared to be damaged from sunburn/sun scald. In 2011, Tommy Bahama umbrellas were erected for protection from the Bakersfield baking sun. But early in 2012, while cruising the aisles of Lowe's, I spotted some tree wrap in the discount bin for only $1 a roll. Rather than use the umbrellas in 2012, the trunks of the peach and apple trees were wrapped (loosely) with the new find. Note: umbrellas don't work so well if an occasional wind picks up. The umbrellas could end up folded inside out and in the vegetable beds.
O'Henry Peach - July 21, 2012
The results are great. Even though the juvenile peach continues to cast off fruit it can't hold, there are a few monsters on those well-shaded limbs. The apple tree is also sporting positive results this summer.
Software and hardware are both main components of a computer. Hardware does the physical work, while software tells the computer how to do it. It is important to understand the differences between hardware and software, because both are essential to the operation of a computer system. If you want to fully understand the difference between the two, read on. This entry was written by an IT professional.

Application software is made up of computer programs that perform a specific function. This type of software can be self-contained, or it can be part of a larger program. It may be a single program, or a group of programs that work together. Common examples of application software include word processors, graphics programs, communication platforms, and office suites. It is important to note that application software is not the same as hardware.

System software includes operating systems, programming languages, and tools for computational science. Open-source versions are also available, which allow organisations to develop their own operating systems. This type of software is widely used to manage critical systems, from electrical grids to nuclear plants. Its complexity and usefulness make it essential to daily life, and it is used in almost every sector of society. You can't imagine the world without software.

System software is installed on your computer. It controls the basic operations of the machine, manages hardware components, and is used to access files. System software includes operating systems and device drivers. By contrast, application-specific software is specialized to perform a particular task. There are two main types of system software: general and specialized. The former handles the basic functions of a computer, and the latter is used to run other programs.

Applications are the types of software that a computer runs to perform particular tasks. "Hardware" refers to the physical parts of a computer: the components that make it work. Software is the set of instructions that runs on that hardware. In short, both hardware and software are important for a computer's functionality, but it is the software that makes the hardware useful.

System software helps the user interact with the hardware, serving as a bridge between the hardware and the user. It runs in the background of the computer and allows other applications to run. For example, a phone can answer a question and display the results, or perform a task for you, such as helping you shop online. All of these types of software are necessary for a computer system to be useful.

Whether software is freeware or commercial, both kinds are important. System software is what controls a computer, while applications are another type of software built on top of it. System software is written in low-level languages that communicate with the hardware directly. Both kinds of programs are necessary for the correct functioning of a computer system.

There are two broad types of software: application and system software. Application software is designed for end users, while system software supports developers and the machine itself. Other kinds include game engines, operating systems, and utility programs. System software is the foundation of a computer, while application software is the more visible kind. It is important to understand the difference between applications and systems before buying either one.

In computing, software is the information that tells a computer how to operate. Programs are the most common form, but utility programs are also considered part of the system. The software in a computer is called a program: the set of instructions that tells the hardware how to operate. It describes how the hardware and the operating system work together.

Unlike hardware, software can be self-contained or composed of multiple programs. Many modern applications are designed to serve a single purpose. Some of the most common types of software are databases, office suites, word processors, and graphics applications. A computer system includes both software and hardware, and software quality can be measured with various metrics, including reliability and security. The best applications can protect you from malicious software while providing a dependable experience under specified conditions.

The software used directly by people is called application software. It is designed for end users and helps them carry out tasks. Any app you see on your mobile phone is an application; a computer without applications is of little practical use. A standalone program serves a single purpose and may require a specific computer system to run.

Traditionally, computer software has been created by companies that specialize in it. Software guides the hardware in carrying out specific tasks. Besides operating systems, software can be classified as application software or system-specific software. There are also applications designed for a particular kind of use, such as games and DVD playback programs. Some programs are more generalized than others, and some are meant to run on only one type of device.

To put it simply, software is what tells a computer how to do its job. It provides the tools and resources computer programmers need to create applications, and it allows the user to control the hardware of a device and its operations. Its primary purpose is often to help a person run a business. Software can be installed on any device that supports it, and a computer can also be used for entertainment or as an educational tool.

In short, software is the set of programs that makes a computer run. Its role in a computer system keeps growing and becoming more complex. There are many kinds of software that make life easier, and some that are useful only for a specific purpose. Ultimately, software can help you accomplish tasks you never thought possible before. In fact, software is one of the most important parts of a computer.
Human papillomavirus (HPV) is a common sexually transmitted infection that is implicated in 99.7% of cervical cancers and several other cancers that affect both men and women. Despite the role that HPV plays in an estimated 5% of all cancers and the evolving role of HPV vaccination and testing in protecting the public against these cancers, preliminary research in New Zealand health professionals suggest knowledge about HPV may not be sufficient.
A total of 230 practice nurses, smear takers and other clinical and laboratory staff who attended a range of training events completed a cross-sectional survey between April 2016 and July 2017. The survey explored four broad areas: demographics and level of experience, HPV knowledge (general HPV knowledge, HPV triage and test of cure (TOC) knowledge and HPV vaccine knowledge), attitudes towards the HPV vaccine and self-perceived adequacy of HPV knowledge.
The mean score on the general HPV knowledge questions was 13.2 out of 15, with only 25.2% of respondents scoring 100%. In response to an additional question, 12.7% thought (or were unsure) that HPV causes HIV/AIDS. The mean score on the HPV Triage and TOC knowledge questions was 7.4 out of 10, with only 9.1% scoring 100%. The mean score on the HPV vaccine knowledge questions was 6.0 out of 7 and 44.3% scored 100%. Only 63.7% of respondents agreed or strongly agreed that they were adequately informed about HPV, although 73.3% agreed or strongly agreed that they could confidently answer HPV-related questions asked by patients. Multivariate analyses revealed that knowledge in each domain predicted confidence in responding to patient questions. Furthermore, the number of years since training predicted both HPV knowledge and Triage and TOC knowledge.
Although overall level of knowledge was adequate, there were significant gaps in knowledge, particularly about the role of HPV testing in the New Zealand National Cervical Screening Programme. More education is required to ensure that misinformation and stigma do not inadvertently result from interactions between health professionals and the public.
Citation: Sherman SM, Bartholomew K, Denison HJ, Patel H, Moss EL, Douwes J, et al. (2018) Knowledge, attitudes and awareness of the human papillomavirus among health professionals in New Zealand. PLoS ONE 13(12): e0197648. https://doi.org/10.1371/journal.pone.0197648
Editor: Ray Borrow, Public Health England, UNITED KINGDOM
Received: May 3, 2018; Accepted: December 10, 2018; Published: December 31, 2018
Copyright: © 2018 Sherman et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The information sheet, survey and data are deposited publicly on the Open Science Framework website and can be accessed here: osf.io/ub7g2, DOI 10.17605/OSF.IO/UB7G2.
Funding: The authors received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Human papillomavirus (HPV) is responsible for 99.7% of cases of cervical cancer along with some head and neck, penile and anal cancers. There are approximately 150 new diagnoses and 50 deaths from cervical cancer in New Zealand (NZ) every year, while head and neck cancers attributable to HPV are increasing in both men and women, with 94 new cases and 43 deaths estimated for 2012. In addition, there are longstanding ethnic inequalities in cervical cancer incidence and mortality, and cervical screening coverage remains low (and cancer incidence and mortality high) for indigenous Māori women as well as Pacific women.
The NZ National Cervical Screening Programme (NCSP), which was established in 1990, recommends 3-yearly routine screening with liquid-based cytology (LBC) for 20–69 year-old women, with HPV triage testing for low-grade (ASC-US/LSIL) cytology in women aged 30+ years. The programme also recommends a test of cure following treatment for a high-grade lesion. From late 2018 the NCSP will introduce HPV testing as the primary screening test for women aged 25–68 years on a 5-yearly basis.
To reduce infection with high-risk types of HPV and its related cancers, the NZ National HPV Immunisation Programme was introduced in September 2008, offering free HPV vaccination (Gardasil, Merck) for females born in 1990 or later. School-based immunisation for 12–13 year-old girls commenced in most regions in 2009 and the three-dose coverage achieved by the program in cohorts born in 1991–2002 reached approximately 48–66% nationwide . In January 2017, the free programme was extended to boys and young men, the upper age for free vaccination was increased to 26 years, a two-dose schedule was implemented for individuals aged 14 and under, and the vaccine used was changed to nonavalent Gardasil 9 (Merck) .
Previous research has identified that health professionals can play an important role in vaccine uptake. In an Italian survey assessing childhood vaccine hesitancy in parents, hesitancy was significantly more common in those parents who lacked confidence in their child’s doctor . In a US study, more adolescents had not had the HPV vaccine when their parents felt they were not able to openly discuss their concerns with the doctor and in a second US study of parents who decline then later accept the HPV vaccination for their child, secondary acceptance was more likely in parents who received follow-up counselling from their child’s healthcare provider . Furthermore, recent research in the UK has also identified that women who report greater trust in their doctor were less likely to have decided not to undergo cervical screening .
In NZ, a cervical sample taker is a registered health practitioner (nurse or doctor) who holds a current practising certificate and has completed appropriate cervical screening training as part of a medical degree, midwifery training programme or via a New Zealand Qualifications Authority (NZQA) accredited course for cervical sample takers. Previous research exploring the knowledge of GPs and practice nurses (PNs) in Christchurch, New Zealand about HPV used 5 questions as part of a larger survey exploring attitudes towards HPV vaccination . Whilst performance across the 5 questions was reasonable, there was uncertainty as indicated by the number of ‘not sure’ responses, as well as some variability across questions. For example, while more than 90% of GPs and PNs knew that HPV vaccination would not eliminate the need for cervical screening, only 33% of GPs and 7% of PNs knew that anogenital warts caused by HPV 6 and 11 are not a precursor to cervical cancer. Only half of GPs and 42% of PNs knew that most HPV infections will clear without medical treatment and a quarter of GPs and nearly a third of PNs did not know, or were unsure, whether persistent HPV was a necessary cause of cervical cancer.
To our knowledge there are no studies exploring what primary care staff such as GPs, PNs and smear takers in NZ know about HPV since 2009. In light of the recent changes to the immunisation programme and the forthcoming changes to the NCSP, it is important to benchmark what nurses and smear takers understand about HPV, whether they feel well informed and assess any training needs they might identify.
Ethics approval was granted by the Massey University Ethics Committee 4000015595. The project was registered with Waitemata DHB localities (Reference number RM13518). Both Waitemata and Auckland DHB confirmed that locality authorisation was not required as the research was carried out in community healthcare settings.
An anonymous cross-sectional survey was conducted between April 2016 and July 2017. GPs, practice nurses, smear takers and other clinical and laboratory staff who attended a variety of training events (11 in total) in Auckland District Health Board (DHB), Hutt Valley DHB and Waitemata DHB catchment areas were invited to complete the paper-based survey. The sample represents the number of respondents we were able to collect within the one-year time frame. Participants were provided with an information sheet to read prior to completing the survey. The survey was taken from Patel et al., who had incorporated most of the items from Waller et al., and was adapted by adding back in a question about HPV and HIV/AIDS from Waller et al., and by changing some wording to make the terminology or protocols New Zealand-specific.
We established the face validity of the adapted questionnaire for the NZ clinical environment by having two groups peer review the survey, firstly to ensure we had captured the scope adequately and secondly to ensure questions were well structured. These groups included members of the DHB Immunisation team and cervical screening specialist doctors as well as nurse practitioners.
The final survey explored four broad categories: demographics and level of experience; HPV knowledge (general HPV knowledge, HPV triage and test of cure (TOC) knowledge and HPV vaccine knowledge), which were assessed using a true, false, don’t know format; and attitudes towards the HPV vaccine and self-perceived adequacy of HPV knowledge, which were assessed using 5-point Likert scales (the survey is publicly available here: osf.io/ub7g2, DOI 10.17605/OSF.IO/UB7G2).
Demographic factors included age, profession and years since HPV training. For analyses, profession was collapsed into four categories (nurse; general practitioner (GP); colposcopy, which included colposcopists and colposcopy nurses; and laboratory staff and other), and years since HPV training was collapsed into 3 categories (never; ≤ 1 year; > 1 year).
Factors affecting HPV knowledge were assessed using ordinal regression analysis. The approach for model development was to conduct univariate analyses initially and then enter variables into the full multivariate models that showed a statistically significant (P<0.05) association with the main outcomes in the univariate analyses. The rationale for this was that if a variable was associated with the main outcome measure, it could be a confounder.
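To illustrate this two-stage model-building strategy, a minimal sketch in Python using statsmodels' proportional-odds ordinal regression is shown below. The file and column names are hypothetical (the paper does not state which software was used), and the predictors are assumed to be already dummy-coded.

```python
# Minimal sketch of the univariate-then-multivariate ordinal regression
# strategy described above. File and column names are hypothetical.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey.csv")                    # hypothetical data file
outcome = df["hpv_knowledge_score"]               # ordinal outcome (0-15)
candidates = ["ever_smear", "train_le_1yr", "train_gt_1yr"]  # dummy-coded

# Stage 1: univariate screen -- keep predictors with p < 0.05.
keep = []
for var in candidates:
    res = OrderedModel(outcome, df[[var]], distr="logit").fit(
        method="bfgs", disp=False)
    if res.pvalues[var] < 0.05:
        keep.append(var)

# Stage 2: full multivariate model with the retained predictors.
full = OrderedModel(outcome, df[keep], distr="logit").fit(
    method="bfgs", disp=False)
print(full.summary())  # coefficients are log-odds; exponentiate for ORs
```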
Factors affecting self-perceived adequacy of HPV knowledge were also assessed. Feeling adequately informed and feeling confident in answering patient questions were converted from 5-point Likert scales to binary variables of yes (strongly agree, agree) and no or undecided (strongly disagree, disagree or undecided) as the dependent variables.
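A similar sketch of the Likert-to-binary recoding and the subsequent binary logistic regression (again with hypothetical column names) could read:

```python
# Minimal sketch: collapse a 5-point Likert item to binary, then fit a
# binary logistic regression. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey.csv")  # hypothetical data file

# 1 = agree/strongly agree; 0 = undecided/disagree/strongly disagree.
df["informed_bin"] = df["adequately_informed"].isin(
    ["agree", "strongly agree"]).astype(int)

X = sm.add_constant(df[["hpv_score", "triage_score", "vaccine_score"]])
res = sm.Logit(df["informed_bin"], X).fit(disp=False)
print(res.summary())  # exponentiate res.params for odds ratios
```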
A total of 234 health professionals completed the survey. Due to the opportunistic nature of participant recruitment, a response rate was not able to be calculated. The data for four individuals were removed, as there were large sections that had been left unanswered. A total of 22 health professionals had at least one answer missing for the general HPV knowledge questions, 18 had at least one answer missing for the Triage and TOC knowledge questions, and 8 health professionals had at least one answer missing for the vaccine knowledge questions. Overall, 40 participants had at least one answer missing across all of the questions (several participants had missing data in more than one of the three sections). Details about participant gender, age categories, profession, smear taker status and date of most recent training, if any, are presented in Table 1.
General HPV knowledge
Out of a maximum knowledge score of 15 (see individual questions in Table 2 and excluding the question about HIV/AIDS), the mean score achieved by participants was 13.3 (standard deviation (SD) 2.0) and the median score was 14 (range 0–15, interquartile range (IQR) 13–15), with 27.9% (N = 58) achieving 100%. One individual did not answer any questions correctly.
The following questions were most often answered incorrectly: HPV usually doesn’t need any treatment (35.9% answered incorrectly or weren’t sure); Having sex at an early age increases the risk of getting HPV (26.2%); Most sexually active people will get HPV at some point in their lives (24.7%). In addition, more than 10% of health professionals incorrectly thought (or were not sure) that HPV cannot be passed on by genital skin-to-skin contact, that HPV does not cause genital warts, that using condoms does not reduce the risk of getting HPV and that HPV can be cured with antibiotics.
Following Waller et al., the item about HIV/AIDS was analysed separately from the rest of the questions. In total, 87.2% of respondents correctly identified that HPV does not cause HIV/AIDS.
HPV Triage and TOC knowledge
Out of a maximum knowledge score of 10 (see individual questions in Table 2), the mean score achieved by the participants was 7.4 (SD 2.0) and the median score was 8 (range 0–10, IQR 6–9), with 9.9% (N = 21) achieving 100%. Three individuals had no correct answers.
The following questions were answered incorrectly most often: All cervical samples showing mild cellular changes (ASC-US/LSIL) are tested for high-risk HPV (55.3% answered incorrectly or were not sure); All cervical samples taken 6 to 12 months post-treatment can be tested for high-risk HPV (54.9%); If the high-risk HPV test is negative at 12 and 24 months post-treatment, the woman will still require annual follow-up for life (39.1%); If cytology and the high-risk HPV test are negative at 12 and 24 months post-treatment, she will require a repeat smear in 3 years (24.4%). In addition, more than 10% of health professionals incorrectly thought (or weren't sure) that an HPV test can tell how long a person has had an HPV infection; that an HPV test cannot be done at the same time as a smear test; that HPV testing is used to indicate if the HPV vaccine is needed; that when an HPV test has been done, the results are available the same day; and that a woman with a negative HPV test does not have a low risk of cervical cancer.
HPV vaccine knowledge
Out of a maximum knowledge score of 7 (see individual questions in Table 2), the mean score achieved by the participants was 6.0 (SD 1.2) and the median score was 6 (range 0–7, IQR 5–7), with 45.9% (N = 102) achieving 100%. One individual had no answers correct.
The following questions were answered incorrectly most often: The HPV vaccine offers protection against genital warts (31.9% answered incorrectly or weren’t sure); The HPV vaccines offer protection against most cervical cancers (18.8%); The HPV vaccines are most effective if given to people who have never had sex (17.5%). In addition, more than 10% of participants incorrectly thought (or weren’t sure) that the recommended number of HPV vaccine doses was not three.
Factors influencing level of HPV knowledge
Table 3 shows the effect of predictors on the three types of knowledge, both unadjusted ('crude') and adjusted for the other covariates ('full model'). Having ever taken a smear was significantly positively associated with all three types of knowledge when entered into the model as the only predictor. However, when adjusting for the other predictors, the association with having ever taken a smear was attenuated for all knowledge types and only remained significantly associated with the Triage and TOC knowledge score, where those who had ever taken a smear were more likely to have a higher knowledge score than those who had not (OR 3.59, 95% CI 1.81–7.10, p < 0.01).
Years since HPV training was also associated with knowledge level in univariate analysis, where those who had had training (either ≤ 1 year ago or > 1 year ago) were more likely to have a higher knowledge score than those who had never had HPV training, across all types of knowledge. The association was more pronounced for those who had had more recent training (≤ 1 year ago) than for those who had trained longer ago (> 1 year ago) for two out of the three domains, as expected. The association was attenuated when taking into account the other predictors of knowledge. However, having had HPV training ≤ 1 year ago compared to never remained significantly independently predictive of HPV knowledge score and Triage and TOC knowledge score; having had training > 1 year ago compared to never also remained significantly independently predictive of Triage and TOC knowledge score. Years since training was not predictive of HPV vaccine knowledge score after adjustment for the other predictors.
Current role was not associated with HPV knowledge score in univariate or multivariate analyses. However, current role was associated with the Triage and TOC knowledge score in univariate analysis, with those who worked in colposcopy having a higher knowledge score than nurses (OR 7.89, 95% CI 1.19–52.19, p = 0.03). This association was attenuated and no longer statistically significant after adjustment for the other predictors (OR 6.20, 95% CI 0.91–42.30, p = 0.06). The number of colposcopy workers was very small (n = 4), comprising 2 individuals who identified themselves as colposcopists and 2 who identified themselves as colposcopy nurses, so this result should be interpreted with caution. Those who were classed as laboratory staff or other were less likely to have higher Triage and TOC knowledge scores in the univariate analysis (OR 0.42, 95% CI 0.23–0.75, p < 0.01), but this association disappeared after adjustment for the other predictors (OR 0.92, 95% CI 0.47–1.81, p = 0.82). The laboratory staff and other group were also more likely to have lower HPV vaccine knowledge scores than nurses in both univariate (OR 0.27, 95% CI 0.15–0.48, p < 0.01) and multivariate (OR 0.34, 95% CI 0.17–0.67, p < 0.01) models.
The effect of age on knowledge score was explored in univariate analysis as a potential predictor, but was not associated with scores for any of the three knowledge types (data not shown), so was not included in the multivariate analysis.
Attitudes towards HPV vaccine
Of all respondents, 96.5% (N = 220) agreed or strongly agreed that they would recommend the HPV vaccine (Table 4), with a further 3.5% (N = 8) undecided (there were 2 blank responses).
In total, 94.3% (N = 215) respondents agreed or strongly agreed that men/boys should be offered the vaccine (Table 4), with 5.3% (N = 12) undecided and 0.4% (N = 1) in disagreement (there were 2 blank responses).
Self-perceived adequacy of HPV knowledge
Only 63.7% (N = 144) respondents agreed or strongly agreed that they were adequately informed about HPV (see Table 4), 23.0% (N = 52) were undecided, while 13.2% (N = 30) disagreed or strongly disagreed (there were 4 blank responses).
Despite this, 73.3% (N = 165) respondents agreed or strongly agreed that they could confidently answer HPV related questions asked by patients (see Table 4). A further 18.2% (N = 41) were undecided and 8.5% (N = 19) disagreed or strongly disagreed (there were 5 blank responses).
Independent t-tests confirmed that the knowledge scores for general HPV knowledge, triage and test of cure knowledge and HPV vaccine knowledge were all significantly higher for those participants who felt they were adequately informed than in those who did not feel they were or who were unsure (p<0.01). The same was found for the question about feeling confident in answering patient questions.
Feeling adequately informed and feeling confident in answering patient questions were both related to having ever taken a smear, years since training, and to a much lesser extent, current role (data not shown). Therefore, the relationship between self-perceived adequacy and knowledge was explored further in multivariate analysis using binary logistic regression (Table 5).
Feeling adequately informed and confident in answering patient questions were independently predicted by HPV vaccine knowledge after adjustment for the other predictors, while the associations with HPV knowledge and Triage and TOC knowledge disappeared after adjustment. Again, the number of health professionals in the colposcopy role category was very small, so these results should be interpreted with caution.
Suggestions for how training might be improved were provided by 36 respondents (15.7%). They wanted regular updates, more training sessions and several health professionals felt that online training and other online resources such as research, frequently asked questions and updates would be useful. A request for specific advice that should be provided to parents and simple information sheets for both primary care and patients was suggested. There were also requests to widen the provision of training beyond practice nurses to all healthcare providers, specifically including GPs, independent vaccinators and Public Health Nurses delivering the School Based Immunisation programme.
Although mean knowledge levels for HPV and the HPV vaccine were reasonable (with each subset of questions yielding a mean percentage correct score of between 88% and 85%, respectively), only 25.2% and 44.3% of health professionals scored 100% in each category, respectively. Research has been conducted in other countries with HPV vaccination programmes to explore health professional knowledge about HPV and the vaccination (e.g., [11, 13, 14]). These studies reveal that, consistent with our NZ results, health professional knowledge about HPV and the HPV vaccination is frequently incomplete.
An evaluation of knowledge about HPV and HPV vaccination for GP practice nurses in Leicestershire in the UK, where the vaccination has been administered through the NHS since 2008, found that although general HPV knowledge scores were quite high, there were specific gaps or weaknesses in knowledge for example nearly 10% of PNs did not know that HPV causes cervical cancer and 63% believed that HPV requires treatment. Our study also revealed significant gaps in knowledge. For example, while general HPV knowledge was high, around a quarter of respondents were unaware that having sex at an early age increases the risk of getting HPV. A quarter of respondents were also unaware that HPV is so common that most sexually active people will be exposed to it in their lifetime. Research has shown that considerable stigma can be attached to a positive HPV test [15, 16] and that a lower level of education can be associated with an increase in the negative emotions and stigma that patients experience . Therefore, it is vital that clinical staff are aware of the widespread nature of the virus so that they can reassure patients and reduce stigma. A third of participants did not know or were unclear that HPV does not usually need treatment. This lack of knowledge has the potential to spread misinformation and cause confusion among patients as they seek treatment that is not available. Perhaps most worryingly, 13% of respondents either believed that HPV causes HIV/AIDS or were unclear that it did not.
Other research has demonstrated a lack of complete knowledge about HPV and the HPV vaccine among health professionals. Nilsen et al explored knowledge of and attitudes to HPV infection and vaccination among public health nurses and GPS in Northern Norway in 2010, one year after the HPV vaccination was introduced for 12 year-old girls in Norway . Knowledge of HPV infection, vaccine and cervical cancer was measured with 7 open-ended questions (e.g. what is the lifetime risk of a sexually active person getting HPV?). The percentage of GPs getting each question correct ranged from 26–55% while for the nurses it was 35–86%. Self-reported knowledge was considerably higher than actual knowledge. Only 47% of respondents knew that HPV infection is a necessary cause of cervical cancer.
In Malaysia there has been a school-based HPV vaccination programme since 2010. Jeyachelvi et al conducted a survey to explore HPV and HPV vaccination knowledge and attitudes in primary health clinic nurses who run the vaccination program in Kelantan, Malaysia . Nurses were given 11 questions to assess their knowledge. The mean score was 5.37 with the minimum score being 0 and the maximum being 9. No question was answered correctly by more than 87.3% of respondents and the poorest question (External anogenital warts increase the risk of cancer at the same site where the warts are located. True/False) was answered correctly by only 10.6%.
Rutten et al conducted a survey exploring clinician knowledge, clinician barriers and perceived parental barriers to HPV vaccination in Rochester US . They found that greater knowledge of HPV and the HPV vaccination (assessed together using an 11-item scale) was associated with higher rates of HPV vaccination initiation and completion of the 3-dose vaccination schedule, suggesting that knowledge is important in order to effectively promote HPV vaccination in addition to reducing stigmatising attitudes of clinicians identified in past research and discussed in more detail below.
Knowledge about triage and test of cure in our study was lower than for HPV and HPV vaccine knowledge (mean percentage correct score of 74%) and only 9.1% of health professionals correctly answered all the answers in this section. The Leicester UK study discussed above also revealed gaps in the practice nurse knowledge about current NHS processes around HPV triage and test of cure . For example, the role of HPV testing post-treatment (TOC) was misinterpreted, with only 66% acknowledging that all normal, borderline nuclear and mildly dyskaryotic samples are tested for high risk HPV post-treatment. Not all nurses felt adequately informed about HPV and a need to improve the provision of training was identified. For the triage and test of cure questions, while some questions were generally answered accurately in our study, some questions revealed uncertainty and a lack of understanding of the current guidelines. For example, fewer than half of the respondents knew that not all cervical samples showing mild cellular changes are tested for high-risk HPV (only those for women aged 30 and older are tested for HPV under the current NZ guidelines). In addition, almost a quarter of respondents did not know or were unsure that a negative HPV test means that a woman is at low risk from cervical cancer. This uncertainty is likely to be problematic when primary HPV testing is rolled out. Unlike the cell changes that are screened for currently, primary HPV screening is about identifying a woman’s risk factors. Health professionals will need to be confident in talking with women about what their positive test result means. The test of cure questions were also correctly answered by fewer than three quarters of the respondents.
In addition to knowledge, other studies have been conducted exploring health professionals’ attitudes towards the HPV vaccine. In Italy, almost all of the primary care paediatricians surveyed believed the vaccine was effective in preventing HPV related diseases in boys (92.3%) and girls (97.9%) and they also believed it was safe. Despite this only 18.4% always recommended the HPV vaccine to boys aged 11–12 compared with 77.4% who always recommended it to girls aged 11–12 . In a French survey, 72.4% of general practitioners indicated that they always or often recommended the vaccine to girls aged 11–14 , while in a US survey, 60% of paediatricians and 59% of family doctors recommended the vaccine to girls aged 11–12 compared with 52% and 41% who recommended it to boys aged 11–12 . By contrast in our survey, the vast majority of health professionals indicated that they would recommend the vaccine and they also favoured vaccinating boys and men, with only one individual indicating they would not recommend vaccinating boys and men. This is particularly reassuring since NZ made the vaccine available to boys from January 2017.
The public and HPV
The public is generally not well informed about HPV and its health sequelae, even in countries with well-established vaccine and screening programmes, and the role of health professionals is vital in mitigating this lack of knowledge [22, 23]. For example, in a survey of 200 NZ university health science students (mean age 19.8 years), 50.8% were both unaware of the sexual transmission of HPV and unwilling to accept a free HPV vaccine, highlighting the need for education in this age group . In the UK, Sherman and Nailer found that only half of parents of teenage boys in their survey had heard of HPV and the HPV vaccine, despite the vaccine having been available to girls in the UK since 2008 . The HPV vaccination programme in the UK is school-based and Boyce and Holmes identified that the school nurse played a vital role in reducing health inequalities associated with vaccine uptake . In another survey, of young adults exploring psychological traits and vaccine uptake in the US, Scherer et al suggested that women who receive the HPV vaccine may do so based on informational evidence and that for both males and females, information about the vaccine “should be communicated in a way that highlights the risks associated with HPV and reduces uncertainty about the HPV vaccine” .
HPV-related knowledge or lack thereof can also impact cervical screening engagement. A survey of adult women in Kenya from HIV-1-discordant couples, found that those women who had never attended screening reported not knowing what a Pap smear was or why they needed one. After adjusting for age, both education and knowledge of HPV were associated with ever having a smear test . In a survey of women who underwent first time treatment for high grade cervical intraepithelial neoplasia (CIN) in Sweden, knowledge about HPV, CIN and cervical cancer predicted their understanding of their personal risk of cervical cancer and Barnoy et al have previously identified that lower unrealistic optimism about risk was associated with intention to undergo screening tests . Crucially, more than two thirds of the Swedish women surveyed stated they would like “to receive more information about HPV, cervical cancer and its prevention from health professionals (midwives, gynaecologists, primary care physicians)” . Furthermore, there is considerable stigma associated with a diagnosis of HPV [15, 16]. For example, in a qualitative study, McCaffrey et al., found that HPV positive women reported levels of stigma and anxiety suggesting that “testing positive for HPV was associated with adverse social and psychological consequences that were beyond those experienced by an abnormal smear alone” (p173). Daley et al., found that younger age and less education were associated with more negative emotions (e.g., anger, shock and worry) and stigma beliefs (e.g., feeling ashamed, guilty and unclean) in HPV positive women . In addition, a survey of Hong Kong Chinese healthcare providers exploring levels of knowledge about HPV and attitudes revealed that more knowledge about HPV predicted less stigmatising attitudes from healthcare providers .
The findings above underscore the need for health professionals to be well informed about HPV, the vaccine and screening programme.
Health professional education needed
Our results suggest that education about HPV and particularly the use of HPV testing in the screening programme and test of cure process is urgently needed to address some worrying gaps in knowledge. This is especially important since further changes to the screening programme are due to be implemented, with draft primary HPV screening guidelines recently out for consultation . As other countries also start to roll out primary HPV screening, the success of primary screening engagement in NZ and the rest of the world may well rest upon the level of knowledge of those health professionals responsible for implementing it.
The need for education indicated by the knowledge scores was further reinforced by the fact that over a third of respondents did not agree that they felt adequately informed about HPV and that being adequately informed and feeling confident in responding to patients’ questions were both associated with knowledge. Suggestions for training were proposed by some of the respondents. One promising suggestion, which was also proposed by UK practice nurses , was for online training. This would provide a low-cost way to update changes to the vaccination and/or screening programmes and guidelines in a format that would be easily accessible to many staff whilst requiring relatively little time commitment to complete. HPV vaccination online training was developed by the Immunisation Advisory Committee (IMAC) and was released in August 2017 . This may address some of the knowledge issues associated with vaccination identified in this study, but additional online training regarding screening and test of cure is needed.
There are several limitations to our study. Firstly, due to the opportunistic recruitment approach, involving self-selection of study participants, the response rate is unknown. As a consequence, we were not able to examine whether, and to what extent, bias due to non-response (or participation bias) has occurred, as we were unable to assess the level of HPV knowledge in non-responders. Also, as is common for most questionnaire surveys, we cannot exclude social desirability bias (the tendency of survey respondents to answer questions in a manner that will be viewed favourably by others), but believe that this type of bias is less likely to be a significant factor for health professionals. Secondly, the sample is not evenly distributed across the categories of health professionals, with significantly more nurses having completed the survey than GPs, colposcopists or laboratory and other staff. Since there are currently no official data available on the cervical screening workforce in New Zealand, we are unable to indicate how representative our sample is of that wider group. Thirdly, some of the questions, such as whether participants felt they could confidently answer HPV related questions asked by patients, are less relevant to laboratory staff who are less likely to interact with patients. Lastly, as with all such surveys, by providing questions with true/false/don’t know response options, the questions themselves might act as a prompt enabling educated guesses rather than measuring knowledge directly and thus might potentially overestimate knowledge.
Our survey is the first to be conducted in NZ that explores health professional knowledge and understanding about HPV, the vaccine and the role of HPV testing in the cervical screening programme and it contributes to the international picture about HPV knowledge that is emerging. It is evident from our findings and those from other countries, that more education is required to ensure that misinformation, the stigma associated with the sexually transmitted nature of HPV and widening inequalities do not inadvertently result from interactions between health professionals and the public.
We would like to thank the following individuals for assistance with data collection:
Jane Grant—Cervical Screening Nurse Specialist, Metro Auckland Cervical Screening Coordination Service, Auckland and Waitemata DHBs
Lucina Kaukau—Cervical Screening Nurse Specialist, HPV Self-Sampling Feasibility study for Maori women, Research Nurse.
Lisbeth Alley—Programme Manager, Immunisation. Auckland and Waitemata DHBs
Pam Hewlett—Women’s Health Manager. Auckland and Waitemata DHBs
- 1. Ministry of Health (2017). Immunisation Handbook. Wellington: Ministry of Health. https://www.health.govt.nz/system/files/documents/publications/immunisation-handbook-2017-may17-v3.pdf Downloaded on 4th January 2018.
- 2. HPV Information Centre (2017). Human Papillomavirus and Related Diseases Report NEW ZEALAND. http://www.hpvcentre.net/statistics/reports/NZL.pdf. Downloaded on 8th June 2017.
- 3. National Screening Unit (2017). National Cervical Screening Programme. https://www.nsu.govt.nz/health-professionals/national-cervical-screening-programme/cervical-screening-coverage/dhb-quarte-21. Downloaded on 12th January 2018.
- 4. National Screening Unit (2008). Guidelines for Cervical Screening in New Zealand: Incorporating the management of women with abnormal cervical smears. Wellington: National Screening Unit, Ministry of Health.
- 5. National Screening Unit (2016). Primary HPV Screening. https://www.nsu.govt.nz/health-professionals/national-cervical-screening-programme/primary-hpv-screening. Downloaded on 8th June 2017.
- 6. Napolitano F, D’Alessandro A, Angelillo IF. Investigating Italian parents’ vaccine hesitancy: A cross-sectional survey. Human vaccines & immunotherapeutics. 2018 May 14:1558–1565.
- 7. Roberts JR, Thompson D, Rogacki B, Hale JJ, Jacobson RM, Opel DJ, et al. Vaccine hesitancy among parents of adolescents and its association with vaccine uptake. Vaccine. 2015 Mar 30;33(14):1748–55. pmid:25659278
- 8. Kornides ML, McRee AL, Gilkey MB. Parents who decline HPV vaccination: who later accepts and why?. Academic Pediatrics. 2018 Mar 31;18(2):S37–43.
- 9. Marlow LA, Ferrer RA, Chorley AJ, Haddrell JB, Waller J. Variation in health beliefs across different types of cervical screening non-participants. Preventive Medicine. 2018 Jun 1;111:204–9. pmid:29550302
- 10. Henninger J. Human papillomavirus and papillomavirus vaccines: knowledge, attitudes and intentions of general practitioners and practice nurses in Christchurch. Journal of Primary Health Care. 2009;1(4):278–285. pmid:20690336
- 11. Patel H, Austin-Smith K, Sherman SM, Tincello D, Moss EL. Knowledge, attitudes and awareness of the human papillomavirus amongst primary care practice nurses: an evaluation of current training in England. Journal of Public Health. 2016 Jul 1:601–608.
- 12. Waller J, Ostini R, Marlow LA, McCaffery K, Zimet G. Validation of a measure of knowledge about human papillomavirus (HPV) using item response theory and classical test theory. Preventive Medicine. 2013 Jan 1;56(1):35–40. pmid:23142106
- 13. Nilsen K, Aasland OG, Klouman E. The HPV vaccine: knowledge and attitudes among public health nurses and general practitioners in Northern Norway after introduction of the vaccine in the school-based vaccination programme. Scandinavian journal of primary health care. 2017 Oct 2;35(4):387–95. pmid:28933242
- 14. Jeyachelvi K, Juwita S, Norwati D. Human papillomavirus Infection and its Vaccines: Knowledge and Attitudes of Primary Health Clinic Nurses in Kelantan, Malaysia. Asian Pacific Journal of Cancer Prevention. 2016;17(8):3983–8. pmid:27644649
- 15. McCaffery K, Waller J, Nazroo J, Wardle J. Social and psychological impact of HPV testing in cervical screening: a qualitative study. Sex Transmitted Infections. 2006 Apr 1;82(2):169–74.
- 16. Daley EM, Vamos CA, Wheldon CW, Kolar SK, Baker EA. Negative emotions and stigma associated with a human papillomavirus test result: A comparison between human papillomavirus–positive men and women. Journal of Health Psychology. 2015 Aug;20(8):1073–82. pmid:24217064
- 17. Rutten LJ, Sauver JL, Beebe TJ, Wilson PM, Jacobson DJ, Fan C, et al. Clinician knowledge, clinician barriers, and perceived parental barriers regarding human papillomavirus vaccination: Association with initiation and completion rates. Vaccine. 2017 Jan 3;35(1):164–9. pmid:27887795
- 18. Kwan TT, Lo SS, Tam KF, Chan KK, Ngan HY. Assessment of knowledge and stigmatizing attitudes related to human papillomavirus among Hong Kong Chinese healthcare providers. International Journal of Gynecology & Obstetrics. 2012 Jan 31;116(1):52–6.
- 19. Napolitano F, Navaro M, Vezzosi L, Santagati G, Angelillo IF. Primary care pediatricians’ attitudes and practice towards HPV vaccination: A nationwide survey in Italy. PloS one. 2018 Mar 29;13(3):e0194920. pmid:29596515
- 20. Collange F, Fressard L, Pulcini C, Sebbah R, Peretti-Watel P, Verger P. General practitioners’ attitudes and behaviors toward HPV vaccination: A French national survey. Vaccine. 2016 Feb 3;34(6):762–8. pmid:26752063
- 21. Allison MA, Hurley LP, Markowitz L, Crane LA, Brtnikova M, Beaty BL, et al. Primary care physicians’ perspectives about HPV vaccine. Pediatrics. 2016 Feb 1;137(2):e20152488. pmid:26729738
- 22. Ragan KR, Bednarczyk RA, Butler SM, Omer SB. Missed opportunities for catch-up human papillomavirus vaccination among university undergraduates: Identifying health decision-making behaviors and uptake barriers. Vaccine. 2018 Jan 4;36(2):331–41. pmid:28755837
- 23. Napolitano F, Napolitano P, Liguori G, Angelillo IF. Human papillomavirus infection and vaccination: Knowledge and attitudes among young males in Italy. Human vaccines & immunotherapeutics. 2016 Jun 2;12(6):1504–10.
- 24. Chelimo C, Wouldes TA, Cameron LD. Human papillomavirus (HPV) vaccine acceptance and perceived effectiveness, and HPV infection concern among young New Zealand university students. Sexual Health. 2010 Sep 9;7(3):394–6. pmid:20719233
- 25. Sherman SM, Nailer E. Attitudes towards and knowledge about Human Papillomavirus (HPV) and the HPV vaccination in parents of teenage boys in the UK. PloS one. 2018 Apr 11;13(4):e0195801. pmid:29641563
- 26. Boyce T, Holmes A. Addressing health inequalities in the delivery of the human papillomavirus vaccination programme: examining the role of the school nurse. PLoS One. 2012 Sep 13;7(9):e43416. pmid:23028452
- 27. Scherer AM, Reisinger HS, Schweizer ML, Askelson NM, Fagerlin A, Lynch CF. Cross-sectional associations between psychological traits, and HPV vaccine uptake and intentions in young adults from the United States. PloS one. 2018 Feb 23;13(2):e0193363. pmid:29474403
- 28. Rositch AF, Gatuguta A, Choi RY, Guthrie BL, Mackelprang RD, Bosire R, et al. Knowledge and acceptability of pap smears, self-sampling and HPV vaccination among adult women in Kenya. PloS one. 2012 Jul 10;7(7):e40766. pmid:22808257
- 29. Andersson S, Belkić K, Demirbüker SS, Mints M, Östensson E. Perceived cervical cancer risk among women treated for high-grade cervical intraepithelial neoplasia: The importance of specific knowledge. PloS one. 2017 Dec 22;12(12):e0190156. pmid:29272293
- 30. Barnoy S, Bar-Tal Y, Treister L. Effect of unrealistic optimism, perceived control over disease and experience with female cancer on behavioral intentions of Israeli women to undergo screening tests. Cancer Nursing 2003; 26: 363–369. pmid:14710797
- 31. National Screening Unit (2017). Updated Guidelines for Cervical Screening in New Zealand. https://www.nsu.govt.nz/health-professionals/national-cervical-screening-programme/cervical-screening-guidelines/updated. Downloaded on 4th January 2018.
- 32. The Immunisation Advisory Centre (2017). HPV Vaccination Module. https://www.immune.org.nz/health-professionals/education-training/hpv-vaccination-module. Downloaded on 4th January 2018. | <urn:uuid:3af205d0-5351-4b5c-bbce-e97595c1b0b6> | CC-MAIN-2021-49 | https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0197648 | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363400.19/warc/CC-MAIN-20211207140255-20211207170255-00471.warc.gz | en | 0.95322 | 9,421 | 3.171875 | 3 |
Letya Mathilda. Worksheets. June 11th , 2021.
There are other sources for worksheets also. You can find many public schools and private schools which will provide free worksheets for you if you buy textbooks from the school. Or you can usually find textbooks and workbooks at the public library, where you can also copy any worksheets that you want to use. So what kinds of worksheets should you get? Anything where you feel that your child needs further drill. We often have this notion that worksheets are just for math. This, of course, is not true. While they are excellent tools for reviewing math facts such as the multiplication tables and division facts, they are just as useful for reviewing parts of speech or the states in the union.
The addition, subtraction & number counting worksheets are meant for improving & developing the IQ skills of the kids, while English comprehension & grammar worksheets are provided to skill students at constructing error free sentences. The 1st grade worksheets can also used by parents to bridge between kindergarten lessons & 2nd grade program. It happens on many occasions that children forget or feel unable to recollect the lessons learnt at the previous grade. In such situations, 1st grade worksheets become indispensable documents for parents as well as students.
Home schooling can be expensive and difficult to implement, especially on a tight budget. If money is tight, homeschool worksheets that you can get for free can take the place of a textbook. The worksheet will be able to teach pretty much the same things that a textbook will, and yet you won’t have to spend hundreds of dollars on books. Although some worksheet resources charge a small fee, you will be able to access thousands of printable worksheets that you can use for homeschooling. Different worksheets are available for different areas of study at home. If you know what your child will be learning for the year, you can already find worksheets for homeschooling on that particular topic. In choosing a worksheet, it is important to review the source and check the material. Ensure that the material and answers are accurate. Evaluate the worksheet by completing it yourself. The worksheet should provide information clearly and accurately. Make sure it is exactly what you need to homeschool your child.
Schools use worksheet from printing to cursive writing of letters to writing of words. There are also online help to show the children how to exactly form a letter or word. After showing the students or children the way of writing, you can print the worksheets and give them practices on how to write exactly the right way. Children will be interested to do the activity because they had fun watching the software that you showed them. Worksheet is not just for practice. Teachers can also let their students do a group activity through worksheets. Through this, students will learn how to bond and work with their classmates as one team. Teachers may also make worksheet activities as a contest. The prizes at hand will inspire and motivate students to perform well and learn their lessons.
If you home school your children, you will quickly realize how important printable homeschool worksheets can be. If you are trying to develop a curriculum for your home-schooled child, you may be able to save a lot of time and money by using free online home school worksheets. However, while they can be a helpful tool and seem like an attractive alternative to a homeschool, they do have a number of limitations. There are numerous online resources that offer online worksheets that you can download and use for your children’s homeschooling for free. They cover practically all subjects under the sun. Different homeschool worksheets are available that are suitable for all types of curriculums, and they can help enhance what you are teaching. Aside from helping you assess your child’s comprehension of a subject matter, printable home school worksheets also provide something for your child to do while you work on other things. This means that you can be free to run your home while teaching your child at the same time, because the worksheet simplifies the homeschooling job for you.
What are math worksheets and what are they used for? These are math forms that are used by parents and teachers alike to help the young kids learn basic math such as subtraction, addition, multiplication and division. This tool is very important and if you have a small kid and you don’t have a worksheet, then its time you got yourself one or created one for your kid. There are a number of sites over the internet that offer free worksheets that are downloadable and printable for use by parents and teachers at home or at school.
Are you the parent of a toddler? If you are, you may be looking to prepare your child for preschool from home. If you are, you will soon find that there are a number of different approaches that you can take. For instance, you can prepare your child for social interaction by setting up play dates with other children, you can have arts and crafts sessions, and so much more. Preschool places a relatively large focus on education; therefore, you may want to do the same. This is easy with preschool worksheets. When it comes to using preschool worksheets, you will find that you have a number of different options. For instance, you can purchase preschool workbooks for your child. Preschool workbooks are nice, as they are a relatively large collection of individual preschool worksheets. You also have the option of using printable preschool worksheets. These printable preschool worksheets can be ones that you find available online or ones that you make on your computer yourself.
Any content, trademark/s, or other material that might be found on this site that is not this site property remains the copyright of its respective owner/s. In no way does LocalHost claim ownership or responsibility for such items and you should seek legal consent for any use of such materials from its owner. | <urn:uuid:11a3951e-a559-4a02-ad4e-a39494efc98b> | CC-MAIN-2021-49 | https://jugglethisnyc.com/V3187sXz0/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363400.19/warc/CC-MAIN-20211207140255-20211207170255-00471.warc.gz | en | 0.962533 | 1,266 | 2.84375 | 3 |
Prison is not something unfamiliar to hear. In fact, it is so common because criminal acts keep increasing now all over the world. There are various type of criminal acts. Each of them has its type of punishment. Every country has their own law in determining the punishment of every criminal act dine by their citizens. Also, there has been some reforms until now. Is prison reform necessary?
In the past, punishment often apply the philosophy of eye for an eye which meant the punishments should equate the harm. For example, if individual killed another, then they should receive death penalty. However, today law is not a simple as that. In fact, applying this philosophy to punish criminal acts is not simple at all. It is not so easy to determine how to equate different types of harm.
That’s why, there is different type of punishment as well as the length of the punishment itself. Being placed in the prison is a must for criminals who committed crimes. However, they may receive different length of how long they should be imprisoned. As for severe case, death penalty is also possible for example for serial killers, multiply crimes, etc.
Importance of Prison Reform
Prison reform is important especially in the past where the system was still unorganized and inefficient. Through the reform, the punishment has been tailored to the individual convicted criminal. Today, it is not that easy to give a sentence for criminal because the judge should consider various factors. Prison reform has not been easy to make because there were many implications and complications in the process including the cost efficiency, the availability of facilities, etc. The problem is, numbers in prison has kept growing despite those reforms.
Punishment or individual who committed crimes were meant to be the preventive acts for future criminal behavior. In the past, prisoners were subjected to harsh conditions with purpose to exemplify the others. For example, they undergone public embarrassment or even public execution. Therefore, people would think twice before conducting illegal activity. Planting fear was the main purpose of harsh punishments. Besides, criminals were used as threat for themselves and others.
Prison reforms have developed and the purpose of punishment is not only to make the individual who committed crimes to pay for their acts. The purpose is also to repair individual deficiencies so they can return to the society as productive as possible. Prison is now expected as a place to gain education, work skills, and self-discipline. | <urn:uuid:796392d3-4d65-49b7-8df5-2fcf00206d7d> | CC-MAIN-2021-49 | https://mainecure.org/the-prison-reform-then-and-now/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363400.19/warc/CC-MAIN-20211207140255-20211207170255-00471.warc.gz | en | 0.981091 | 491 | 2.9375 | 3 |
Dialogue in the Jewish culture
European Days of Jewish Culture 2021
The 2021 edition of the European Days of Jewish Culture includes the exhibition “Dialogue”, available now at Muzeon. Immerse yourself into the history of Jewish education, explore how Jewish traditions are being passed down to the new generations, and discover the many forms of connections between Judaism and different cultures and religions.
In the exhibition you will see photographs, illustrated manuscripts of religious and philosophical texts, letters, press articles and posters.
The exhibition addresses the idea of “Dialogue” in Jewish culture from 5 different points of view:
DIALOGUE is the first temporary exhibition that you can visit at Muzeon, and was created with help from The National Library of Israel, The European Association for the Preservation and Promotion of Jewish Culture and Heritage (AEPJ), and Networks Against Antisemitism (NOA), as part of the 2021 edition of the European Days of Jewish Culture.
The exhibition can be visited until 10 October, from Tuesday to Sunday, between 10:00am-06:00pm (last entry at 05:30pm), with free admission.
Read more about the European Days of Jewish Culture here: | <urn:uuid:8062743f-9ce2-4d62-8e0b-a09b112fbc85> | CC-MAIN-2021-49 | https://muzeon.ro/en/dialogue/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363400.19/warc/CC-MAIN-20211207140255-20211207170255-00471.warc.gz | en | 0.933585 | 252 | 2.65625 | 3 |
Two years ago I published a detailed blog post: Will an asteroid hit Earth? In that post I discussed the scenario that an asteroid had been discovered on a collision course with Earth and what could be done to avoid such a possibly catastrophic collision. One option is to send a spacecraft to the asteroid and let it crash with it. The impact should change the course of the asteroid, so it would no longer hit Earth.. The DART mission will test the feasibility of this “kinetic impactor” technique. DART will be launched on 24 November, so it is time for an update.
The acronym DART stands for Double Asteroid Redirection Test. Target for DART is the minor asteroid Didymos, discovered in 1996. It has a diameter of 780 meter and orbits the sun in 2.11 year. In 2003 it was discovered that Didymos has a small moon with a diameter of 160 meter. This moon has been named Dimorphos , it orbits Didymos in about 12 hour at a distance of 1.2 km. DART will crash into this moon at a speed of 6.6 km/s. and change its orbit slightly. In the infographic this change is hugely exaggerated. It is estimated that the crash will change the speed of Dimorphos only about 0,4 mm/s and its orbital period about 10 minutes
Originally DART was part of the much more ambitious AIDA mission. The crash will take place at about 11 million km from Earth. How to observe the effects of the crash? The solution was to launch another spacecraft earlier than DART, which would reach Didymus and go into orbit around the asteroid. This AIM spacecraft , to be developed by the European Space Agency (ESA), would observe the crash and send data back to Earth. It would even deploy a small lander, MASCOT2 to study the properties of Dimorphos.
But in December 2016, AIM was cancelled by ESA, after Germany withdrew the 60 million Euro funding for the project. I commented in the above mentioned blog:
As an European I feel rather ashamed that Europe has acted this way.
NASA decided to continue with DART., which will be launched by a SpaceX Falcon 9 rocket. A fascinating feature from the Falcon 9 is that part of it (the first stage) will return to Earth, land vertically (!) and can be used again for other missions. It will land on a so-called drone ship, an unmanned platform in the ocean. There are three of these drone ships active at the moment, all with poetic names. The Falcon 9 will land on “Of Course I Still Love You” Here is the ship.
And here is a video of the take off and landing. You must see it to believe it ;-).
DART will arrive at the asteroid end of September 2022. The spacecraft will use autonomous navigation to point itself to the moon. It has a camera on board, the DRACO that takes high-resolution photos. On-board software will analyse these photos, be able to distinguish between Dimorphos and Didymos and point Dart to Dimorphos.
About 10 days before reaching its destination, DART will deploy a tiny spacecraft, a so-called CubeSat . This LICIACube has been developed by ASI, the Italian Space Agency and will take pictures of the crash. So at least images of the collision will be sent to Earth.
Here is a short YouTube video of the DART mission. I will point out a few details.
0:07 The nose cone of the Falcon 9 opens to deploy DART
0:15 the solar arrays are unrolled, a new technique. Each one is 8,5 m ;long
0:22 The lens cover of DRACO opens
0:26 Didymos in the center, Dimorphos to the right
0:32 The orbits of Earth and Didymos. They comes close, but are still 11 million km away from each other when DART crashes.
0:37 The Xenon thruster will steer the spacecraft
0:41 The LICIACube is deployed
0:54 DRACO will find the target
0:58 Found the target
1:02 On collision course
1:04 The end of DART
My next update about DART will probably be in October next year.
Are you staying up tonight?, a friend asked me a few weeks ago. No, why? , I replied. He knows about my interest in space travel and expected that I was aware of the landing of a spacecraft on Mars that night. But I was not 😉
I checked the timing, the Perseverance would land at 4:55 am in the morning of 19 February (Malaysian time). In this blog I will explain why I decided to enjoy my sleep and check the next morning if the landing had been successful 😉
In 2018 I wrote a blog Landing on Mars, in which I described the various Mars missions, concentrating on the Curiosity Mission of 2012. The procedure to land the Curiosity was new, using a so-called sky-crane for the last phase.
Here is a diagram of what is called the Entry, Descent and Landing (EDL) process. The spacecraft enters the (thin) Martian atmosphere with a velocity of ~ 20.000 km/h. About 7 minutes later it must land on the surface with a velocity of less than 1 m/s. As signals between Earth and Mars take about 11 minutes, EDL can not be controlled from Earth, the whole process must have been programmed in the computers on board. Mission Control can only wait and see. That’s why these 7 minutes have been called the seven minutes of terror.
In my 2018 blog I describe the three phases in more detail, here is an very informative animation.
In 2012 everything went well, the Curiosity is actually still operational at the moment, much longer than originally planned.
The Perseverance that landed last week, has followed the same EDL procedure. Of course it must have been a relief for Mission Control that it was again a smooth process, but to keep calling it seven minutes of terror is exaggerated. That’s why I decided to enjoy a good night’s sleep. Here is the EDL process for the Perseverance. As you see it is basically the same as for Curiosity.
The two rovers also look the same. To the left Curiosity, the official name of the mission is Mars Science Laboratory (MSL). The Perseverance, to the right, is part of the Mission 2020 project.
Of course there are differences. The wheels have been redesigned, the robotic arm is heavier and the rover carries more cameras, 23 in total. Notice the “hazcams” at the front and the back of the rover, to avoid obstacles. Sherloc, Watson and Pixl are science cameras, I will tell a bit more about them later.
Some of the cameras have not a real science function, but have been added mainly to please the general public 🙂 . The back shell has a camera looking up to see how the parachute deploys. The camera of the sky-crane is looking down and can follow how the rover is being lowered to the ground. And the rover has a camera looking upwards to see the sky-crane. And a camera looking downward to the ground. That one is important, the spacecraft has a digital map of the surface and uses the camera images with a lot of AI to steer to the right location.
Keep in mind that all these images can only be transmitted back to Earth, after the spacecraft has landed. During the EDL, Mission Control only receives telemetry signals (altitude, speed etc). NASA has published a spectacular video where those messages are combined with the camera images. This is a YouTube video your really should watch (several times!). No wonder that this video has already been viewed more than 14 million times.
This map of Mars gives the location of the NASA missions. Insight and Curiosity are still operational. For a list of all Mars missions, click here.
This amazing photo has been taken by the Mars Reconnaissance Orbiter, one day after the landing. During the EDL the heat shield, the parachute and the sky-crane (descent stage) have to be jettisoned away from the rover. When you enlarge the picture above, you can just see the two small craters.
———————————————– Perseverance’s mission
Now that Perseverance has landed successfully on the Red Planet, what is it going to do? The missions of Curiosity and Perseverance are basically the same, to determine whether Mars ever was, or is, habitable to microbial life.
When Mars was a young planet, billions of years ago, water was abundant, there were lakes and rivers, similar to young Earth. On Earth life started about 3,5 billion years ago in the form of microbes. Fossil remains of these microbial colonies are called stromatolites. Here is an example, found in Australia, ~ 3.4 billion year old.
Could primitive microbial life have started on Mars in a similar way? Curiosity landed in the Gale crater, created about 3.7 billion year ago by a gigantic meteor impact. The crater became a lake, rivers deposited sediments. Curiosity collected surface material with its robotic arm, pulverised and heated it, before using a variety of analysing tools. Many organic molecules were found, for example thiophenes. which, on Earth at least, are primarily a result of biological processes.
Mars Mission with the Perseverance will continue this research with advanced technology.
Here is an artist impression how the Jezero crater may have looked like, when it was filled with water. Notice the river, top left, flowing into the lake. That river deposited a lot of sediments in the lake and it is near these sediments that Perseverance has landed.
A detailed map of the landing region, with the various geological structures in different colors. The “valley” of the former river and the delta are clearly visible The location of the rover again marked with a cross. The scientists have already made a proposal how the rover will explore the region (yellow line). The mission will take at least one Mars year (687 Earth days). If you want more information why the Jezero crater was chosen, click here.
The robotic arm has three scientific instruments, the PIXL, SHERLOC and WATSON. PIXL stands forPlanetary Instrument for X-ray Lithochemistry . SHERLOC is an acronym for Scanning Habitable Environments with Raman & Luminescence for Organics & Chemicals and WATSON represents a Wide Angle Topographic Sensor for Operations and eNgineering . Engineering sense of humor.
PIXL is the main instrument. It points a very narrow X-ray beam at a piece of rock and detects the reflected light (fluorescence ), which is characteristic for the chemical elements in the rock. By analysing this reflected light, PIXL hopes to find biosignatures. Here is an artist impression of PIXL in action.
SHERLOC searches for organics and minerals that have been altered by watery environments and may be signs of past microbial life . Its helper Watson will take close-up images of rock grains and surface textures.
Suppose that Perseverance finds promising locations during its traveling. Wouldn’t it be wonderful if scientists on Earth could study the material at these locations in greater detail?
Well, that is exactly the most ambitious part of the Mars 2020 project, to bring back rock and regolith back to Earth. When you are a follower of my blog, you may remember that Hayabusa2 has brought back material from the asteroid Ryugu, and of course moon rocks have been brought back. But never yet material from a planet.
The robotic arm of Perseverance contains a drill, which can collect core samples. Here it is ready to start drilling.
The core sample (comparable in size with a piece of chalk) is put in a sample tube and taken over to the body of the rover where a few measurements are made. Then it is hermetically sealed to avoid any contamination, and temporally stored in a cache container. The container has space for 43 tubes. Here is an example of a sample tube.
Watch the video to follow the complicated process. Three robotics arms are used!.
How to get these sealed tubes back to Earth? NASA and ESA (the European equivalent will work together in what at first sight looks almost like science fiction. Actually it is still partly fiction at the moment! Here is the plan.
In July 2026 (!) a spacecraft will be launched, consisting of a lander and a rover. In August 2028 it will land near the Perseverance.
Here the spacecraft has landed on the surface of Mars, the rover still has to be deployed.
The only function of the rover is to fetch the sample tubes and bring them back to the lander. In this artist impression it is suggested that the sample tubes are scattered around, but that doesn’t make sense to me. Probably Perseverance will have created a few depots, or even kept all tubes in its own storage. The various descriptions I have found on the Internet, are not clear about this. The whole Return Mission is very much work in progress.
Here the tubes are handed over by the “fetch rover” to the lander, where they are put in the Sample Return Container.
The Sample Return Container might look like this. It will be designed so that the temperature of the samples will be less than 30 degrees Celsius.
The container will be loaded in a rocket, the Mars Ascent Vehicle, which will be launched in spring 2029.
The rocket will bring the container in a low Mars orbit and release it there..
In the meantime In October 2026 the Earth Return Orbiter has been launched, it will arrive at Mars in 2027 and lower its orbit gradually to reach the desired altitude in July 2028. There it will wait to pick up the container.
After the Earth Return Orbiter has caught the container, it will “pack” it in the Sample Return Capsule (SRC) and then go back to Earth, where it will arrive in 2031, ten years from now. It is this SRC that will will be released and finally land on Earth.
Here is a simulation of the procedure.
The primary mission of Mars2020 is to determine if Mars was habitable in the past. But there are also secondary missions. On board of the Perseverance there is one experiment, called MOXIE, that will produce oxygen from the carbon dioxide in the Martian atmosphere. Just a proof of concept experiment, important for future human missions to Mars.
Quite spectacular is that Perseverance is bringing a small helicopter, the Ingenuity. The Mars atmosphere is thin, but the helicopter should be able to fly. A bit similar to a drone, flying a few meter high, and maximum 50 m away. At the moment it is still hanging under Perseverance, planning is to test it after a few months. Here is an animation
At the moment Perseverance is testing al its components. It has made its first test drive, only a few meters. Here is a picture, you can clearly see the tyre tracks.
If there is more news about the Mars2020 mission, I will update this blog or write a new one.
Let me end this blog with an animation created in 1988 (!) , describing a Sample Return Mission to Mars. Fascinating to watch. | <urn:uuid:c25842f4-12e9-41ab-938a-62134f731e8f> | CC-MAIN-2021-49 | https://stuif.com/blog/?cat=69 | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363400.19/warc/CC-MAIN-20211207140255-20211207170255-00471.warc.gz | en | 0.950232 | 3,245 | 3.5625 | 4 |
Pioneers of the fragrant world
Since the very roots of our existence, we have tasted and smelled the world around us, and the way in which we ‘sense’ has become intrinsically interwoven with our most affecting memories and emotions.
Written by well-renowned perfume historian Annick Le Guérer Givaudan: a Secular History is the first chapter of ‘An Odyssey of Flavours and Fragrances’ – Givaudan’s exclusive anthology.
Through the work of esteemed writers and photographers, we trace the history of flavours and fragrances back to the earliest days of human civilisation. Along the way, this particular chapter profiles some of the most significant men and women who’ve shaped the history of Givaudan, and the industry itself.
Since the mists of antiquity
Perfume has always proved to be a priceless record of ancient civilisations. The ancient Egyptians would use resins, aromatic gums, and unguents to anoint their dead ‘perfumed ones’ (gods, essentially) in the hope they would find immortality in the afterlife.
The Egyptians also used herbs and plants to flavour their food. They practiced distillation and ‘maceration’ to extract flavours from liquids like oil and wine. Elsewhere, the Persians and Assyrians created wines flavoured with roses and figs.
An industry is born
In the middle ages, the first perfume was created in Europe. The distilling of rosemary flowers in ethyl alcohol created ‘Hungary water’, which was said to have cured the Queen of Hungary of many of her illnesses.
During the sixteenth and seventeenth centuries, people tried to protect themselves from rampant epidemics using powders, perfumed gloves, aromatic vinegars and scented masks.
Perfume became synonymous with sensuality during the Enlightenment. Perfumers created light fragrances designed to intoxicate and seduce, such as ‘L’Eau Sensuelle’ – worn by the much-maligned French Empress Marie-Antoinette.
The game changers of Grasse
The Chiris family of Grasse, France, made perfuming an international industry, beginning at the end of the eighteenth Century. Their knowledge of perfuming mixed with French colonisation and their access to high society allowed them to set up plantations and factories in places like Central Africa, the Congo, the Comoros Islands, Madagascar, Indochina, and Algeria.
The Givaudan brothers’ beginnings
On another avenue of exploration and innovation, Léon and Xavier Givaudan started out creating chemical fragrances in Zurich in 1895, long before the Givaudan and Chiris families would merge paths.
Initially in Zurich, The Givaudan brothers soon moved to Vernier. From their new one-acre site in Vernier, Switzerland, the Givaudan brothers found success in creating synthetic bases that made creating new fragrances easier for in-house perfumers.
By the 1930s, Givaudan had subsidiaries in the US, Canada and Italy, and owned factories in Switzerland, France and the US. This made them the largest producer of chemical products for the perfume and soap industries.
Travels and takeovers
After World War II, the industry went through a first consolidation, with mergers and acquisitions.
In 1958, Givaudan expanded into a new market when it took over Esrolko – a producer of synthetic food grade flavourings, based in Dübendorf, near Zurich. Since that takeover, Givaudan has always produced products for two markets: flavours and fragrances.
In 1963, the Chiris family business was taken over by Hoffmann-La Roche. At the same time, Givaudan began scouring the globe for untapped natural scents to complement their work in synthetics. This was helped by a merger with perfume house Roure – a company with strong roots in harvesting natural ingredients and great skills in creating fine fragrances.
In 2000 though, Givaudan-Roure separated from Hoffmann-La Roche and went public, heralding a new era of successful expansion. This included the acquisition of Quest International, a company whose fine fragrance business was headed by Yves de Chiris – the last scion of the Chiris family.
The one-acre plot in Vernier is now a key Givaudan site spanning fifty acres. The Esrolko site in Dübendorf is now the headquarters for the group’s Flavour Division, and is also home its Fragrance Division’s research and development centre.
As of 2015, Givaudan has a presence in forty countries with more than 9,500 employees people spread over eighty-eight sites worldwide.
This reality is just the actual chapter in an evolution that spans centuries, graphically brought to life in ‘An Odyssey of Flavours and Fragrances’. | <urn:uuid:ae5d6e7b-1d03-45ca-a180-86efd2845cec> | CC-MAIN-2021-49 | https://www.givaudan.com/our-company/rich-heritage/odyssey-flavours-and-fragrances/history | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363400.19/warc/CC-MAIN-20211207140255-20211207170255-00471.warc.gz | en | 0.937686 | 1,053 | 2.53125 | 3 |
A significant amount of valuable fundamental research has been conducted by Japanese researchers at the Center for Iron and Steelmaking Research (CISR) at Carnegie Mellon University. The work of these researchers, together with that of the author, is reviewed in this paper. Research on ironmaking, steelmaking and refining is included. Examples include fundamentals of iron smelting, refining for phosphorus and nitrogen, slag foaming and the kinetics of various gas-metal reactions.
Chemical activity-temperature-composition (a-T-x-y) relationships determined by the Knudsen effusion technique have been made available for the molten Fe-Cr-P system by a group of Russian researchers. Nevertheless, they were not presented in the form of the solubility x in Fe1-yCryPx as explicit functions of phosphorus activity aP and temperature T for given y, and hence they are not readily usable for evaluating P solubility in molten Fe1-yCry at arbitrary T under a given aP. In the present work, an empirical expression was derived for the solubility x in Fe1-yCryPx as a function of T and aP at given y from the reported aP-T-x-y relationships, which were available only in discrete tabulated format. Such an analytical expression for the solubility x might allow us to proceed with a more profound consideration of atomic interaction and atomic configuration in molten Fe1-yCryPx on the basis of statistical thermodynamics.
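As a minimal sketch of how such an empirical expression can be extracted from tabulated a-T-x data, the fit below assumes a simple functional form ln x = a0 + a1/T + a2·ln aP and fits it by linear least squares; both the functional form and the data points are hypothetical placeholders, not the values reported in the paper.

```python
# Illustrative sketch: fitting an empirical solubility expression of the form
# ln x = a0 + a1/T + a2*ln(aP) to tabulated activity-temperature-solubility
# data at one fixed Cr fraction y. All numbers below are hypothetical.
import numpy as np

# Hypothetical (T [K], aP, x) triples for one fixed y
data = np.array([
    (1823.0, 0.010, 0.021),
    (1823.0, 0.050, 0.058),
    (1873.0, 0.010, 0.018),
    (1873.0, 0.050, 0.051),
    (1923.0, 0.010, 0.016),
    (1923.0, 0.050, 0.045),
])
T, aP, x = data[:, 0], data[:, 1], data[:, 2]

# Linear least squares in the transformed variables (1, 1/T, ln aP)
A = np.column_stack([np.ones_like(T), 1.0 / T, np.log(aP)])
coef, *_ = np.linalg.lstsq(A, np.log(x), rcond=None)
a0, a1, a2 = coef
print(f"ln x = {a0:.3f} + {a1:.1f}/T + {a2:.3f} ln aP")

# Predicted solubility at an arbitrary condition
T_q, aP_q = 1850.0, 0.02
x_q = np.exp(a0 + a1 / T_q + a2 * np.log(aP_q))
print(f"x(T={T_q} K, aP={aP_q}) = {x_q:.4f}")
```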
Blast furnace (BF) slag in the south-western part of China contains a large amount of titanium. For the purpose of beneficial utilization of the titanium in the slag, microwave (MW) processing of the slag has been proposed. In this study, fundamental research on the heating behavior of the slag under 2.45 GHz MW irradiation was performed. The slag specimens can be heated in a domestic MW oven at constant input power (0.5 kW), and a sudden increase in temperature is observed after several heating cycles; that is, so-called thermal runaway (TRW) occurred. A part of the specimen melted, and XRD analysis indicated that this area became amorphous. Crystallization of the CaTiO3 phase occurred during MW heating of the melted slag at 600°C. A sintered CaTiO3 compact was heated by MW to a much greater extent than the synthetic Chinese slag, whereas synthetic slag containing no Ti was hardly heated at all. The permittivity (loss factor) of CaTiO3 was measured and demonstrated to be much larger than that of the other oxides in the slag. Therefore, it is concluded that the CaTiO3 phase is responsible for the heating of the whole slag.
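The link between the measured loss factor and heating rate can be illustrated with the standard volumetric dielectric-heating relation P = 2πf·ε0·ε''·E²; the loss factors and field strength in the sketch below are hypothetical round numbers chosen only to show why a high-ε'' phase such as CaTiO3 dominates the heating, and are not the values measured in the paper.

```python
# Sketch: volumetric microwave power absorbed by a dielectric phase,
# P = 2*pi*f * eps0 * eps'' * E^2. All inputs are illustrative assumptions.
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
F = 2.45e9         # microwave frequency, Hz
E = 1.0e4          # assumed RMS electric field inside the load, V/m

for phase, eps_loss in [("CaTiO3", 1.5), ("other slag oxides", 0.05)]:
    p = 2 * math.pi * F * EPS0 * eps_loss * E**2   # W/m^3
    print(f"{phase:>18}: P = {p / 1e6:6.2f} MW/m^3")
```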
The precipitation and growth of a V-concentrating phase were investigated in a synthetic V-bearing steelmaking slag based on the composition of factory slag from the Masteel Co. X-ray diffraction (XRD) and scanning electron microscopy (SEM) with energy dispersive X-ray spectrometry (EDX) were used to examine the slag after heat treatment and to determine the temperature at which crystallization of the V-concentrating phase is initiated. It is demonstrated that whitlockite with a high content of V2O5 (called the V-concentrating phase) nucleates homogeneously and heterogeneously at 1623-1598 K. When the slag is held at 1548 K, the crystals of the V-concentrating phase grow with increasing holding time, as analyzed using crystal size distribution (CSD) theory. Observation of the microstructures and the crystallite-size data indicates that the precipitation of the V-concentrating phase proceeds via three different mechanisms: nucleation, growth, and coalescence of grains of the V-concentrating phase.
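A conventional CSD analysis of this kind fits ln n(L) = ln n0 − L/(Gτ) to the measured population density versus crystal size, yielding the nucleation density n0 and the product of growth rate G and holding time τ. The sketch below assumes this standard linear CSD form; the size-distribution data are hypothetical and do not reproduce the paper's measurements.

```python
# Sketch of a standard CSD fit: ln n(L) = ln n0 - L/(G*tau).
# Size bins and population densities below are hypothetical.
import numpy as np

L_um = np.array([5.0, 10.0, 20.0, 40.0, 60.0])      # crystal size, micrometres
n = np.array([3.2e6, 1.9e6, 7.1e5, 1.0e5, 1.4e4])   # population density (illustrative units)

slope, ln_n0 = np.polyfit(L_um, np.log(n), 1)       # linear fit in semilog space
G_tau = -1.0 / slope                                # characteristic crystal size
print(f"nucleation density n0 = {np.exp(ln_n0):.2e}, G*tau = {G_tau:.1f} um")
```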
The solubility of oxygen in iron-silicon melts in equilibrium with silica was measured over the range from 0.1 up to 70 mass% Si at 1873 K. The experimental procedure involved melting the alloys in silica crucibles under an argon atmosphere. Samples were taken by sucking the melt into quartz tubes equipped with copper chillers. The oxygen content of the analytical samples was determined by inert gas fusion analysis after careful sample preparation. The results obtained were treated with a thermodynamic model, which allowed the activity and solubility of oxygen in Fe-Si melts to be calculated up to 100 mass% Si. The isotherm of oxygen solubility exhibits both an intermediate minimum and a maximum, at 20 and 85 mass% Si, respectively; the corresponding oxygen saturation contents are 1.4 and 94 μg/g. The activity coefficient of oxygen shows alternating deviations from additive behaviour. These are positive in the iron-rich melts containing up to 45 mass% Si; in melts with higher silicon content the deviations from additivity are negative. The following values of the interaction parameters were calculated: ε_O^Si(Fe) = 12.9±2.7 and ε_O^Fe(Si) = -6.5±2.0.
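A minimal sketch of how such interaction parameters enter an activity-coefficient calculation is given below, assuming a simple first-order Wagner-type expansion about each solvent end (ln γ_O = ln γ_O° + ε·x_solute, in mole fractions); the paper's full thermodynamic treatment may be more elaborate, and the reference value γ_O° and the composition grid are assumptions.

```python
# Sketch: first-order Wagner-type expansion for the oxygen activity
# coefficient in Fe-Si melts, using the interaction parameters quoted in
# the abstract. The expansion form, mole-fraction basis and gamma_O0 are
# simplifying assumptions, not the paper's complete model.
EPS_SI_O_IN_FE = 12.9   # epsilon_O^Si in Fe-rich melts (from the abstract)
EPS_FE_O_IN_SI = -6.5   # epsilon_O^Fe in Si-rich melts (from the abstract)

def ln_gamma_ratio(x_si: float) -> float:
    """ln(gamma_O / gamma_O0), expanding about the nearer solvent end."""
    if x_si <= 0.5:                                  # expand about pure Fe
        return EPS_SI_O_IN_FE * x_si
    return EPS_FE_O_IN_SI * (1.0 - x_si)             # expand about pure Si

for x_si in (0.05, 0.20, 0.45, 0.85):
    print(f"x_Si = {x_si:.2f}: ln(gamma_O/gamma_O0) = {ln_gamma_ratio(x_si):+.2f}")
```

With the quoted signs, the expansion reproduces the qualitative trend stated above: positive deviations on the iron-rich side and negative deviations on the silicon-rich side.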
Manganese furnace dust is made up of volatiles and raw-material fines collected from the off-gas during smelting of manganese alloys. Currently, manganese furnace dust is accumulated in large settling ponds. The major factors preventing recycling of the manganese furnace dust to the ferroalloy furnaces are handling, due to its tar content, and accumulation of zinc in the furnaces, which can cause irregularities in their operation. This paper presents the characteristics of manganese furnace dust generated in ferromanganese and silicomanganese production at the Tasmanian Electrometallurgical Company and analyses zinc balances in light of furnace dust recycling. If manganese furnace dust is recycled to the ferroalloy furnaces via the sinter plant, the overall zinc input will increase by 51-143% depending on the charge materials.
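The scale of the reported zinc build-up can be checked with a simple input balance. In the sketch below, all stream tonnages and Zn assays are invented placeholders; only the structure of the balance (baseline Zn input versus input with recycled dust) follows the paper's analysis.

```python
# Toy zinc input balance for dust recycling via the sinter plant.
baseline = {                      # stream: (t/day charged, mass fraction Zn)
    "ore_and_fluxes": (1000.0, 0.00005),
    "coke":           (300.0,  0.00002),
}
dust = (25.0, 0.0030)             # hypothetical Zn-rich furnace dust stream

zn_base = sum(m * w for m, w in baseline.values())
zn_with_dust = zn_base + dust[0] * dust[1]

print(f"baseline Zn input : {zn_base:8.3f} t/day")
print(f"with recycled dust: {zn_with_dust:8.3f} t/day "
      f"(+{100 * (zn_with_dust - zn_base) / zn_base:.0f} %)")
```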
The kinetics of the reduction of hematite pellets using hydrogen-carbon monoxide mixtures as the reducing agent was described using the "grain model". This model involves the particle size and the porosity of the pellet as the main structural parameters that directly affect the kinetics of the hematite pellets during the reduction process. The predictions of the model were compared with experimental results. Fired hematite pellets were reduced at 850°C using hydrogen, carbon monoxide and Midrex gas, and the weight loss technique was used to follow the reduction process. The reduction of iron oxide pellets using hydrogen or carbon monoxide is a mixed-controlled system, where chemical reaction and internal gas diffusion are competing processes during the first stage of the reduction, while internal gas diffusion becomes the controlling step in the last stage of the process. The reduction of iron oxide pellets using Midrex gas is a mixed-controlled system throughout the whole reduction process.
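A common way to express such mixed control is to add the characteristic times of the competing steps; the sketch below uses the classic shrinking-core conversion functions for reaction control and internal diffusion control. The characteristic times are placeholders, not fitted parameters from the paper, and the grain model proper would apply this at the level of individual grains inside the porous pellet.

```python
# Mixed chemical-reaction / internal-diffusion control (additive resistances).
tau_chem, tau_diff = 600.0, 900.0   # characteristic times in s (hypothetical)

def time_to_conversion(X):
    g = 1.0 - (1.0 - X) ** (1.0 / 3.0)                          # reaction term
    p = 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)  # diffusion term
    return tau_chem * g + tau_diff * p

for X in (0.2, 0.5, 0.8, 0.95):
    print(f"X = {X:4.2f}  t = {time_to_conversion(X):7.1f} s")
```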
Using the sessile drop approach, interfacial reactions taking place in the iron/carbon interfacial region were investigated at 1550°C in a horizontal tube resistance furnace under an argon atmosphere. Two coal chars, labelled 1 and 2 with respective ash concentrations of 10.88 wt% and 9.04 wt%, and electrolytically pure iron were used in this study. Liquid iron droplets were exposed to the chars at high temperatures for times ranging from 1 to 180 min, and the assembly was then withdrawn into the colder section, quenching the droplet. To examine the time-dependent growth of new phases formed in the interfacial region, FESEM and EDS investigations were carried out on the underside of the droplet, which effectively represents the iron/char interface. The transfer of carbon and sulphur into the iron droplet was also determined using a LECO analyser. Interfacial regions for both chars showed a high occurrence of ash deposits, which were found to increase with time. Al, Ca, S, O and Fe were also detected in EDS analysis of the interface. However, very low levels of Si were found in the interfacial region despite initially high concentrations of silica in the chars, suggesting chemical reactions involving silica. After three hours of contact, carbon pick-up by the liquid iron reached only 0.12 wt% and 0.28 wt% for Char 1 and Char 2 respectively, both of which are much below the saturation level of 5.6 wt%. These results are discussed in terms of the formation of interfacial products, the consumption of solute carbon by reducible oxides and the low intrinsic rates of carbon dissolution from non-graphitic chars.
Fractal analysis of data on the silicon content in hot metal, obtained from the No. 1 blast furnace at Laiwu and the No. 6 blast furnace at Linfen Iron and Steel Group Co. respectively, is performed using the power spectrum method to examine possible scale-invariance laws. The results confirm the existence of fractal characteristics in the investigated time series, which provides a powerful tool to explore the complex blast furnace system and makes the application of fractal theory to the blast furnace full of potential and attraction.
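The power spectrum test works by checking whether the spectral density of the time series follows a power law S(f) ∝ f^(-β); a straight line in log-log coordinates then indicates scale invariance, and β can be related to a fractal dimension. The sketch below applies the test to a synthetic series standing in for the hot-metal silicon record; the data and sampling interval are illustrative assumptions.

```python
# Power-spectrum check for scale invariance of a time series.
import numpy as np

rng = np.random.default_rng(0)
si = 0.5 + 0.01 * np.cumsum(rng.standard_normal(2048))  # synthetic [Si] series

psd = np.abs(np.fft.rfft(si - si.mean())) ** 2
f = np.fft.rfftfreq(si.size, d=1.0)

mask = f > 0                                   # drop the zero-frequency bin
slope, intercept = np.polyfit(np.log(f[mask]), np.log(psd[mask]), 1)
beta = -slope
print(f"spectral exponent beta ~ {beta:.2f}")  # ~2 for a simple random walk
```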
A three-dimensional numerical model of pulverised coal injection has been developed for simulating coal flow and combustion in the tuyere and raceway of a blast furnace. The model has been used to simulate previously reported combustion tests, which feature an inclined co-axial lance with an annular cooling gas. The predicted coal burnout agrees well with that measured for three coals with volatile contents ranging between 20.2-36.4% and particle sizes of 1-200 μm. Many important phenomena, including flow asymmetry, recirculating flow and particle dispersion in the combustion chamber, have been predicted. The current model can reproduce the experimental observations, including the effects on burnout of the coal flowrate and of the introduction of methane for lance cooling.
The present analysis of the experimental data of Yamamoto et al. permitted determination of the rate-controlling mechanisms for the decarburization and evaporative manganese loss taking place concurrently during the oxygen refining of ferromanganese melt. When oxygen is supplied in a top lance blowing mode, the decarburization reaction proceeds by three different mechanisms in sequence: the chemical reaction at the melt-gas interface controls the rate of decarburization during the first period, the rate of oxygen supply through the gas boundary layer during the second period, and the mass transfer rate of carbon in the melt during the third period, when the carbon content is less than 2 mass%. Manganese is lost primarily by evaporative reaction, but its dynamics are affected by the prevailing excess oxygen after accounting for CO formation. The excess oxygen and manganese vapor establish counter-current fluxes and form MnO mist at some distance away from the metal-gas interface. This creates two diffusion boundary layers, one for the flux of manganese vapor adjacent to the melt-gas interface and the other for the flux of excess oxygen in the gas phase. When the vapor pressure of manganese at the metal-gas interface is low, the rate of manganese vapor loss is controlled by the flux of excess oxygen; otherwise, it is determined by the flux of manganese vapor.
To improve the temperature of continuously cast slabs, a mathematical heat transfer model simulating the solidification process was developed based on the technical conditions of the slab caster at the steelmaking plant of Wu-Han Iron and Steel Group Corp., with which the slab temperature distribution and shell thickness were computed. The adequacy of the model was verified by comparison with the measured slab surface temperature at the caster exit. The effects of the main operating parameters, including casting speed, secondary cooling conditions, slab size and steel melt superheat, on the solidification process were discussed, and means of enhancing the slab temperature were put forward. Raising the casting speed from 1.0 or 1.1 to 1.3 m/min, controlling the flow rate of secondary cooling water and optimizing the spray pattern in the lower segments of the secondary cooling zone can effectively improve the slab temperature, whereas increasing the superheat is adverse to the production of high-temperature slabs. The results of the model research have been applied to plant operation at the steelmaking plant of Wu-Han Iron and Steel Group Corp.: the slab surface temperature has risen from 900 to 1250°C, and the slabs are fed directly to the rolling mill after exiting the caster.
To exhibit good all-round performance, enhancement of the impact toughness of as-cast high-speed steels is essential. In general, different methods are used commercially to achieve cast structure refinement and, as a consequence, improved properties; introduction of inoculant particles or surface-active additions into the melt is among the most beneficial. However, the effect of modifying additions in as-cast high-speed steels has been studied insufficiently, and a restricted number of modifiers is used for structure and property improvement in as-cast high-speed steels compared to common cast alloys. In the present work several alloys, including tungsten-molybdenum high-speed steels of the M2 and T30 types and a low-alloy tungsten-free 1.1C-5Mo-1.7V high-speed steel, were melted to investigate the effect of bismuth on their structures and properties. It has been found that additions of bismuth produce a very fine cast structure and affect the shape of the matrix grains and the morphology of the eutectic carbides, as well as the redistribution of the main alloying elements of high-speed steels between solid solution and eutectic carbides. The microstructural changes induced by bismuth during solidification are explained by the surface activity of bismuth, which segregates to the liquid/solid interface, significantly blocking dendrite growth along certain crystallographic planes. It has been shown that during eutectic solidification the carbides are also exposed to the barrier effect of bismuth, being surrounded in the melt by this element. The main metallographic features of the modified cast structure, along with the purifying effect produced by bismuth, are retained after full heat treatment, affecting the final mechanical properties of as-cast high-speed steels. As a result, bismuth significantly increases the impact toughness and wear resistance of as-cast high-speed steels but decreases their red hardness.
The low solubility of titanium nitride (TiN) in austenite is taken advantage of in structural steels to control the evolution of the microstructure in the hot rolling process and in the heat affected zone (HAZ) in processes involving the application of heat, such as welding. However, the quantitative influence of the precipitation state of these particles on hot strength and dynamic recrystallisation kinetics, given by the precipitate size distribution and the precipitated volume, is practically unknown. The present work studies the influence of various Ti and N compositions, which give rise to different precipitation states at the reheating temperatures, on the aforementioned phenomena. The influence of the precipitation state on hot strength has been quantified through changes in the peak stress and the activation energy; a maximum in the activation energy has been obtained at a Ti/N ratio of approximately 1.3. The model used to predict the flow curve and dynamic recrystallisation kinetics has been improved, extending it to include microalloyed steels containing Ti.
Strip track-off, particularly at high speed, is a serious operational problem that can lead to mill crashes and damaged rolls. It predominantly occurs at the entry of the tandem cold mill, where the entry tension is relatively low. This paper presents an analysis of the effect of entry tension on the stability of strip tracking in the first stand of a cold rolling mill, using a recently developed mathematical model of strip lateral dynamics in cold rolling. The analysis reveals that the entry tension is a crucial parameter in stabilizing the strip tracking if buckling of the strip is present. A procedure for selecting the entry tension that ensures stable strip tracking for a given mill schedule is discussed.
This paper describes the influence of sliding wear conditions and post-weld heat treatment on the abrasive wear resistance of iron-based hardfacing overlays (Fe-30Cr-3.6C) deposited on mild steel. Overlays were deposited by a shielded metal arc (SMA) welding process on mild steel using a commercially available hardfacing electrode (Sugar-Arc) of 4.0 mm diameter, with a welding current of 250 A (DCEN) and a welding speed of 15 cm/min. The abrasive wear resistance of the overlays in the as-welded and heat-treated conditions was tested using a pin-on-disc wear testing machine against 320 grade SiC abrasive paper at different normal loads (1-4 N). Optical microscopy was used to study the microstructure of the overlays, and scanning electron microscopy (SEM) of the worn surfaces was carried out to analyze the wear mechanism. Variation in hardness across the coating-substrate interface was observed. Post-weld heat treatment improved the abrasive wear resistance.
A modified phosphate coating on automobile iron castings is described in this paper. The phosphating bath was modified by adding sodium molybdate. The microstructure of the phosphate coating was remarkably refined as the addition of Na2MoO4 was increased to 1.5-2.0 g/L. As a result, the corrosion current of the coatings, measured by electrochemical polarization, decreases with increasing Na2MoO4, which indicates an increase in the corrosion resistance of the coatings. The modified phosphate coating was applied to automobile castings as an intermediate protective layer before final painting. It was shown that the coating containing molybdate can improve the adhesion of paint to the automobile casting. Salt spray tests and atmospheric corrosion tests indicated that the anticorrosion performance of the paint plus phosphate coatings on automobile castings was also improved significantly.
The pitting corrosion behavior of austenitic stainless steels with Mo was investigated in chloride and bromide solutions. The steels with a higher S content of 0.009 mass% showed a higher pitting potential in 1 kmol/m3 bromide solution than in 1 kmol/m3 chloride solution. On the other hand, the steels with a lower S content of 0.0003 mass% showed the opposite result: the higher pitting potential was observed in chloride solution. This tendency became more pronounced after passivation treatment with nitric acid. From the experimental results above, it is strongly suggested that chloride ions are more detrimental to pitting induced by sulfide inclusions than bromide ions, whereas pitting due to breakdown of the passive film at sites other than sulfide inclusions is more easily caused in solutions containing bromide ions than chloride ions.
A numerical model was developed to simulate the competing precipitation of Cu particles on dislocations and in the matrix in Fe-Cu alloys. The nucleation and growth rates and the remaining Cu concentration in the matrix were calculated successively over a large number of fine discrete time steps. In the absence of dislocations, the results for precipitation in the matrix, which was assumed to occur homogeneously, were in essential agreement with those of the Langer-Schwartz (L-S) model and Lifshitz-Slyozov-Wagner (LSW) coarsening theory. Heterogeneous precipitation on dislocations was incorporated taking into account the development of a solute-depleted zone around the dislocations. The coarsening behavior of particles on dislocations and in the matrix deviated substantially from those of previous theories, probably due to the interaction of the diffusion fields of the heterogeneous and homogeneous precipitation zones; in other words, coarsening can occur at the expense of smaller particles nucleated in the matrix at later stages. The bcc to fcc transformation of Cu particles that occurs during growth is likely to accelerate the coarsening of the Cu particles. The simulation results agreed well with experiment with respect to the particle number and mean particle radius, but the model yielded a considerably narrower particle size distribution than the experiments reported in the literature.
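The successive time-stepping scheme described above can be sketched as follows: at every step the nucleation and growth rates are evaluated from the current matrix Cu content, then the particle population and the remaining solute are updated. All rate laws and material constants below are crude placeholders, not the paper's parameters, and the dislocation/matrix competition and the bcc-to-fcc transition are omitted for brevity.

```python
# Skeleton of a nucleation-growth time-stepping (KWN-style) calculation.
dt, steps = 0.01, 10_000
c_matrix, c_eq = 0.015, 0.0002     # matrix and equilibrium Cu (at. fraction)
n, r = 0.0, 1e-3                   # number density and mean radius (arb. units)
J0, D = 1.0e3, 1.0e-4              # nucleation prefactor, diffusivity (toy)

for _ in range(steps):
    s = max(c_matrix - c_eq, 0.0)  # supersaturation
    J = J0 * s ** 2                # toy nucleation rate
    v = D * s / r                  # toy diffusion-limited growth velocity
    n += J * dt
    r += v * dt
    c_matrix = max(c_eq, c_matrix - (J * r**3 + 3.0 * n * r**2 * v) * dt)

print(f"n = {n:.3g}, mean r = {r:.3g}, residual c_matrix = {c_matrix:.4f}")
```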
The effects of copper, along with some microalloying elements and the processing parameters, are modeled with an artificial neural network and an adaptive neuro-fuzzy inference system. Both tools are found to be useful for modeling the effect of copper and other alloying additions, along with the processing parameters, on the hardness of microalloyed DP steels. In the case of the neural network, the proposed committee of models is found to be effective in handling the problem of mapping the input-output relation in these steels. An increase in the number of rules is found to improve the predictability of the neuro-fuzzy inference system. The predictions made by both models substantiate the knowledge of physical metallurgy principles.
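A "committee of models" in this sense simply means training several networks from different random initialisations and averaging their outputs; the spread across members also gives a crude error bar. The sketch below uses scikit-learn MLPs on invented composition/processing data, so the feature names, the toy target function, and all numbers are assumptions for illustration only.

```python
# Minimal committee-of-models sketch for hardness prediction.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 4))   # e.g. [Cu, microalloy, coiling T, cooling rate]
y = 150 + 80 * X[:, 0] + 40 * X[:, 1] - 20 * X[:, 2] + rng.normal(0, 5, 200)

committee = [
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=s).fit(X, y)
    for s in range(5)
]

x_new = np.array([[0.5, 0.3, 0.4, 0.6]])
preds = np.array([m.predict(x_new)[0] for m in committee])
print(f"committee mean = {preds.mean():.1f}, member spread = {preds.std():.1f}")
```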
The cold formability of drawn non-heat-treated steels, i.e. dual phase (DP) steel, low Si steel and ultra-low-carbon bainitic (ULCB) steel, was examined in terms of the deformation resistance and the forming limit. The present investigation aimed at elucidating the effect of drawing on the cold formability of non-heat-treated steels, which is directly affected by drawing since no heat treatments are involved in their forming processes. Special care was taken so that the present steels exhibited similar strength after drawing, thus eliminating the strength effect. The present steels after drawing revealed elastic-nearly perfectly plastic behavior in compression. After drawing, the low Si steel exhibited the lowest deformation resistance, as estimated by the energy absorbed during deformation. In terms of the forming limit, expressed as the critical strain below which no cracking occurs during upsetting tests of the drawn steels, the low Si steel and the ULCB steel were better than the conventional heat-treated steel. Accordingly, among the non-heat-treated steels that can replace conventional heat-treated forging steels, the low Si steel seems to exhibit the best performance at similar strength. The compressive deformation behavior of the drawn non-heat-treated steels is discussed in association with the strain-hardened state and the Bauschinger effect developed by drawing. In addition, their cold formability is explained by the plastic incompatibility between the constituent phases of each steel.
This study aims at casting new light on the metallurgical techniques developed by the Etruscans and the Romans during their political and cultural interactions in Central Italy. The analysis of two weapons found at the Etruscan sites of Vetulonia and Chiusi has brought out new information about the production processes employed. Optical microscopy allowed identification of the sequence of microstructural constituents present in the two ancient weapons. SEM-EDS permitted identification of the chemical composition of the non-metallic inclusions and estimation of the average temperature of the reduction process. Analysis of the metal matrix by a coupled argon plasma spectrometer permitted measurement of the average chemical compositions of the studied alloys. SEM-EBSD analysis allowed identification of the crystallographic textures present within the different zones of the sword blades, indicating a forming process that gave the metal products interesting mechanical properties. The results obtained by the Etruscan artisans were of very high quality, and their production system was certainly assimilated by the Romans, who found in it a strategic factor for increasing their power.
Remarkable pseudo-Egyptian facade
In 1834 John Lavin, a bookseller, of Penzance bought two cottages in Chapel Street for £396 and proceeded to raise the height of the building and to add to its street front the present remarkable pseudo-Egyptian facade. The Royal Arms on the building suggest that it was complete before the accession of Queen Victoria in June 1837. John Lavin sold maps, guides and stationery in the Egyptian House, but his main business was in minerals which he bought, sold and exhibited here.
The exotic building must have been intended to emphasise the bizarre and beautiful side of geological specimens and to draw into the shop visitors to the town. At the time there was a great deal of enthusiasm for the study of minerals and fossils, particularly in Cornwall. Not only were the railways and the fashion for the seaside bringing the beginnings of tourism (Cornwall’s principal business today) to the Duchy, but because of the mining industry the county was also a centre of scientific knowledge and enthusiasm. Cornish miners and engineers were carrying their expertise all over the world. Many of the rare specimens sold by Lavin in Chapel Street were found by Cornish miners while at work in the county, but others were brought back to him by those who came home from overseas. He is supposed to have been guilty of the occasional deception!
John Lavin married Frances Roberts in 1822 and they had two children, Edward and John. John, the younger, emigrated to Australia where he was a biscuit-maker. He died in 1881. Edward ran a stationery, bookbinding and printing business in the Egyptian House beside the mineral shop renting the premises from his father. Perhaps he was not keen on geology, because in 1863 a few years after his father’s death, he sold the entire collection of minerals for £2500 to the great Victorian philanthropist, Angela Georgiana, Baroness Burdett-Coutts. With the proceeds he built a large hotel on the esplanade at Penzance, which he called Lavin’s Hotel (now the Mount Bay’s House Hotel).
Motifs derived from the Egyptian style of architecture (obelisks, pyramids, sphinxes etc.) can be found throughout the history of European architecture. The association of so much Egyptian architecture with death meant that it was often used for monuments. With the development of more accurate scholarship, a greater range of forms and ornaments became available to architects especially after the French occupation of Egypt in 1798-9. One of the most prominent English exponents of the style was Thomas Hope (1769-1831) who designed furniture and interiors which were described in scholarly books. But the Egyptian style also appealed to those looking for novelty and publicity and the Egyptian House would seem to be one such example.
Despite much debate on the subject it has never been proved which architect, if any, was responsible for its design. Peter Frederick Robinson and John Foulston are the names most often mentioned. Robinson, who advised the Prince of Wales on the Chinese furnishings at the Brighton Pavilion, was a pupil of Holland and was a successful country house architect. He designed an Egyptian Hall in Piccadilly, London for a collection of curiosities. Again its purpose was largely advertising for its owner William Bullock. But there is no more than its affinity with the Egyptian House to link the two. John Foulston was closer to hand, practising in Plymouth in a great range of different styles. His Classical and Mathematical School, built in the Egyptian style and criticised at the time for being an imitation of Robinson, still stands somewhat altered as the Odd Fellows Hall. Again there is no other evidence to link Foulston to the Egyptian House.
5 things you need to know about 5G
After a decade in the making, 5G is finally coming, and it’s arriving with more buildup than the latest superhero film (and boasting the same amount of super-powers). Carriers are starting to roll out the technology in select cities with plans to enter the mass market in 2020. But what is 5G exactly? How does it work? What’ll it cost? Will it be faster than a speeding bullet...or will it be done in by the dastardly super-villain named Overhype? Here’s what you need to know:
1. What is 5G?
5G is the fifth generation of wireless connectivity, which utilizes a higher-frequency band of the wireless spectrum to transfer data more rapidly than the lower-frequency band used in today’s 4G LTE. And what does that mean for those of us who didn’t major in quantum mechanics? It means things are about to get a lot faster, like 10 to 100 times faster than your typical cellular connection. 5G is capable of surpassing the 1 Gigabit per second (Gbps) threshold and could even reach 10 Gbps.
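To put those numbers in perspective, here is the arithmetic behind "10 to 100 times faster": the time to download a 5 GB file at representative throughputs. The 4G figure is an illustrative assumption; real-world speeds vary widely.

```python
# Download time for a 5 GB file at representative link speeds.
FILE_GB = 5.0
speeds = [("4G LTE (~50 Mbps)", 0.05), ("5G (1 Gbps)", 1.0), ("5G (10 Gbps)", 10.0)]
for label, gbps in speeds:
    seconds = FILE_GB * 8 / gbps   # gigabytes -> gigabits, then divide by Gbps
    print(f"{label:18s}: {seconds:7.1f} s")
```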
2. How does 5G work?
Okay, back to the physics stuff for a minute. The 5G networks will operate in the millimeter wave spectrum—between 30 GHz and 300 GHz. The advantage of using the millimeter wave spectrum is that it sends large amounts of data at very high speeds and, because the frequency is so high, it experiences little interference from most surrounding signals. The challenge is, they don't travel as far as the lower-frequency waves, meaning 5G network carriers will have to use a lot more antennas to get the same coverage as 4G LTE.
3. What are the benefits?
Imagine your car stopping you at an intersection because it senses another vehicle is about to run the red light. Or a specialist performing a complex operation on a patient 10,000 miles away. Or a smart home that alerts you when your back deck is getting too hot for your pet. The real-world applications of 5G can be life-altering, even life-saving. The key benefit of this new technology, along with greater speed, is its low latency—that is, its virtually lag-free connection, which is huge not just for automotive or medical applications, but for simple activities like video conferencing or multiplayer gaming.
4. Will it cost more?
When 4G LTE rolled out, there was no upgrade in price...though you did have to buy a new phone. But by all early indications, it appears adding 5G will mean a bump in your bill. Some carriers have announced their new pricing, which falls somewhere in the $10-extra/month range. Prices and plans vary, of course.
5. When will 5G be available?
The short answer is, sometime in 2020. While some telecom service providers have begun testing their 5G technologies, the upgrade will require a huge infrastructure deployment, involving new antennas, new devices, and new tech inside your phone. In reality, it could be a few years before 5G becomes widely available to the public. So, for the time being, it’s still up to you to keep an eye out for those red-light-ignoring drivers.
Have any questions about bringing the latest wireless and wired communication technologies and services to your business? Contact us today. We offer a suite of solutions that can help your organization operate more efficiently and productively before and after the arrival of 5G! | <urn:uuid:0215af58-65b3-412d-a17b-9f7d78c7e020> | CC-MAIN-2021-49 | https://www.rcn.com/business/insights-and-news/insights-articles/5-things-you-need-to-know-about-5g/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363400.19/warc/CC-MAIN-20211207140255-20211207170255-00471.warc.gz | en | 0.953665 | 740 | 2.53125 | 3 |
Flu vaccines typically protect against three to four influenza viruses that are common during the winter season. According to the CDC, many different flu vaccines are manufactured that are licensed and approved by the United States Food and Drug Administration (FDA).
For the first time, a flu vaccine made from plants has been put to the test in two large-scale clinical trials, which have shown that it can match the commercial vaccines available.
This plant-based vaccine contains virus-like particles that resemble the circulating flu strains, produced in a native Australian relative of the tobacco plant that was genetically modified to make the viral proteins.
Constantly improving flu vaccine for the next season
In contrast to cell cultures or eggs, plants are among the world's most prolific protein producers. They efficiently express proteins of varying complexities and glycosylation patterns, and they can be engineered to produce specific proteins and cultivated at scale, making them an alternative that could help boost the capacity to create seasonal flu vaccines.
The new plant-based flu vaccine underwent two large-scale trials which together enrolled nearly 23,000 participants. The results suggest that the vaccine is both safe and able to compete fairly with commercial flu vaccines, Science Alert reported.
The research team wrote that their studies and clinical development are the largest demonstrations so far of the potential for a plant-based flu vaccine that can be safe, immunogenic, and effective.
Since flu vaccines need to be updated every year for the next flu season, vaccine development is a huge undertaking. Researchers are feverishly looking for new ways to improve their vaccine technology, because the influenza virus is like a chameleon, constantly changing the protein molecules it displays on its outer surface.
Tobacco plant-based flu vaccine
The researchers used Nicotiana benthamiana, an Australian relative of the tobacco plant, to produce the outer shells of influenza viruses, which are then extracted and purified under strict conditions to make a flu vaccine. The vaccine needs to pass Phase III of vaccine development to be deemed safe and effective.
In the first trial, conducted in Asia, Europe, and North America, the researchers found that the vaccine protected two-thirds of the participants from the flu strains circulating in those areas during the 2017-2018 winter season. The result may sound low, but note that the effectiveness of flu vaccines usually varies from year to year depending on the flu strains present during that season.
Nonetheless, the first trial involving more than 10,000 people has proved that the tobacco plant-based vaccine is safe and as effective as the commercial vaccines available.
Moreover, a second trial with more than 12,000 elderly participants was conducted, and the vaccine was found to trigger a substantial increase in immune cells. Like the first trial, it also proved the vaccine to be as safe and effective as the vaccines available today.
"This is the first time a plant vaccine has been tested in a [human] clinical trial," infectious disease researcher John Tregoning added. "It is a milestone for this technology and sows the seeds for other plant-based vaccines and therapeutics."
Child Internet Protection Act
Stratford Public School recognizes federal and state regulations with regard to Internet usage and Internet safety for its students. Stratford Public School Policy and Procedures delineates, for public reference, the safeguards in place to assure safe and responsible use of the Internet for students and staff alike. Internet usage is a privilege and not a right. Appropriate usage is directly related to curricular standards and reflected in lesson plans. It is the intention of Stratford Public School to teach our students how to be good digital citizens, as well as to develop their capacity to use the Internet as a learning tool for the 21st century. See the FCC Consumer Facts link here regarding the Child Internet Protection Act. Also reference this video clip from Google regarding Safe Search.
Introduction of the Chinese language
Spoken by over one billion people, Chinese is the most widely spoken language in the world. But rather than a single language, China has many "languages" or "dialects" that are based on the same written language and hence differ primarily in pronunciation and speech. With seven major language groups, the Chinese languages are strongly associated with individual dialects and particular geographical locations: what is spoken in the North may not be easily understood in the South.
Why Learning Chinese is a Smart Business/Career Move
Like most Westerners, just a few years ago you might not have been thinking very much about China. However, these days China is all over the news. You, like more and more people, may be wondering, "Should I be learning Chinese?" The answer is a resounding "Yes". The following are the top 10 reasons why learning Chinese is a smart business move for the young professional or career-minded person.
Is Chinese Really So Hard to Learn as a Second Language?
Many foreign friends of mine complain to me that Chinese is very hard to learn: the ridiculously difficult writing system, the confusing four tones, the extensive system of measure words, so many things to memorize…
A brief history and classification of Chinese characters
Chinese characters are the writing system used to record the Chinese language. With a history of at least 8,000 years, it is perhaps the oldest surviving writing system in the world.
Name
The word "cross" is recorded in 10th-century English as cros, used exclusively for the instrument of Christ's crucifixion and replacing the native Old English word rood. The word's history is complicated; it appears to have entered English from Old Irish cros, possibly via Old Norse kross, ultimately from the Latin crux (accusative crucem, genitive crucis), "stake, cross". The English verb to cross arises from the noun, first in the sense "to make the sign of the cross"; the generic meaning "to intersect" develops in the 15th century. The Latin word was, however, influenced by a native Germanic word reconstructed as *krukjo (English crook; cognate forms existed in Old English, Old Norse and Old High German). This word, by conflation with Latin crux, gave rise to the Old French term for a shepherd's crook (modern French crosse), adopted in English as crosier. Latin crux referred to the gibbet where criminals were executed, a stake or pole, with or without a transom, on which the condemned were impaled or hanged, but more particularly a cross or the pole of a carriage. The derived verb cruciare means "to put to death on the cross" or, more frequently, "to put to the rack, to torture, torment", especially in reference to mental troubles. In the Roman world, furca replaced crux as the name of some cross-like instruments for lethal and temporary punishment. The field of etymology is of no help in any effort to trace a supposed original meaning of crux. A crux can be of various shapes: from a single beam used for impaling or suspending (crux simplex) to the various composite kinds of cross made from more beams than one. The latter shapes include not only the traditional †-shaped cross (the crux immissa), but also the T-shaped cross (the crux commissa or tau cross), which the descriptions in antiquity of the execution cross indicate as the normal form in use at that time, and the X-shaped cross (the crux decussata or Saint Andrew's cross). The Greek equivalent of Latin crux "stake, gibbet" is stauros, found in texts of four centuries or more before the gospels and always in the plural number to indicate a stake or pole. From the first century BC, it is used to indicate an instrument used in executions. The Greek word is used in descriptions in antiquity of the execution cross, which indicate that its normal shape was similar to the Greek letter tau (Τ).
Pre-Christian
Due to the simplicity of the design (two intersecting lines), cross-shaped incisions make their appearance from deep prehistory: as petroglyphs in European cult caves, dating back to the beginning of the Upper Paleolithic, and throughout prehistory to the Iron Age. Also of prehistoric age are numerous variants of the simple cross mark, including the swastika (crux gammata) with curving or angular lines, and the Egyptian ankh (crux ansata) with a loop. Speculation has associated the cross symbol, even in the prehistoric period, with astronomical or cosmological symbology involving "four elements" (Chevalier, 1997) or the cardinal points, or the unity of a vertical axis mundi or celestial pole with the horizontal world (Koch, 1955). Speculation of this kind became especially popular in the mid- to late-19th century in the context of comparative mythology seeking to tie Christian mythology to ancient cosmological myths. Influential works in this vein included G. de Mortillet (1866), L. Müller (1865), W. W. Blake (1888), Ansault (1891), etc. In the European Bronze Age the cross symbol appears to have carried a religious meaning, perhaps as a symbol of consecration, especially pertaining to burial. The cross sign occurs trivially in tally marks, and develops into a number symbol independently in the Roman numerals (X "ten"), the Chinese rod numerals (十 "ten") and the Brahmi numerals ("four", whence the numeral 4). In the Phoenician alphabet and derived Semitic scripts, the cross symbol represented the phoneme /t/, i.e. the letter taw, which is the historical predecessor of Latin T. The letter name taw means "mark", presumably continuing the Egyptian hieroglyph "two crossed sticks" (Gardiner Z9). According to W. E. Vine's Expository Dictionary of New Testament Words, worshippers of Tammuz in Chaldea and thereabouts used the cross as the symbol of that god.
Christian cross
The shape of the cross (crux, stauros "stake, gibbet"), as represented by the letter T, came to be used as a "seal" or symbol of Early Christianity by the 2nd century. Clement of Alexandria in the early 3rd century calls it "the Lord's sign"; he repeats the idea, current as early as the Epistle of Barnabas, that the number 318 (in Greek numerals, ΤΙΗ) in Genesis 14:14 was a foreshadowing (a "type") of the cross (the letter Tau) and of Jesus (the letters Iota Eta). Clement's contemporary Tertullian rejects the accusation that Christians are crucis religiosi (i.e. "adorers of the gibbet"), and returns the accusation by likening the worship of pagan idols to the worship of poles or stakes. In his book De Corona, written in 204, Tertullian tells how it was already a tradition for Christians to trace repeatedly on their foreheads the sign of the cross. While early Christians used the T-shape to represent the cross in writing and gesture, the use of the Greek cross and Latin cross, i.e. crosses with intersecting beams, appears in Christian art towards the end of Late Antiquity. An early example of the cruciform halo, used to identify Christ in paintings, is found in the Miracles of the Loaves and Fishes mosaic of Sant'Apollinare Nuovo, Ravenna (6th century). The Patriarchal cross, a Latin cross with an additional horizontal bar, first appears in the 10th century. A wide variation of cross symbols was introduced for the purposes of heraldry beginning in the age of the Crusades.
Cross-like marks and graphemes
The cross mark is used to mark a position, or as a check mark, but also to mark deletion. Derived from the Greek letter Chi are the Latin letter X, the Cyrillic letter Kha and possibly the runic Gyfu. Egyptian hieroglyphs involving cross shapes include ankh "life", ndj "protect" and nfr "good; pleasant, beautiful". Sumerian cuneiform had a simple cross-shaped character, consisting of a horizontal and a vertical wedge (𒈦), read as maš "tax, yield, interest"; the superposition of two diagonal wedges results in a decussate cross (𒉽), read as pap "first, pre-eminent" (the superposition of these two types of crosses results in the eight-pointed star used as the sign for "sky" or "deity" (𒀭, DINGIR)). The cuneiform script has other, more complex, cruciform characters, consisting of an arrangement of boxes or the fourfold arrangement of other characters, including the archaic cuneiform characters LAK-210, LAK-276, LAK-278, LAK-617 and the classical sign EZEN (𒂡). Phoenician tāw is still cross-shaped in the Paleo-Hebrew alphabet and in some Old Italic scripts (Raetic and Lepontic), and its descendant T becomes cross-shaped again in the Latin minuscule t. The plus sign (+) is derived from Latin t via a simplification of a ligature for et "and" (introduced by Johannes Widmann in the late 15th century). The letter Aleph is cross-shaped in Aramaic and Paleo-Hebrew. Egyptian hieroglyphs with cross shapes include Gardiner Z9-Z11 ("crossed sticks", "crossed planks"). Other, unrelated cross-shaped letters include Brahmi ka (predecessor of the Devanagari letter क), Old Turkic (Orkhon) d² and Old Hungarian b, and Katakana ナ (na) and メ (me). The multiplication sign (×), often attributed to William Oughtred (who first used it in an appendix to the 1618 edition of John Napier's Descriptio), apparently had been in occasional use since the mid-16th century. Other typographical symbols resembling crosses include the dagger or obelus (†), the Chinese 十 ("ten", Kangxi radical 24) and the Roman numeral X ("ten"). Unicode has a variety of cross symbols in the "Dingbat" block (U+2700-U+27BF): ✕ ✖ ✗ ✘ ✙ ✚ ✛ ✜ ✝ ✞ ✟ ✠ ✢ ✣ ✤ ✥. The Miscellaneous Symbols block (U+2626 to U+262F) adds three specific Christian cross variants: the Patriarchal cross (☦), the Cross of Lorraine (☨) and the "Cross of Jerusalem" (implemented as a cross potent, ☩).
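For readers who want to inspect these code points directly, the short sketch below prints a selection of the cross-like characters from the ranges cited above together with their official Unicode names.

```python
# Print cross symbols from the Dingbats and Miscellaneous Symbols blocks.
import unicodedata

code_points = list(range(0x2715, 0x2721)) + [0x2626, 0x2628, 0x2629]
for cp in code_points:
    ch = chr(cp)
    print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch, '?')}")
```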
Cross-like emblems
The following is a list of cross symbols, except for variants of the Christian cross and heraldic crosses, for which see the dedicated lists at Christian cross variants and Crosses in heraldry, respectively.
Physical gestures
Cross shapes are made by a variety of physical gestures. Crossing the fingers of one hand is a common invocation of the symbol. The sign of the cross associated with Christian genuflection is made with one hand: in Eastern Orthodox tradition the sequence is head-heart-right shoulder-left shoulder, while in Oriental Orthodox, Catholic and Anglican tradition the sequence is head-heart-left-right. Crossing the index fingers of both hands serves as a charm against evil in European folklore. Other gestures involving more than one hand include the "cross my heart" movement associated with making a promise and the Tau shape of the referee's "time out" hand signal. In Chinese-speaking cultures, crossed index fingers represent the number 10.
Other things known as "cross"
* Crux, or the Southern Cross, is a cross-shaped constellation in the Southern Hemisphere. It appears on the national flags of Australia, Brazil, New Zealand, Niue, Papua New Guinea and Samoa.
* Notable free-standing Christian crosses (or summit crosses): the tallest cross, at 152.4 metres high, is part of Francisco Franco's monumental "Valley of the Fallen", the Monumento Nacional de Santa Cruz del Valle de los Caidos in Spain. A cross at the junction of Interstates 57 and 70 in Effingham, Illinois, is purportedly the tallest in the United States, at 198 feet (60.3 m) tall. The tallest freestanding cross in the United States is located in Saint Augustine, FL and stands 208 feet.
* The tombs at Naqsh-e Rustam, Iran, made in the 5th century BC, are carved into the cliffside in the shape of a cross. They are known as the "Persian crosses".
* Cross-ndj (Egyptian hieroglyph)
* Cross-cap, a topological surface
* Crossroads (mythology)
* Crossbuck
References
* Chevalier, Jean (1997). The Penguin Dictionary of Symbols. Penguin.
* Drury, Nevill (1985). Dictionary of Mysticism and the Occult. Harper & Row.
* Koch, Rudolf (1955). The Book of Signs. Dover, NY.
* Webber, F. R. (1927, rev. 1938). Church Symbolism: an explanation of the more important symbols of the Old and New Testament, the primitive, the mediaeval and the modern church. Cleveland, OH.