Fault tree analysis (FTA) was developed in 1962 and is widely used in safety and reliability engineering. Employed in almost every engineering discipline, FTA provides a framework within which the defects and weaknesses of a system can be analyzed qualitatively or quantitatively. A fault tree is a structured logic diagram that shows the cause-and-effect relationships among events in a system. The analysis begins with a “top event,” generally displayed as a rectangle, with related events drawn below it according to their logical relations with the top event, branching downward as in a tree. The top event defines the failure mode of the system or its function, which is then analyzed in terms of the failure modes of its components and influencing factors. FTA starts by identifying the causes of the undesired top event and traces them to their root causes through the treelike structure until all possible primary events are reached. After the top event is identified, intermediate events are defined. An intermediate event is any event other than the top event that can be broken down into the events that could cause it. This process continues until all root causes are identified; these basic events form the lowest level of a fault-tree structure. The relationships between events, including the top event, intermediate events and basic events, are described and presented by logic gates, including the AND gate, OR gate, Inhibit gate and other logic gates. An AND gate indicates that the upper event occurs only if all lower events occur. An OR gate means that the occurrence of any one of the lower events results in the upper event.
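To make the gate logic concrete, here is a minimal sketch of evaluating such a tree quantitatively, assuming independent basic events; the tree shape, names and probabilities are illustrative, not taken from any particular system:

```python
# Minimal fault-tree sketch: AND/OR gates over basic events.
# Assumes basic events are statistically independent; all numbers are illustrative.

def and_gate(probs):
    # Upper event occurs only if ALL lower events occur.
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(probs):
    # Upper event occurs if ANY lower event occurs:
    # P(A or B or ...) = 1 - product of (1 - P(each)).
    p = 1.0
    for x in probs:
        p *= (1.0 - x)
    return 1.0 - p

# Hypothetical tree: pump failure (top) = OR(motor fault, AND(valve A stuck, valve B stuck))
p_motor, p_valve_a, p_valve_b = 0.01, 0.05, 0.05
p_top = or_gate([p_motor, and_gate([p_valve_a, p_valve_b])])
print(f"P(top event) = {p_top:.6f}")  # about 0.0125
```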
On a sunny Friday in February, Marine Ecology Program Director Hannah Webber sent an automatic water monitoring instrument (SeapHOx) out to a site off Schoodic Island for a third year of measuring acidity and other water quality parameters, in collaboration with Acadia National Park. After Webber screwed the sensors and housing back together, “Diver Ed” and his crew transported the SeapHOx to the site, where Ed donned his cold-water SCUBA gear and bolted the instrument’s metal frame to a granite block 40 feet below the surface. Ocean acidification, and its potential to harm shellfish and other marine organisms, is a major concern among marine ecologists and coastal communities. But until a few years ago no one had any data on acidity in the Gulf of Maine. The Schoodic Island site is now part of a network of monitoring sites. The autonomous sensor continuously measures ocean pH, temperature, dissolved oxygen, and conductivity (or salinity); the data will be collected when the SeapHOx is retrieved from the water at the end of the year. Located at the edge of the Eastern Maine Coastal Current, the Schoodic Island site could provide an indication of the chemistry of water coming into the Gulf of Maine from the Northwest Atlantic and Arctic oceans. The pH measurements from the last two years vary widely, which is to be expected according to Joe Salisbury, a professor at the University of New Hampshire and one of Webber’s collaborating scientists. Only after multiple years of data will researchers be able to identify any trends. In the meantime, the SeapHOx is sampling away beneath the surface, recording data on the changing conditions of the ocean that surrounds Acadia.
According to the World Health Organization, 2 billion people lack access to safe drinking water, leading to waterborne diseases and over 500,000 deaths each year. Moreover, by 2040, roughly 1 in 4 children worldwide will be living in areas of extremely high water stress. Kyran Knauf, a French-German freelance designer, decided to tackle this problem in his thesis project at the Design Academy in Eindhoven, where he focused on courses related to the circular economy, low/high-tech products and speculative design. Kyran’s proposal is called Nebula Mesh and takes inspiration from the Cotula Fallax, a plant native to New Zealand that uses its structure to harvest water from the air. Through biomimicry, Kyran has developed a 3D printed mesh that replicates the leaf structures of the Cotula Fallax: its fine hairs have been found to underpin the collection and retention of water droplets on the foliage for extended periods of time. The Nebula Mesh is modular: it can be hung or placed on its base, so it can function in different contexts. The product is easily assembled in 4 simple steps: assemble the base, assemble the mesh with its structure, join the two components and secure them. The structure was designed to be easily transported to remote or hard-to-reach areas. The product was created not only to respond to a supply problem but also to infrastructures that are very often lacking or are vulnerable to external factors such as economic instability, natural disasters or political unrest. The final size of Nebula Mesh should be determined based on the specific needs and requirements of the community or individual user it is intended for. Like Nebula, fog collectors are also solutions inspired by nature – the first documentation dates back to the Inca empire, which exploited the water collected from the leaves of trees – in this case, however, the modern structures use a vertically oriented mesh. The world’s largest “fog harvesting” system was built by WaterFoundation in Morocco to provide locals with clean water.
Service water treatment involves a range of procedures that separate different types of contaminants from the water. During the water treatment process, water is made slightly basic so that the metals present in it precipitate out; this process also removes organic matter. It is an ancient technique, used by the Romans and Egyptians, and remains one of the most common methods in water treatment, including for removing suspended solids from wastewater.

During the different processes, different chemicals are added to the water to achieve the desired results. These chemicals may include disinfectants, corrosion inhibitors, and pH balancing agents, and they are required to meet specific standards set by the EPA. The process is highly effective at removing many kinds of bacteria and viruses from water; the EPA’s site contains more information about water treatment. Its effectiveness depends on the raw water quality and seasonal variations. For example, water that is highly contaminated with microbial pathogens and has a high coliform count may require pre-treatment before undergoing conventional treatment, while for low coliform counts conventional treatment is sufficient. Water treatment can also include biological processes or mechanical methods.

Water treatment is essential to ensure that water is safe for human use and meets environmental standards. The process relies on science and engineering to ensure that the technology works as intended, and there is also an art to delivering a clean, safe water source. Some of the most common technologies for water treatment include ultraviolet irradiation, disinfection, and ozonation. In addition to disinfection, water treatment can also remove suspended particles that can harm human health. The process of water treatment starts with flash mixing of the primary coagulant and polymer. After that, the water is put into a flocculation basin, where it is stirred slowly; the mixture of the chemicals creates a coagulate, or floc, which settles at the bottom of the flocculation basin.

Water treatment technologies for potable water are well developed, and many private companies offer patented technological solutions for a variety of contaminants. In the developed world, water treatment is often automated, though capital and operating costs are influenced by the source water’s quality. The end use of treated water also determines the quality-monitoring technologies needed, and the degree of automation required can depend on local skills and availability.

Public drinking water systems use a combination of different water treatment methods, including coagulation, sedimentation, filtration, and disinfection. Coagulation is often the first step in water treatment: chemicals with a positive charge, commonly specific salts and aluminum, are added to the water, where they adhere to dissolved particles and form larger particles. The next step is flocculation, in which the water is gently stirred to create larger, heavier particles. This step is followed by sedimentation, during which the flocs settle to the bottom of the water supply and are then removed from the water.
Bridges. 1. Definitions and General Considerations. - Bridges (old forms, brig, brygge, brudge; Dutch, brug; German, Brücke; a common Teutonic word) are structures carrying roadways, waterways or railways across streams, valleys or other roads or railways, leaving a passage way below. Long bridges of several spans are often termed "viaducts," and bridges carrying canals are termed "aqueducts," though this term is sometimes used for waterways which have no bridge structure. A "culvert" is a bridge of small span giving passage to drainage. In railway work an "overbridge" is a bridge over the railway, and an "underbridge" is a bridge carrying the railway. In all countries there are legal regulations fixing the minimum span and height of such bridges and the width of roadway to be provided. Ordinarily bridges are fixed bridges, but there are also movable bridges with machinery for opening a clear and unobstructed passage way for navigation. Most commonly these are "swing" or "turning" bridges. "Floating" bridges are roadways carried on pontoons moored in a stream. In classical and medieval times bridges were constructed of timber or masonry, and later of brick or concrete. Then late in the 18th century wrought iron began to be used, at first in combination with timber or cast iron. Cast iron was about the same time used for arches, and some of the early railway bridges were built with cast iron girders. Cast iron is now only used for arched bridges of moderate span. Wrought iron was used on a large scale in the suspension road bridges of the early part of the 19th century. The great girder bridges over the Menai Strait and at Saltash near Plymouth, erected in the middle of the 19th century, were entirely of wrought iron, and subsequently wrought iron girder bridges were extensively used on railways. Since the introduction of mild steel of greater tenacity and toughness than wrought iron (i.e. from 1880 onwards) it has wholly superseded the latter except for girders of less than 100 ft. span. The latest change in the material of bridges has been the introduction of ferro-concrete, armoured concrete, or concrete strengthened with steel bars for arched bridges. The present article relates chiefly to metallic bridges. It is only since metal has been used that the great spans of 500 to 1800 ft. now accomplished have been made possible. 2. In a bridge there may be distinguished the superstructure and the substructure. In the former the main supporting member or members may be an arch ring or arched ribs, suspension chains or ropes, or a pair of girders, beams or trusses. The bridge flooring rests on the supporting members, and is of very various types according to the purpose of the bridge. There is also in large bridges wind-bracing to stiffen the structure against horizontal forces. The substructure consists of (a) the piers and end piers or abutments, the former sustaining a vertical load, and the latter having to resist, in addition, the oblique thrust of an arch, the pull of a suspension chain, or the thrust of an embankment; and (b) the foundations below the ground level, which are often difficult and costly parts of the structure, because the position of a bridge may be fixed by considerations which preclude the selection of a site naturally adapted for carrying a heavy structure. 3. Types of Bridges. 
- Bridges may be classed as arched bridges, in which the principal members are in compression; suspension bridges, in which the principal members are in tension; and girder bridges, in which half the components of the principal members are in compression and half in tension. But there are cases of bridges of mixed type. The choice of the type to be adopted depends on many and complex considerations: - (1) The cost, having regard to the materials available. For moderate spans brick, masonry or concrete can be used without excessive cost, but for longer spans steel is more economical, and for very long spans its use is imperative. (2) The importance of securing permanence and small cost of maintenance and repairs has to be considered. Masonry and concrete are more durable than metal, and metal than timber. (3) Aesthetic considerations sometimes have great weight, especially in towns. Masonry bridges are preferable in appearance to any others, and metal arch bridges are less objectionable than most forms of girder. Most commonly the engineer has to attach great importance to the question of cost, and to design his structure to secure the greatest economy consistent with the provision of adequate strength. So long as bridge building was an empirical art, great waste of material was unavoidable. The development of the theory of structures has been largely directed to determining the arrangements of material which are most economical, especially in the superstructure. In the case of bridges of large span the cost and difficulty of erection are serious, and in such cases facility of erection becomes a governing consideration in the choice of the type to be adopted. In many cases the span is fixed by local conditions, such as the convenient sites for piers, or the requirements of waterway or navigation. But here also the question of economy must be taken into the reckoning. The cost of the superstructure increases very much as the span increases, but the greater the cost of the substructure, the larger the span which is economical. Broadly, the least costly arrangement is that in which the cost of the superstructure of a span is equal to that of a pier and foundation. For masonry, brick or concrete the arch subjected throughout to compression is the most natural form. The arch ring can be treated as a blockwork structure composed of rigid voussoirs. The stability of such structures depends on the position of the line of pressure in relation to the extrados and intrados of the arch ring. Generally the line of pressure lies within the middle half of the depth of the arch ring. In finding the line of pressure some principle such as the principle of least action must be used in determining the reactions at the crown and springings, and assumptions of uncertain validity must be made. Hence to give a margin of safety to cover contingencies not calculable, an excess of material must be provided. By the introduction of hinges the position of the line of resistance can be fixed and the stress in the arch ring determined with less uncertainty. In some recent masonry arched bridges of spans up to 150 ft. built with hinges considerable economy has been obtained. For an elastic arch of metal there is a more complete theory, but it is difficult of application, and there remains some uncertainty unless (as is now commonly done) hinges are introduced at the crown and springings. 
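The rule stated above, that the least costly arrangement equates the superstructure cost of a span with the cost of a pier and foundation, can be recovered from a simple model. As a sketch, assume (this assumption is mine, not the article's) that the superstructure cost of one span of length l grows as the square of the span, s(l) = a l^2, that each pier and foundation costs a fixed amount P, and that a crossing of total length L therefore needs L/l spans:

```latex
\[
C(\ell) = \frac{L}{\ell}\left(a\ell^{2} + P\right) = L\left(a\ell + \frac{P}{\ell}\right),
\qquad
\frac{dC}{d\ell} = L\left(a - \frac{P}{\ell^{2}}\right) = 0
\;\Longrightarrow\;
a\ell^{2} = s(\ell) = P .
\]
```

Any superstructure cost law growing faster than linearly in the span gives the same trade-off: longer spans save piers but cost disproportionately more superstructure, and the minimum total cost falls where the two balance.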
In suspension bridges the principal members are in tension, and the introduction of iron link chains about the end of the 18th century, and later of wire ropes of still greater tenacity, permitted the construction of road bridges of this type with spans at that time impossible with any other system of construction. The suspension bridge dispenses with the compression member required in girders and with a good deal of the stiffening required in metal arches. On the other hand, suspension bridges require lofty towers and massive anchorages. The defect of the suspension bridge is its flexibility. It can be stiffened by girders and bracing and is then of mixed type, when it loses much of its advantage in economy. Nevertheless, the stiffened suspension bridge will probably be the type adopted in future for very great spans. A bridge on this system has been projected at New York of 3200 ft. span. The immense extension of railways since 1830 has involved the construction of an enormous number of bridges, and most of these are girder bridges, in which about half the superstructure is in tension and half in compression. The use of wrought iron and later of mild steel has made the construction of such bridges very convenient and economical. So far as superstructure is concerned, more material must be used than for an arch or chain, for the girder is in a sense a combination of arch and chain. On the other hand, a girder imposes only a vertical load on its piers and abutments, and not a horizontal thrust, as in the case of an arch or suspension chain. It is also easier to erect. A fundamental difference in girder bridges arises from the mode of support. In the simplest case the main girders are supported at the ends only, and if there are several spans they are discontinuous or independent. But a main girder may be supported at two or more points so as to be continuous over two or more spans. The continuity permits economy of weight. In a three-span bridge the theoretical advantage of continuity is about 49% for a dead load and 16% for a live load. The objection to continuity is that very small alterations of level of the supports due to settlement of the piers may very greatly alter the distribution of stress, and render the bridge unsafe. Hence many multiple-span bridges such as the Hawkesbury, Benares and Chittravatti bridges have been built with independent spans. Lastly, some bridges are composed of cantilevers and suspended girders. The main girder is then virtually a continuous girder hinged at the points of contrary flexure, so that no ambiguity can arise as to the stresses.
Fig. 1. - Trajan's Bridge.
Whatever type of bridge is adopted, the engineer has to ascertain the loads to be carried, and to proportion the parts so that the stresses due to the loads do not exceed limits found by experience to be safe. In many countries the limits of working stress in public and railway bridges are prescribed by law. The development of theory has advanced pari passu with the demand for bridges of greater strength and span and of more complex design, and there is now little uncertainty in calculating the stresses in any of the types of structure now adopted. In the modern metal bridge every member has a definite function and is subjected to a calculated straining action. Theory has been the guide in the development of bridge design, and its trustworthiness is completely recognized. The margin of uncertainty which must be met by empirical allowances on the side of safety has been steadily diminished. 
The larger the bridge, the more important is economy of material, not only because the total expenditure is more serious, but because as the span increases the dead weight of the structure becomes a greater fraction of the whole load to be supported. In fact, as the span increases a point is reached at which the dead weight of the superstructure becomes so large that a limit is imposed to any further increase of span.
Fig. 2. - Bridge of Alcantara.
In your organic chemistry class, questions like "predict the product" or mechanism problems may often involve a step with rearrangements (hydride or methyl shifts). 1. How do we know when rearrangements are possible? Simple: any mechanism that involves a carbocation can have a rearrangement. Therefore, SN1 and E1 reactions, as well as reactions of alkenes with HX and H3O+, could have rearrangements. 2. How do we know when we should do a rearrangement? If there is a more stable position for the carbocation on a neighboring carbon in the molecule, a shift will happen. For example, let's say that we made a secondary carbocation, but there is a tertiary position next to it: a rearrangement will occur to move the carbocation to the more stable position. Happy organic chemistry studying, and remember that Transformation Tutoring's amazing tutors are always here to answer any questions and help you ace your organic chemistry class.
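As a study aid, the decision rule above can be written out mechanically. This toy sketch is my own illustration of the stability ordering (real exam questions require drawing the structures, of course):

```python
# Toy model of the rearrangement rule: carbocation stability increases with
# substitution (methyl/1° < 2° < 3°), so a hydride or methyl shift happens when
# a neighboring carbon would host a more stable carbocation. Illustrative only.

STABILITY = {"methyl": 0, "primary": 1, "secondary": 2, "tertiary": 3}

def should_rearrange(current, neighbors):
    """Return the best neighboring position if it beats the current one, else None."""
    best = max(neighbors, key=lambda n: STABILITY[n])
    return best if STABILITY[best] > STABILITY[current] else None

# The example from the post: a secondary carbocation next to a tertiary carbon shifts.
print(should_rearrange("secondary", ["primary", "tertiary"]))  # -> 'tertiary'
```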
Jehossee Island, at approximately 4,500 acres, is separated from the mainland by the Dawho River, South Edisto River, and the Intracoastal Waterway. The Jehossee causeway, the only link to the mainland, was destroyed when the waterway was created in the 1920’s. The majority of the island consists of salt marsh and non-forested freshwater wetlands. Other habitats include upland forested areas and open fields, much of which are becoming reforested. Brackish water impoundments and natural tidal marshes of bulrush, cattail, cordgrass, and sea myrtle, interspersed with adjacent upland areas of wax myrtle, pine and palmettos, oak, and sweetgum, provide a remarkable complex of habitats for a broad spectrum of migratory and resident marsh birds, waterfowl, raptors, songbirds and shorebirds. Many wildlife species call the island home, including the American alligator, white-tailed deer, Eastern cottonmouth, diamondback terrapin, and spotted salamander. Large numbers of wood storks, federally listed as a threatened species, regularly forage in the impoundments on Jehossee Island and roost in the adjacent trees. Marsh birds such as American and least bitterns; king, yellow and black rails; gallinules; and seaside and wintering saltmarsh sparrows find suitable habitats for foraging, roosting and nesting on Jehossee. The refuge manages the water levels in the impoundments through the use of rice trunks in the old abandoned rice fields, similar to how field management was done during the rice culture and antebellum period. A natural haven for wildlife, Jehossee Island is also steeped in the cultural past. Settlement on Jehossee is recorded as early as 1685 – 1700. The island has had a number of different owners throughout its history and is representative of the late eighteenth and early nineteenth century rice plantation era. Rice cultivation, which began in the 1730’s, shaped the economic and cultural life of the low country area until its demise following the Civil War. In 1830, William Aiken acquired the first of his island holdings and, by 1859, he had acquired the remainder of the island. Aiken served in both the South Carolina House of Representatives and Senate, was a state governor, and was elected to the U.S. House of Representatives. Governor Aiken was well known for Jehossee Plantation. During his ownership, over 800 enslaved people lived and toiled on the plantation. As a result of enslaved people’s skills in rice cultivation, Jehossee Plantation became one of the most productive rice plantations in the area and was known as the largest and wealthiest rice plantation in the South. Jehossee Island remained in the Aiken-Rhett-Maybank family until it was sold to the U.S. Fish and Wildlife Service in 1993 as part of the ACE Basin NWR. An archaeological and historical investigation of Jehossee Island was conducted in 2002, locating thirteen significant historical sites on the island. Still standing on Jehossee are the overseer’s house, a chimney of a rice threshing operation, and other historic remains that serve as “placeholders” of sorts, reminding us of a period long past. The refuge manages the island to maintain and preserve in perpetuity the archaeological and historical resources that exemplify the natural and cultural history of South Carolina. These sites receive full protection under the Archaeological Resources Protection Act. 
This is a pretty big question to grapple with, especially if you are only in Primary School! Well, this is exactly what the students in 3-6 Honours did in Semester 1. Throughout Term 1 and Term 2, students engaged with the topic “What Matters”. Each student was able to select a topic of interest which would be their focus for the semester’s work. This unit was composed of a three-stage process involving research, developing ideas and creating. These components were called So What, Now What and Then What. Part ONE: So What? Why does this matter to you? Why is this important? Part TWO: Now What? What are you going to do to help promote or support what matters? Part THREE: Then What? What are your future plans, ideas and actions for this cause? Students did an amazing job researching their topics and diving deep into the issues surrounding them. Based on this research, students developed a design, product or website promoting or supporting their What Matters topic. Many things were created, such as a website with informative videos about how to protect the environment, a 3D web design of a musical building and a model of an electric-powered 4WD. The topics students explored were diverse and completely relevant to them. At the beginning of Term 4, students presented their What Matters topic to the Honours class. Each student shared the process of their learning, from the initial research to the completed end-of-unit product. Each presentation was filmed and can be accessed through the link below. Semester 1 in Honours was a whirlwind of research, learning, creating and fun. Each student has done a fantastic job and should be very proud of all that they have achieved.
ROCHESTER, Minn. — A pair of Mayo Clinic studies shed light on something that is typically difficult to see with the eye: respiratory aerosols. Such aerosol particles of varying sizes are a common component of breath, and they are a typical mode of transmission for respiratory viruses like COVID-19 to spread to other people and surfaces. Researchers who conduct exercise stress tests for heart patients at Mayo Clinic found that exercising at increasing levels of exertion increased the aerosol concentration in the surrounding room. They also found that a high-efficiency particulate air (HEPA) device effectively filtered out the aerosols and decreased the time needed to clear the air between patients. "Our work was conducted with the support of Mayo Cardiovascular Medicine leadership who recognized right at the start of the pandemic that special measures would be required to protect patients and staff from COVID-19 while continuing to provide quality cardiovascular care to all who needed it," says Thomas Allison, Ph.D., director of Cardiopulmonary Exercise Testing at Mayo Clinic in Rochester. "Since there was no reliable guidance on how to do this, we put a research team together to find answers through scientific testing and data. We are happy to now share our findings with everyone around the world." Dr. Allison is senior author of both studies. To characterize the aerosols generated during various intensities of exercise in the first study, Dr. Allison's team set up a special aerosol laboratory in a plastic tent with controlled airflow. Two types of laser beam particle counters were used to measure aerosol concentration at the front, back and sides of a person riding an exercise bike. Eight exercise volunteers wore equipment to measure their oxygen consumption, ventilation and heart rate. During testing, a volunteer first had five minutes of resting breathing, followed by four three-minute bouts of exercise staged ― with monitoring and coaching ― to work at 25%, 50%, 75% and 100% of their age-predicted heart rate. This effort was followed by three minutes of cooldown. The findings were published online in CHEST. The aerosol concentrations increased exponentially throughout the test. Specifically, exercise at or above 50% of age-predicted heart rate showed significant increases in aerosol concentration. "In a real sense, I think we have proven dramatically what many suspected ― that is why gyms were shut down and most exercise testing laboratories closed their practices. Exercise testing was not listed as an aerosol-generating procedure prior to our studies because no one had specifically studied it before. Exercise generates millions of respiratory aerosols during a test, many of a size reported to have virus-carrying potential. The higher the exercise intensity, the more aerosols are produced,” says Dr. Allison. The follow-up study led by Dr. Allison focused on how to mitigate the aerosols generated during exercise testing by filtering them out of the air immediately after they came out of the subject's mouth. Researchers used a similar setup with the controlled airflow exercise tent, particle counter and stationary bike, but added a portable HEPA filter with a flume hood. Six healthy volunteers completed the same 20-minute exercise test as in the previous study, first without the mitigation and then with the portable HEPA filter running. 
A separate experiment tested aerosol clearance time in the clinical exercise testing laboratories, using artificially generated aerosols to measure how long it took for 99.9% of aerosols to be removed. Researchers performed the test first with only the existing heating, ventilation and air conditioning, and then with the addition of the portable HEPA filter running. "Studying clearance time informed us of how soon we could safely bring a new patient into the laboratory after finishing the test on the previous patient. HEPA filters cut this time by 50%, allowing the higher volume of testing necessary to meet the clinical demands of our Cardiovascular Medicine practice," says Dr. Allison. "We translated CDC (Centers for Disease Control and Prevention) guidelines for aerosol mitigation with enhanced airflow through HEPA filters and showed that it worked amazingly well for exercise testing. We found that 96% plus or minus 2% of aerosols of all sizes generated during heavy exercise were removed from the air by the HEPA filter. As a result, we have been able to return to our practice of performing up to 100 stress tests per day without any recorded transmission of COVID in our exercise testing laboratories," says Dr. Allison.
About Mayo Clinic
Mayo Clinic is a nonprofit organization committed to innovation in clinical practice, education and research, and providing compassion, expertise and answers to everyone who needs healing. Visit the Mayo Clinic News Network for additional Mayo Clinic news. For information on COVID-19, including Mayo Clinic's Coronavirus Map tracking tool, which has 14-day forecasting on COVID-19 trends, visit the Mayo Clinic COVID-19 Resource Center.
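For context, clearance times like these are usually estimated with the well-mixed-room model behind the CDC's published air-change tables. The sketch below uses that model with illustrative air-change rates; whether the study computed it exactly this way is an assumption:

```python
import math

# Well-mixed-room estimate of aerosol clearance time (the model behind the
# CDC air-change tables). Air-change rates here are illustrative, not the
# study's measured values.
def clearance_minutes(ach, removal=0.999):
    """Minutes for `removal` fraction of aerosols to clear at `ach` air changes/hour."""
    return math.log(1.0 / (1.0 - removal)) / ach * 60.0

# Example: 6 ACH from HVAC alone vs. 12 ACH with a portable HEPA unit added.
for ach in (6.0, 12.0):
    print(f"{ach:>4.0f} ACH -> {clearance_minutes(ach):5.1f} min for 99.9% clearance")
# Doubling the effective air-change rate halves the clearance time,
# consistent with the ~50% reduction reported above.
```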
Early mechanical timepieces didn’t have hands. They signaled time with bells. Then one hand was introduced, indicating the hour only, until eventually sophisticated mechanics introduced the more precise minute and then second hands. Because clocks were invented in the northern hemisphere, the hands followed the same direction as the shadows on a sundial. If they’d been invented in the southern hemisphere, “clockwise” would be in the opposite direction. That’s why the hands of a clock move to the right.
Moderate Resolution Imaging Spectroradiometer (MODIS) data in the form of the Normalized Difference Vegetation Index (NDVI), an index that shows plants’ “greenness,” or photosynthetic activity, is helping researchers better understand risk factors associated with Rift Valley fever outbreaks in Southern Africa. A recent study indexed in the National Center for Biotechnology Information’s PubMed looked at epidemiological and environmental risk factors from 2008 – 2011, during the worst outbreak of Rift Valley fever in almost 40 years. Periods of widespread and above-normal rainfall are associated with Rift Valley fever outbreaks. Researchers combined data from the World Animal Health Information Database (WAHID) on which species were affected, where, and when, with environmental factors including rainfall and NDVI. The results of the study show that these environmental factors, along with geographic factors (topography, drainage, and land use), do play a role in the emergence of Rift Valley fever. This study will improve the accuracy of future models of areas at risk, allowing more time to adequately prepare for and prevent future outbreaks. Read the full article at http://www.ncbi.nlm.nih.gov/pubmed/26273812
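For reference, NDVI is computed from the red and near-infrared bands using its standard definition; a minimal sketch (the band reflectance values here are illustrative, not from the study):

```python
# Standard NDVI definition: (NIR - Red) / (NIR + Red), ranging from -1 to 1.
# Higher values indicate greener, more photosynthetically active vegetation.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(ndvi(nir=0.45, red=0.10))  # ~0.64: dense, active vegetation
print(ndvi(nir=0.25, red=0.20))  # ~0.11: sparse or stressed vegetation
```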
Food production is one of the largest drivers of climate change and environmental degradation. Current diets are contributing to a rising burden of diet-related chronic diseases. To address these intertwined issues, there is an urgent need to transition to sustainable and nourishing dietary patterns. Addressing food production through increasing efficiencies or transitioning to nature-positive food production is necessary but insufficient. It is impossible to meet the 1.5 degree goal without widespread dietary change. Consumption patterns must shift to ensure food and nutrition security, and a livable climate for a growing population. Over 100,000 young people from across the globe have engaged in consultation processes as part of the UN Food Systems Summit and the Act4Food, Act4Change movement. We have shared our views on challenges, solutions and priorities. What came out, loud and clear, is that the top priority of youth is for everyone, globally, to have access to healthy and sustainable diets. Many young people are already acting upon this priority in their own lives and communities. Countless youth have changed their own diets and founded health- and sustainability-oriented organizations, student groups and startups. Young people engaged in the UN Food Systems Summit agree on three key principles for healthy and sustainable diets. First, populations that consume more than the recommended healthy levels of animal-sourced foods (ASF) need to decrease their intake. However, nutritious ASF consumption should be increased among nutritionally vulnerable groups, particularly infants and children in low-income settings. Second, healthy and sustainable diets should contain minimal quantities of ultra-processed foods and foods high in unhealthy fats, salt and sugars. Third, all foods in the diet should be produced regeneratively and humanely. These principles should be adapted to different regions and cultures in correspondence with national, local and Indigenous knowledge systems. They should also work in support of decentralized and smallholder farming practices, and support food sovereignty. Despite this clear mandate, youth’s prioritisation of healthy and sustainable diets is not currently reflected in the COP26 policy narrative. The creation of the Healthy Diets from Sustainable Food Systems Coalition at the UN Food Systems Summit was a step in the right direction, as it will enable stakeholders to come together to make commitments and act on them. But this alone is not enough. The importance of healthy and sustainable diets must be reflected in the COP26 outcomes, as well as in the UN’s biodiversity COP, the Nutrition for Growth Summit, Stockholm+50 and other global fora. Time is precious; we cannot afford to repeatedly miss opportunities that address human, animal and planetary health simultaneously. These global fora can accelerate progress on healthy, sustainable diets by providing aspirational guidance and implementable solutions to countries, civil society and businesses. At these events, healthy and sustainable diets should be prominently featured – both as a standalone issue and as a cross-cutting lever for change when discussing related topics such as biodiversity, water use and pollution, and carbon sequestration. On the eve of COP26 and in full awareness of the urgent need to act on the environmental and health consequences of food systems, we, the undersigned, call on businesses and policymakers to act on the following items:
1. Inclusion of healthy and sustainable diets at forthcoming global fora, including COP26, the Biodiversity COP and the Nutrition for Growth Summit, with a strong focus on developing a global approach to measuring progress on healthy, sustainable diets. We call on COP26 specifically to:
- Encourage member states to include healthy and sustainable diets in their Nationally Determined Contributions
- Establish a session of the Koronivia Joint Work on Agriculture focused on healthy and sustainable diets
- Ensure catering at COP26 and future UNFCCC events is in line with the three principles for healthy and sustainable diets outlined here
2. Governments should support the scaling of regenerative and agroecological production of health-promoting foods through agricultural subsidies.
3. Governments and businesses should include commitments and strategies on healthy, sustainable diets at the heart of 1.5 degree climate and nature commitments.
4. Governments, businesses and academia should adopt frameworks that use true cost accounting of food and measure agricultural success according to quality (nutrients produced), not only quantity (calories and yield) of food.
5. Governments and businesses should invest in sustainable future foods, including alternative proteins, while ensuring a just transition for vulnerable sectors so that workers’ rights and livelihoods are protected.
Including healthy, sustainable diets in climate and nature commitments is crucial to enable implementation of needed policy measures, including the redesign of food environments so that advertising, nudging and behavioural strategies enable, incentivize and empower consumers to make healthy and sustainable food choices, and the reshaping of dietary guidelines and food procurement to incorporate sustainability. Adopting true cost accounting frameworks would enable a much-needed subsidy shift away from systems that damage human and planetary health to those that maintain and restore it. We are at a pivotal moment for collective decision-making. COP26 and other upcoming global fora are where power will be wielded and decisions made that will impact us as youth for decades to come. We commend the UNFSS for its engagement of young people to date. We implore others to emulate this approach. Yet youth engagement will only have been meaningful if the priority of youth is acted upon. Young people understand the importance of healthy and sustainable diets for the health of people and the planet. We have taken action for healthy, sustainable diets and will continue to do so. We now call on you to do the same. -Youth Sustainable Diets Campaign Committee
Remembering the Fall of the Soviet Union: The August Coup of 1991
In the summer of 1991, the world watched in awe as a pivotal event unfolded in the heart of Moscow. On the fateful day of August 19th, a group of high-ranking Soviet officials attempted to overthrow President Mikhail Gorbachev in what came to be known as the August Coup. This audacious move aimed to reverse Gorbachev’s liberal reforms and restore the old order of the Soviet regime. The coup marked a critical turning point in history, ultimately leading to the collapse of the Soviet Union and the dawn of a new era. The August Coup was orchestrated by a group known as the “Gang of Eight,” consisting of influential figures such as Vice President Gennady Yanayev, Defense Minister Dmitry Yazov, and KGB chief Vladimir Kryuchkov. These hardline communists were deeply opposed to Gorbachev’s policies of glasnost (openness) and perestroika (restructuring), which they believed were destabilizing the Soviet Union. Seizing the opportunity presented by Gorbachev’s absence due to a vacation, the coup plotters moved swiftly to take control of vital institutions. On that dramatic morning, tanks rumbled through the streets of Moscow, blockading key government buildings, while the coup leaders declared a state of emergency and imposed martial law. Television and radio stations were quickly taken over, broadcasting messages of support for the coup and attempting to suppress any opposition. However, the coup faced unexpected resistance from the people. A strong wave of public defiance swept across the city, as Muscovites took to the streets in massive numbers to denounce the coup. Crowds gathered around the Russian White House, where Boris Yeltsin, the President of the Russian Federation, stood defiantly atop a tank, rallying the protesters and encouraging them to resist the ousting of Gorbachev. The crowds roared with chants of “We won’t let them pass!” and “Freedom!” as they pushed back against the tanks and soldiers. The international community closely monitored the events unfolding in Moscow. Leaders worldwide condemned the coup and expressed their support for Gorbachev. Western media outlets provided extensive coverage, capturing the spirit of a nation on the brink of change. The coup leaders, isolated and facing massive popular opposition, quickly realized they did not have sufficient support to maintain power. Just three days after the coup began, it ultimately collapsed, leaving the plotters discredited and their ambitions shattered. The August Coup of 1991 was undoubtedly a watershed moment in history. While it was a failed attempt to restore the Soviet Union’s old order, it paradoxically expedited its demise. The events of that summer exposed the deep fractures within the Soviet system, further eroding the authority and legitimacy of the Communist Party. The coup also served as a catalyst for national movements and accelerated the disintegration of the Soviet Union. By the end of the year, the USSR was dissolved, marking the end of an era and paving the way for a new era of independence and uncertainty for the former Soviet republics.
Prepare for the mental health impacts of disasters
Being prepared is about having plans in place so that individuals, communities and services are able to minimise the mental health impacts of a disaster and recover more quickly. Having a better understanding of the common emotional reactions to a disaster, and knowing what support is available, can help you feel more prepared in the event that a disaster occurs. Emotional preparedness can also be supported by:
- Thinking about how you or others might generally respond to high stress situations
- Knowing the early warning signs that tell you that you or others around you are having difficulty coping
- Identifying strategies that can assist with managing your stress levels and wellbeing
- Thinking about potential decisions that might need to be made and developing a plan to assist in making decisions if a disaster occurs
- Knowing what support services are available and how (and when) to access them
- Knowing the potential risks for disasters occurring in the local area (such as bushfires, floods, extreme heat, industrial incidents) and seeking information from the local council regarding local emergency management plans
- Attending local community forums that focus on disaster preparedness and community response and recovery planning
- Connecting with others in the local community and sharing plans within existing support networks to keep each other informed.
Considering the various types of disasters and catastrophes that people face in different parts of the world, it is important to consider how you can plan to provide the most basic human needs, including food and water, for your family and yourself. According to Healthline.com, “You won’t live long without consuming a healthy amount of water. It’s only possible to survive without water for a matter of days. You may be susceptible to the effects of dehydration even sooner, depending on certain factors.” Clearly, it is important that we learn how to purify and store large amounts of water. So how do you store large amounts of water?

How to store large amounts of water in a food-grade barrel or drum:
- Determine how much water you need
- Select a high-quality, large food-grade barrel
- Thoroughly clean and disinfect the barrel
- Add potable drinking water
- Disinfect large amounts of water if needed
- Store large water barrels in a clean and appropriate location

Also, it is critical that you know how to safely and easily purify water at home in case your stored water or other water sources become contaminated. And for those who have colder seasons like us, see how to keep water storage from freezing outside.

Determine how much water you need to store
In order to decide how much water you should store for yourself and each of your family members, consider a few different factors, such as age, health, physical condition, physical activity, diet, and climate. To get a general idea of the minimum amount of water you should store, check out our Water Storage Calculator (a minimal version of the calculation is sketched at the end of this section). The CDC suggests that you store 1 gallon of water per person per day for drinking and sanitization. This might be more than enough water for a toddler to survive, while at the same time it isn’t enough water for an adult to take a comfortable bath. The idea is that 1 gallon per person will sustain life. However, highly active people who live in hot climates may want to increase the amount of water they store for daily use. Take the following into account:
- Children, nursing mothers, and sick people may need more water.
- A medical emergency might require additional water.
- If you live in a warm-weather climate, more water may be necessary. In very hot temperatures, water needs can double.

How much water to store:
|Water storage by use|1 Month of Water for 1 Person|Two Weeks of Water for 1 Person|3 Days of Water for 1 Person|
|Drinking Water| |7 – 8 gallons (26 – 30 Liters)|2 – 3 gallons (7.5 – 11 Liters)|
|Cooking (food preparation) Water| |1 – 1.5 gallons (4 – 6 Liters)| |
|Washing (Sanitizing) Water| |1 – 1.5 gallons (4 – 6 Liters)| |

“Store at least one gallon of water per person for drinking and sanitation. A normally active person needs about three quarters of a gallon of fluid daily, from water and other beverages. However, individual needs vary depending on age, health, physical condition, activity, diet and climate.” (Department of Homeland Security)

Select a high-quality, large food-grade water barrel
Many types of containers are available for water storage. Containers should be “food grade,” meaning they were meant to hold food or water. The most commonly used containers are glass, plastic, and metal. The best containers have secure lids and a spout or spigot that allows for dispensing water with minimal or no contamination.

Plastic (food-grade barrel, drum, or containers)
Plastic bottles or jugs previously used for beverages make excellent containers. They are lightweight and fairly sturdy. 
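Here is the minimal calculation referred to above, a sketch of the CDC's 1-gallon-per-person-per-day rule; the hot-climate doubling follows the note above, and the exact multiplier is an illustrative assumption:

```python
# Sketch of the CDC rule of thumb: 1 gallon per person per day for drinking
# and sanitation. The hot-climate doubling is based on the note above that
# water needs can double in very hot temperatures; it is illustrative only.
def gallons_to_store(people, days, hot_climate=False):
    base = 1.0 * people * days  # 1 gal/person/day
    return base * (2.0 if hot_climate else 1.0)

print(gallons_to_store(people=4, days=14))                    # 56.0 gallons
print(gallons_to_store(people=4, days=14, hot_climate=True))  # 112.0 gallons
# A family of four needs roughly one 55-gallon drum for two weeks.
```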
However, in order to store large amounts of water, we suggest getting a large water storage barrel, such as a 55-gallon (208-liter) drum.

What to look for in a large water storage barrel or drum:
- Made of food-grade plastic that is BPA-free
- Has a top that can be closed tightly
- Made of durable, unbreakable plastic
- 55-gallon water storage barrels are great for storing large amounts of water
- Comes with a pump and hose (getting a hand siphon pump and a hose is important to avoid contaminating your water supply when accessing your water storage)
- Lid opener (sometimes referred to as a bung wrench or drum bung wrench)

So what is the best large 55-gallon water storage barrel/drum? Look for a 55-gallon, BPA-free barrel that includes:
- two fittings
- an easy-to-use hand siphon pump
- a 6-foot siphon hose
- a lid opener
- an Aquamira Water Treatment pack (treats up to 60 gallons of water)

If you have space and you’re looking to have at least one month of water storage on hand, you can’t go wrong with 55-gallon water barrels. They’re made from sturdy food-grade plastic and have bungs at the top that can be sealed super tight in order to protect your water from contamination. The plastic is also BPA-free and UV-resistant. Two of these drums can give a small family up to a couple of weeks of water. This is what I have right now for my water storage solution.

Glass Water Containers
Glass provides a fairly effective container for storage and is non-permeable to vapors and gases. Glass should not be the sole means of water storage since it is easily broken and may be damaged during an emergency. Glass is impractical for storing large amounts of water.

Metal Water Containers
Stainless steel can successfully be used for water storage. Other metals are not optimal containers unless they are coated and made specifically to hold food or water. Pewter or lead-soldered metals should be avoided. Stainless steel can be used to store large amounts of water but can be challenging in several ways, so we prefer to store large amounts of water in food-grade plastic barrels and containers.

Thoroughly clean and disinfect the barrel
Water containers should be cleaned with warm, soapy water and rinsed. Special attention should be given to containers that previously contained food or beverages. To sanitize:
- Fill the container with potable tap water
- Add 1 tablespoon of bleach for each 1 gallon of water
- Shake well, turning the bottle upside down a time or two to sanitize the cap
- Let stand for 1 minute, and then pour out the bleach water
- Let the container air dry

Add potable water
Once the barrel is thoroughly cleaned and located in its permanent home, fill it with the cleanest water available. We use a clean RV drinking water hose (check the price on Amazon) when filling a water barrel to ensure that the water is not contaminated during filling. Garden hoses that have been sitting outside in the yard can collect contaminants. We store our hose with the ends screwed together to prevent contamination. However, a garden hose will get the job done if that is all you have.

Disinfect large amounts of water if needed
Tap water or well water is not sterile. The few microorganisms present can multiply during storage and have the potential to cause illness. Water that is to be stored for long periods of time should be treated to control microbial growth. Be sure to use the best quality water possible for storage. Boiling is a good way to purify water. Bring the water to a rolling boil for 1 to 3 minutes. 
After the water has cooled, fill your clean water storage barrel or drum. Boiled water will taste better if you put oxygen back into it before drinking; to restore the oxygen, pour the water back and forth between two clean containers several times.

Liquid chlorine bleach (unscented) can be used to disinfect water for long-term storage. Use fresh chlorine bleach, since it can lose up to half its strength after 6 months. One gallon of water can be treated by the addition of 1/8 teaspoon of liquid chlorine bleach containing 4 to 6 percent sodium hypochlorite. (Most bleach contains 5.25 percent.) This is equivalent to 8 drops of liquid chlorine bleach. During storage, the bleach will break down into oxygen and table salt.

How much bleach do you add to stored water? Long-term water storage bleach ratio:
|Bleach to Clear Water|2 1/2 cups|2 Tablespoons & 1 teaspoon|
|Bleach to Cloudy Water|1 1/2 cups|1 1/3 cups|

Water Purification Tablets
Different types of tablets are available for water purification purposes. Be sure to follow the manufacturer’s directions for treatment and allow sufficient time for the chemical to work before using the water. Check the label for the expiration date, since the tablets can become ineffective with time. Most tablets have a storage life of 2 to 5 years unopened. Here is a great option for water purification tablets (100 pack) on Amazon!

Water Filtration Units
You can filter water if you have a commercial or backpack filter that filters to 1 micron. Storing a large amount of water might be impractical for you depending on your situation, so having a good water filter that you can fit in a bag or 72-hour kit could be life-saving. For more information about water filters and how to know exactly which one is right for you, see our article Selecting the Best Emergency Water Filters: Buyers Guide. The Survivor Filter PRO (see on Amazon) is one that I use camping and have on hand for emergencies. It is great because it filters to 0.01 microns to remove viruses, bacteria, and parasites, and reduces most heavy metals, tastes, and chemicals. Our favorite personal filtration straw is the Sawyer Mini Filtration System (see on Amazon). The best part about the Sawyer Mini is that it filters up to 100,000 gallons (400,000 liters) of water, whereas the LifeStraw only filters up to 1,000 gallons (4,000 liters) of water.

Bottled Water
Bottled water can be a quick and convenient way to store water, and it is handy to have on hand for drinking. Standards for public water supplies are set by the Environmental Protection Agency, and those for bottled water are set by the U.S. Food and Drug Administration (FDA). Additionally, the International Bottled Water Association (IBWA) works with the industry to assure that FDA regulations are followed, assuring a safe, high-quality product. We suggest having some packages of bottled water on hand in addition to your large water storage containers.

Store large water barrels in a clean and appropriate location
Storage conditions should include:
- a dry place
- off the ground
- away from sunlight

Since plastic is permeable to certain vapors, water stored in plastic should not be near gasoline, kerosene, pesticides, or similar substances. If you have freezer space, store water in the freezer. It not only acts as water storage, but if the electricity goes out, it will help keep foods frozen. Leave 2 to 3 inches of headspace in the container to allow for expansion as the water freezes. 
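As a cross-check of the dosing figures above: at 8 drops per gallon, a 55-gallon drum of clear water needs about 7 teaspoons, roughly 2 tablespoons plus 1 teaspoon, which matches the clear-water figure in the chart. A small sketch (the cloudy-water doubling is common guidance and an assumption here, not taken from the chart):

```python
# Bleach dosing sketch based on the figures above: 8 drops (1/8 teaspoon) of
# 4-6% unscented chlorine bleach per gallon of clear water. Doubling the dose
# for cloudy water is common guidance and an assumption here.
DROPS_PER_GALLON_CLEAR = 8
DROPS_PER_TEASPOON = 64  # from the article's 1/8 tsp = 8 drops

def bleach_teaspoons(gallons, cloudy=False):
    drops = gallons * DROPS_PER_GALLON_CLEAR * (2 if cloudy else 1)
    return drops / DROPS_PER_TEASPOON

tsp = bleach_teaspoons(55)
print(f"55-gal drum, clear water: {tsp:.1f} tsp")  # ~6.9 tsp, about 2 Tbsp + 1 tsp
```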
When potable (drinkable) water is properly disinfected and stored, it should have an indefinite shelf life. To maintain optimum quality, water should be checked every 6 to 12 months. Check for secure lids, broken or cracked containers, and for cloudiness. Replace the water and treat it as before.
Thomas Fuller, often called “the Virginia Calculator,” was born in 1710, somewhere between the “Slave Coast” of West Africa (present-day Liberia) and the Kingdom of Dahomey (modern-day Benin). When the pre-colonial scramble for slaves replaced the earlier trade in gold, Fuller was snatched from his native land, sold as a slave, and brought to Colonial America in 1724, at age 14. Although considered “illiterate” because he could not read and write in English, he consistently demonstrated an unusual talent for solving complex math problems in his head. Northern Virginia planters Presley and Elizabeth Cox, both of whom were also “illiterate,” quickly recognized his surprising abilities and put them to use in every phase of the management of their 232-acre plantation farm, about four miles from Alexandria, Virginia. Working as a field slave for most of his adult life, Fuller was generally believed to have taught himself how to calculate early in life, probably as a child in West Africa. In an environment where slaves were forbidden to learn to read and write, he explained his skill as coming from experimental applications around the farm, such as counting the hairs in a cow’s tail or counting grains in bushels of wheat or flax seed. Allegedly, he also figured out a new way of multiplying how far apart objects were, wading into complex astronomy-related computations now carried out by computer. Not surprisingly, his owners refused numerous offers to purchase Fuller because they had come to depend on his amazing ability to measure things with his mind alone. In 1780, when Fuller was 70 years old, a Pennsylvania businessman and a couple of associates, on hearing of his extraordinary genius, traveled to Alexandria to meet him. Out of curiosity, they asked a few questions. Two were noteworthy: (1) how many seconds were in a year and a half? And, (2) how many seconds had a man lived who is 70 years, 17 days and 12 hours old? When he correctly answered 47,304,000 and 2,210,500,800, respectively, in less than two minutes each time, one of the men objected that his own calculations gave much smaller numbers. Fuller quickly responded, “(Stop), Massa, you forget de leap year.” When the observer adjusted for the extra day every four years, they grudgingly accepted Fuller’s answer. Their observations of Fuller’s computational abilities were later submitted to the Abolitionist Society of Pennsylvania. Fuller died on the Cox farm near Alexandria, Virginia in 1790. He was 80 years old. The Columbian Centinel, a Boston, Massachusetts newspaper, noted in its obituary of Fuller: “Thus died ‘Negro Tom,’ this self-taught arithmetician, this untutored Scholar! — Had his opportunities of improvement been equal to those of thousands of his fellow-men, neither the Royal Society of London, the Academy of Science at Paris, nor even Newton himself, need have been ashamed to acknowledge him a Brother in Science.”
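Both of Fuller's answers check out exactly, including the 17 leap days his correction supplies (70 years contain seventeen fourth years); a quick verification:

```python
# Verify the two answers attributed to Fuller.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# (1) Seconds in a year and a half: 547.5 days.
print(int(547.5 * SECONDS_PER_DAY))  # 47,304,000

# (2) Seconds in 70 years, 17 days, 12 hours, counting 17 leap days --
#     the "leap year" correction Fuller pointed out.
days = 70 * 365 + 17 + 17  # ordinary days + leap days + the extra 17 days
seconds = days * SECONDS_PER_DAY + 12 * 60 * 60
print(seconds)  # 2,210,500,800
```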
Are you teaching AP Spanish for the first time and not sure how to plan for these AP Spanish classes? It can be very overwhelming to try to figure out how to plan for AP Spanish and all the requirements of curriculum and test prep! Here are some tips and tricks I've learned along the way!

1. Carve Out a Big Chunk of Time for Planning
Teachers plan their lessons in a variety of ways. Some people like to go day by day, others plan week by week. I highly recommend planning out an entire unit with AP Spanish. It takes a big chunk of time, so you should plan ahead to reserve this time if possible. Making a plan for AP Spanish that is cohesive and leads students to success requires forethought. Start with the essential questions you will use for the unit: how will you find out what students learned? Figure out how you want them to demonstrate what they know by the end of the unit, and then work backwards to fill in the different kinds of activities you will use to get them there. When I plan this way, I write down what I want these activities to be, even if they don't exist yet! This way, I know that my students will have ample practice and opportunity to learn new information before I ask them to show me what they learned.

2. Check Your Plans for Each Skill
Planning for AP Spanish is complicated. You must ensure that you have provided enough new information to cover each essential question. You must inject your lesson plans with ways for your students to grow their vocabulary, increase grammatical accuracy and also practice for the AP test. After your plans are made, go back! Analyze what you have written down to ensure that your activities are balanced. Do you have ways for students to practice reading, writing, listening and speaking? Did you include enough activities to make sure students can talk about their thoughts on the essential questions? Did you include at least one or two AP Test-style practices? One easy way to keep track is to make yourself a list! I'll link one here in my free resource library! Just make tally marks as you go through your plans. If you see that something is unbalanced, it's easy to change up your plans, especially for those activities you have thought about that don't exist yet! Change that reading activity into a viewing activity! Exchange the writing prompt for an oral one!

3. Weekly Plan for AP Spanish
Once you have a general plan for your unit, the next step is to go week by week. It is so important to make sure that each day is a continuation of the last, until you reach a checkpoint. This is the time to go back to the activities you want to do and either find them or create them if they don't yet exist. Having everything ready for the entire week will help you to feel organized and confident. It will free up time for your other tasks, like correcting and planning for your other courses.

4. Don't Do It All Yourself!
One BIG mistake I made during my first few years as an AP Spanish teacher was that I did everything myself. At the time, I used a textbook. But even still, I made all the plans, all the activities, all the homework, projects and tests, myself. And it took WAY too much time. If you are lucky enough to have another teacher in your department who teaches the same course, divide and conquer! Share your work and lessen the load for both of you! Use the resources that are available! There are so many ways to find free resources and also ones you can pay for.
Though teaching can sometimes feel very isolating, there are many of us out there! There’s lots available so that no one person has to do it all! Check out some ideas below in my resources heading!
Design by Maddy Pease. When I was learning how to drive, one of the first things Driver's Ed taught was that driving is a privilege, not a right. With this understanding, the "privilege" of being able to drive is earned, and can be stripped away if you fail to drive well. You're not born with the ability to drive; you have to work for it. However, some forms of privilege are not earned in this way, and people are afforded privileges based on innate characteristics.

According to Merriam-Webster, privilege is "a right or benefit that is given to some people and not to others." In feminist conversations, these rights or benefits are often forms of power societal systems give to certain people based on characteristics like gender, race, wealth, or sexuality. It is understandable why the concept of privilege can be confusing to some. Some people wonder, how can a social system be beneficial for some people while disadvantageous for others? Everyday Feminism compares privilege to the concept of oppression. According to the article, oppression occurs because society is structured in a way that automatically puts individuals with certain traits at a disadvantage. In other words, it's harder for oppressed people, such as women or people of color, to advance in our society. This is the opposite of privilege, in which certain characteristics make it easier for people to succeed in society.

Our identities are made up of various aspects, some of which may afford certain privileges while some may lead to oppression from others. These different characteristics intersect in various ways. Most people in the US are privileged in some aspects and disadvantaged in others. For example, as a woman I am disadvantaged in this society, but I am also privileged in that I am receiving higher education and I have a supportive family. I am neither wholly privileged nor wholly disadvantaged, and this applies to most people in our society. One very important part of thinking about privilege is to consider the ways you yourself are privileged. If you are able to read this article, you have some privileges that others do not. By considering the ways you hold privilege, you are better able to understand and listen to those who experience oppression (or more oppression) and can learn how to work together to end the oppression and create a society in which everyone enjoys the same opportunities.

As Roxane Gay describes in her essay "Peculiar Benefits," it is important not to play the "Game of Privilege." She explains that people often play this game in which you ask who is more privileged when comparing different demographics, for example, comparing a wealthy white man to a wealthy black man. Gay writes that privilege is "relative and contextual," and therefore playing this game is pointless. The game is also harmful because it pits people against each other and creates a hostile environment. Rather than working together to end oppression, the game of privilege tears people apart. Gay also claims that many people have become "privilege police," seeking out people talking about their own experiences and pointing out the various forms of privilege they hold. By doing so, you are demeaning the experiences of those who have encountered other forms of oppression. In order to prevent yourself from doing this, you must listen to and understand others' experiences, and as mentioned above, understand the ways in which you have privilege.
Nearly everyone plays games today. But the earliest manufactured board and card games illustrate that our ancestors may have felt differently about gaming. Many games from the middle and later nineteenth century were didactic: they taught a subject, either scholarly or religious. All of the earliest children's puzzles were maps, which taught geography. The first American board games followed a map's path as well. Children's card games often taught Bible verses or the titles of authors' works. And one of the best-known and earliest board games, Milton Bradley's The Checkered Game of Life, followed courses paved with good deeds (move ahead) or slightly devilish setbacks. The winning player reached the last space, literally labeled Happy Old Age, with a combination of luck and accumulated points.

There was less time for play in the colonial agrarian economy, so time not spent working, often called idleness, was undesirable. But industrialization and increased urbanization one hundred years later changed habits and attitudes. As America grew and changed and manufacturing and capitalism gained footholds, people, and their children, had more time to play. But gaming itself carried a negative connotation from the past. Children had no disposable income of their own, so parents purchased their toys. Did they, as consumers, favor educational or moralistic games over simply playful versions? Or did the manufacturers discover that these games sold best and then promote them to increase profit? And finally, did educational or religious games help overcome a common belief that games and game playing were bad, evil, or a waste of time?
Fluid intelligence is represented by a person's ability to use logic and reasoning to solve new problems in unique ways, while crystallized intelligence is represented by a person's ability to access and apply previously learned information and knowledge. Our intelligence as individuals is a critical component of our personalities, behaviors and paths in life, and from a young age, we are often told that being "smart" is better than being "dumb". However, this simple binary doesn't explain the whole story of human intelligence, not even close!

Think back to your school days, and the stress of sitting down to take a big social studies exam. You've been studying for weeks, reading through your notes and skimming back over the textbook, trying to make sure not a single detail of recent world history is forgotten. The facts and dates you are memorizing and solidifying in your brain fall under the classification of crystallized intelligence. During your studies, you even designed a color-coded system for your flashcards that helps you determine which chapters covered on the exam are giving you the most trouble. You had used flashcards before, but by color-coding the vocab words according to historical era, and then analyzing what colors showed up most often in the "incorrect" pile, you were able to design a more effective study system. The logic and reasoning you applied in this novel situation to design your flashcard system represents fluid intelligence.

Raymond B. Cattell first drew this distinction between the two types back in the 1960s. It is important to understand the difference between these two types of intelligence, the roles they play in life, and how they interact with one another, which is what we hope to achieve in this article!

What Is Crystallized Intelligence?
Crystallized intelligence is what most people would think of as "traditional" intelligence. In short, crystallized intelligence represents the "stuff" you know: how to perform mathematical operations, the capitals of countries around the world, the recent stats of your favorite sports team, the route to every one of your friends' houses, or the complex details of a given social or scientific subject. If we were to "quantify" intelligence, these are the pieces of information, found in your long-term memory, that could be measured in that way. The exams and quizzes taken in our younger years are often based on the retention, retrieval and regurgitation of such crystallized intelligence.

This type of intelligence is based on prior learning and experience. When you are carefully reading an article, taking notes during a fascinating lecture, or listening to a podcast about a new subject, you are taking in new information, some of which your brain will store in its long-term memory banks so you can find and refer back to it later. In a sense, crystallized intelligence is the catalogue of knowledge in your head, as well as the ability to "use" it. In most cases, simply remembering the information accurately is the extent of this "use", e.g., you can fix your bike because you have fixed the same problem multiple times before. Crystallized intelligence is based on knowledge acquired in the past, which is typically broken down into three types: factual, episodic and procedural knowledge. A full examination of these knowledge types goes beyond the scope of this article, but a quick review may be helpful.
Factual knowledge is exemplified in dates, facts, pieces of trivia, vocabulary, and other types of knowledge that can be easily transferred and stored. Episodic knowledge consists of your memories, or recollections of events that happened from your first-person perspective. Finally, procedural knowledge is casually defined as "know-how", the ability to perform certain tasks based on experience, e.g., driving a car, navigating a city, tying a sailor's knot, or fixing a bike. All of these types of knowledge can be accessed from your long-term memory, where they are crystallized and (somewhat) permanent.

What Is Fluid Intelligence?
Unlike crystallized intelligence, fluid intelligence is much more difficult to quantify, as it represents one's ability to use logic and reasoning in unique and original ways to solve problems. In such a situation, you won't necessarily rely on previous knowledge, data or facts to address the situation in front of you. Rather than the ability to identify and retrieve specific pieces of information, fluid intelligence means being able to keep track of numerous things at once, and to manipulate large amounts of information easily.

Imagine that you are faced with four rows of black-and-white symbols in different patterns. The last space in the fourth row of symbols is missing and you are asked to supply the final shape. This is a common type of question found on an IQ test. In such a situation, you cannot apply "prior knowledge", as there are no facts or figures to work with, simply the logical progression of anonymous shapes following an indeterminate pattern. To solve these types of problems, without a clear directive or standard path to solving them, one must use induction, pattern recognition, imagination, abstract thinking, and critical analysis to reach the correct answer.

"Working memory" is also closely linked to fluid intelligence, as this is your ability to hold and manipulate information in your mind for a brief period of time, while you're "working" with it. Being able to juggle multiple pieces of information simultaneously, and determine the connections between them, is a core aspect of fluid intelligence. Fluid intelligence requires people to infer relationships and determine patterns, a broadly applicable skill that is relevant to almost all cognitive tasks and demands. For this reason, some people consider fluid intelligence the fundamental metric of cognitive ability, rather than a mere subset of general intelligence. Unlike the relatively simple retrieval process for crystallized intelligence, this type of intelligence generates brain activity in the dorsolateral prefrontal cortex and the anterior cingulate cortex, which are associated with short-term memory and attention. This makes sense, given the close link of fluid intelligence to working memory, a type of short-term memory.

Do These Types Of Intelligence Work Together?
We have thus far given specific examples of these two types of intelligence, and what makes them different from one another. However, this is not to say that fluid intelligence and crystallized intelligence do not work together in countless ways. When applying fluid intelligence to an abstract problem, it is not unusual to bring in previously acquired knowledge to inform one's choices. For example, if you are trying to redesign a large office space to be more productive and communal, there are no "right" answers, but a combination of both types of intelligence is required.
For example, knowing each person's job, the size of each team, the general flow of projects within the space, and hierarchical relationships... all of these are forms of crystallized knowledge that will help with the redesign. Simultaneously, applying logic to spatial efficiency and reasoning to relationship dynamics, while also imagining a business flow and inferring potential negative outcomes... these are all examples of fluid intelligence that are similarly essential!

When we talk about "quantifying" intelligence, most people immediately think of an IQ test, but these types of cognitive ability tests assess both types of intelligence, despite only giving you a general intelligence score. General intelligence (the g factor) framed through an IQ score may be a singular measurement of intelligence and cognitive ability, but both types of intelligence, crystallized and fluid, are evaluated through such tests. While people may be stronger in one area of intelligence than another (an advertising genius versus a trivia champion), these forms of intelligence are often synergistic and used in combination.

When we look into the brain, studies remain inconclusive about which regions contribute to intelligence. Neuroanatomical studies have pinpointed several regions, particularly those in the frontal and parietal lobes, and notably the hippocampus (which is often linked with long-term memory and spatial memory). But the evidence suggests that these multiple brain regions work together to produce what we've termed 'intelligence'.

Can We Change Our Intelligence Levels?
As children, everything that we experience is an opportunity to learn and increase our intelligence. We begin demonstrating fluid intelligence early through problem-solving, just as we begin establishing crystallized knowledge early, such as how to open a drawer, tie our shoes or count to 10. Both types of intelligence are clearly present in our younger years, but they peak at distinctly different times. Crystallized intelligence is related to the acquisition and retention of specific knowledge, so as we continue to educate ourselves, read books, watch documentaries, learn new languages and explore complex subjects, our crystallized knowledge base continues to grow. Most experts argue that our crystallized knowledge peaks around the age of 60 or so.

Fluid intelligence, on the other hand, plateaus much earlier in life: by some estimates it peaks around the age of 30 or 40, while other measures show it declining as early as our 20s. This is not to say that older individuals are unable to solve problems, apply logic or use reasoning to work through unique obstacles, but it becomes harder or less "natural" to do so. Additionally, older people have a larger breadth of crystallized knowledge and experience, which is often enough to compensate for drop-offs in fluid intelligence.

Fortunately, it is possible to protect and improve both our fluid and crystallized intelligence, though the approaches are somewhat different. Improving crystallized intelligence consists of increasing the amount of established knowledge and experiential knowledge that you have. Quite simply, studying new subjects, reading more books, and conversing with others can increase the amount of knowledge that you have. Fluid intelligence, on the other hand, is a bit harder to practice, and requires stepping further from your comfort zone.
Placing yourself in situations where fluid intelligence is required may mean learning how to play a new instrument, studying a new language, or even pouring yourself into abstract fields, such as social justice or art, areas where there are no "right" answers, only different strategies, patterns and opportunities for your mind to play with and manipulate. Put your working memory to the test as often as possible, and you'll find that, just like a muscle, it gets stronger over time.

A Final Word
Discussions of intelligence can be confusing and intimidating, particularly if you don't consider yourself to be all that "smart", but it is important to understand the multiple facets of intelligence so you can better identify, improve and apply them for yourself!

References
- Fluid and Crystallized Intelligence (2010). ScienceDirect.
- Horn, J. L., & Cattell, R. B. (1967). Age differences in fluid and crystallized intelligence. Acta Psychologica.
- Cunningham, W. R., Clayton, V., & Overton, W. (1975). Fluid and crystallized intelligence in young adulthood and old age. Journal of Gerontology.
- Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Perrig, W. J. (2008). Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences.
- Batey, M., Chamorro-Premuzic, T., & Furnham, A. (2009). Intelligence and personality as predictors of divergent thinking: The role of general, fluid and crystallised intelligence. Thinking Skills and Creativity.
- Ferrer, E. (2009). Fluid reasoning and the developing brain. Frontiers in Neuroscience.
- Horn, J. L. (1982). The theory of fluid and crystallized intelligence in relation to concepts of cognitive psychology and aging in adulthood. In Aging and Cognitive Processes. Springer US.
- Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence: Converging neuroimaging evidence. Behavioral and Brain Sciences.
If you think recent natural disasters have been terrifying — just wait. Things will only get worse over the next century, a group of leading climate change researchers warns in a paper published in Nature Climate Change this week. Currently, most places suffer just one climate-related disaster at a time. But by 2100, regions can expect to grapple with multiple disasters at once, the researchers say. "We are facing an incredible threat to humanity," said lead author Camilo Mora of the University of Hawaii at Manoa. "We are sensitive to the hazards that have been triggered and unfortunately these hazards are only going to get worse."

To better understand the threats that lie ahead, Mora and his colleagues reviewed more than 3,200 scientific papers and found 467 ways that climate change has already impacted humanity. The report chronicles how climate hazards such as heat waves, wildfires, floods and sea level rise have impacted human disease, the food supply, economies, infrastructure, security and other aspects of society. "I couldn't stop being so frightened every single day, to be honest with you," Mora said.

Mapping Climate Change Impacts
The research team created a supplemental, interactive world map based on peer-reviewed projections. It demonstrates the overlapping impacts of climate change on human populations in the coming century. By the turn of the century, for example, people in New York could be faced with four separate climate hazards, including drought, sea-level rise, extreme rainfall and higher temperatures. On the other side of the country, Los Angeles will likely face up to three. Especially vulnerable tropical regions of the world could deal with as many as six threats at once.

A comparison of climate hazards in 2018 (top) and 2100 (bottom) based on the researchers' data. (Credit: Mora Lab)

The report predicts that developing nations will face larger losses of human life, whereas the developed world will endure a high economic burden associated with damages and adaptation. Although climate change has been studied extensively, Mora said previous research isolates the impact of one or two hazards rather than providing a big-picture view of the consequences of global warming. That's likely not going to be the reality, though, when climate change poses so many concurrent dangers. For example, an increase in atmospheric temperature can exacerbate soil moisture evaporation in dry places, which leads to droughts, heatwaves and wildfires. In wet places, extreme rain and flooding may occur. As the oceans warm, water evaporates more quickly, causing wetter hurricanes with stronger winds and extreme storm surges from rising sea levels. "It's like having a puzzle in which all of the pieces are all over the place. You can only really see the picture when all of the pieces are put together," Mora said.

Mora hopes the science will ultimately inspire people to become a part of the solution. Even grassroots community efforts, like Hawaii's "Go Carbon Neutral" project, which aims to offset carbon emissions through tree planting, and which Mora is involved with, can add up and help change the course of climate change, he said. "This is a fight we cannot afford to lose. We don't have any other planet to go to."
Candle Snuffer and Douter Until the mid 19th century, when the braided, self-consuming candle wick was invented, candles needed to be trimmed or ‘snuffed’ very frequently, while still burning, to prevent the wick from smoking and flaring. The snuffer or wick scissors combined a blade to trim the wick with a box to hold the burnt trimmings. Snuffers usually had three feet and stood on a small oblong or shaped tray. To stop smoke and disagreeable smell when the candle was extinguished, but without cutting off the wick, a separate instrument was developed, known as a douter. This also worked on the scissors principle, but had blades terminating in vertical plates which squeezed the wick and deprived it of oxygen. Douters are much less common than snuffers. The present example is a combination snuffer and douter, sometimes found in steel but rarely in brass. The pointed end could be used to loosen and remove candle stubs from the socket of the candlestick. - John Caspall, Making Fire And Light In The Home pre-1820, Antique Collectors’ Club, Woodbridge, Suffolk, 1987, ISBN 1851490213, pp. 49-56
Solar photovoltaic power stations in dry cereal farmland: how to convert habitat loss into landscape heterogeneity
By Iván Salgado

Agricultural intensification has caused the decline of farmland bird populations across Europe (Donald et al. 2001; Rigal et al. 2023). High-intensity farming simplifies agricultural mosaics by removing non-cultivated landscape elements such as fallow land and field margins. However, birds depend on semi-natural areas for feeding and nesting in dry cereal farmland. In fact, the decline of farmland bird populations in the Iberian peninsula is linked to the loss of fallow land (Traba & Morales 2019). The main impact of solar photovoltaic power stations (SPPSs) on farmland birds is habitat loss (Serrano et al. 2020): around 2 ha of land per MWp of power. However, if managed as large pesticide-free fallow patches, SPPSs may function as sources of seeds and invertebrates that spread along field margins throughout farmland (spillover effects; Blitzer et al. 2012): i.e., large patches (> 10 ha of fallow land) interact with linear corridors (field margins at least 1.5 m in width) to move trophic resources for farmland birds throughout the agricultural landscape for a long time (the useful life of SPPSs is 30 years). Managing SPPSs as large pesticide-free fallow patches converts non-habitat into landscape heterogeneity in intensive farmland. Therefore, SPPSs managed as fallow land may mitigate the negative effects of agricultural intensification on farmland birds.
Cycling is popular with people of all ages. Whether it's a family hobby, a mode of transportation, a way to exercise, or a competitive pastime, cycling can take you through busy city streets or into beautiful natural environments that lie beyond city limits. Wherever your travels take you, it is important to consider safety as the number one priority. Head injuries are the number one cause of serious injury to kids on bicycles. Wearing a helmet should be just as much a part of your cycling routine as it would be if you were playing a contact sport like hockey or football.

Why We Need to Protect Our Brains
Your brain is your body's mission control centre. It governs your thoughts, memories, decisions and movements. If your brain is injured, the impact of that injury can reverberate throughout your body. Some head injuries, like fractures, can be seen and measured objectively; others, like concussions, cannot. Concussions happen when the brain is shaken enough to hit the inside of the skull and get bruised. Acquired brain injuries can have temporary, prolonged, or permanent repercussions to your everyday life. By using a helmet and practising safe cycling, you can significantly reduce your chances of sustaining such an injury.

How Helmets Work
A bicycle helmet has two main parts: a hard outer shell and a softer inner liner. The hard shell works to spread the force of the impact over a larger area (which decreases the risk of a skull fracture), while the inner lining of the helmet absorbs the impact energy so less is transmitted to your head. A bike helmet can reduce the risk of serious head or brain injury by as much as 80%. For a bicycle helmet to work as intended, it needs to be worn properly and fitted correctly. A properly fitted helmet will:
- sit level on the head, about two finger-widths above the eyebrows,
- have straps that lie flat and fit snugly around the ears in a "V" shape,
- leave one finger-width between the strap buckle and the bottom of the neck/chin area, and
- not move on the head once secured.

Hats, headphones or even big hair clips should not be worn under a helmet, since they might alter the fit of the helmet and make it less effective. The top of the helmet should be smooth and free of any objects such as protruding decorative pieces and/or stickers. Bike helmets that have been involved in a previous crash or are damaged must be replaced.

The Right Helmet For The Right Activity
Helmets can help protect your head during all sorts of activities, such as skating, skiing, biking and horseback riding. But it's important to select the proper helmet for your activity, since each helmet is designed for the potential impacts of that particular sport. Skateboarding helmets, for example, cover more of the back of the head than bicycle helmets, and are designed to withstand multiple impacts. While hockey helmets are great for the rink, they are not designed to scrape across pavement like a bicycle helmet. A helmet's certification sticker should tell you what activities that helmet is suitable for.

Lead By Example
One of the best ways to ensure your child will wear their bike helmet is by wearing one yourself. Research has shown that when parents wear helmets, children are more likely to want to wear them too. Start using helmets as soon as your child starts learning to ride a tricycle or bicycle and keep helmet use consistent.
Physical activity is important for everyone to ensure a healthy lifestyle – and so is personal safety. Whether your sport is cycling, skiing, hockey or football, make sure you reach for that helmet every time. If you have sustained a serious injury while cycling and have questions about your legal options for personal injuries, please contact Peel Helmets on Kids volunteers at Howie, Sacks & Henry at 1-877-771-7006.
A kindergarten student is counting multi-colored marbles. She realizes that there are 8 marbles each time she counts, even when she starts with a different color each time. What skill is the kindergartener demonstrating?
Where in the Story Does Your Plot Start?
A discussion about the difference between plot and story is anything but an academic question. Like most talks about structure, how a plot is designed defines how the audience experiences the story. An early point of attack gives you the musical Les Mis, where you see the epic story in its totality play out in front of the audience. There is little or no need for exposition, since the audience sees every important moment play out in front of their eyes. A late point of attack gives you Oedipus Rex or just about any contemporary drama or musical that you can think of.

Plotting Your Story
The "wright" in playwright means "maker." It is useful to remember that plays are constructed; they have a shape that is chosen for a reason by the author. A story is a chronological sequence of events: this happens, then that happens, then that happens next. A plot, by contrast, is carefully constructed by the writer to create meaning out of that sequence of events. A playwright sifts and sorts, edits and rearranges the sequence of events in a story to tell the story in a certain way to create a certain experience for a certain audience. This is the craft of playwriting. There's a reason that plot is #1 of Aristotle's six elements. A writer uses his or her own unique perspective to create a meaning, a message, a takeaway for the audience. A writer is not a historian nor a journalist. And a weak plot will get the dramatist nowhere fast. So let's ask the question again: Where in the story does your plot start?

Early or Late Point of Attack?
It's a generally accepted saying that in writing a play today you have to "get in late and get out early." In other words, start the plot or scene as late in the action as you can, show the action, and then get out of the plot or scene as quickly as possible. This is how most contemporary dramas are built, with a late point of attack. (Classical plays also have a late point of attack, fyi.) This climactic structure with a late point of attack allows the plot to focus on building the "suspense," or on engaging the audience's attention in an entertaining way while playing out the dramatic question that forms the spine of the story. A late point of attack begins in the midst of the conflict, and we find out important details about the past on the way to a much greater conflict in the rising action.

Contrast this structure to the opposite, the early point of attack. In Les Miserables, based on Victor Hugo's nineteenth-century novel, the action covers many years, over a vast sequence of events that all play out in front of the audience's eyes and ears. An episodic structure like this unfolds scene by scene onstage, with little backstory. It too has a dramatic spine, but this plot takes us from the very beginning of a story and allows us to experience each moment leading up to the main climax for ourselves, with little to no exposition needed. Shakespeare also uses an early point of attack, as his average play length was three hours. (You can show quite a bit of history in three hours.) Plots with early points of attack tend to emphasize the past and understanding the causes of events that took place. Those with late points of attack seek to make us understand the dynamics that lead up to a conflict and the repercussions that follow. One is not better than the other. They are simply two ways a playwright can attack a story.
A Writer's Checklist: Plot
A plot is a roadmap to get you where you want to go, and a blueprint for what you want your audience to experience at the end. A plot builds a definite structure from the story's sequence of events. Here's a quick Writer's Checklist for Plot:
- Get in late, get out early.
- Stasis: start right before the inciting event. Identify the world of the play and start the action quickly.
- Inciting event: generally happens within the first 10-15 minutes of a play or musical. The inciting event immediately launches the rising action, the journey the hero undertakes to get what he wants.
- Clarify the wants of the main characters early in the play, and their obstacles by the first 20 minutes.
- Midpoint reversal: a surprise twist at the end of Act 1 in a musical, or halfway through a play. A new goal or intensified goal accelerates the action. This could be a subtle event, even an internal (psychological) reversal.
- "11:00 Number": traditionally occurs right before the climax (in musicals) for fun or theatricality.
- Climax, with an anagnorisis (where the hero and others fully understand where the journey has brought them or taught them) and a peripeteia (a reversal of some sort), which brings about an emotional catharsis in the audience.
- Resolution: plot points are resolved logically (no deus ex machina).
- The Finale (in a musical) delivers the theme in a rousing tune that stays with the audience as they leave the theater.

Are you writing a play or musical? Would you like someone to look over your script, or to help overcome writer's block? I'd love to speak with you. Email me at firstname.lastname@example.org or post a comment below.
Mangrove expansion and saltmarsh decline at mangrove poleward limits

Mangroves are species of halophytic intertidal trees and shrubs derived from tropical genera and are likely delimited in latitudinal range by varying sensitivity to cold. There is now sufficient evidence that mangrove species have proliferated at or near their poleward limits on at least five continents over the past half century, at the expense of salt marsh. Avicennia is the most cold-tolerant genus worldwide, and is the subject of most of the observed changes. Avicennia germinans has extended in range along the US Atlantic coast and expanded into salt marsh as a consequence of lower frost frequency and intensity in the southern USA. The genus has also expanded into salt marsh at its southern limit in Peru, and on the Pacific coast of Mexico. Mangroves of several species have expanded in extent and replaced salt marsh where protected within mangrove reserves in Guangdong Province, China. In south-eastern Australia, the expansion of Avicennia marina into salt marshes is now well documented, and Rhizophora stylosa has extended its range southward, while showing strong population growth within estuaries along its southern limits in northern New South Wales. Avicennia marina has extended its range southwards in South Africa. The changes are consistent with the poleward extension of temperature thresholds coincident with sea-level rise, although the specific mechanism of range extension might be complicated by limitations on dispersal or other factors. The shift from salt marsh to mangrove dominance on subtropical and temperate shorelines has important implications for ecological structure, function, and global change adaptation.

Title: Mangrove expansion and saltmarsh decline at mangrove poleward limits
Authors: Neil Saintilan, Nicholas C. Wilson, Kerrylee Rogers, Anusha Rajkaran, Ken W. Krauss
Journal: Global Ecology and Biogeography
Source: USGS Publications Warehouse, National Wetlands Research Center
When computers need to communicate with each other on a network, they use IP addresses to identify and locate the devices they want to connect with. An IP address is a unique numerical identifier assigned to each device on a network. It consists of four sets of numbers separated by dots, such as 192.168.0.1. IP addresses are used to route data packets from the sender to the receiver on a network. When a computer wants to send data to another device on the network, it looks up the destination device's IP address and includes it in the packet header. Routers on the network then use the IP address to forward the packet towards the destination. In summary, computers use IP addresses to find and communicate with other devices on a network.

What is an IP address?
An IP address is a unique identifier assigned to a device or network that uses the Internet Protocol for communication over the internet. Internet protocols manage the process of assigning IP addresses to devices and also route internet traffic. IP addresses serve the same purpose as telephone numbers: they identify devices and enable communication over the internet. There are two types of IP addresses: IPv4 and IPv6. IPv4 addresses consist of a series of four numbers separated by periods, while IPv6 addresses are represented as eight groups of four hexadecimal digits separated by colons. To find your IP address, you can use a cheat sheet that shows how to locate your IP address on either Mac or Windows operating systems.

The parts of your IP address
An IP address consists of two parts: a network ID and a host ID. In a typical home network, the network ID is made up of the first three numbers of the address and the host ID is the fourth number. For instance, in the IP address 192.168.1.1, 192.168.1 represents the network ID, while the final number is the host ID. The network ID identifies the network to which the device is connected. The host ID refers to the specific device on that network. Typically, the router is assigned .1, while each subsequent device is given .2, .3, and so forth. However, there may be times when you don't want others to know which device and network you are using. In such cases, you can use a Virtual Private Network (VPN) to hide your IP address from the outside world. When you use a VPN, it prevents your network from revealing your address.

Where do IP addresses come from?
- IPv4 was created in the early 1980s, when the internet was still a private network used by the military. It has a pool of 4.3 billion addresses, which used to be sufficient. However, with the increasing number of devices connecting to the internet, such as computers, smartphones, tablets, and IoT devices, we have run out of IPv4 addresses. Although some technical networking tricks have been implemented to work around the shortage, the depletion of IPv4 addresses began as early as the 1990s.
- To address this issue, the Internet Engineering Task Force (IETF), which designs internet technologies, created IPv6 about a decade ago. IPv6 has a potential pool of 340 undecillion addresses, which means that in theory we will never run out of addresses. Although IPv6 is slowly replacing IPv4, both protocols coexist for now.

Public vs. local IP addresses
There are two types of IP addresses: external or public IP addresses and internal, also known as local or private, addresses. Your internet service provider (ISP) provides you with your external address. When you access a website, the site needs to identify you for traffic-monitoring reasons.
Your ISP uses your external IP address to introduce you to the website and establish a connection. For internal purposes, such as identifying your devices within a home network or business office, you have a different IP address. The local or internal IP address is assigned to your computer by the router, which is the hardware that connects a local network to the internet. In most cases, this internal IP address is assigned automatically by the router or cable modem. It is important to note that in most cases, you will have a different IP address internally than the one you have on the public internet. Your local IP address represents your device on its network, while your public IP address is the face of your network to the greater internet.

How do IP addresses work?
An IP address serves as a digital marker for the virtual location of a computer, website, network server, or other devices. Similar to how the post office uses physical addresses to route mail, IP addresses route internet traffic and direct emails to their destination. It's an essential component of sending and receiving information over the internet. It's worth noting that every active device connected to the internet has an IP address, making it an integral part of how the internet functions.
- IP addresses are just one component of the internet's infrastructure. Without a functioning post office to deliver mail, your physical address is irrelevant. In the same way, TCP/IP is the backbone of the internet, with IP being just one component.
- TCP/IP is a set of regulations and procedures used to link devices on the internet. It describes how data is transmitted between devices: information is divided into packets and passed through a series of routers from its origin to its destination. This is the foundation of all internet communication.
- TCP defines how applications interact across the network. It manages how messages are separated into smaller packets that can be sent over the internet and reassembled in the right order at the destination.
- The IP component of the protocol ensures that each packet reaches its intended destination. Each gateway computer on the network uses the IP address to determine the proper path for the message.
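To make the network ID / host ID split and the router's forwarding decision concrete, here is a minimal sketch using Python's standard ipaddress module. The /24 network and the sample addresses are illustrative assumptions, not values from the article.

```python
import ipaddress

# The network ID / host ID split described above, assuming the
# common /24 home-network layout (sample addresses are illustrative).
iface = ipaddress.ip_interface("192.168.1.42/24")

print(iface.network)         # 192.168.1.0/24 -- the network ID portion
print(iface.ip)              # 192.168.1.42   -- the full device address
print(int(iface.ip) & 0xFF)  # 42             -- the host ID on a /24

# Routers make forwarding decisions by testing network membership:
lan = ipaddress.ip_network("192.168.1.0/24")
print(ipaddress.ip_address("192.168.1.7") in lan)  # True  -> deliver locally
print(ipaddress.ip_address("8.8.8.8") in lan)      # False -> forward upstream

# The same module parses IPv6 addresses transparently:
print(ipaddress.ip_address("2001:db8::1").version)  # 6
```

The membership test at the end is the essence of what a gateway does with every packet: if the destination falls inside a directly attached network, it delivers the packet locally; otherwise it hands the packet to the next router along the path.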
Salt has long been used for various purposes around the house, specifically in the kitchen. It is commonly used to preserve food and flavor it. However, you may have heard of people having some uncommon uses for salt, like drying up their water-damaged electronics and dehumidifying the air. Hence comes the question: does salt really absorb water? If you want to know more about this phenomenon and are interested in how salt reacts with other substances as well, then continue reading this article.

Does Salt Absorb Water?
Salt strongly absorbs water and humidity from its surroundings. It can even turn into a complete solution if the relative humidity around it is above 75%. This happens because water is polar and salt is ionic, so there is a strong attraction force between them, allowing the salt to absorb the water. Water molecules carry both negative and positive charges, and salt is made up of negatively and positively charged ions; this gives the two a quite strong attraction force, making salt absorb water from its surroundings. In fact, some experienced cooks advise not to season your food with salt before cooking it for a long time, as salt can dry out and toughen food. Being a hygroscopic substance, salt can absorb the moisture out of your food and affect its quality. To prevent salt from absorbing moisture and turning into a solution, commercial manufacturers usually add calcium carbonate to table salt to boost its shelf life.

Do All Salts Absorb the Same Amount of Water?
Salt has a strong propensity to absorb water from its surroundings, mostly due to its hygroscopic qualities and its negative and positive electrical charges. Since salt is made up of both negative and positive charges, the oppositely charged ends of water molecules are drawn to it. However, not all salts absorb the same quantity of water. The amount depends on a variety of factors: a salt's level of absorption depends on its chemical structure, the way it reacts with water, and the polarizing capacity of its basic and acidic components. The property of a salt absorbing moisture from the air and turning into liquid itself is called deliquescence. Different types of salt absorb different amounts of water. The salts that can absorb enough water from their surroundings to dissolve themselves are called deliquescent. They often form their own solution with the humidity that they absorb. Some deliquescent salts are calcium chloride, sodium nitrate, and potassium oxide. Epsom salt, scientifically known as magnesium sulfate, is a good absorber of moisture from the environment and can also be used as a desiccant (a substance for keeping the surroundings dry) in its dry form. It is also used to cure body aches, in cosmetic products, and for cleansing baths. Another salt that can absorb moisture from the air is rock salt. A lot of people also keep a chunk around the house to act as a dehumidifier. In fact, it is the best of its kind, as it can hold moisture and works very effectively on rainy days.

How Can Salt Absorb Water If It Can Be Dissolved In It?
With enough water, salt dissolves. If the amount of water is not greater than the amount of salt, the salt forms bonds with the water to make a hydrate structure; but if the amount of water is greater, the salt will dissolve and create a salt solution. Because the attraction between polar water molecules and the salt's ions is strong enough to overcome the ionic bonds holding the salt crystal together, salt dissolves when it is combined with enough water.
How well the salt absorbs water all depends on their relative quantities. Since the two substances have contrasting properties, the one that is greater in quantity will absorb the other. Since the process of dissolving salt in water is reversible, the salt will come back again as residue after the water evaporates. Temperature also affects how the two react with each other: if the water you pour salt into is boiling or hot, it will dissolve the salt almost immediately.

Does Rock Salt Absorb Moisture?
Rock salt is a type of natural salt that can draw moisture out of the air and act as a moisture-holding material to dry up the air around it. It is widely used as a dehumidifier because at about 75% relative humidity, salt begins to deliquesce, meaning it has absorbed enough water from the air to turn into a solution. Rock salt can be used to dry up the environment because it is effective in both natural and purified forms. The best option to eliminate humidity in your home is undoubtedly rock salt. Simply purchase 50-60 pounds of rock salt, and you will see the dampness around you disappear like a miracle. It works very well on rainy days, as it is good at pulling moisture out of the air and decreasing humidity. Rock salt (NaCl) is hygroscopic by nature, so it both stores and soaks up water better than a mechanical dehumidifier. It is also the cheaper and more natural way to keep your house fresh. Additionally, you can save on electricity, as it functions without it.

How To Make A DIY Dehumidifier At Home
You will need the following materials:
- A bag of rock salt
- 2 five-gallon buckets

Step 1: Make a few dozen holes in the sides and bottom of one bucket using a drill.
Step 2: Put this bucket inside the other one, which stays whole and undrilled, and then add rock salt to the bucket at the top.
Step 3: Put the buckets in the area that has to be dried out.
Step 4: Wait and let water gather in the bottom bucket as the rock salt draws it out of the atmosphere.

When the rock salt deliquesces completely, you can replace it with more salt and continue to dehumidify the area as needed.

Does Calcium Chloride Absorb Moisture?
One of the most efficient moisture absorbers is calcium chloride. It can even absorb twice its own weight in water when the temperature conditions are right. It is great for absorbing moisture and has particular hygroscopic qualities that make it effective at absorbing humidity from the atmosphere. When the relative humidity rises, calcium chloride's absorption increases. Calcium chloride prevents moisture from escaping or evaporating into the atmosphere by locking it inside a chamber. What is great about this water-absorbing salt is that it is also environmentally friendly. If the air is sufficiently humid and the temperature is high enough, it can draw in water that is several times its own weight and dissolve into a liquid brine. Calcium chloride desiccant absorbs more moisture when the relative humidity (RH) of the air is higher, and in contrast to other desiccants like silica gel and clay, its absorption accelerates as RH increases.

Does Epsom Salt Absorb Moisture?
Magnesium sulfate, usually referred to as Epsom salt, is a potent moisture absorber and can even be used as a desiccant when it is in its anhydrous state. A desiccant is something that can take in moisture and keep things dry. The most common form of this salt is hydrated.
As a result of its ability to hold moisture, Epsom salt is also utilized in cosmetics. Epsom salt is a versatile ingredient, and due to its moisture-absorbing qualities it is used for making beauty products, as it keeps the formula dry. Its minerals have also been used in therapeutic treatments. Some other purposes of this salt are as follows:
- It is used for bathing
- It is used to treat wounds
- Epsom salt can ease muscle and body pain
- You can even use Epsom salt as a dehumidifier

Does Salt Absorb Oil?
Since oils are non-polar solvents and salts only dissolve in polar solvents like water, there is no direct interaction between salts and oil. If you were to add salt to oil, it would sink to the bottom and not dissolve. However, if the oil were mixed in with water, the salt would separate the two and cause the oil to float on top. Salt is often sprinkled into oil to flavor food as well. It is often added when oil is heated, where it acts as a catalyst and speeds up oxidation when food is fried. Although salt does not react with oil, since oil is a nonpolar substance, it can make the oil degrade prematurely and form polar compounds. Hence, it will allow your food to absorb more oil. When salt is added to unmoving oil, it can also operate as an impurity by lowering the smoke point, which causes oil to age faster and lose its effectiveness. It is best to avoid adding salt before frying in order to prevent oil degradation.

Salt and water are two strongly interacting substances, and the one in abundance will eventually overpower the other. We hope that this article has helped you understand the nature of salt better and answered any questions you may have had about its absorption qualities.
Similes and Metaphors Figurative Language Digital Task Cards
NZ$0.00 incl GST (NZ)

Go paperless with our Google Slides-ready similes and metaphors figurative language resource! These activities feature 20 interactive slides for students to work through. Due to their mostly open-ended nature, many slides can be used multiple times! Develop your students' ability to use similes and metaphors in their writing. Great for distance learning and at-home learning.

In this resource you will receive:
1. 20 Figurative Language Digital Task Cards
2. Full instructions on how to use this Google Slides resource, and how to share it with your students.
3. Tips for using Google Slides for teachers AND students.
4. An interactive student tracking sheet for students to record the activities they have completed.

Please note: Due to the open-ended nature of the questions, no answer guide is provided.

These activities are great for your writing program, from a traditional classroom with some access to mobile learning through to a full 1:1 digital classroom. They are great for end-of-year revision, test prep, early finishers, bell ringer activities, morning work, ESL, ELL and ELD classrooms, or for homework. This resource is suitable for Google Drive, Google Classroom, or Microsoft OneDrive (instructions included), and can be used on multiple devices! Activities link directly to the Common Core (Grade 3-5) and the New Zealand and Australian National Literacy Progressions.

Why Go Digital and Paperless?
Many classrooms are now 1:1, BYOD, or improving students' access to technological devices. This resource uses these devices to engage and enhance learning! Further benefits include:
- High student engagement and motivation
- Access and share learning from anywhere
- Build a skill base with 21st-century learning tools
- Save on paper and printing!
- Accessible on a range of devices including Chromebooks, iPads, tablets and more!
Thomas Becket (1118–1170) was Lord Chancellor of England in the twelfth century under Henry II, archbishop of Canterbury, and an incarnation of the ascended master El Morya. He was deeply devoted to the will of God and endured years of conflict with King Henry II over the rights of Church versus State. Becket was brutally murdered in his own cathedral by four knights who acted in response to Henry's desire to be rid "of this turbulent priest." For centuries after his death, pilgrims flocked to his tomb at Canterbury and Saint Thomas worked many miracles there.

Thomas was a man of action, delighting in hard work and quick debate. As a young man, he was educated in the finest schools of Europe and served in the household of the Archbishop of Canterbury, Theobald, who introduced him to the king and recommended him for the chancellorship. Becket and the king were said to have been of one heart and one mind and it is likely that the chancellor's influence was largely responsible for many of the reforms in English law for which Henry is credited. Sir Thomas had a taste for magnificence and his household was considered even finer than the king's. Wearing armor like any other fighting man, he led assaults and engaged in hand-to-hand combat—strong willed, stern, yet blameless in character and deeply religious.

In 1161, Archbishop Theobald died and Henry called Becket to fill the office. Henry's motive was simple. By placing his friend in the highest offices of both Church and State, Henry would bypass the traditional tension between the archbishop and the king. Becket, however, hesitated. He foresaw the inevitable conflict between the interests of the king and the interests of the Church. The chancellor declined Henry's request, warning the king that such a position would separate them on moral principles. Sir Thomas told him: "There are several things you do now in prejudice of the rights of the Church which make me fear you would require of me what I could not agree to." The king paid no heed and hastened to have Thomas consecrated archbishop on the octave of Pentecost, 1162. Becket finally accepted the office as "God's hidden will." Obedient to the king and in loving submission to the will of God, Becket left his household and his finery and began the life of an ascetic. Next to his skin he secretly wore a hairshirt. The beloved archbishop spent his days distributing alms to the poor, studying Holy Scripture, visiting the infirmary and supervising monks in their work.

Conflict with the king
Serving as an ecclesiastical judge, Thomas was rigorously just. Although as archbishop Becket had resigned the chancellorship against the king's wish, nevertheless, as he had foretold, the relationship between Church and state soon became the crux of serious disagreements. Since at that time the Church owned large parcels of land, when Henry ordered that property taxes be paid directly to his own exchequer—actually a flagrant form of graft—Thomas protested. In another matter, a cleric accused of murdering a king's soldier was, according to a long-established law, tried in ecclesiastical court and was there acquitted. A controversy arose because Henry considered the archbishop a partial judge.
The king remained angry and dissatisfied with Thomas and called together a council at Westminster where the bishops, under pressure from the king, reluctantly agreed to the revolutionary Constitutions of Clarendon, which provided certain royal “customs” in Church matters and prohibited prelates from leaving the kingdom without royal permission. These provisions were severely damaging to the authority and prestige of the Church. Heedless of the new law, Thomas crossed the Channel to put the case before the Pope. Bent on vengeance, the king commanded him to hand over certain properties and honors and began a campaign to discredit and persecute him. King Louis of France was inclined in the Church’s favor and accepted the archbishop in exile. While submitting himself to the strict Cistercian rule in the monastery at Pontigny, Thomas received a letter from the bishops and other clergy of England deploring his “hostile attitude” to the king and imploring him to be more conciliatory and forgiving. Becket replied: For a long time I have been silent, waiting if perchance the Lord would inspire you to pluck up your strength again; if perchance one, at least, of you all would arise and take his stand as a wall to defend the house of Israel, would put on at least the appearance of entering the battle against those who never cease daily to attack the army of the Lord. I have waited; not one has arisen. I have endured; not one has taken a stand. I have been silent; not one has spoken. I have dissimulated; not one has fought even in appearance.... Let us then, all together, make haste to act so that God’s wrath descend not on us as on negligent and idle shepherds, that we be not counted dumb dogs, too feeble to bark. Becket excommunicated the bishops who had aided Henry. He also threatened England with an interdict that would forbid the people from participating in church functions. The historic quarrel had dragged on for three years when at last King Louis was able to effect a partial reconciliation between Thomas and Henry. Henry invited Becket to return to England, where he was welcomed by enthusiastic crowds. As he entered Canterbury Cathedral it was said of him by a contemporary biographer, “Some saw and marveled at the face of this man, for it seemed as though his flaming heart burned in his very countenance.” Becket was met with fierce hostility from some, however. Three bishops who had been excommunicated by Thomas for direct disobedience to the Pope went before the king, who remained yet in France. In a fit of rage, Henry cried out, “What disloyal cowards do I have in my court that not one will free me of this lowborn priest?” Four barons who overheard the king’s remarks plotted to kill Becket. When the archbishop received word of their plan, he said, “I think I know for certain that I will be slain. But they will find me ready to suffer pain and death for God’s name.” On December 29, 1170, the barons brutally murdered Thomas Becket in Canterbury Cathedral, four days after Christmas. His last words were, “For the name of Jesus and the defense of the Church, I embrace death.” The incredible sacrilege of murdering an archbishop in his own cathedral produced a reaction of horror throughout Christendom. When the news was brought to the king, he realized that his mistaken remark had caused Becket’s death. Henry shut himself up and fasted for forty days and later did public penance in Canterbury Cathedral. 
The body of Thomas Becket was placed in a tomb in the cathedral, which became the focus for hundreds of thousands of pilgrims—immortalized by Chaucer in his Canterbury Tales—who came to the shrine to witness the miracles that were wrought by Archbishop Becket’s intercession. Within three years, Thomas Becket was canonized a saint and martyr. The motion picture Becket, based on the play Becket by Jean Anouilh, is the dramatic portrayal of the life of Thomas Becket. Holy Days Calendar, December 1993. Elizabeth Clare Prophet, February 17, 1991.
By the age of three months, human babies can already follow Mr. Rogers' advice to "look for the helpers." In fact, human infants naturally show a strong preference for individuals who help rather than hinder others. Now, a study reported in Current Biology on January 4 finds that the same cannot be said of bonobos, one of humans' two closest relatives. While bonobos are similarly adept in discriminating helpers from hinderers, they show the opposite bias, consistently favoring hinderers over helpers. The findings suggest that humans' preference for helpers evolved only after our species diverged from other apes. This preference may have provided the foundation for the development of more complex features of human cooperation, the researchers say. The evidence also suggests that bonobos might prefer hinderers because they see them as more dominant. In primate society, it pays to be dominant and have dominant allies. "We were surprised in that many have characterized bonobos as being the most cooperative, 'hippie' ape," says Christopher Krupenye of Duke University. "Our experiments show that the issue is much more nuanced. Bonobos are highly socially tolerant in food settings and help and cooperate with food in ways that we don't see in chimpanzees. However, dominance still plays an important role in their lives." The new study by Krupenye and Brian Hare, senior author of the study, was inspired by an earlier study reported in 2007. It showed that young human infants already prefer helpers. "It was striking and unexpected and suggested that these sorts of motivations may be really central to humans' unusually cooperative nature," Krupenye says. In the new study, they wanted to find out whether the motivation to prefer helpers might be unique to humans. Because of the friendly reputation of the highly endangered bonobos, it made sense to start by looking at them. The researchers conducted their studies at Lola ya Bonobo, a sanctuary for orphaned bonobos in the Democratic Republic of the Congo. The researchers showed bonobos two-dimensional animated shapes that helped or hindered each other much as the earlier study did with infants. They then evaluated the bonobos' preference for one character or the other by watching which paper cutout character (placed atop a slice of apple) they reached for first. In additional experiments, bonobos were given a choice to interact with unfamiliar humans they'd observed either helping or hindering. In every case, bonobos showed an ability to differentiate helpers and hinderers. Surprisingly, however, they showed a preference for hinderers every time. A final experiment suggested that the bonobos' preference for hinderers might be driven by attraction to more dominant individuals. The findings in bonobos raise the possibility that one of the key motivational foundations of humans' uniquely cooperative nature--which is present early in infancy--may be unique to humans among the apes, the researchers say. "Bonobos exhibit a high level of social intelligence, tracking others' social interactions and evaluating novel social partners based on these observations," Krupenye says. "However, what motivates social preferences may be fundamentally different in bonobos and humans." Krupenye says that they continue to explore social preferences and social evaluation in bonobos, trying to understand what types of social information they track and what motivates their preferences. They also plan to conduct similar studies in chimpanzees. 
This research was supported by the National Science Foundation. Current Biology, Krupenye, C. and Hare, B.: "Bonobos Prefer Individuals that Hinder Others over Those that Help" http://www.cell.com/current-biology/fulltext/S0960-9822(17)31586-5 Current Biology (@CurrentBiology), published by Cell Press, is a bimonthly journal that features papers across all areas of biology. Current Biology strives to foster communication across fields of biology, both by publishing important findings of general interest and through highly accessible front matter for non-specialists. Visit: http://www.cell.com/current-biology.
Sutuli is a clay-baked wind instrument. This half-moon-shaped musical instrument falls under the category of Susira Vadya (wind instruments) in the classification of Indian musical instruments. Sutuli, an indigenous folk instrument, draws inspiration from nature: it emulates the call of a wild bird, the kuli in Assamese, more popularly known as the ‘koel’ in Hindi. As dance groups gather in open spaces to celebrate the arrival of spring during Rongali Bihu, the call of the kuli, as many folk tales and folk songs suggest, is the sound most sought after among natives. It is made from a special type of clay and is generally a hollow, half-moon-shaped piece about 12 cm long. Round-shaped Sutuli (about 10 cm in diameter) are also found in some villages. It has a hole in the middle to produce the whistling sound when air is blown into it, and three holes on its body to control the tune, which the performer uses to manipulate the sound. Although popularly used in other regions while singing Bihu songs, its use in the upper Assam region is relatively rare. Its swara (melodic) range is also very limited. The Moran tribe of Assam generally uses a Sutuli made of bamboo, not clay. The steps involved in the preparation of a Sutuli are shown below:
NOTE: This product can be accessed through links that will be sent to you upon purchase. A yearlong course for grades 3-4. This classical, beautiful, and widely spoken language will be a treasure for children to learn. There is a distinction, however, between just learning common words and phrases, which is the approach of many French programs, and knowing the language well enough to communicate fluently and accurately. French for Children Primer A teaches elementary students in grade four and up this dynamic language, both classically and creatively, at a time when students soak up language like a sponge. This book employs the pedagogy and structure of our popular Latin for Children series combined with immersion-style dialogues and vocabulary so that the French language will be taught well and thoroughly. The French for Children series emphasizes grammar and the parts of speech as vital tools for correctly speaking and understanding French. The text also uses lively chants to aid memorization of both grammar and vocabulary. The French for Children Primer A Chant and Audio files feature master French teacher and author Joshua Kraut. Bring this valuable resource into your classroom to teach your students directly, or use it to privately prepare yourself to teach the lessons. Each lesson (40–50 minutes) corresponds to the weekly chapter of French for Children Primer A and has three segments: vocabulary, grammar, and conversational French. The audio chant files feature all of the pronunciations from the Pronunciation Wizard, as well as the dialogues, lively grammar chart chants, complete vocabulary, Conversation Journal words and phrases, Say It Aloud exercises, dictées, and more! The set includes all the material available on the physical DVDs and CD.
Today we explore the special INCISIONE lesson within the Jewelry Making course; incisione refers to the traditional Florentine-style engraving technique. We will follow how to decorate a leaf-shaped pendant head with delicate engraving. Incisione demands solid engraving skills, which students of the Jewelry Making course learn as part of the curriculum, so they can apply that technical knowledge to incisione. The engraving work is performed by applying wax over a wooden base and fixing the metal onto the wax. The shape of the base may differ according to what has to be produced, and students often create their own wax base to facilitate the engraving work. At the beginning, students practice a pattern (see the upper part of the copper plate) to learn to engrave curved sections and corners differently. Next, they practice engraving their own name, learning to cut letters with different intensities. Then they proceed to more complex designs. Now, back to the pendant head. The pendant head is prepared with a smooth surface, on which the leaf’s texture will be expressed by creating scratches with a W-shaped chisel (called “ciappola rigata” in Italian). The first pass engraves scratches across the entire surface of the work, in the direction from the lower left to the upper right. The second pass runs from the upper left to the lower right, so that the lines made in the first and second passes cross. Following this, we carve the veins of the leaf, this time employing a flat chisel called “ciappola piana”. Thanks to the effect of the texture on the surface, the entire leaf shines brilliantly at every angle. Isn’t it beautiful?
Risk assessment is a systematic process of identifying, evaluating, and prioritizing potential risks or hazards associated with a particular activity, project, or situation. It involves assessing the likelihood of an event occurring and the potential impact or consequences it could have. The purpose of risk assessment is to proactively identify and understand risks in order to develop appropriate strategies for managing or mitigating them.
Find the top Risk Assessment databases, APIs, feeds, and products.
What is Risk Assessment?
Risk assessment is a systematic process used to identify, evaluate, and prioritize potential risks or hazards that may arise in a particular context, such as a project, organization, or environment. It involves assessing the likelihood and severity of risks and determining appropriate strategies to mitigate or manage them. Risk assessment aims to provide a comprehensive understanding of potential risks, allowing individuals or organizations to make informed decisions and take appropriate actions to minimize or eliminate the negative impacts associated with those risks. By analyzing various factors, including the probability of occurrence, potential consequences, and existing control measures, risk assessment provides a structured framework to identify areas of vulnerability and develop effective risk management plans.
How can you use a database for Risk Assessment?
Risk assessment can be used in a wide range of applications across different domains. In business and industry, it helps organizations identify potential risks that could impact their operations, financial stability, or reputation. By conducting risk assessments, businesses can make informed decisions regarding investments, insurance coverage, and contingency planning. In the field of health and safety, risk assessment is used to identify hazards in workplaces or public spaces and implement appropriate control measures to protect individuals from harm. Risk assessment is also utilized in fields such as environmental management, disaster preparedness, and cybersecurity, where the identification and mitigation of risks are crucial for protecting the environment, infrastructure, and sensitive information.
Why is Risk Assessment useful?
Risk assessment is highly valuable due to several reasons. Firstly, it provides a structured and systematic approach to identify and evaluate risks, ensuring a comprehensive understanding of potential threats. This allows individuals or organizations to prioritize risks based on their likelihood and severity, allocating resources and efforts to address the most significant risks first. Risk assessment also promotes proactive decision-making by enabling stakeholders to anticipate and prepare for potential risks, minimizing their impact and maximizing opportunities. Additionally, risk assessment enhances communication and transparency by facilitating the sharing of risk-related information among stakeholders, fostering collaboration and collective decision-making. Ultimately, risk assessment plays a vital role in improving overall safety, resilience, and efficiency in various domains, providing a foundation for effective risk management and informed decision-making.
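To make "prioritizing risks based on their likelihood and severity" concrete, here is a minimal, generic sketch of likelihood-times-impact scoring, the simplest form of a risk matrix. The register entries and the 1-5 scales below are invented for illustration; they do not come from any particular product or standard mentioned above:

# Toy risk register: (risk, likelihood 1-5, impact 1-5); all values hypothetical
risks = [
    ("supplier insolvency", 2, 5),
    ("data breach", 3, 5),
    ("office flooding", 1, 4),
    ("key staff departure", 4, 3),
]

# Rank by score = likelihood x impact, highest first
for name, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name:20s} L={likelihood} I={impact} score={likelihood * impact}")

Real risk-assessment products layer richer data and probabilistic models on top of this, but the prioritization idea is the same: score each risk, then spend mitigation effort from the top of the list down.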
To help make microscopes as accessible and as sturdy as pencils, Stanford University biophysicist Manu Prakash created an ultra-low-cost origami-based microscope. He was inspired by a 2011 visit to a Thailand clinic where the local team was too intimidated to use the state-of-the-art microscopes for fear of somehow damaging them. The Foldscope, his paper-based alternative, is easy to assemble and costs less than a dollar to manufacture. In this New Yorker video, Prakash demonstrates the microscope as he discusses human curiosity and microscopic beauty. More on this DIY assembly invention at Wikipedia: A Foldscope is an optical microscope that can be assembled from a punched sheet of cardstock, a spherical plastic lens, a light-emitting diode and a diffuser panel, along with a watch battery that powers the LED. Once assembled, the Foldscope is about the size of a bookmark. The Foldscope weighs 8 grams and comes in a kit with multiple lenses that provide magnification from 140X to 2,000X. The kit also includes magnets that can be stuck onto the Foldscope to attach it to a smartphone, allowing the user to take pictures of the magnified image. The magnification power is enough to enable the spotting of organisms such as Leishmania donovani and Escherichia coli, as well as malarial parasites. A Foldscope can be printed on a standard A4 sheet of paper and assembled in seven minutes. Prakash claims that the Foldscope can survive harsh conditions, including being thrown in water or from a five-story building. Nov 2016 update: Foldscope is on Kickstarter. Get one or a classroom pack.
I was recently emailed by Matthew W. Shepherd about a recording he made of wind whistling through railings. Unfortunately, Matthew didn’t have the best recording equipment available, but you can still hear the tones above the general rustling wind noise. So how is the sound made?
[soundcloud url="http://api.soundcloud.com/tracks/104109284" params="" width="100%" height="166" iframe="true" /]
With a railing, one probable cause of a breathy note is an Aeolian tone, which is what causes the soughing sound as wind passes around the needles of pine trees. But I doubt that Aeolian tones caused the sound Matthew heard. The frequency of an Aeolian tone is proportional to the wind speed, so the pitch goes up when the wind blows stronger, and the pitch goes down when the wind decreases. So you would expect to hear the tone’s frequency sliding up and down as the wind speed varies, causing ghostly glissandi. While the frequency of the railing sound does vary in the recording, it does so in distinct steps. This makes me think that the railing is acting like a flute. Looking at the picture, a small hole can be seen in the pipe just below the top railing. I think this acts like a flute’s mouthpiece. As the wind rushes past, it causes the air within the railing pipe to vibrate. The air column in the pipe will have particular frequencies it naturally vibrates at, the natural resonant frequencies. Like a flute, the railing can jump between these different natural resonant frequencies as the wind blows softer or harder. Matthew handily measured the frequencies on his recording: 379, 511, 654 and 782 Hz, and in his blog correctly identifies this as a harmonic series based on about 129 Hz (379 ≈ 3 × 126, 511 ≈ 4 × 128, 654 ≈ 5 × 131, 782 ≈ 6 × 130). If Matthew wanted to confirm his hypothesis, he could start by measuring the dimensions of the railing. A fundamental of 129 Hz should be produced by a pipe roughly 132 cm long. So if the hole I can see is acting like a flute mouthpiece, then the railing pipe should be about 132 cm high. Another way to test this theory would be to tape over all the holes in the railings and see if the tones disappear. Composer Pierre Sauvageot uses a similar mechanism to make tones in some of his giant sculptures in his work Harmonic Fields. Have you heard any other wind-generated sounds I could investigate? Please comment below.
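PS: if you want to check the arithmetic yourself, here is a quick back-of-the-envelope script. It is only a sketch: it assumes dry air at about 20 °C, where the speed of sound is roughly 343 m/s, and a pipe open at both ends.

v = 343.0                        # speed of sound in air at ~20 C (m/s); colder air is slower
f0 = 129.0                       # proposed fundamental (Hz)
measured = [379, 511, 654, 782]  # tones measured from the recording (Hz)

for f in measured:
    print(f"{f} Hz is {f / f0:.2f} x the fundamental")  # ~2.94, 3.96, 5.07, 6.06

# For a pipe open at both ends, f0 = v / (2 * L), so the implied length is:
print(f"implied pipe length: {v / (2 * f0):.2f} m")     # ~1.33 m, close to 132 cm

The ratios cluster near the integers 3 through 6, which is exactly what you would expect if the pipe is sounding its third to sixth harmonics rather than the fundamental.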
In this post, you’ll be learning about workbooks: how to create a new workbook in Excel VBA in a few simple steps, add worksheets, and later save the workbook.
How to Access VBA in Microsoft Excel?
To access the VBA feature in Excel, follow the steps below.
- Under the Developer tab, click Visual Basic.
- From the Insert menu, click Module.
How to Create a New Workbook in Excel VBA?
You can create workbooks in Excel VBA using the Workbooks.Add method.

Sub CreateNewWb()
    Workbooks.Add
End Sub

The newly added workbook is now the active workbook. To confirm this, use the code below; the MsgBox will display the active workbook’s name.

Sub CreateNewWb()
    Workbooks.Add
    MsgBox ActiveWorkbook.Name
End Sub

The message box in this code displays the name of the newly created workbook, because a workbook becomes the active workbook as soon as it is added.
How to Save a Workbook using Excel VBA?
To create a workbook in Excel VBA and save it, use the SaveAs method (this is the same line that reappears in the full sample at the end of the post):

Workbooks.Add.SaveAs Filename:="CreateNewWB"

This command saves the workbook as an .xlsx file to the folder you have set as the default save location. When you need to access that particular workbook again, enter the code below in the VBA module. This code activates "CreateNewWB.xlsx".

Workbooks("CreateNewWB.xlsx").Activate

How to Create a New Workbook & Add Sheets using Excel VBA?
To create a new workbook and add sheets to it, run a macro with the following code. This code adds three sheets to the workbook, as Count is equal to 3.

ActiveWorkbook.Worksheets.Add Count:=3

Sample program combining the workbook-creation features above in Excel VBA:

Sub CreateNewWb()
    Workbooks.Add
    MsgBox ActiveWorkbook.Name
    Workbooks.Add.SaveAs Filename:="CreateNewWB"
    Workbooks("CreateNewWB.xlsx").Activate
    ActiveWorkbook.Worksheets.Add Count:=5
End Sub

A new workbook is created and saved under the given name. It is then reactivated, and five sheets are added to the workbook.
Walruses prefer molluscs - mainly bivalves such as clams. They also eat many other kinds of benthic invertebrates including worms, gastropods, cephalopods, crustaceans, sea cucumbers, and other soft-bodied animals. Walruses may occasionally prey on fishes such as polar cod. Walruses may eat the carcasses of young seals when food is scarce. There are some rare but habitual seal-eating walruses. Their diet consists mainly of ringed and bearded seals. These are usually male walruses, recognizable because they are usually larger than other males, with powerful shoulder and chest muscles. Their skin may become grease-stained from the blubber of the seals they prey on. Adult walruses eat about 3% to 6% of their total weight per day. Adults may eat as many as 3,000 to 6,000 clams in a single feeding session. Observations of feedings indicate that walruses usually fill their stomachs twice daily. In the summer months, and during the southward migration in the fall, walruses spend most of their day foraging. They eat less on their northward migration in the spring. Food intake for mature male walruses dramatically decreases during the breeding season and probably for a shorter time for females in estrus. Pregnant females increase food consumption about 30% to 40%.
Methods of Collecting and Eating Food
Walruses usually forage on the bottom within 80 m (262 ft.) of the surface. Most feeding probably takes place between 10-50 m (33-164 ft.). Because visibility is poor in deep and murky waters, walruses rely on their vibrissae to locate food. A walrus moves its snout along the bottom, rooting through the sediment and using its vibrissae to help detect prey. Abrasion patterns of the tusks show that they are dragged through the sediment, but are not used to dig up prey. In addition, researchers have seen foraging Atlantic walruses rapidly waving a foreflipper to uncover prey from the sediment. The walruses that were observed preferentially used their right flipper when foraging this way. Evidence shows that walruses may take in mouthfuls of water and squirt powerful jets at the sea floor, excavating burrowing invertebrates such as clams. Walruses do not chew their food, but they do sometimes crush clam shells. Soft-bodied invertebrates are usually not crushed or torn. A walrus sucks off the foot and the fleshy siphon of a clam and swallows it whole. The cheek teeth do get worn, but this is probably from abrasion by minute particles of sand that walruses inadvertently take into their mouths and not from crushing clam shells. Researchers have found numerous pebbles and small stones in the stomachs of walruses. These are thought to be ingested while feeding.
The surname Collins originates from Ireland’s ancient past, tracing back to the days of Gaelic tribes and the intricate web of chieftains, clans, and territories that characterized this vibrant period in Irish history.
Etymology and Meaning
The name Collins stems from the old Gaelic name “O’Coileain,” which translates roughly as “young warrior” or “whelp,” connoting an association with youth, vigour, and valour.
Earliest Known Usage
The earliest use of the Collins surname can be traced back to the Munster region of Ireland, particularly in counties Limerick and Cork. Notably, the O’Coileain clan was a prominent sept of the ancient Eoghanachta federation of tribes that dominated much of Munster. Over centuries, the Collins surname has been carried far beyond the borders of its original Irish homeland. Particularly during periods of mass Irish emigration, such as the Great Famine, the name was dispersed across the globe, finding footholds in the United States, Canada, Australia, and the United Kingdom, among others.
Original Geographic Location
Historically, the Collins family was concentrated within the province of Munster, especially the counties of Limerick and Cork. This region, defined by its rich farmland and rugged coastline, served as the heartland for the Collins clan. The mid-19th century brought catastrophic famine to Ireland, prompting mass emigration of Irish people to other parts of the world. Among them, numerous Collins families sought new opportunities overseas, leading to the widespread global presence of the surname today.
Notable Historical Events
The Collins name is intertwined with Ireland’s struggle for independence. The most notable event associated with the name is perhaps the Anglo-Irish Treaty negotiations of 1921, in which Michael Collins played a significant role.
Involvement in Key Moments in History
Over the centuries, bearers of the Collins name have participated in numerous key events in Irish history, including rebellions, conflicts, and political movements. Their contributions have left a significant imprint on Ireland’s historical landscape.
Notable Irish Bearers of the Surname
Michael Collins, a central figure in the Irish struggle for independence in the early 20th century, is perhaps the most renowned bearer of the Collins surname. Other noteworthy individuals include Patrick Collins, the second Irish-born Mayor of Boston, and Tom Collins, an influential writer and nationalist. In contemporary times, figures like Stephen Collins, a respected political journalist, and Joan Collins, a celebrated actress of Irish descent, continue to raise the profile of the Collins name.
Variations of the Surname
As the Collins name was anglicized from the original Gaelic, several spelling variants have emerged over time, such as Collin, Collings, Collis, and O’Collins. The pronunciation and spelling of the Collins name can vary based on regional dialects and accents, contributing to its diversity.
Current Statistics and Distribution
Frequency and Global Distribution
The Collins surname remains prevalent in Ireland and is also notably common in regions with large Irish diaspora populations.
Changes Over Time
The geographic distribution of the Collins surname has evolved over time due to factors like population growth, migration, and intermarriage.
Family Coat of Arms
The Collins Family Coat of Arms traditionally features a silver shield with a black lion rampant, symbolizing courage, nobility, royalty, strength, stateliness, and valor.
The eel is a slippery creature. This fish can be found in freshwater and saltwater environments, and it’s known for its elongated body and forked tail. Eels are predators, and they use their sharp teeth to hunt down prey. While they may not be the most popular type of fish out there, they’re definitely worth checking out if you get the chance. In fact, scientists are still learning about these creatures, and new discoveries are being made all the time. So if you want to learn more about one of the world’s most mysterious fish, keep reading!
Eel scientific name and classification
The scientific name for this creature is Anguilla vulgaris. Eels are classified in Actinopterygii, the class of ray-finned fishes. This class contains all fish whose fins are supported by bony rays rather than by fleshy lobes. They are further classified into the family Anguillidae and the genus Anguilla. There are approximately 20 different species of eel in the Anguilla genus. They are long, snake-like fish that can range in color from yellow to brownish-black. They have smooth, slimy skin and no visible scales. These creatures breathe using gills and have a lateral line system that helps them sense movement in the water around them. They live in both freshwater and saltwater environments and can be found on all continents except Antarctica.
Eel physical appearance
Eels are elongated fish that can grow to over 6.6 feet in length. They have a snake-like body with a small, pointed head. They are all predators, and their diet consists mainly of other fish, although they will also eat crabs, shrimp, and worms. These species are found in all oceans but are most abundant in the Indo-Pacific region. There are over 800 species of eel in the broader group, but only about 20 are commonly found in the aquarium trade. The most popular species include the Moray, Snowflake, and Leopard eels. Eels have very slimy skin that is covered in mucus. This mucus protects them from diseases and parasites. They are capable of regenerating their tail fin if it is lost. However, they cannot regenerate their spinal cord, so any injury to this area is fatal. They live for up to 25 years in the wild but can reach up to 50 years old in captivity.
Eel habitat and distribution
Eels are a fascinating group of fish found all over the world. They vary widely in size and appearance, but they share some common features, such as their long, snake-like bodies and their ability to survive briefly out of water. They can be found in both fresh and saltwater environments, and they typically make their homes in reefs, caves, or other hidden places. Some are even known to travel overland in search of new habitats! Given their wide range of habitat options, it’s no wonder that they are such a successful and widespread group. Eels in the order Anguilliformes include facultatively catadromous fish: they spend most of their lives in fresh or coastal waters but migrate to the sea to breed, and some individuals remain in salt water their entire lives. These fish are believed to have a diet that consists mainly of other fishes and invertebrates. However, there is still much unknown about their feeding habits due to their elusive nature. Studies have shown that diet varies depending on the species and its life stage. For example, juveniles feeding near the surface tend to eat small crustaceans, while adults dwelling in deep water primarily consume fish.
Some species are known to be cannibals, preying on smaller members of their own species. Given the vast array of habitats and prey items available to them, it is safe to say that eels are not picky eaters!
Eel interesting facts
Did you know that eels are among the most exciting creatures in the animal kingdom? Here are just a few facts about these slippery critters:
- Eels are actually a type of fish, but they have a very unique anatomy. They lack visible scales and have slimy skin, which helps them to move through the water with ease. They also have a very long and slender body, which can grow up to two meters in length in some cases.
- Eels are carnivorous predators, and they will eat just about anything they can fit into their mouths. Smaller eels will often eat insects and other small animals, while larger eels will feast on larger prey, such as fish and even small mammals.
- The famous “electric eel,” which stuns its prey with electric discharges, is not a true eel at all but a South American knifefish; true eels rely on sharp teeth and ambush hunting rather than electricity.
- Eels are found all over the world, in both fresh and saltwater environments. They are widespread in Europe and Asia but can also be found in North and South America.
So there you have it: just a few of the many interesting facts about these slippery creatures.
Eel reproduction and lifespan
Eels are a type of fish that is often feared because of their snake-like appearance. While they may look dangerous, they are actually quite gentle creatures. One of the most interesting things about eels is their reproduction process. Rather than spawning where they live, adults migrate to distant ocean spawning grounds, where females release eggs that are fertilized externally. The eggs are then incubated in the water for several weeks before hatching, and the larvae drift in the open sea for months before developing into young eels. Eels also have a very long lifespan. Some species can live for over 100 years! They are truly amazing creatures given their strange appearance and unusual reproductive habits.
Eel threats and predators
Eels are one of the most popular freshwater fish in the world. They are known for their delicious taste and their sleek, snake-like bodies. However, they are also under threat from a variety of predators. One of the biggest threats to these creatures is humans. Eels are popular food fish, and they are often caught for their meat. In addition, they are also used in traditional medicines, and their organs are highly valued in some cultures. As a result, their populations are declining in many parts of the world. In addition to humans, eels must also contend with a variety of other predators. Large fish, such as pike and muskellunge, often feed on them. Birds, such as herons and cormorants, also prey on them. And even other eels can be predators; larger eels often eat eggs, larvae, and smaller eels. Given all these threats, it’s no wonder that their populations are in decline. Fortunately, there are efforts underway to protect them. In some areas, fishing for this species is banned or regulated. In addition, many organizations are working to restore habitats that have been lost or degraded.
Is the eel a snake or a fish?
Eels are fish, merely flatter and longer in build than most. They breathe underwater with their gills, like other fish and unlike reptiles, which is why these animals cannot survive for long without water.
If you’re looking for a new way to add excitement to your menu, look no further than this creature. This slimy fish is gaining popularity in sushi restaurants all over, and for a good reason – it’s delicious! They can be cooked in a variety of ways, so there’s sure to be a preparation that will please everyone at your table. Next time you’re considering adding something new to your menu, consider giving this creature a try. You won’t regret it!
Gambling is a complex and intriguing human behavior that has been present throughout history in various forms. From casinos to lotteries, sports betting to online poker, the appeal of gambling is undeniable. But what drives people to engage in this risky activity, and why do some individuals seem more prone to develop gambling problems than others? To answer these questions, we can turn to the field of behavioral economics, which provides valuable insights into the psychology of gambling.
Behavioral economics is a subfield of economics that blends insights from psychology and economics to explore how individuals make decisions. Unlike traditional economics, which often assumes that people are rational and make choices based on maximizing their utility, behavioral economics recognizes that human behavior is influenced by cognitive biases, emotions, and heuristics (mental shortcuts).
Cognitive biases are systematic patterns of deviation from norm or rationality in judgment, often leading to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality. In the context of gambling, several cognitive biases play a significant role:
Overconfidence Bias: Many gamblers tend to overestimate their abilities and their chances of winning. This bias can lead to excessive risk-taking and larger bets.
Gambler’s Fallacy: This is the belief that the outcome of a random event is influenced by previous outcomes. For example, if a roulette wheel has landed on red multiple times, some players may believe that black is now “due” to come up. This fallacy can lead to irrational betting patterns (a short simulation below shows why the belief is mistaken).
Anchoring Bias: People often rely heavily on the first piece of information they encounter when making decisions. In gambling, this can manifest as players anchoring their bets to an initial wager, even if it’s arbitrary.
Loss Aversion: The pain of losing is psychologically more significant than the pleasure of winning. Gamblers are often driven to chase losses, which can lead to a downward spiral.
Confirmation Bias: People tend to seek out information that confirms their preexisting beliefs. In gambling, this can lead players to ignore evidence that suggests they are losing and focus on selective wins.
Emotions also play a significant role in gambling behavior. The thrill of anticipation, the excitement of winning, and the frustration of losing all trigger powerful emotional responses that can influence decision-making.
Reward System Activation: Winning a bet or receiving a payout triggers the brain’s reward system, releasing dopamine, a neurotransmitter associated with pleasure and reinforcement. This reinforcement makes the act of gambling itself inherently rewarding.
Near Misses: Slot machines and other games often incorporate near misses, where the outcome is just short of a win. These near misses can create a sense of almost winning, which can be more motivating than a loss and keep players engaged.
Loss Chasing: The desire to recover losses is driven by emotions like frustration and regret. This phenomenon often leads to continued gambling and more significant losses.
Social Interaction: Gambling can be a social activity, and the emotions associated with group dynamics can further enhance the appeal. The excitement of sharing wins with friends or the fear of being left out can influence decision-making.
Heuristics are mental shortcuts that individuals use to simplify complex decision-making processes.
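Before cataloguing those heuristics, it is worth making the Gambler’s Fallacy concrete. The sketch below is a toy Python simulation: it assumes a fair single-zero roulette wheel, where 18 of 37 pockets are red, and compares how often red appears overall with how often it appears immediately after a streak of five reds.

import random

random.seed(42)
# One million simulated spins: True means the ball landed on red (18 of 37 pockets)
spins = [random.random() < 18 / 37 for _ in range(1_000_000)]

overall = sum(spins) / len(spins)
# Outcomes observed immediately after five consecutive reds
after_streak = [spins[i] for i in range(5, len(spins)) if all(spins[i - 5:i])]

print(f"P(red)          ~ {overall:.4f}")                                # ~0.486
print(f"P(red | 5 reds) ~ {sum(after_streak) / len(after_streak):.4f}")  # ~0.486

The two frequencies match: a fair wheel has no memory, so a run of reds makes black no more "due" than before.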
In gambling, several heuristics are commonly observed:
Availability Heuristic: People tend to make judgments based on readily available information. If someone has recently won a jackpot, others may believe that they, too, can achieve similar success.
Representativeness Heuristic: This heuristic involves making decisions based on perceived similarities to prototypes or stereotypes. In gambling, it can lead players to make bets based on superstitions or beliefs in lucky numbers or patterns.
While cognitive biases, emotions, and heuristics provide insights into the psychology of gambling, it’s essential to recognize that individuals vary in their susceptibility to these influences. Some factors that contribute to individual differences in gambling behavior include:
Personality Traits: Certain personality traits, such as sensation-seeking and impulsivity, are associated with a higher likelihood of engaging in risky behaviors, including gambling.
Biological Factors: Genetic predispositions and brain chemistry can play a role in determining a person’s vulnerability to addiction and gambling-related problems.
Cultural and Social Factors: Cultural norms and social influences can shape an individual’s attitudes toward gambling. Communities where gambling is prevalent may normalize the behavior.
Exposure and Accessibility: The availability and accessibility of gambling opportunities, whether through physical casinos or online platforms, can significantly impact an individual’s gambling behavior.
Psychological Vulnerabilities: Some individuals may have underlying psychological vulnerabilities, such as depression or anxiety, that make them more susceptible to using gambling as a coping mechanism.
Gambling is a multifaceted behavior influenced by cognitive biases, emotions, and heuristics, as well as individual differences in personality, biology, and environment. Understanding the psychology of gambling through the lens of behavioral economics provides valuable insights into why people gamble and why some individuals are more prone to develop gambling problems. For individuals who enjoy gambling responsibly, it can be a source of entertainment and excitement. However, for those who struggle with gambling addiction or are at risk of developing gambling-related problems, it’s crucial to seek help and support. Recognizing the psychological factors at play can be the first step toward responsible gambling behavior and, if necessary, seeking treatment for gambling addiction.
A LoRaWAN gateway is a box that connects wireless LoRaWAN devices, such as sensors, to the Internet via a local network. A LoRa gateway is a radio module that serves as a communication bridge between LoRa terminal devices and a LoRaWAN network server (LNS). The gateway is the entry point for devices in the network, which transmit their data to it. IoT devices use the gateway as a central hub that collects their sensed data and connects it to an external network. A LoRa gateway connects the LoRaWAN network server to a high-bandwidth network, such as WiFi, Ethernet or cellular, to ensure end-to-end connectivity between LoRa end devices and application servers. There are countless steps required to build a LoRa gateway from scratch, from registering with The Things Network to observing uplink data from a simple LoRa node, but it is a critical step in integrating IoT technologies into the embedded devices and applications that drive our world. A LoRaWAN network uses different communication channels for the configuration and monitoring of gateways and devices. At its core, a LoRa (long-range) network, LoRaWAN, consists of nodes, gateways and network operators. A typical LoRaWAN network is a single-hop star topology consisting of devices that transmit data to and from gateways. When a LoRaWAN gateway receives a LoRa-modulated RF message, it forwards that message to the LoRaWAN network server (LNS) connected to the IP backbone. Because there is no fixed pairing between LoRa-based devices and gateways in a LoRaWAN network, a message sent by a terminal device is heard by every gateway within range. Based on the RSSI level of these identical messages, the network server chooses the gateway that received the message with the best RSSI (usually the gateway closest to the device in question) to transmit the downlink message. LoRa gateways can be used to establish network deployments in environments where other types of networks are not viable due to technical limitations. In a LoRa network, gateways act as a transparent bridge, passing messages from a terminal device to a central network server (the backend). In an ideal public or country-wide deployment, gateways are connected via a standard IP connection to the network server; in private rollouts they are controlled directly, where security controls are essential. Terminal devices transmit their data to one or more gateways and the gateways forward the message to a network server. Once a gateway receives a device’s message, the network can act on it. Devices send their signal (RF packet), which is picked up by any gateway within range, and a strong device-gateway link relays the message to the cloud. LoRaWAN enables terminal devices such as sensors and actuators to connect to the network via radio gateways using LoRa RF modulation. The number of terminals that a single LoRa gateway can support depends on several factors. This paper focuses on the specification of the LoRaWAN protocol to determine the maximum number of LoRa nodes that can communicate with a gateway. According to the LoRaWAN specification defined by the LoRa Alliance, the gateway transmits downlink data to a node during receive windows that the node itself opens, with timing that depends on the node’s device class. The data rate between the end node and the LoRa gateway is low, a necessary sacrifice to ensure long battery life and a long wireless range.
The payload size and transmission interval of each terminal device determine the total airtime that a single LoRa gateway must handle. We assumed that, once successfully set up, the gateway would register with The Things Network (TTN) and forward packets from all local nodes within range to the corresponding TTN applications.
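To make that capacity arithmetic concrete, here is a rough Python sketch using the standard Semtech airtime formula (application note AN1200.13). Everything in it is an assumption for illustration: SF7 at 125 kHz bandwidth, coding rate 4/5, an 8-symbol preamble, explicit header, CRC on, an 8-channel gateway, and no allowance for collisions or regional duty-cycle limits, so the resulting number is a loose upper bound, not a deployment figure.

import math

def lora_airtime(payload_bytes, sf, bw=125e3, cr=1, preamble=8,
                 explicit_header=True, crc=True):
    # Semtech AN1200.13 LoRa packet airtime, in seconds
    de = 1 if (sf >= 11 and bw == 125e3) else 0  # low-data-rate optimization flag
    t_sym = (2 ** sf) / bw
    num = 8 * payload_bytes - 4 * sf + 28 + (16 if crc else 0) - (0 if explicit_header else 20)
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25) * t_sym + n_payload * t_sym

airtime = lora_airtime(20, sf=7)   # ~0.057 s for a 20-byte payload at SF7
interval = 600                     # assume each node transmits once every 10 minutes
channels = 8                       # typical multi-channel gateway
print(f"airtime per packet: {airtime * 1000:.1f} ms")
print(f"naive upper bound: {channels * interval / airtime:,.0f} nodes")

In practice, ALOHA-style collisions and the roughly 1% duty-cycle rules that apply in most regions cut this bound by well over an order of magnitude.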
It is difficult to pinpoint which country has the highest rate of food poisoning, as different countries define and report on food poisoning differently. However, a recent survey from the World Health Organization (WHO) revealed that the rate of foodborne illnesses is highest in low- and middle-income countries, particularly in the Eastern Mediterranean region, including Bahrain, Egypt, Kuwait, Qatar, Somalia and the United Arab Emirates. Although food poisoning is not exclusive to developing countries, certain factors such as inadequate or inefficient food safety regulations, poor sanitation, lack of food safety infrastructure, and lack of access to safe and nutritious food make foodborne illnesses more common in low- and middle-income countries. Most lower-income countries also have high numbers of street vendors that sell prepared and ready-to-eat food. These street vendors often lack adequate regulation and food safety training, and their conditions can lead to food contamination and foodborne illnesses. In addition, climate change is increasingly causing food contamination. As temperatures rise and droughts persist in many areas, crop yields decrease and food spoilage increases. This has become a serious problem in many parts of the developing world, where the majority of the population is food-insecure and relies on non-refrigerated foods. Climate change is linked to an increased risk of food poisoning due to the proliferation of microbes, fungi, bacteria, and infections. Overall, food poisoning is a serious problem in both developed and developing countries, and every country should make concerted efforts towards improving food safety standards and infrastructure in order to reduce the risk of food poisoning.
Is food poisoning more common in America?
The short answer to this question is yes, food poisoning is more common in America than in many other countries. According to the Centers for Disease Control and Prevention (CDC), each year around 48 million Americans suffer from foodborne illnesses, 128,000 are hospitalized and 3,000 die from food poisoning. The CDC estimates that 1 in 6 Americans become sick from foodborne illnesses every year, and it is more common in areas with large populations, such as cities. It’s important to note that the three leading sources of foodborne illness in the U.S. are fruits and vegetables, meat and poultry, and seafood. The most common causes of food poisoning in the United States are bacteria such as Salmonella, E. coli, and Listeria. These organisms contaminate food through poor hygiene, inadequate food safety practices, and improper storage. Some of the most common symptoms of food poisoning include vomiting, diarrhea, cramps, and fever. Serious cases of food poisoning can cause lasting health problems or even death. To help reduce the risk of food poisoning, it’s important to properly cook, store, and handle food in a safe way. Cleaning and sanitizing kitchen surfaces, washing hands before preparing food, and not consuming meat that’s past its expiration date are some examples of food safety practices to follow. Additionally, make sure to only buy food from reputable sources and check the labels for any signs of contamination.
Is it easy to get food poisoning in Japan?
Unfortunately, it is possible to get food poisoning in Japan, as anywhere else in the world. Japan has very strict food safety regulations; however, these regulations cannot always guarantee the safety of all food, especially when it comes to outside markets or vendors.
It is important to always be careful when consuming food in Japan and make sure to practice proper food safety precautions. This means paying particular attention to any food that is raw or of questionable safety, such as fruits purchased from roadside vendors. Additionally, it is important to be careful in restaurant settings, ensuring all foods are properly cooked and avoiding any sauces or dressings with raw fish or eggs, as these two ingredients can easily contain bacteria.
What is the food poisoning capital of the world?
It is difficult to determine the “food poisoning capital of the world” as food-borne illness is rarely reported or tracked on a global level, due to the large number of people who may consume contaminated food every year. Estimates suggest around 600 million people every year suffer from food-borne illness, but the majority of these cases are never reported. This lack of reporting means specific countries and regions remain underreported, making it difficult to identify the “food poisoning capital of the world.” However, there are certain countries and regions that are particularly prone to food poisoning and other food-borne illnesses. The countries and regions that are currently regarded as having some of the highest rates of food-borne illness include the United States, China, India, the Middle East, and Africa. The United States has a particularly high rate of food-borne illness due to the prevalence of processed and imported foods, while China, India, and Africa all have high levels of food-borne illness due to a lack of food safety regulations. Unfortunately, the lack of reported cases worldwide makes it difficult to definitively identify the “food poisoning capital of the world.” As a result, it is important to follow food safety guidelines and practice proper hygiene when preparing or consuming food, as this will reduce the risk of food-borne illness and help keep people safe.
Who suffers most from food poisoning?
Generally, the elderly and very young suffer the most from food poisoning, as their immune systems may be weaker and they could be at a greater risk of infection or becoming severely ill. Those with weakened immune systems such as pregnant women, people with cancer, and individuals on certain medications may also be more vulnerable to food poisoning. Additionally, people who work in food handling or preparation, such as restaurants, may be more likely to experience food poisoning due to their increased exposure, even if they are otherwise healthy. Finally, people experiencing extreme weather, such as when traveling, may be more susceptible to food poisoning due to changes in diet or a lack of access to properly-refrigerated food.
How common is food poisoning in America?
Food poisoning is extremely common in America, with almost 50 million people becoming ill from food poisoning every year. On top of that, 3,000 people die in the US due to food poisoning every year. A significant majority of these cases are caused by bacteria such as Salmonella, Escherichia coli (E. coli), Clostridium perfringens, and Campylobacter. Each of these bacteria can contaminate food or water and can lead to severe illnesses and complications such as nausea, vomiting, abdominal cramps, and diarrhea. The best way to lower your risk of getting food poisoning is to be aware of some helpful tips.
Make sure to always keep raw meats and poultry separate from other foods, thoroughly wash hands and all surfaces that come in contact with raw foods after use, and always cook food to the recommended temperatures. Additionally, make sure to use different cutting boards and utensils for raw meats and other cooked or ready-to-eat foods. Following these simple rules will go a long way to keeping you and your family safe from food poisoning.
Which types of poisoning are prevalent in the US?
According to the American Association of Poison Control Centers, there are four primary routes of poisoning in the US: ingestion, inhalation, injection, and absorption. Ingestion is when a person consumes a toxic substance, typically by eating or drinking. Inhalation is when a person breathes in a toxic substance, commonly through the fumes of household cleaners and other household substances. Injection is when a person comes into contact with a toxic substance through a syringe, typically through intravenous drug use. Lastly, absorption is when a person comes into contact with a toxic substance through their skin, often through topical ointments, lotions, or creams. Common types of poisoning in the US include exposure to lead, carbon monoxide, medication and drug overdoses, ethanol, and plants and insect products. Lead exposure can occur from drinking water from lead pipes or from paint chips in older buildings with lead-based paint. Carbon monoxide poisoning is often caused by exposure to fuel-burning appliances and other devices that release combustion gases. Medication and drug overdoses are often the result of incorrect dosages or substance misuse. Exposure to ethanol, a type of alcohol, is often caused by drinking alcohol-containing beverages. Lastly, poisoning from plants and insect products can occur from exposure to poison ivy, poison oak, and bee stings. To protect against poisoning, the Centers for Disease Control and Prevention (CDC) recommends that people take the following steps: store medications and other poisonous substances in their original containers and out of reach of children; keep all containers tightly closed; dispose of unused medications and household chemicals safely; and be aware of any signs and symptoms of poisoning, especially in young children and the elderly.
How quickly does food poisoning kick in?
This depends on the type of food poisoning and the food that has been consumed. Generally, reactions can occur anywhere from within a few hours to several days after consuming contaminated food. Symptoms may range from mild in severity, such as a stomach ache or vomiting, to life-threatening reactions leading to hospitalization or death. Common symptoms associated with food poisoning include nausea, vomiting, abdominal cramps and pain, diarrhea, fever, and fatigue. In some cases, further symptoms may also be experienced, such as headache and loss of appetite, or bloody stools or urine. For most types of food poisoning, it usually takes between 6-24 hours for an individual to start showing symptoms. However, some bacteria may produce a toxin which can cause symptoms as quickly as 1 to 4 hours after eating the contaminated food.
If you’re reading this, then I’m going to assume you already know what an API is. But, for the sake of those who may not know, I’ll touch on the fundamentals a bit.
What Is an API (Application Programming Interface)?
An API is a software interface that allows data exchange and communication between two separate software applications. One system executes the API, while another performs its functions and subroutines. The API specifies the data formats, methods, and requests that can be made between two software systems. APIs are the reason why you can log in to your Twitter account using your Google account credentials. What happens, in very simple terms, is that Twitter sends a request to Google via APIs to fetch your data and voila, you’re in!
What Is API Testing?
API testing is the practice of validating the integrity and functionality of APIs by sending requests across system software and evaluating system responses. In API testing, special software is used to send calls to the API being tested while the responses are noted and analyzed. API testing works on the business logic layer of a codebase, so any anomalies detected could lead to astronomical effects. One could say that APIs make up the background framework of the internet as we know it today. This is why API tests are invaluable.
Why Should You Test APIs?
API testing is crucial now more than ever because APIs serve as the primary link to business logic. Perhaps the most important reason for API testing is that as a system scales, changes are made across the codebase. API regression tests can help to detect whether a system upgrade results in a break in API interfaces. Such a break could have catastrophic results for web apps that rely on those APIs. On the other hand, API observability, like what we do at APItoolkit, can help you detect breaks in the API interfaces that your web app relies on. For more context, here’s a list of the types of bugs that can be detected by API tests:
- Duplicate functionality
- Missing functionality
- Incorrect structuring of response data (JSON or XML)
- Problems in calling and getting a response from an API
- Security issues
- Incorrect handling of valid argument values
- Performance lapses
API Testing Techniques
A. Unit Testing: Unit testing focuses on testing individual API components in isolation to ensure they function correctly and produce the expected outputs.
B. Functional Testing: Functional testing verifies the API’s compliance with functional requirements by testing its inputs, outputs, and interactions with external systems.
C. Load Testing: Load testing evaluates the performance and stability of APIs under varying levels of workload to ensure they can handle expected traffic and scale effectively.
D. Security Testing: Security testing assesses the API’s vulnerability to potential threats, ensuring that appropriate security measures are in place to protect data and system integrity.
E. Error Handling Testing: Error handling testing validates how APIs respond to different types of errors, ensuring that appropriate error codes, messages, and recovery mechanisms are implemented.
F. Interoperability Testing: Interoperability testing focuses on testing the compatibility and seamless integration of APIs with different systems, platforms, and programming languages.
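To ground these techniques, here is a minimal sketch of a functional test (with a touch of error-handling testing) written in Python with the requests library and pytest. The endpoint URL, resource IDs, and expected fields are hypothetical stand-ins invented for illustration, not a real service:

import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_get_user_returns_expected_shape():
    # Functional check: a valid request should return 200 and a well-structured body
    resp = requests.get(f"{BASE_URL}/users/42", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    assert {"id", "name", "email"} <= body.keys()  # required fields are present
    assert body["id"] == 42

def test_unknown_user_is_handled():
    # Error-handling check: an unknown resource should yield 404 with an error payload
    resp = requests.get(f"{BASE_URL}/users/999999", timeout=5)
    assert resp.status_code == 404
    assert "error" in resp.json()

Run the file with pytest; wired into a CI pipeline, the same tests double as the regression checks described above.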
Key Considerations for Testing APIs

Test Coverage and Test Cases: Designing comprehensive test cases and ensuring sufficient test coverage is essential so that all critical API functionalities are thoroughly tested.

Test Environment Setup: Creating a reliable and representative test environment that closely mimics the production environment is crucial for accurate testing and reproducing real-world scenarios.

Test Data Preparation: Preparing relevant and realistic test data sets that cover various scenarios is essential to validate the API’s behavior under different conditions.

Handling Authentication and Authorization: APIs often involve authentication and authorization mechanisms. Testing these components ensures that only authorized users can access the API’s functionalities.

Performance Optimization: Optimizing API performance is vital to provide a seamless user experience. Conducting performance tests and identifying bottlenecks can help optimize response times and resource utilization.

API Testing Best Practices

Adhere to these API testing best practices:

A. Test Automation: Automating API tests allows for faster execution, better test coverage, and early detection of regressions. It also facilitates integration with Continuous Integration/Continuous Delivery (CI/CD) pipelines.

B. API Documentation and Specifications: Clear and comprehensive documentation, along with well-defined API specifications (such as OpenAPI or Swagger), helps developers and testers understand the API’s intended behavior and aids in test design.

C. Versioning and Dependency Management: As APIs evolve over time, maintaining proper versioning and managing dependencies becomes critical to ensure backward compatibility and minimize disruptions in existing integrations.

D. Continuous Integration and Continuous Testing: Integrating API testing into CI/CD workflows enables regular and automated testing, ensuring that API changes do not introduce regressions and that the overall system remains stable.

E. Collaboration between Developers and Testers: Close collaboration between developers and testers promotes a shared understanding of API functionalities, requirements, and potential edge cases, leading to more effective testing and bug resolution.

API Testing Tools

API testing can be done with a variety of automated tools:

- APItoolkit: APItoolkit has all the tools you need to design, test, and monitor your APIs. It’s the one-stop toolbox for API developers working with a variety of tech stacks.
- Rapid API Testing: Over 1 million developers and 10,000 APIs are available on Rapid API. It’s an API testing solution for managing complex API tests throughout the development process. You can run tests for any type of API (including REST, SOAP, and GraphQL).
- SoapUI: Mainly used for REST, SOAP, and other mainstream API and IoT systems.
- Postman: Commonly used for REST APIs.
- Parasoft: A paid tool used for comprehensive API testing.

What Are API Test Cases Based On?

QA teams are usually in charge of API testing. It’s normal to see them follow a predefined strategy to conduct API testing once the build is ready. This testing may not necessarily include the source code. The API testing approach helps to better understand the functionalities, security and testing techniques, input parameters, and the execution of test cases. API test cases are based on the following considerations (see the sketch after this list):

Failure to return a value: An event in which there is no return value when an API is called.

Trigger some other API/event/interrupt: Events and interrupt listeners should be tracked when an API output triggers them.

Return value based on input condition: This is pretty straightforward: input is made and the results are validated.

Update data structure: Changing data structures will have some effect on the system, which should be validated.

Modify certain resources: API calls that modify resources should be checked by accessing the corresponding resources.
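As a hedged illustration of the last two considerations, here is a sketch of a create-then-read-back test in Python (pytest + requests). The /users endpoint, the payload fields, and the expected status codes are assumptions made for this example, not a prescribed API.

```python
# Sketch: verify "return value based on input condition" and "modify certain
# resources" by creating a resource and reading it back. The endpoint,
# payload fields, and status codes are hypothetical.
import requests

BASE_URL = "https://api.example.com"


def test_create_then_read_back():
    payload = {"name": "Ada", "email": "ada@example.com"}

    # Return value based on input condition: valid input should yield a 201
    # and an id in the response body.
    created = requests.post(f"{BASE_URL}/users", json=payload, timeout=5)
    assert created.status_code == 201
    user_id = created.json()["id"]

    # Modify certain resources: check the write by accessing the resource.
    fetched = requests.get(f"{BASE_URL}/users/{user_id}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["email"] == payload["email"]
```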
APIs are software interfaces that allow data exchange and interaction between two different software applications, and the purpose of API testing is to validate their integrity and functionality by sending requests across systems and evaluating the responses. When approaching API testing, assess which APIs enable important customer-facing applications or solutions and which provide a solid technical foundation. API testing should be prioritized based on company strategy, business and modernization impact, and the ability to execute. Develop API testing standards for your organization and train developers on prioritization.

Final Thoughts on API Testing

API testing is a fundamental measure for maintaining the seamless operation of application systems. When APIs are not tested thoroughly, problems surface both in the API itself and in the applications that call it. Suffice it to say that API testing is indispensable in software engineering: a break in an API of even a few seconds could have huge financial consequences. APItoolkit provides API observability and testing as a service, augmenting your QA team to detect issues automatically and in real time.

Recommended Post: How to Tackle Anomalies in RESTful APIs (the Right Way)
Recommended Post: API Documentation vs Specification: What It Means for You
Recommended Post: A Comprehensive API Management Strategy for Businesses
Recommended Post: The Rise of API-as-a-Product (2023)