In 1987, a barge called Mobro 4000 departed Islip in New York's Suffolk County loaded up with 3,100 tons of waste, a fair bit of which was Styrofoam containers. The barge was supposed to unload its undesirable cargo into a landfill in North Carolina, but that proved harder than expected. No one wanted the trash. The Mobro ended up wandering the eastern seaboard for six months, dipped into the Caribbean, and even made it as far as Belize without finding a suitable dumpsite. The fiasco drew media attention and ignited a national conversation about landfills, recycling, and the environment. Not surprisingly, Styrofoam emerged from this debate as the logical bad guy since it does not biodegrade and thus, it was argued, would crowd landfills and pollute our oceans. In 1988, New York's Suffolk County enacted the first Styrofoam ban in the United States, but a plastics lobby quickly formed in response and succeeded in overturning the ban. Since then, similar bans have been put in place around the country—on Styrofoam, plastic bags and, most recently, plastic straws—resulting in years of litigation and millions of dollars' worth of legal fees. We've been stuck in the same debate for the past 30 years. Styrofoam is still here—but is that good or bad? Turns out, there's no simple answer. And Styrofoam's story is certainly complicated.

A Wonder Product or a Waste Nightmare?

First discovered in 1839 in Berlin, Styrofoam's precursor—expanded polystyrene (EPS) foam—became immensely popular during World War II as an inexpensive building material for military aircraft. Between 1939 and 1945, the rate of polystyrene production increased exponentially. In 1946, the Dow Chemical Company trademarked Styrofoam. In the process of trying to make polystyrene more flexible, Dow scientist Ray McIntire mixed together styrene and isobutene in a reactor and heated them. The result was extruded polystyrene foam, a strong material that is moisture resistant and composed of 98 percent air—so incredibly lightweight and buoyant that it was considered a wonder product. Its low cost and ease of production catapulted Styrofoam into our lives. From energy-efficient building insulation to surfboards, and from soilless hydroponic gardening to airplane construction, Styrofoam was heralded as the wave of the future—until the environmental issues came up.

In the 1970s, research found not only that EPS foam degrades in seawater, but also that the resulting pieces, called styrene monomers, are toxic when ingested by marine life. "It doesn't biodegrade, it just breaks down, and as it breaks down it just becomes edible to more things and it just leads further down the food chain," says Nathan Murphy, the state director for Environment Michigan. There are several concerns here, he adds. One is that creatures that fill their stomachs with plastic pieces may not be able to get enough food. Another is that chemicals, particularly endocrine disruptors, might leach out of the plastic and harm wildlife—or worse, make their way into the human food chain. And yet, for all its bad press, Styrofoam actually has its advantages over other packaging products, says Trevor Zink, an assistant professor of management at the Institute of Business Ethics and Sustainability at Loyola Marymount University.
If you consider Styrofoam's overall lifecycle impact assessment, looking at factors like energy demand, global warming, water consumption, and other ills, the foam actually has a lower footprint than other packaging materials, says Zink. It's so light that it has "lower production and transportation impacts than other products." Joe Vaillancourt, CEO of Oregon-based chemical recycling company Agilyx, agrees. "Foam is one of the more high utility polymers—very low cost, tremendous value, easy to manufacture—it's the polymer of choice for things like shipping, food, electronics, etc.," he says. "And yet it's being vilified by the public—you have, as typical, a lot of misinformation about it."

Agilyx uses its pyrolysis-based technology to convert various plastic waste into hydrocarbon products—basically, it breaks polymers down into their elemental constituents, which works very well for Styrofoam recycling. After compacting the Styrofoam and mixing it with other types of polystyrene foam plastics, Agilyx converts it back into a type of oil that can be used in the manufacturing of anything from bicycle helmets to high-quality synthetic crude oil. The crude oil is a particularly promising application since it replaces virgin, non-renewable crude with a recycled feedstock. Agilyx has sold its crude oil to a refinery that turned it into jet fuel, which was then sold to the Department of Defense. Moreover, Agilyx isn't the only Styrofoam-recycling innovator. Another company, Styro-Gro, has outfitted trucks with built-in Styrofoam compactors for convenient pick-up and then converts the foam into faux marble or quartz.

So if recycling Styrofoam is possible, why hasn't it caught on in the same way as other materials? Turns out, it all boils down to economics—volume, weight, and a functioning recycling process. The waste system wasn't set up for Styrofoam recycling, says Agilyx Vice President of Operations Brian Moe. So today there's little capacity and market for recycling it and turning it into useful products. Foam is a problem child for many facilities since it can easily break up and contaminate other, more profitable recyclables. Food service foam containers are particularly problematic since they are difficult to clean and most facilities don't want to deal with that. Vaillancourt notes that while Agilyx's technology can recycle commercial volumes of fairly contaminated mixed plastics (such as waste from 500 customers, including schools and lunch trays with food leftovers), most other pyrolysis companies haven't achieved commercial scale. "The challenge with chemical recycling is you need to be comprehensive about the types of products you take in and produce. That's one of the reasons that chemical recycling has been slow to adopt," he reflects.

Anna Dengler, Vice President of Operations for corporate sustainability consulting firm Great Forest, says that when advising clients on whether or not to recycle Styrofoam, it comes down to volume and weight. "The issue with Styrofoam as opposed to hard plastics is that [hard plastics] weigh more," she explains. Since foam is so light, it can take up a lot of room with far less monetary return, so it's not worth it for a lot of haulers. "You have to get a special compactor on site to compact the Styrofoam so it gets all the air out so then you are more likely to find a hauler who will move and recycle the material," says Dengler. For a large-scale urban business this is a possibility, but many smaller companies are limited by the availability of haulers.
The Pros and Cons of Banning and Recycling

After New York City's ban on Styrofoam was challenged in court, the Department of Sanitation undertook a comprehensive study on the feasibility of Styrofoam recycling and determined that food service foam "cannot be recycled in a manner that is economically feasible or environmentally effective for New York City." After examining other municipalities that have tried to institute recycling for food service foam over the past 30 years, the report found that the majority of Styrofoam collected for recycling ended up in landfill anyway—but at a higher economic cost and carbon footprint compared to being directly landfilled. With these findings, the city was able to successfully implement a ban on expanded Styrofoam containers and packing peanuts, which will go into effect in 2019. Murphy salutes it—his work at Environment Michigan includes efforts to implement a statewide Styrofoam ban. Recycling isn't the way to go, he thinks. "A way to think about it is the cleanest, least polluted plastic is the one we don't make in the first place," he says. Moreover, researchers have found that people who recycle may in fact end up being more wasteful because throwing something in the recycling bin makes them feel that using more of that product is environmentally harmless.

But bans aren't without their own problems either. Zink, who describes himself as a "deep and passionate environmentalist," argues that perhaps bans are doing more harm than good. When considering a ban, he says, it's important to consider what will be replacing the banned product. Since single-use food service containers are not going to cease to exist, what would replace Styrofoam? It could end up being another type of material that has a greater environmental footprint than Styrofoam, Zink says. "If we're going to continue to have single use products anyway, it's better that they be made of the low-impact material than the high-impact material, and we should do a better job of collecting the waste and preventing it from ever entering these fragile ecosystems." Otherwise you just swap one bad product for another.

Compostable options seem promising, but a report by Clean Water Action states that the majority of compostable single-use food service products end up in landfill anyway and that, whether composted or landfilled, they do not reduce greenhouse gas emissions. Mealworms and mushrooms show promise as eco-friendly ways to degrade plastic, but that technology is still in its infancy.

Can We Simply Be Less Wasteful?

According to the EPA, waste and waste management issues are improving. In 2014, each American produced an average of 4.4 pounds of solid waste per day, which is one of the lowest rates since before 1990. And between 1980 and 2014, recycling rates increased from less than 10 percent to more than 34 percent, while landfilling dropped from 89 percent to below 53 percent. In 2014, the EPA said that the impact of the 89 million tons of municipal solid waste that were recycled and composted was equivalent to taking more than 38 million cars off the road. But there's a catch here too. Vaillancourt notes that when some individuals drive up to 45 minutes each way just to drop off the foam product at the Agilyx recycling facility, it certainly doesn't take any cars off the road.
"It doesn't make sense from a carbon footprint standpoint." It seems that neither bans nor recycling may be the magic button, but producing less waste overall is the right idea. So the 30-year-long debate is now shifting from waste management to waste reduction. Part of that process will involve taking a close look at our own practices, at both the individual and corporate levels. That would be an important step in the right direction. "Recycling has become a religion at this point and when things become a religion you stop looking at them through a critical eye—and I think we should," says Zink, emphasizing that reducing waste is a much more efficient way to manage it. "A better option is not to use the single-use stuff in the first place."
Do you have a toddler who isn't talking, has trouble with eye contact, or has trouble participating in an activity with you? Have you noticed that your toddler is able to label many items but is unable to tell you what they want in a meaningful way? As a parent of a child like that, I understand how overwhelmed, frustrated, and helpless you may feel after having tried everything. I remember feeling isolated and hopeless listening to the rhymes my child sang repeatedly throughout the day while communicating nothing! I recall shuttling between "educational" apps, early intervention, and speech therapy services and realizing that the skills weren't transferring into real communication. I so feel the unconditional love you have for your child that has caused you to look for answers and not give up! Let's dive in.

So, when is a child's communication considered delayed? First, let's understand what normal speech milestones for a growing child look like. This will help you reflect on what your child is currently exhibiting and stay in positive anticipation of the future.

Normal Pattern of Speech Development

| Age | Typical speech and language milestones |
| --- | --- |
| 1 to 6 months | Coos in response to voice |
| 6 to 9 months | Babbling |
| 10 to 11 months | Imitation of sounds; says "mama/dada" without meaning |
| 12 months | Says "mama/dada" with meaning; often imitates two- and three-syllable words |
| 13 to 15 months | Vocabulary of four to seven words in addition to jargon; < 20% of speech understood by strangers |
| 16 to 18 months | Vocabulary of 10 words; some echolalia and extensive jargon; 20% to 25% of speech understood by strangers |
| 19 to 21 months | Vocabulary of 20 words; 50% of speech understood by strangers |
| 22 to 24 months | Vocabulary > 50 words; two-word phrases; dropping out of jargon; 60% to 70% of speech understood by strangers |
| 2 to 2 ½ years | Vocabulary of 400 words, including names; two- to three-word phrases; use of pronouns; diminishing echolalia; 75% of speech understood by strangers |
| 2 ½ to 3 years | Use of plurals and past tense; knows age and sex; counts three objects correctly; three to five words per sentence; 80% to 90% of speech understood by strangers |
| 3 to 4 years | Three to six words per sentence; asks questions, converses, relates experiences, tells stories; almost all speech understood by strangers |
| 4 to 5 years | Six to eight words per sentence; names four colors; counts 10 pennies correctly |

Information from Schwartz ER. Speech and language disorders. In: Schwartz MW, ed. Pediatric primary care: a problem oriented approach. St. Louis: Mosby, 1990:696–700. Copyright © 1999 by the American Academy of Family Physicians.

What is echolalia?

When children repeat phrases or sounds, it is called echolalia. This is a means of developing language; however, it can be a red flag if seen beyond a certain age. When a child repeats a word (or most words and sounds) right after hearing it, this is called immediate echolalia. When they repeat it at a later time, sometimes days or months later, it is called delayed echolalia. This can often seem out of context and repetitive in nature, and it is also interchangeably called "scripting." Some examples of delayed echolalia: repeating a line from a video over and over after days or months, or constantly singing the same rhyme regardless of the situation. A child with a speech delay may use echolalia when he is bored or tired, or simply to self-stimulate. Understanding this will help you guide your child's echolalia and turn it into functional language. That is why we need to know the TWO PRIMARY LANGUAGE ACQUISITION STYLES.
These are the analytic and gestalt modes of language acquisition. For a long time, I had no answer for my son's delayed echolalia, and every therapist was simply working "around it". He had limited speech, and until I stumbled on this article, I did not realize that my child checked all the boxes of a "gestalt language processor". Read on to understand and ensure you are using the right approach for such a kid, as they need even greater levels of support to make progress. Please see the chart below for quick reference. Originally published here, for more details.

Can my child with speech delays catch up? Well, you can surely make progress. Gestalt language acquisition is a style of language development with predictable stages that begins with the production of multi-word "gestalt forms" and ends with the production of novel utterances.
- At first, children produce "chunks" or "gestalt forms" (e.g., echolalic utterances), without distinction between individual words and without an appreciation for internal syntactic structure.
- As children understand more about syntax and syntactic rules, they can analyze (break down) these "gestalt forms" and begin to recombine segments and words into spontaneous forms.
- Eventually, the child is able to formulate creative, spontaneous utterances for communication purposes.
To summarize: gestalt learners learn in "chunks" without processing the meanings of individual words. This learning style is called a "gestalt" style of language acquisition. Check out a supporting article here. Read on to learn how to support their language acquisition!
- Teach appropriate gestalts that can be used as building blocks, e.g., "I need help", "Potty please".
▪ Pick gestalts that the child understands and would be useful for them to combine (e.g., "let's find," "want more", "missing").
▪ Use motivating and preferred activities, always; I can't stress this enough. Use these routines to offer meaningful gestalts.
▪ Try not to teach rote/inflexible scripts that are not true symbolic communication (e.g., "Can I please use the bathroom?"); instead, use simpler words or two-word phrases like "potty please".
Here is the one that immensely helped in my case: use teaching core words as a strategy. Core words are the 50–400 words that make up the majority of everything we say. More on this in my following post. In layman's terms: teach individual words and generalize them effectively in various natural situations. This broadens their vocabulary and situational awareness. Practice generalizing across settings and communication partners. Understanding words and modeling their use purposefully, just a few phrases at a time, seems to be the practical step in setting the stage for recombining words into novel utterances. Read my next post on core words here.

Understand if your child is a gestalt language processor. Use echolalia to bridge the gap and build self-generated communication. It's a strategy I tested in my own case; it worked, and my child made great progress in using novel utterances by combining individual words. It gave him a boost with emotional regulation as well. His newly learned speech made it easier to communicate, reducing anxiety and positively impacting his engagement with others throughout the day.

References:
- Stiegler, L. N. (2015). Examining the echolalia literature: Where do speech-language pathologists stand? American Journal of Speech-Language Pathology, 24(4), 750–762.
- Prizant, B. M., & Duchan, J. F. (1981). The functions of immediate echolalia in autistic children. Journal of Speech and Hearing Disorders, 46(3), 241–249.
- Blanc, M. (2012). Natural language acquisition on the autism spectrum: The journey from echolalia to self-generated language. Madison, WI: Communication Development Center.
- Local, J., & Wootton, T. (1995). Interactional and phonetic aspects of immediate echolalia in autism: A case study. Clinical Linguistics & Phonetics, 9, 155–184.
- Rydell, P., & Mirenda, P. (1994). Effects of high and low constraint utterances on the production of immediate and delayed echolalia in young children with autism. Journal of Autism and Developmental Disorders, 24, 719–735.
- Sterponi, L., & Shankey, J. (2014). Rethinking echolalia: Repetition as interactional resource in the communication of a child with autism. Journal of Child Language, 41, 275–304.
- Tarplee, C., & Barrow, E. (1999). Delayed echoing as an interactional resource: A case study of a 3-year-old child on the autistic spectrum. Clinical Linguistics & Phonetics, 13, 449–482.
- https://blog.asha.org/2017/05/09/echoes-of-language-development-7-facts-about-echolalia-for-slps/
- https://www.hanen.org/SiteAssets/Articles---Printer-Friendly/Research-in-your-Daily-Work/The-Meaning-Behind-the-Message_Helping-Childrenwh.aspx
This example shows how to generate cartoon lines and overlay them onto an image. Bilateral filtering is used in computer vision systems to filter images while preserving edges and has become ubiquitous in image processing applications. Those applications include denoising while preserving edges, texture and illumination separation for segmentation, and cartooning or image abstraction to enhance edges in a quantized color-reduced image. Bilateral filtering is simple in concept: each pixel at the center of a neighborhood is replaced by the average of its neighbors. The average is computed using a weighted set of coefficients. The weights are determined by the spatial location in the neighborhood (as in a traditional Gaussian blur filter) and the intensity difference from the center value of the neighborhood. These two weighting factors are independently controllable by the two standard deviation parameters of the bilateral filter. When the intensity standard deviation is large, the bilateral filter acts more like a Gaussian blur filter, because the intensity Gaussian is less peaked. Conversely, when the intensity standard deviation is smaller, edges in the intensity are preserved or enhanced.

This example model provides a hardware-compatible algorithm. You can generate HDL code from this algorithm and implement it on a board using a Xilinx™ Zynq™ reference design. See Bilateral Filtering with Zynq-Based Hardware (Vision HDL Toolbox Support Package for Xilinx Zynq-Based Hardware). The BilateralFilterHDLExample.slx system is shown here.

modelname = 'BilateralFilterHDLExample';
open_system(modelname);
set_param(modelname, 'SampleTimeColors', 'on');
set_param(modelname, 'SimulationCommand', 'Update');
set_param(modelname, 'Open', 'on');
set(allchild(0), 'Visible', 'off');

To achieve a modest Gaussian blur of the input, choose a relatively large spatial standard deviation of 3. To give strong emphasis to the edges of the image, choose an intensity standard deviation of 0.75. The intensity Gaussian is built from the image data in the neighborhood, so this plot represents the maximum possible values. Note the small vertical scale on the spatial Gaussian plot.

figure('units', 'normalized', 'outerposition', [0 0.5 0.75 0.45]);
subplot(1,2,1); s1 = surf(fspecial('gaussian', [9 9], 3));
subplot(1,2,2); s2 = surf(fspecial('gaussian', [9 9], 0.75));
legend(s1, 'Spatial Gaussian 3.0');
legend(s2, 'Intensity Gaussian 0.75');

For HDL code generation, you must choose a fixed-point data type for the filter coefficients. The coefficient type should be an unsigned type. For bilateral filtering, the input range is always assumed to be on the interval [0, 1]. Therefore, a uint8 input with a range of values from 0 to 255 is treated as 0 to 1. The calculated coefficient values are less than 1. The exact values of the coefficients depend on the neighborhood size and the standard deviations. Larger neighborhoods spread the Gaussian function such that each coefficient value is smaller. A larger standard deviation flattens the Gaussian to produce more uniform values, while a smaller standard deviation produces a peaked response. If you try a type and the coefficients are quantized such that more than half of the kernel becomes zero for all input, the Bilateral Filter block issues a warning. If all of the coefficients are zero after quantization, the block issues an error.

The model converts the incoming RGB image to intensity using the Color Space Converter block.
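To make the weighting described above concrete, here is a minimal MATLAB sketch of how one output pixel would be computed for a grayscale neighborhood, using the same 9-by-9 neighborhood and standard deviations as this example. This is an illustration added for this write-up, not code from the shipped model; it assumes the Image Processing Toolbox fspecial function used above.

nhood  = 9;       % neighborhood is nhood-by-nhood
sigmaS = 3;       % spatial standard deviation
sigmaI = 0.75;    % intensity standard deviation
spatialW = fspecial('gaussian', [nhood nhood], sigmaS);   % fixed spatial weights
I = rand(nhood);                                          % example neighborhood, values on [0, 1]
centerVal = I((nhood+1)/2, (nhood+1)/2);                  % intensity at the center pixel
intensityW = exp(-(I - centerVal).^2 / (2*sigmaI^2));     % range weights from intensity difference
w = spatialW .* intensityW;                               % combined bilateral weights
w = w / sum(w(:));                                        % normalize the kernel
filteredCenter = sum(sum(w .* I));                        % weighted average replaces the center pixel

A small intensity standard deviation makes intensityW fall off quickly for neighbors that differ from the center value, which is what preserves edges.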
Then the grayscale intensity image is sent to the Bilateral Filter block, which is configured for a 9-by-9 neighborhood and the parameters established previously. The bilateral filter provides some Gaussian blur but will strongly emphasize larger edges in the image based on the 9-by-9 neighborhood size. Next, the Sobel Edge Detector block computes the gradient magnitude. Since the image was pre-filtered using a bilateral filter with a fairly large neighborhood, the smaller, less important edges in the image will not be emphasized during edge detection. The threshold parameter for the Sobel Edge Detector block can come from a constant value on the block mask or from a port. The block in this model uses the port to allow the threshold to be set dynamically. This threshold value must be computed for your final system, but for now, you can just choose a good value by observing results.

To overlay the thresholded edges onto the original RGB image, you must realign the two streams. The processing delay of the bilateral filter and edge detector means that the thresholded edge stream and the input RGB pixel stream are not aligned in time. The Pixel Stream Aligner block brings them back together. The RGB pixel stream is connected to the upper pixel input port, and the binary threshold image pixel is connected to the reference input port. The block delays the RGB pixel stream to match the threshold stream. You must set the number of lines parameter to a value that allows for the delay of both the bilateral filter and the edge detector. The 9-by-9 bilateral filter has a delay of more than 4 lines, while the edge detector has a delay of a bit more than 1 line. For safety, set the Maximum number of lines to 10 for now so that you can try different neighborhood sizes later. Once your design is done, you can determine the actual number of lines of delay by observing the control signal waveforms.

Color quantization reduces the number of colors in an image to make processing it easier. Color quantization is primarily a clustering problem, because you want to find a single representative color for a cluster of colors in the original image. For this problem, you can apply many different clustering algorithms, such as k-means or the median cut algorithm. Another common approach is using octrees, which recursively divide the color space into 8 octants. Normally you set a maximum depth of the tree, which controls the recursive subtrees that will be eliminated and therefore represented by one node in the subtree above. These algorithms require that you know beforehand all of the colors in the original image. In pixel streaming video, the color discovery step introduces an undesirable frame delay. Color quantization is also generally best done in a perceptually uniform color space such as L*a*b. When you cluster colors in RGB space, there is no guarantee that the result will look representative to a human viewer.

The Quantize subsystem in this model uses a much simpler form of color quantization based on the most significant 4 bits of each 8-bit color component. RGB triples with 8-bit components can represent up to 2^24 (16,777,216) colors, but no single image can use all those colors. Similarly, when you reduce the number of bits per color to 4, the image can contain up to 2^12 (4,096) colors. In practice, a 4-bit-per-color image typically contains only several hundred unique colors.
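As a rough MATLAB illustration of this reduced-precision quantization (a sketch assumed for this description, not code taken from the model), the shift-based approach can be tried offline on any uint8 RGB image:

rgb = imread('peppers.png');                 % any uint8 RGB test image shipped with MATLAB
quant = bitshift(bitshift(rgb, -4), 4);      % keep only the 4 most significant bits of each component
% Each 8-bit component now takes one of 16 values, so at most 16^3 = 4,096 colors remain,
% while the data stays in the 24-bit RGB format expected by the video viewer.
nColors = size(unique(reshape(quant, [], 3), 'rows'), 1);

Counting the unique rows shows that a typical photograph collapses to a few hundred distinct colors after this step.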
After shifting each color component to the right by 4 bits, the model shifts the result back to the left by 4 bits to maintain the 24-bit RGB format supported by the video viewer. In an HDL system, the next processing steps would pass on only the 4-bit color RGB triples. A switch block overlays the edges on the original image by selecting either the RGB stream or an RGB parameter. The switch is flipped based on the edge-detected binary image. Because cartooning requires strong edges, the model does not use an alpha mixer. In addition to the pixel and control signals, two parameters enter the HDLAlgorithm subsystem: the gradient threshold and the line RGB triple for the overlay color. The FrameBoundary subsystem provides run-time control of the threshold and the line color. However, to avoid an output frame with a mix of colors or thresholds, the subsystem registers the parameters only at the start of each frame. After you run the simulation, you can see that the resulting images from the simulation show bold lines around the detected features in the input video.

To check and generate the HDL code referenced in this example, you must have an HDL Coder™ license. To generate the HDL code, use the makehdl command on the HDLAlgorithm subsystem; to generate the test bench, use the makehdltb command (example calls are sketched at the end of this example). Note that test bench generation takes a long time due to the large data size. Consider reducing the simulation time before generating the test bench. The part of the model between the Frame to Pixels and Pixels to Frame blocks can be implemented on an FPGA. The HDLAlgorithm subsystem includes all elements of the bilateral filter, edge detection, and overlay.

The bilateral filter in this example is configured to emphasize larger edges while blurring smaller ones. To see the edge detection and overlay without bilateral filtering, right-click the Bilateral Filter block and select Comment Through. Then rerun the simulation. The updated results show that many smaller edges are detected and, in general, the edges are much noisier.

This model has many parameters you can control, such as the bilateral filter standard deviations, the neighborhood size, and the threshold value. The neighborhood size controls the minimum width of emphasized edges. A smaller neighborhood results in more small edges being highlighted. You can also control how the output looks by changing the RGB overlay color and the color quantization. Changing the edge detection threshold controls the strength of edges that are overlaid. To further cartoon the image, you can try adding multiple bilateral filters. With the right parameters, you can generate a very abstract image that is suitable for a variety of image segmentation algorithms.

This model generated a cartoon image using bilateral filtering and gradient generation. The model overlaid the cartoon lines on a version of the original RGB image that was quantized to a reduced number of colors. This algorithm is suitable for FPGA implementation.

Tomasi, C., and R. Manduchi. "Bilateral filtering for gray and color images." Sixth International Conference on Computer Vision, 1998.
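The HDL and test bench generation calls referenced earlier would typically be written as in the following sketch. The subsystem path is an assumption based on the HDLAlgorithm subsystem and the modelname variable defined above; verify it against your copy of the model before running.

makehdl([modelname '/HDLAlgorithm']);      % generate HDL for the FPGA-targeted subsystem
makehdltb([modelname '/HDLAlgorithm']);    % generate the HDL test bench (slow for large video data)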
The reed relay was invented in 1936 by Bell Telephone Laboratories. Since that time, it has gradually evolved from very large, relatively crude parts to the small, ultra-reliable parts we have today. Production methods and quality systems have improved a great deal over that time, and costs have been radically reduced. Pickering Electronics, an established reed relay manufacturer, was founded in 1968, and even then some were saying that these electromechanical devices would have a limited lifetime. Instead, the market for high-quality reed relays has increased into areas that were inconceivable in those days. Part 1 of this two-part series answered the question, "What is a reed relay?" This article delves into the differences between reed relays and other switching technologies.

Electromechanical relays (EMRs) are widely used in industry for switching functions and often can be the lowest cost relay solution available to users. Manufacturers have made huge investments in manufacturing technology to make the relays in high volumes. There are some notable differences between reed relays and EMRs which users should be aware of:
- Reed relays generally exhibit much faster operation (typically by a factor of 5 to 10) than EMRs. The speed difference arises because the moving parts are simpler and lighter compared to EMRs.
- Reed relays have hermetically sealed contacts, which lead to more consistent switching characteristics at low signal levels and higher insulation values in the open condition. EMRs often are enclosed in plastic packages that give a certain amount of protection, but the contacts over time are exposed to external pollutants, emissions from the plastic body, and oxygen and sulphur ingress.
- Reed relays have longer mechanical life (under light load conditions) than EMRs, typically by a factor of 10 to 100. The difference arises because reed relays have simpler and fewer moving parts than EMRs.
- Reed relays require less power to operate the contacts than EMRs.
- EMRs are designed to have a wiping action when the contacts close, which helps to break small welds and self-clean their contacts. This does help lead to higher contact ratings but also may increase wear on the contact plating.
- EMRs can have much higher ratings than reed relays because they use larger contacts; reed relays usually are limited to carrying currents of up to 2 A or 3 A. Because of their larger contacts, EMRs also often can better sustain current surges.
- EMRs typically have a lower contact resistance than reed relays because they use larger contacts and normally can use materials of a lower resistivity than the nickel iron used in a reed switch capsule.
Reed relays and EMRs both behave as excellent switches. The use of high-volume manufacturing methods often makes EMRs lower cost than reed relays, but within the achievable ratings of reed relays, the reed relay has much better performance and longer life.

Solid-State Relays

The term "solid-state relay" refers to a class of switches based on semiconductor devices. There is a large variety of these switches available. Some, such as PIN diodes, are designed for RF applications, but the most commonly found devices that compete with reed relays are based on FET switches. A solid-state FET switch uses two MOSFETs in series and an isolated gate driver to turn the relay on or off.
There are some key differences compared to a reed relay:
- All solid-state relays have a leakage current associated with their semiconductor heritage; consequently, they do not have as high an insulation resistance. The leakage current is nonlinear. The on-resistance also can be nonlinear, varying with load current.
- There is a compromise between capacitance and path resistance. Relays with low path resistance have a large capacitive load (sometimes measured in nanofarads for high-capacity switches), which restricts bandwidth and introduces capacitive loading. As the capacitive load is decreased, the FET size has to decrease, and the path resistance increases. The capacitance of a solid-state FET switch is considerably higher than a reed relay.
- Reed relays are naturally isolated by the coil from the signal path; solid-state relays are not, so an isolated drive has to be incorporated into the relay.
- Solid-state relays can operate faster and more frequently than reed relays.
- Solid-state relays can have much higher power ratings.
- In general, reed relays behave much more like perfect switches than solid-state relays since they use mechanical contacts.

MEMS switches still are largely in the development stage for general usage as relays. MEMS switches are fabricated on silicon substrates where a three-dimensional structure is micro-machined (using semiconductor processing techniques) to create a relay switch contact. The contact then can be deflected either using a magnetic field or an electrostatic field. Much has been written about the promise of MEMS switches, particularly for RF switching, but availability in commercially viable volumes at the time of writing is very limited. The technology challenges have resulted in a number of vendors involved in MEMS failing and either ceasing to trade or closing down their programs. Like reed relays, MEMS can be fabricated so the switch part is hermetically sealed (either in a ceramic package or at a silicon level), which generally leads to consistent switching characteristics at low signal levels. However, MEMS switches have small contact areas and low operating forces, which frequently lead to partial weld problems and very limited hot-switch capacity. The biggest advantage for MEMS relays—if they can be made reliable—is their low operating power and fast response. The receive/transmit switch of a mobile phone, for example, has long been a target for MEMS developers. However, at their present stage of development, it seems unlikely they will compete in the general market with reed relays as the developers concentrate on high value niche opportunities and military applications.

The Future for Reed Relays

In more recent years, there has been a constant quest for further miniaturization. Smaller parts have required more sophisticated methods, including lasers, to create the glass-to-metal hermetic seal of the reed switch capsule. Lasers also are sometimes used to adjust the sensitivity of reed switches by slightly bending the switch blades to change the size of the contact gap. Contact plating materials and methods also have changed, particularly in the areas of cleanliness, purity of materials, and the reduction of microscopic foreign particles or organic contamination, resulting in superb low-level performance. Reed-relay operating coils also have become smaller and more efficient thanks to advanced coil-winding techniques with controlled layering of the coil-winding wire.
In the case of Pickering Electronics' relays, the coil-winding bobbin also has been dispensed with in favor of former-less coils, which has reduced package sizes. While reed relays are a relatively mature technology, such evolution will continue in the future. A reed relay in many ways is a near-perfect switching element with a simple metallic path. A well-designed and correctly used part will give a long and reliable life. Reed relays will certainly be around for many years to come.

Pickering's new Series 120 4 mm²™ reed relay range has attracted a lot of interest since being released in July at Semicon West in San Francisco, U.S.A. The relays require a board area of only 4 mm x 4 mm, making it the highest packing density currently available, taking up the smallest board area ever. Two switch types are available: a general purpose sputtered ruthenium switch rated for up to 20 Watts, 1 Amp, and a low level sputtered ruthenium switch rated at 10 Watts, 0.5 Amps. These are the same reed switches as used in many other long-established Pickering Electronics ranges but are orientated vertically within the package, allowing this very high density. The small size of the package does not allow an internal diode. Back EMF suppression diodes are included in many relay drivers, but if they are not, and depending on your drive methods, these may have to be provided externally. The relays feature an internal mu-metal magnetic screen. Mu-metal has the advantage of a high permeability and low magnetic remanence and eliminates problems that would otherwise occur due to magnetic interaction. Relays of this small size without magnetic screening would be totally unsuitable for applications where dense packing is required. To learn more about this industry-changing reed relay range, visit Pickering Electronics in booth E.5910 at Electronica China 2018 this March 14–16 in the Shanghai New International Expo Centre, China.

Pickering Electronics, a UK designer and manufacturer of reed relays, has announced they are now a member of the Electronics Representatives Association (ERA), the international trade organization for professional field sales companies in the global electronics industries, manufacturers who go to market through representative firms and global distributors. Since 1968, Pickering Electronics have been manufacturing high-quality reed relays for instrumentation and automatic test equipment (ATE), high voltage switching, low thermal EMF, direct drive from CMOS, RF switching, and other specialist applications. Pickering Electronics has developed a solid customer base in a wide range of industries and applications, including their sister company, Pickering Interfaces, designers and manufacturers of modular PXI/PCI/LXI switching systems. Pickering Interfaces are a large reed relay customer who work very closely with Pickering Electronics on leading-edge reed relay designs, reliability testing, life testing, and production engineering, amongst other things. This close relationship greatly benefits both companies and gives Pickering Electronics a strong insight into demanding functional test reed relay applications.
Consecutive years of double-digit growth in sales, and recent investments to expand the capacity of the organisation, have led Pickering Electronics to update their strategy in the USA and begin establishing strategic partnerships with representatives and distributors that are focused on the electronic components/test and measurement market.

Pickering Electronics, a manufacturer of high-quality reed relays, has announced that this year marks 50 years in business. Pickering Electronics was founded in 1968 by the late John Moore. Five decades later its future is looking bright, with sales in 2017 up by 30% on the previous year. "Fifty years of designing, manufacturing and distributing reed relays means that we have a very good understanding of the product we are selling and consider ourselves to be the leaders in reed relay technology," said Graham Dale, technical director at Pickering Electronics. "Since 1968, we have gradually evolved our reed relays from very large, relatively crude parts to the small, ultrareliable parts we have today. Production methods and quality systems have improved a great deal over that time, and costs have been radically reduced. "When I started designing reed relays in the late 1970s some were saying that these electromechanical devices would have a limited lifetime. Instead, the market for high-quality reed relays has increased into areas that were inconceivable in those days."

In 1983 Pickering Electronics established SoftCenter technology and former-less coil construction, setting it apart from other reed relay manufacturers. SoftCenter protects the sensitive glass/metal seal of the reed switch capsule, thereby increasing contact resistance stability and improving the life expectation of the relay. Former-less coil construction maximises magnetic drive and increases packing density. Pickering has now become renowned for designing reed relays for high-density applications. Just last year the company released what is claimed to be the world's smallest footprint reed relay — the Series 120 4 mm² — switching up to 1 A while stacking on a 4 x 4 mm pitch.

The Pickering Group now comprises two privately owned companies: Pickering Electronics, a specialist in reed relay design and manufacture, and Pickering Interfaces, which since 1988 has been designing and manufacturing modular signal switching and simulation for switching systems. The group employs over 380 people worldwide, with manufacturing facilities in the Czech Republic along with additional representation in countries throughout the Americas, Europe, Asia and Australasia.

To celebrate 50 years in business, Pickering Electronics has various celebrations planned, including a book about the company's first 50 years. The book features various milestones in Pickering Electronics' history, along with stories, quotes and personal photographs from its founder, directors and employees. The book is available to download from the company website.
The color of clouds depends entirely on the color of the light being transmitted to them from the sun, which is the earth's natural source of light. The sun provides white light, which is a combination of all the colors in the visible spectrum: electromagnetic waves of differing wavelengths. When most people think of or draw clouds, it is usually the white puffy clouds that come to mind. However, clouds come in a variety of colors. A cloud is composed of tiny water and/or ice droplets, and those droplets are themselves colorless. Yet we see clouds in different shades and colors, and the color of a cloud varies depending on its thickness, which can be explained through a phenomenon called scattering. When light passes through a cloud, the water or ice droplets begin to scatter it, and if the cloud is thin, the light passes through quickly, allowing us to see a white color. As the cloud's thickness increases, the number of scattering events also increases, which changes the cloud's apparent color. Increasing thickness turns the color to grey, and if the cloud is very thick, we see a nearly black shade. The color of clouds depends on their density and their makeup of ice crystals and water vapor. In high-level clouds with frigid temperatures, the ice crystals often reflect bright, white light. This is experienced even at the tops of dark cumulonimbus storm cells due to the thermal updraft energy possessed by these cells.

What Are the Different Cloud Colorations?

Clouds don't have a specific color of their own, as they are made of water vapor or ice droplets; when illuminated by colored light, however, such as at sunrise or sunset, they take on that light's color. Clouds appear darker as they get thicker because less light is transmitted through them. As light travels through the atmosphere, its colors are scattered to different degrees, and this affects the color of the clouds. Within the visible spectrum, blue scatters more than the other colors because it has the shortest wavelength, which is why the sky is mostly blue. When light rays hit droplets of water in the clouds, all colors of light are scattered equally, and the combined colors make the clouds appear white. When there is a significant presence of water molecules within the clouds, the clouds appear grey. At times during sunrises and sunsets, the sun is located at a lower angle, forcing the light to travel through more atmosphere. When this happens, more colors are scattered out, revealing the oranges and reds. These colors are then reflected onto the clouds to form pinkish clouds.

Why Are There Pink Clouds?

Some people argue that the color of clouds depends primarily on the cloud's thickness. The explanation is that when sunlight is transmitted through a cloud, tiny droplets of water cause the cloud to scatter all colors of light in the same way. This produces a white color, and as the cloud thickens, less light is allowed through the cloud's base, so it appears darker. Clouds themselves are colorless, as stated, mostly because they are composed of ice and water droplets, but there is a catch to it. These water and ice droplets can reflect and absorb light, which is why they appear white when light is reflected from them. Clouds tend to take on a darker color when more light is absorbed or blocked from reaching the eye. When you are looking at a cloud, your location can also affect the cloud's color in your eyes.
For example, if you are standing under the base of a very tall cloud, the cloud appears grey because little light is transmitted through it, but if you stand further away and view the cloud from the side, it appears white because the light doesn't have to pass through the cloud before reaching your eye. Light from the sky is caused by the Rayleigh scattering of sunlight, which leads to the blue sky color perceived by most human eyes. When it is sunny outside, Rayleigh scattering produces the blue gradient of the sky. The blue color is seen toward the horizon because blue light emanating from a distance is preferentially scattered. Distant sources of light therefore appear slightly red-shifted, an effect that is compensated by the blue hue of the scattered light along the line of sight. At very large distances, the scattered light appears white, and distant clouds and snowy mountaintops can seem yellowish; the effect is only pronounced on cloudy days, when the blue hue from scattered sunlight is reduced. The sky can turn various colors, including orange, red, yellow, and pink; this is mostly experienced near sunrise and sunset, and the sky is black at night. This scattering of the sun's rays partially polarizes the light from the sky, and the effect is strongest at a 90-degree angle from the sun.

As observed from the earth, a cloud's color tells a lot about what is happening inside that particular cloud. Cloud droplets scatter light efficiently, which decreases the intensity of solar radiation with depth into the cloud. When this happens, the cloud base varies from a light to a very dark grey color depending on the cloud's thickness and the amount of light being reflected and transmitted back to you. Thin clouds appear to take on the color of their background or environment. Non-tropospheric and high tropospheric clouds appear mostly white, and as a tropospheric cloud matures, dense water droplets can combine to form larger droplets. This accumulation process allows more light to penetrate the cloud, which is what causes the range of cloud colors. Red, pink, and orange clouds are usually seen during sunrises and sunsets, and they are an effect of the atmosphere's scattering of sunlight. When the angle between the horizon and the sun is small, less than about 10 degrees, especially just after or before sunset, the sunlight appears red because the colors other than red are scattered away. When scattering causes red and blue wavelengths to predominantly combine before meeting the eye, the result is a perception of pink in the clouds. There is speculation that clouds appear pink when they are sitting between our eyes and the horizon, so that they predominantly reflect blue wavelengths to our eyes. When this blue light combines with the predominantly red light reaching your eye from the setting sun, the clouds appear pink. The reduced contribution of the green wavelengths means that, instead of white clouds, we see pink clouds.

Why Do Clouds Appear Pink At Sunsets?

When the sun sets, its light has farther to travel through the atmosphere, and the sun sits lower relative to the clouds. Light is composed of different colors, hence rainbows, and of all those colors, blue is scattered out of the beam before reaching our eyes. Red light, on the other hand, can reach our eyes, which is why the sky appears pinker and redder than usual. Some people argue that these clouds aren't really pink and only appear that way at certain times of the day.
This is a result of the amount of atmosphere the sunlight passes through. Blue and violet light scatter easily where people can see them, and if the sun is at a high angle, only these colors are scattered, which is why the sky is usually blue. This angle is altered during sunsets and sunrises, and because the sun's angle is low, light must pass through a lot more of the atmosphere. The violet, yellow, and blue colors are completely scattered out of your sightline, which leaves the red and orange colors for you to see. Hence the clouds appear pink around sunrise and sunset. The time of day also impacts the color of clouds and causes clouds to appear pink. When the sky appears pink, the clouds reflect the color pink, which is influenced by the time of day and the sun's angle. The light from the sun contains all the colors of the rainbow; even though pure sunlight looks white to our naked eyes, it is filled with color. Light traveling through the sky passes through gases, water vapor, clouds, and other atmospheric particles. These particles reflect and refract the light, which scatters some of the colors of the sunlight. The longer the distance sunlight covers through the sky, the more colors are lost along the way. Some colors make it through, and during sunrise the sunlight has longer distances to cover across the sky before it reaches your eyes. The colors that reach your eyes are pink, orange, and red, which are less likely to be scattered by the atmosphere. This is what causes the morning sun to fill the sky with a blaze of reds and pinks.

Why Does the Sky Have a Pink Hue When It's Snowing?

The color of the sky and the clouds is greatly affected by the reflection of sunlight from the horizon to our eyes. Red light has a longer wavelength, which keeps it from scattering as easily as the greens and blues. This causes sunsets to typically appear orange or red, giving the sky a pink hue. When snow is moving in, or when it's already snowing, the light that bounces off atmospheric particles and the clouds is scattered, which leaves us seeing the longer wavelengths. When it begins to snow, the same light reflects off all the various snowflakes, which gives the sky a pink hue, hence pink clouds. There may also be different hues and tints in the sky and the clouds as a result of artificial lighting in cities. The color of the clouds varies depending on the color of that lighting, which is what causes you to see yellows, pinks, and whites. Low clouds drop snowflakes, and when you have a reasonable rate of falling snow, light is reflected off the snowflakes.

Pink Clouds and Sailing

The saying "red sky at night, sailor's delight" is derived from the same idea of light reflection. It simply means that if you see a red glow in the night sky, there is a high chance that a storm is either passing through or has already passed. The setting sun's light in the west reflects off the backs of the clouds, which results in drier and better weather the following day. When the clouds are in the right position and expansive enough, you can often notice pink clouds during sunrises and sunsets. The clouds appear as a wall of glowing magenta-pink light. The sky usually looks blue due to the small oxygen molecules that interrupt and scatter the blue wavelengths. Some people believe that high-level clouds, effectively invisible to the eye, reflect the blue wavelengths to our eyes.
This combines with the sun’s incoming red wavelengths, which transit through the lower thick atmospheric levels, resulting in a pink perception in the clouds.
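As a rough quantitative footnote to the scattering explanation above (a standard Rayleigh-scattering estimate, not a figure given in the article itself): the intensity of Rayleigh-scattered light grows as the inverse fourth power of wavelength, so blue light at roughly 450 nm is scattered about four times more strongly than red light at roughly 650 nm.

$$ I(\lambda) \propto \frac{1}{\lambda^{4}}, \qquad \frac{I(450\,\mathrm{nm})}{I(650\,\mathrm{nm})} \approx \left(\frac{650}{450}\right)^{4} \approx 4.4 $$

That factor is why the short blue wavelengths are removed from a long, low sunset path through the atmosphere, leaving the reds and pinks that the clouds reflect.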
THIS ARTICLE INCLUDES A FREE PRINTABLE

"If we succeed in giving the love of learning, the learning itself is sure to follow." As babies, children have an innate curiosity. They're eager to explore the world around them, soaking up new information and skills like sponges. But somewhere along the way, this natural love of learning is often LOST. Many children grow to dislike and even dread school and learning new things. Fortunately, the love of learning can be developed and cultivated using a few simple strategies.

1. Help Children Discover Interests and Passions

Naturally, one way to spark a love of learning is to help children discover and explore topics that interest them. Studies show that learning is enhanced when children are allowed to select topics of interest to pursue. This is one reason it's so effective for teachers to build choice into the classroom. Sally Reis, Ph.D., Associate Professor of Educational Psychology at the University of Connecticut, explains that the key to unlocking a child's potential is finding that child's interests and helping the child develop them. Talk to your child about what he is doing, reading, watching, and learning. Expose him to different experiences like museums, theatrical performances, zoos, etc. Help him check out books on a variety of topics from the local library. All of these activities can help you find and spark your child's interests. There are various questionnaires designed to help you identify a child's passions. Once you've identified what your child enjoys, provide resources to help him further explore these interests. This can be done in a classroom as well: if you know that one of your students loves monster trucks, get him interested in reading by finding books on this topic. This will naturally make learning more exciting.

2. Provide Hands-On Experiences

Again and again, research has shown that hands-on learning is the most effective for kids. When students move, touch, and experience, they learn better. For instance, studies show that students who act out a mathematical word problem are more likely to answer correctly than students who don't. "A very strong predictor of academic achievement was how early kids were moving, exploring their world. When kids can explore their surroundings, all of a sudden, things change." - Sian Beilock, professor of psychology at the University of Chicago. Not only does hands-on learning help children process information, but it's also a more enjoyable way to learn. Most children simply don't enjoy reading from a textbook, copying notes, or "learning" through rote memorization. Experiences and hands-on activities, however, will spark a child's interest and imagination. Teachers should incorporate movement, interaction, and tactile experiences in the classroom as much as possible. One simple and effective way to do this is through the use of manipulatives. If you're teaching basic addition, for example, you can have students count using any object, like crayons or marbles. When teaching classification, have students sort blocks of different shapes and colors. Parents can provide additional enrichment from home. If your child is learning about aquatic animals in school, take him to visit an aquarium.
If he’s studying a certain artist, take him to a museum to look at that artist’s work. Try to find hands-on, engaging experiences for your child. Make learning an adventure. Check out the Growth Mindset Activity Kit for lots of fun growth mindset activities. Kids will practice creativity, problem solving, and learning from mistakes. These experiences will help your child learn effectively, and they’ll also give him positive and enjoyable experiences with learning. 3. Make Learning Fun Even seemingly dry subjects can become more fun through songs, academic games, scavenger hunts, or creative activities. For instance, if kids are learning about the thirteen colonies (in the classroom or at home), you can provide clues and ask children to guess the correct colony. You can easily create academic BINGO, crossword puzzles, or word searches. Websites like Kahoot make it easy to gamify learning digitally as well. You can also incorporate art projects, music, or creative writing into just about any academic subject. Create a song about the water cycle, or write a story from the perspective of a tadpole as he transforms into a frog. Build a model of the solar system using materials you find around the house or classroom. Sometimes simply using humor or telling an interesting story related to the material being taught is enough to make the experience more fun. Another way to make learning more fun is to use “brain breaks.” Brain breaks are short, typically silly activities. They disrupt the monotony or difficulty of a lesson or assignment so children can return to the task feeling re-energized and focused. Looking for more "brain break" ideas? Check out our Mindful Brain Breaks found in our Positivity & Connection Kit. As children begin to see learning as more fun and less stressful, their love of learning will grow. 4. Demonstrate Your Own Passion Be a great role model for your child by enthusiastically exploring your own interests and passions. Show that YOU are passionate about learning. If you have the time and resources, you can even take a course (online or in-person) in something you’re interested in: cooking, photography, literature, etc. Talk to your child about what you’re learning: the challenges, the excitement, how you’re applying what you’ve learned to your own life, and so on. Even if you can’t take a class, you can read books or watch videos to learn more about a topic that interests you. It sounds simple, but demonstrating your own enthusiasm for learning helps instill this same passion in your child. For teachers, it’s important to show passion and enthusiasm for the subject you teach. If you aren’t excited about it, your students won’t be either. A teacher who seems genuinely enthusiastic about the subject he or she teaches can engage students and spark their interest. 5. Find Your Child’s Learning Style Children have their own unique learning style, or a type of learning that is most effective for them. Educators and psychologists have identified three main learning styles: visual, auditory, and kinesthetic. There are many quizzes available online to help you determine a child’s learning style, but you can also make a solid guess based on the child’s interests and the type of activities he seems to enjoy. - Visual learners process information most effectively when it’s presented in writing or in images. They’re very observant, have excellent memories, and often enjoy art. - Auditory learners like to hear information.
They’re good listeners, follow directions well, and often have verbal strengths and/or musical aptitude. - Kinesthetic learners are physical, often excelling at sports or dance. They learn best through movement and touch. They may count on their fingers or use frequent hand gestures. Many children show ability in all three of these areas, but one is likely stronger than the others. If you can find a child’s strength, you can help him learn in the way that he finds most comfortable and enjoyable. 6. Have Discussions, Not Lectures Make learning a conversation that your children or students can actively participate in, not just a lecture that they must passively receive. When your child demonstrates curiosity by asking a question, do your best to answer it. This is true in the classroom as well. Even when a question is slightly off-topic, it shows interest and creates a learning opportunity for your students. If you don’t know the answer to a question, discovering the answer together can be a fun and memorable experience. You can also expand the conversation by asking open-ended questions yourself. Begin your questions with, “Why,” “How,” or, “What would happen if….?” These questions can move children to higher levels of critical thinking and problem-solving. Paying attention to the questions your child asks will also help you discover your child’s interests, which you can then incorporate into future conversations or lessons. 7. Be Supportive and Encouraging One reason many children lose their love of learning is that they begin to associate learning with anxiety and pressure. They’re worried about getting a bad grade, answering a question wrong, or failing the test. When learning is only about outcomes, it’s no longer fun. Make learning more about the process and the effort that your child puts into his work. Stanford University researcher Carol Dweck found that when students are praised for their effort instead of their ability, they actually score higher on intelligence tests. This is because children who associate struggle or failure with a lack of intelligence are likely to avoid difficult tasks or give up when they encounter them. On the other hand, children who view challenges as learning opportunities are more likely to persist, strategize, and keep working until they find a solution. Have reasonable expectations for your child, and be supportive and encouraging when your child struggles or fails. Help him learn from these experiences, and don’t put excessive pressure on him to make straight A’s or be an exceptional student. When your child understands that learning is about just that—learning—and not all about achievement or perfection, he’ll be able to relax and enjoy the learning process much more. If your child’s love of learning has faded, it doesn’t have to be gone for good. Parents and teachers can cultivate a love of learning by: - Providing hands-on experiences - Making learning fun - Helping children discover their interests and passions - Demonstrating their own passions - Finding and appealing to the child’s learning style - Asking and answering questions - Being supportive of the effort and the process, not just successful outcomes Give your child room for error and experimentation, and make learning an interactive conversation between the two of you. Provide opportunities for hands-on, personalized, and creative education, and you’ll be surprised how much his love of learning grows.
Many Republicans were dissatisfied with what they perceived as the excessive leniency of President Lincoln’s terms for re-inaugurating federal authority in rebel states, as contained in Lincoln’s Proclamation of Amnesty and Reconstruction. The quickness and ease of reconstruction that Lincoln’s plan allowed for, critics worried, made it likely that little would change in the South’s approach to governing or for the freed slaves. Lincoln recommended much in his policy but required only emancipation as a condition for re-admission. Louisiana had adopted emancipation in its constitutional convention, but other provisions left freedmen bereft of rights. Arkansas and Florida too had failed to do much to protect or educate freedmen. Congress also felt that it needed to assert its power to direct future postwar policy. In June, Congress had tried to create a federal bureau to protect freedmen (later the Freedmen’s Bureau), but it lacked the votes. A constitutional amendment to abolish slavery had also failed in June. On the last day of the Congressional session in July 1864, Congress passed the Wade-Davis Bill, named for Ohio Senator Ben Wade (1800-1878) and Maryland Representative Henry Winter Davis (1817-1865), both well-known radical Republicans. Lincoln declined to sign the measure before Congress adjourned (a so-called pocket veto; see the U.S. Constitution, Article I, Section 7). Lincoln issued a Proclamation explaining why he vetoed the Wade-Davis bill on July 8, 1864. Source: Abraham Lincoln, “Proclamation 115 – Concerning a Bill To Guarantee to Certain States, Whose Governments Have Been Usurped or Overthrown, a Republican Form of Government.” Online by Gerhard Peters and John T. Woolley, The American Presidency Project, https://goo.gl/aD9LKG. This site contains the text of both the Wade-Davis Bill and Lincoln’s veto proclamation. Be it enacted . . . That in the States declared in rebellion against the United States, the President shall, by and with the advice and consent of the Senate, appoint for each a provisional governor . . . who shall be charged with the civil administration of such State until a State government therein shall be recognized as hereinafter provided. SEC. 2. . . . That so soon as the military resistance to the United States shall have been suppressed in any such state . . . the provisional governor shall direct the marshal of the United States . . . to name a sufficient number of deputies, and to enroll all white male citizens of the United States resident in the State in their respective counties, and to request each one to take the oath to support the Constitution of the United States, and in his enrollment to designate those who take and those who refuse to take that oath, which rolls shall be forthwith returned to the provisional governor; and if the persons taking that oath shall amount to a majority of the persons enrolled in the State, he shall, by proclamation, invite the loyal people of the State to elect delegates to a convention charged to declare the will of the people of the State relative to the reestablishment of a State government subject to, and in conformity with, the Constitution of the United States. SEC. 3. . . .
That the convention shall consist of as many members as both houses of the last constitutional State legislature, apportioned by the provisional governor among the counties, parishes, or districts of the State, in proportion to the white population, returned as electors, by the marshal, in compliance with the provisions of this act. The provisional governor shall, by proclamation, declare the number of delegates to be elected by each county, parish, or election district; name a day of election not less than thirty days thereafter; designate the places of voting in each county, parish, or district, conforming as nearly as may be convenient to the places used in the State elections next preceding the rebellion; appoint one or more commissioners to hold the election at each place of voting, and provide an adequate force to keep the peace during the election. SEC. 4. . . . That the delegates shall be elected by the loyal white male citizens of the United States of the age of twenty-one years, and resident at the time in the county, parish, or district in which they shall offer to vote, and enrolled as aforesaid, or absent in the military service of the United States, and who shall take and subscribe the oath of allegiance to the United States . . . ; but no person who has held or exercised any office, civil or military, State or Confederate, under the rebel usurpation, or who has voluntarily borne arms against the United States, shall vote, or be eligible to be elected as delegate, at such election. SEC. 5. . . . That . . . commissioners . . . shall hold the election in conformity with this act, and, so far as may be consistent therewith, shall proceed in the manner used in the state prior to the rebellion. The oath of allegiance shall be taken and subscribed on the poll-book by every voter . . . but every person known by or proved to the commissioners to have held or exercised any office, civil or military, state or confederate, under the rebel usurpation, or to have voluntarily borne arms against the United States, shall be excluded, though he offer to take the oath . . . . SEC. 6. . . . That the provisional governor shall, by proclamation, convene the delegates elected as aforesaid, at the capital of the state, on a day not more than three months after the election, giving at least thirty days’ notice of such day. . . . He shall preside over the deliberations of the convention, and administer to each delegate . . . the oath of allegiance to the United States in the form above prescribed. SEC. 7. . . . That the convention shall declare, on behalf of the people of the State their submission to the Constitution and laws of the United States, and shall adopt the following provisions, hereby prescribed by the United States in the execution of the constitutional duty to guarantee a republican form of government to every State, and incorporate them in the constitution of the State, that is to say: First. No person who has held or exercised any office, civil or military, except offices merely ministerial, and military offices below the grade of colonel, state or confederate, under the usurping power, shall vote for or be a member of the legislature, or governor. Second. Involuntary servitude is forever prohibited, and the freedom of all persons is guaranteed in said State. . . . SEC. 8. . . . 
That when the convention shall have adopted those provisions it shall proceed to re-establish a republican form of government and ordain a constitution containing those provisions, which, when adopted, the convention shall by ordinance provide for submitting to the people of the State, entitled to vote under this law, at an election to be held in the manner prescribed by the act for the election of delegates . . . at which election the said electors . . . shall vote directly for or against such constitution and form of State government. And the returns of said election shall be made to the provisional governor . . . and if a majority of the votes cast shall be for the constitution and form of government, he shall certify the same . . . to the President of the United States, who, after obtaining the assent of Congress, shall . . . recognize the government so established . . . as the constitutional government of the State, and from the date of such recognition, and not before, Senators and Representatives, and electors for President and Vice President may be elected in such State, according to the laws of the State and of the United States. SEC. 9. . . . That if the convention shall refuse to reestablish the State government on the conditions aforesaid, the provisional governor shall declare it dissolved. . . . SEC. 10. . . . That, until the United States shall have recognized a republican form of State government the provisional governor in each of said States shall see that this act, and the laws of the United States, and the laws of the State in force when the State government was overthrown by the rebellion, are faithfully executed within the State; but no law or usage whereby any person was heretofore held in involuntary servitude shall be recognized or enforced by any court or officer in such state, and the laws for the trial and punishment of white persons shall extend to all persons, and jurors shall have the qualifications of voters under this law for delegates to the convention. . . . SEC. 11. . . . That until the recognition of a state government . . . the provisional governor shall . . . cause to be assessed, levied, and collected, for the year 1864 and every year thereafter, the taxes provided by the laws of such State to be levied during the fiscal year preceding the overthrow of the State government thereof, in the manner prescribed by the laws of the State, as nearly as may be; and the officers appointed as aforesaid are vested with all powers of levying and collecting such taxes, by distress or sale, as were vested in any officers or tribunal of the state government aforesaid for those purposes. . . . SEC. 12. . . . That all persons held to involuntary servitude or labor in the states aforesaid are hereby emancipated and discharged therefrom, and they and their posterity shall be forever free. And if any such persons or their posterity shall be restrained of liberty . . . the courts of the United States shall, on habeas corpus, discharge them. SEC. 13. . . . That if any person declared free by this act, or any law of the United States or any proclamation of the President, be restrained of liberty, with intent to be held in or reduced to involuntary servitude or labor, the person convicted before a court of competent jurisdiction of such act shall be punished by fine . . . and be imprisoned not less than five nor more than twenty years. SEC. 14. . . . 
That every person who shall hereafter hold or exercise any office, civil or military (except offices merely ministerial, and military offices below the grade of colonel) in the rebel service, state or confederate, is hereby declared not to be a citizen of the United States. President Lincoln’s Pocket Veto Proclamation WHEREAS at the late session Congress passed a bill to “guarantee to certain states, whose governments have been usurped or overthrown, a republican form of government,” a copy of which is hereunto annexed; And whereas the said bill was presented to the President of the United States for his approval less than one hour before the sine die adjournment of said session, and was not signed by him; and Whereas the said bill contains, among other things, a plan for restoring the States in rebellion to their proper practical relation in the Union, which plan expresses the sense of Congress upon that subject, and which plan it is now thought fit to lay before the people for their consideration: Now, therefore, I, ABRAHAM LINCOLN . . . do proclaim . . . that, while I am (as I was in December last, when by proclamation I propounded a plan for restoration) unprepared by a formal approval of this bill, to be inflexibly committed to any single plan of restoration; and, while I am also unprepared to declare that the free state constitutions and governments already adopted and installed in Arkansas and Louisiana shall be set aside and held for naught, thereby repelling and discouraging the loyal citizens who have set up the same as to further effort, or to declare a constitutional competency in Congress to abolish slavery in states, but am at the same time sincerely hoping and expecting that a constitutional amendment abolishing slavery throughout the nation may be adopted, nevertheless I am truly satisfied with the system for restoration contained in the bill as one very proper plan for the loyal people of any State choosing to adopt it, and that I am, and at all times shall be, prepared to give Executive aid and assistance to any such people, so soon as the military resistance to the United States shall have been suppressed in any such State, and the people thereof shall have sufficiently returned to their obedience to the Constitution and the laws of the United States, in which cases military governors will be appointed, with directions to proceed according to the bill. . . .
The Best 10 National Parks sunrise and Sunset Spots Although it happens twice a day, more often than not the natural phenomenon of the rising and setting sun is overlooked. These events command a lot of attention at national parks, where perhaps an extraordinary landscape or a prominent feature accentuates this important and extraordinarily beautiful event. [See “sunset wars” pictures of national parks’ amazing skies.] Acadia National Park Sunrise from Cadillac Mountain Early risers are up at Acadia National Park in time to catch America’s first sunrise. Between October and March, the first light of day to fall upon the United States shines upon 1,528-foot Cadillac Mountain in the heart of Acadia, on Maine’s coast. It’s a wonderful sensation to feel the warmth of the sun, and that experience is heightened by spectacular panoramic views from an overlook at the peak. The sun rises above the Atlantic Ocean’s horizon and casts streaks of color—oranges, reds, pinks, depending on atmospheric conditions—upon the water, while in the foreground the exposed rock of the mountains glows warmly in the sun’s glow. From the parking lot walk to the Summit Trail and find a spot (out of the wind) facing east. Plan this ascent. Gates are open 24 hours at the park, but due to weather conditions, the road to the top of Cadillac Mountain is closed between December 1 and April 14. Sunrise occurs around 6:30 a.m. in October, about a half hour later in November. Dress for the weather, bring along a camera, some snacks, and something warm to drink while waiting for the start of the daily show. Yosemite National Park Sunset views of Half Dome Nature photographers will never lack for an amazing image as long as there are sunsets and Half Dome. Near the end of each clear day, the usually harsh sun softens to cast an even and gentle glow throughout the eastern end of Yosemite Valley, deep within this California national park. Within minutes, the exposed northwest face of Half Dome begins to change hues with the setting sun and, depending on the season, the scene may vary between a brilliant reddish orange and a soft wintery gray. Other mountains are being illuminated across America, certainly, but Half Dome’s broad wall of granite seems to scoop up every ray of the setting sun, creating an inspiring glow across what could be the world’s largest sundial. Canaveral National Seashore Sunrise over Klondike Beach In the 1950s, the government determined that the scientists, engineers, and astronauts working on America’s space program on the southern end of Cape Canaveral needed some privacy. To protect the cape from further development while ensuring privacy, in 1975 Congress preserved 58,000 acres of seashore, land, and lagoons along with 24 miles of protected coastline to create the longest undeveloped beach on Florida’s Atlantic coast. Arrive here for sunrise and infinity lies to the east. The barrier island beaches—Apollo, Klondike, and Playalinda—are largely absent of people so this will reveal a Florida sunrise in its natural state, an experience that is a pure pleasure and one that can be savored minus the din of highway traffic and far from the sight of 20-story condos. This is especially true of Klondike beach, sandwiched between the two others and accessible only by foot; Klondike has been designated a backcountry beach, and the park restricts the number of visitors to its wave-lapped sands. Bring a beach chair, set up on the sands, and at daybreak the music of nature begins. Here comes the sun. 
Sunrise and sunset beyond the Capitol Building and the Lincoln Memorial With national icons framing each end of the east-west National Mall in Washington, D.C., dawn and dusk softly and gradually illuminate silent sentinels of American history. When the crisp blue and orange sky breaks in the east, the gleaming white Capitol dome topped by the Statue of Freedom is backlit by the refreshing rays of daybreak, and the effect elicits a natural sense of optimism. At dusk, try to find a spot on or near the steps on the west front of the Capitol building. The Mall is adorned in the comforting rays of sunset that first descend behind the Washington Monument before backlighting the Lincoln Memorial. Once again, that sense of hope and optimism returns, knowing that now and for the next several hours it will shine from here to the Pacific as it falls across 3,000 miles of America. Joshua Tree National Park Sunset of Joshua trees along park roads As the afternoon fades into evening over Joshua Tree’s cactus and pinyon, cool clouds fingerpaint the sky above this southern California desert park. Adding an aural layer to the vivid spectacle of sunset is the distinct bay of howling coyotes. Roads and ridges that run north and south persuade travelers to reach peaks that provide an ever-changing vista as the world turns. Should a vehicle be able to negotiate off-road trails, views can improve through access to little-visited areas populated by cacti and junipers and yuccas, and silhouetted against the horizon, the most special part of sunset, the park’s eponymous Joshua trees. Framed by bands of color, darker above and brighter below, these otherworldly plants appear as inky black splotches against the sky. Badlands National Park Sunrise and sunset from eastern overlooks and north-south ridges The cliché image of a cowboy riding into a beautiful sunset magnifies the cowboy’s independence as well as nature’s power. That feeling still exists in South Dakota’s Badlands National Park, where a lack of development leads to a refreshing sense of solitude. In the eastern reaches of the park, a series of overlooks are carved out from the colorful buttes for a perfect vantage point and sanctuary, where the lonely wide-open prairie, protected in the adjoining Buffalo Gap National Grassland, leads to a feeling of oneness with nature. From atop a north-south ridge are commanding views at dawn and dusk, and after the sun disappears in a swirl of pink and orange clouds the night sky is soon aglow with a shimmering sheath of stars. Death Valley National Park Sunrise and sunset at Zabriskie Point and the Sand Dunes Two areas in this western California national park offer near-ideal settings for watching the sun rise or set. Zabriskie Point is encircled by a colorful montage of mountains and valleys, and here the line of sight will sweep up to the summit of Telescope Peak and, in the distance, down again into the depths of a smidgen of 156-mile-long Death Valley. Of the 2,600 square miles contained within the park, this vantage point near Furnace Creek is considered the premium overlook for both sunrise and sunset. Just off California highway 190, this viewpoint is easily accessible by vehicle; a paved trail leads to a popular observation deck while a little-noticed path leads a short distance north to present the landscape from a slightly higher elevation. With this advantage, the soft red-violet glow of sunrise adds shadows and depth to the surreal landscape of the peaks and ridges once hidden by a prehistoric sea. 
The Eureka and Mesquite Flat Sand Dunes offer a no less spectacular sunrise or sunset, just a different one. Here at dawn or dusk the low-angled rays of the sun rake across the dunes, burnishing the sand to a high glow and highlighting ripples and ridges and animal tracks. Arches National Park Dusk at Delicate Arch At sunset in Utah’s Arches National Park, Delicate Arch seems to ignite with the flare and fire of the desert sun, its iconic image symbolizing the American Southwest. It’s roughly 1.5 miles from Wolfe Ranch to the arch via the Delicate Arch Trail, so time your hike to arrive at least 30 minutes before sunset and simply follow the cairns that mark the route. The trail pitches up and around the final corner where, pierced by wind and sand, the center of the “sandstone fin” has created a 46-foot arch that, at sunset, changes like a desert chameleon, filtering sunset through a color wheel of red and orange and crimson and gold. Petrified Forest National Park Summer solstice sunrise With Arizona’s Petrified Forest National Park an already magical setting for a great American sunrise, one day in particular may influence anyone’s travel schedule. With the sun’s rays tracking a slightly different path throughout the year, for ten days before and after June 21 (with the highlight being on the summer solstice), the Earth’s alignment with the sun impacts more than a dozen “solar calendars” left throughout the park by prehistoric peoples, with the spiral and circular petroglyphs being intersected by or interacting with the sun’s rising rays. Ancient tribes took time to place them here. Take time to marvel at their confluence of ancient science and nature. Saguaro National Park Sunset silhouettes of saguaro along Cactus Forest Drive Any diorama of the Old West includes the striking silhouettes of Arizona saguaro cactus, the towering icon recognized by its barrel trunk and upraised arms. These alone are worth the visit to the two areas of this national park that bookend the city of Tucson. The eastern Rincon Mountain District is the larger of the two, with ancient saguaros sharing the land with other varieties of cactus including prickly pear and ocotillo. There’s a greater density of saguaro in the western region, but the eastern side features Cactus Forest Drive, a popular loop road across the flatland that provides easy access to saguaro views. These sunsets may not be the most spectacular in America, but seeing a Southwest icon framed in silhouette is a vision not to be missed, especially as twilight falls and bands of color are squeezed on the horizon beneath a velvety dark blue night sky. Original Source: US National Parks Sunrise and Sunset
Maria Letizia Bonaparte, Duchess of Aosta
- Full name (French): Marie Laetitia Eugénie Catherine Adélaïde
- Spouse: Amadeo of Spain
- Issue: Prince Umberto, Count of Salemi
- House: House of Bonaparte (by birth); House of Savoy (by marriage)
- Father: Napoléon Joseph Charles Paul Bonaparte
- Mother: Princess Maria Clotilde of Savoy
- Born: 20 November 1866, Palais Royal, Paris, France
- Died: 25 October 1926 (aged 59)
Princess Maria Letizia Bonaparte, Duchess of Aosta (20 November 1866 – 25 October 1926) was one of three children born to Napoléon Joseph Charles Paul Bonaparte and his wife Princess Maria Clotilde of Savoy. By her 1888 marriage to Amadeus, Duke of Aosta, she became Duchess of Aosta. Maria Letizia's father Napoléon Joseph was a nephew of Emperor Napoleon Bonaparte through Napoleon's brother Jerome Bonaparte, King of Westphalia. This made Maria Letizia a great-niece of Emperor Napoleon. Her mother Maria Clotilde was a daughter of Victor Emmanuel II of Italy. Through this connection, Maria Letizia was a cousin of Umberto I of Italy and Maria Pia, Queen of Portugal. Maria Letizia was born in the Palais Royal in Paris on 20 November 1866, during the last few years of the Second French Empire. She grew up living between Paris, Rome, and Italy with her two brothers Napoléon Victor and Louis. After the fall of the French Empire, the family had a beautiful estate near Lake Geneva in which they resided. Their parents' marriage was unhappy, however, particularly as Maria Clotilde preferred the quieter, more duty-filled life that she felt they should maintain, while Napoléon Joseph preferred the faster, more entertainment-filled lifestyle of the French court. Another factor in their unhappy marriage was the circumstances leading up to their espousal. Maria Clotilde had been only 15 when they were married, while he had been over 37 years old. The marriage had also been negotiated for political reasons during the conference of Plombières (July 1858). As Maria Clotilde was too young at the time for marriage, Napoléon Joseph had had to wait until the following year; many disapproved of the haste with which he collected his young bride in Turin. Their marriage was often compared to that of an elephant and a gazelle; the bridegroom had strong Napoleonic features (broad, bulky, and ponderous) while the bride appeared frail, short, fair-haired, and with the characteristic nose of the House of Savoy. The marriage was also unpopular with both the French and the Italians; the latter in particular felt that the daughter of their king had been sacrificed to an unpopular member of the House of Bonaparte and consequently regarded it as a mésalliance. For France's part, Napoléon Joseph was ill regarded, and had been known to carry on a number of affairs both before and during his marriage. Their official reception into Paris on 4 February was greeted very coldly by Parisians, not out of disrespect for a daughter of the king of Savoy, but instead out of dislike for her new husband. Indeed, all her life public sympathy tended to lean in her favor; she was fondly regarded as retiring, charitable, pious, and trapped in an unhappy marriage. After Maria Clotilde's father Victor Emmanuel died in 1878, she returned to Turin, Italy without her husband. During this period, Maria Letizia mostly resided with her mother in the Castle of Moncalieri, but her two brothers stayed mainly with their father.
It was in Italy that their mother withdrew from society to dedicate herself to religion and various charities. In Florence, Maria Letizia met and almost married her cousin Prince Emanuele Filiberto of Italy. A change of plans occurred, however, and the marriage never took place. Emanuele later married Hélène of Orléans instead. It was in Moncalieri that she met Emanuele's father Amadeus, Duke of Aosta (sometimes referred to as Amadeo). He was her maternal uncle and had formerly been the elected king of Spain for a brief period of three years (1870-1873). Maria Letizia was considered very charming, and Amadeus was very dependent on her society when he visited Italy. In 1888, she agreed to marry him. The announcement of their marriage caused a great scandal in the Italian court, as he was not only her mother's brother, but was also 22 years older. Nevertheless, later that year the necessary Papal dispensation was obtained, giving them permission to marry. They wedded that same year, on 11 September in Turin, Italy. Their wedding was attended by many members of the houses of Bonaparte and Savoy, including Amadeus' sister Queen Maria Pia of Portugal. She was his second wife, as his first spouse had died in 1876. Due to the large age difference, Maria Letizia was only three years older than Amadeus' eldest child. They had one child, Prince Umberto, Count of Salemi. Amadeus died less than two years after their marriage, in Turin, on 18 January 1890. Once widowed, Maria Letizia maintained an open and scandalous relationship with a military man twenty years her junior. Upon her death on 25 October 1926, he was revealed to be the sole heir in her will (her son having died in 1918).
Maria Letizia's ancestors in four generations:
1. Princess Maria Letizia Napoléon
2. Prince Napoleon
3. Marie Clothilde of Savoy
4. Jérôme Bonaparte, King of Westphalia
5. Catherine of Württemberg
6. Victor Emmanuel II of Italy
7. Maria Adelaide of Austria
8. Carlo Maria Buonaparte
9. Maria Letizia Ramolino, Madame Mère de l'Empereur
10. Frederick I, King of Württemberg
11. Augusta of Brunswick-Wolfenbüttel
12. Charles Albert, King of Sardinia
13. Maria Theresa of Tuscany
14. Rainer Joseph of Austria
15. Elisabeth of Savoy-Carignano
16. Giuseppe Maria Buonaparte
17. Maria Saveria Paravicini
18. Giovanni Geronimo Ramolino
19. Angela Maria Pietrasanta
20. Frederick II Eugene, Duke of Württemberg
21. Fredericka Dorothea of Brandenburg-Schwedt
22. Charles II William Ferdinand, Duke of Brunswick-Wolfenbüttel
23. Augusta of Great Britain
24. Charles Emmanuel of Savoy
25. Maria Christina of Saxony
26. Ferdinand III, Grand Duke of Tuscany
27. Luisa of Naples and Sicily
28. Leopold II, Holy Roman Emperor
29. Maria Luisa of Spain
30. Charles Emmanuel of Savoy (= 24)
31. Maria Christina of Saxony (= 25)
Maria Letizia Bonaparte (born 20 November 1866, died 25 October 1926) was preceded as Duchess of Aosta by Maria Vittoria del Pozzo della Cisterna and succeeded by Princess Hélène of Orléans.
If you want to save lots of time & ammo zeroing a rifle or hitting a distant target, invest a little time to understand the concepts of MOA and mils. USA – (Ammoland.com) – In part one of the Long Range Shooting Guide, we made the astounding observation that gravity happens. The very picosecond that a bullet leaves the muzzle, it begins its slow and inevitable downward death spiral, ultimately ending in a collision with the ground – unless it hits something else first. Because of gravity, shooters need to account for bullet drop by “aiming up.” How much “up” depends on many things, but mainly the distance to the target. The farther away the target is, the more time elapses while the bullet is in flight, and the more time gravity has to push it towards the dirt. Let’s consider a real example. I’ve been testing a Masterpiece Arms BA Lite 6.5mm Creedmoor rifle. When it’s zeroed at 100 yards shooting some nifty hand loads with Hornady’s 140-grain ELD Match bullets, I can calculate the exact amount of bullet drop (or how much I have to aim “up”) for any given distance. At 800 yards, that bullet will drop 163.53 inches. That’s no big deal, right? All I have to do to hit the target is adjust my scope to the “163.53 inches for 800 yards” setting. Obviously, there is no such mark on the scope dial, so that’s where the concepts of minutes of angle and milliradians come into play. Those are just standardized ways of accounting for bullet drop over any distance. Both minutes of angle (MOA) and milliradians (we’ll call them mils) are (more or less) angular measurements. They do the exact same thing but represent different measurements, sort of like yards and meters. Since they are angular measurements, they’re proportional. If an MOA or mil represents some amount of drop at 100 yards, it represents double that at 200 yards and triple that at 300 yards. We’re going to dive into the basic math for just a hot second. To understand the concepts of MOA and mils, it’s important to know the land from which they hail. A radian is a unit of angle based on distance around the perimeter of a circle. If you start nibbling your way around the very edge of a Reese’s Peanut Butter Cup, and you make it all the way around, you’ll have nibbled 6.28 radians of yummy goodness. If you take just one small bite from the edge, say about one-sixth, you’d have eaten about one radian of the edge. Now, imagine drawing a line from the center of your Reese’s to the start of the bite mark and another from the center to the end of the bite mark. Those two lines form an angle of about one radian. So if a radian represents an angle of about 1/6th of a circle, a milliradian represents about 1/6,000th of a circle, or, more precisely, about 1/6,283rd of a circle. That’s a really small angle. In fact, if you draw two lines extending out 1,000 yards at that angle, they would only be 36 inches apart at the end. Hold that thought for a second while we define minutes of angle. A minute of angle is an angular measurement. It just represents a different amount. A circle has 360 degrees, right? Well, a “minute” is 1/60th of a degree, so there are 21,600 (60 * 360) “minutes” in a full circle. A minute of angle (MOA) is also a really narrow angle, even smaller than the one represented by a milliradian. If you drew two 1,000 yard-long lines separated by a single minute of angle, they would diverge to just 10.4 inches apart at the very end.
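To make those subtension figures concrete, here is a short worked calculation (my own arithmetic, restating the geometry described above, not text from the original article):

\[
\begin{aligned}
\text{1 mil at 100 yd} &= 3600\ \text{in} \times \tan(0.001\ \text{rad}) \approx 3.60\ \text{in} \\
\text{1 MOA at 100 yd} &= 3600\ \text{in} \times \tan\!\left(\tfrac{1}{60}^{\circ}\right) \approx 1.047\ \text{in}
\end{aligned}
\]

Scaling linearly with distance gives 36 inches per mil at 1,000 yards and a bit under 10.5 inches per MOA, consistent with the 10.4-inch figure quoted above.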
Whether we’re talking about minutes or mils, both are proportional measurements, so the number they represent changes in a constant fashion as distance increases. Just like going 100 miles per hour in your Bugatti Veyron gets you to Dunkin Donuts twice as fast as traveling 50 miles per hour, the distance represented by a minute or mil is double at 200 yards from what it was at 100. Going back to the real numbers, a mil represents 3.6 inches at 100 yards, so that one mil translates to 7.2 inches (2 * 3.6 inches) at 200 yards, and 10.8 inches (3 * 3.6 inches) at 300 yards. The same thing applies to minutes of angle. One MOA at 100 yards is 1.04 inches while at 200 yards it translates to 2.08 inches and 3.12 inches at 300 yards. Now we have a way to standardize scope adjustments for distance. Since it’s impractical for scope makers to put marks like “163.53 inches for 800 yards” on the turrets, they instead put markings measured in either minutes of angle or milliradians. With some simple math, we can figure out exactly how many MOA or mils will translate to that 163.53 inches and adjust a scope accordingly. Sticking with our example of wanting to hit that 800-yard target and accounting for 163.53 inches of bullet drop, let’s do the long walk through the math to calculate how many MOA that is. To keep things simple, we’ll round a bit, and assume that one minute of angle is 10 inches at 1,000 yards instead of 10.4 inches. Since all of this is proportional, one MOA spans 8 inches at 800 yards because it grows by about one inch for every 100 yards of range. We need to adjust for 163.53 inches of drop, so that would be 20.44 sets of eight-inch increments (163.53 / 8) or 20.44 minutes of angle. Since most scopes have turrets with minute of angle marks, we should be able to dial right up to 20.5 and hit the target. For minutes of angle you can use this direct formula: Minutes of Angle = Correction in inches / Range to target in hundreds of yards In our example, the calculation would be this: Minutes of Angle = 163.53 / 8 = 20.44 If we want to be extra precise, we can skip the rounding, plug the exact minute of angle measurement into the math, and use this: Minutes of Angle = (Correction in inches * .96) / Range to target in hundreds of yards. The .96 factor accounts for the fact that a MOA is 1.04 inches instead of an even one inch. If we want to use milliradians instead of minutes of angle, the logic is exactly the same although the units are different. Mils = (Correction in yards * 1,000) / Yards to target Using the same example, our correction is 163.53 inches, or 4.54 yards (163.53 inches divided by 36 inches per yard), so the equation looks like this: Mils = (4.54 yards * 1,000) / 800 yards = 5.675 mils If our scope uses milliradian units on the dial, we’d spin to the closest setting to 5.675. Based on some completely unscientific research on the universe of scopes, it seems that the vast majority use turrets with 1/4 MOA markings. Simply put, that means that each click of a dial makes a 1/4 minute of angle adjustment in where the bullet hits. Even more simply put, since four clicks make one MOA, four clicks move the point of impact about one inch at 100 yards. Stated differently, every click moves the impact point 1/4 of an inch when shooting at 100 yards. If you are 3/4 of an inch off bullseye, then adjust three clicks. If you’re two inches away, adjust eight clicks (two inches / 1/4 inch per click). There are also scopes that use milliradian clicks, and most of those seem to use .1 mil click adjustments.
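Putting the rounded formula, the exact formula and the mil formula side by side for the running 800-yard example (my own restatement of the article's arithmetic, not additional data):

\[
\begin{aligned}
\text{MOA (rounded)} &= \frac{163.53}{8} \approx 20.4 \\
\text{MOA (with the 0.96 factor)} &= \frac{163.53 \times 0.96}{8} \approx 19.6 \\
\text{mils} &= \frac{(163.53 / 36) \times 1000}{800} \approx 5.68
\end{aligned}
\]

The rounded and corrected MOA figures differ by nearly a full minute here – on the order of seven inches at 800 yards – which is why the 0.96 correction factor starts to matter as the range stretches out.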
Each time you turn one click, you’re adjusting 1/10th of a mil, or .36 inches at 100 yards. That’s because a full mil represents 3.6 inches at 100 yards. We’ve been talking about spinning the turrets, but all of this works exactly the same if you choose to hold over using the markings in your scope reticle. While there are 22 billion scope reticle designs, the one thing they all have in common is that the manufacturer documents somewhere the distance between the various markings on the scope. If your scope has hash marks on the vertical reticle line that are one MOA apart, you can just hold over by the required number of MOA for your shot rather than going to the trouble of moving turrets. For this reason, it really pays to know the reticle marks in your scope. This ability to hold over with precision is why mil-dot reticles are so popular. When the marks in your view are one mil apart, you can very quickly adjust for a shot at any distance once you determine how many mils of adjustment you need to make. Here we’ve been focusing mainly on bullet drop to describe the whole concept of minutes and mils, but the exact same concepts apply to sideways movement too. Whether your target is moving or the wind is blowing you need to account for sideways movement at any given range. If you want to save lots of time and ammo zeroing a rifle or increasing your odds of hitting a distant target, invest a little time to understand the concepts of MOA and mils. Just knowing the 100-yard numbers of 1.04 inches per MOA and 3.6 inches per mil will take you a long way if you can do some quick math in the field. Better yet, memorize your reticle patterns and markings, so you know exactly what all the hash marks indicate. Next time, we’ll get into more discussion on reticles and tools you can use to estimate range and bullet drop adjustment. Tom McHale is the author of the Insanely Practical Guides book series that guides new and experienced shooters alike in a fun, approachable, and practical way. His books are available in print and eBook format on Amazon. You can also find him on Google+, Facebook, Twitter, and Pinterest.
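As a quick continuation of the running example (my own back-of-the-envelope numbers, not figures from the article), converting the 800-yard correction into turret clicks looks like this:

\[
\frac{20.5\ \text{MOA}}{0.25\ \text{MOA/click}} = 82\ \text{clicks}, \qquad
\frac{5.7\ \text{mil}}{0.1\ \text{mil/click}} = 57\ \text{clicks}
\]

Either way, the math is just the required angular correction divided by the click value printed on the turret, so the same mental arithmetic covers both systems.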
The implementation of software applications using AOSD (aspect-oriented software development) techniques results in a better implementation structure, which has an impact on many important software qualities such as enhanced reusability and reduced complexity. In turn, these software qualities lead to an improved software development lifecycle and, hence, to better software. This report introduces management and software development staff to the concepts of aspect-oriented software development. It presents why aspect-orientation is needed in modern software development and what its contributions are to the improvement of software design and implementation structure. The report also highlights technology details, though without probing too deeply into any one of them, as it presents the various concepts of AOSD. After reading this introduction, the reader will understand what AOSD is about and know its key concepts and terminology. As software systems become more complex, developers use new technologies to help manage development. The development of large and complex software applications is a challenging task. Apart from the enormous complexity of the software's desired functionality, software engineers are also faced with many other requirements that are specific to the software development lifecycle. Requirements such as reusability, robustness, performance, maintainability, etc. are requirements about the design and the implementation of the software itself, rather than about its functionality. Nevertheless, these non-functional requirements cannot be neglected because they contribute to the overall software quality, which is eventually perceived by the users of the software application. For example, better maintainability will ensure that future maintenance tasks on the implementation can be carried out relatively easily and consequently also with fewer errors. Building software applications that adhere to all these functional and non-functional requirements is an ever more complex activity that requires appropriate programming languages and development paradigms to adequately address all these requirements throughout the entire software development lifecycle. To cope with this ever-growing complexity of software development, computer science has experienced a continuous evolution of development paradigms and programming languages. In the early days, software was directly implemented in machine-level assembly languages, leading to highly complex implementations for even simple software applications. The introduction of the procedural and functional programming paradigms provided software engineers with abstraction mechanisms to improve the design and implementation structure of the software and reduce its overall complexity. An essential element of these paradigms is the ability to structure the software in separate but cooperating modules (e.g., procedures, functions, etc.). The intention is that each of these modules represents or implements a well-identified subpart of the software, which renders the individual modules more reusable and evolvable. Modern software development often takes place in the object-oriented programming paradigm, which allows developers to further enhance the software's design and implementation structure through appropriate object-oriented modeling techniques and language features such as inheritance, delegation, encapsulation and polymorphism.
Aspect-oriented programming languages and the entire aspect-orientation paradigm are a next step in this ever continuing evolution of programming languages and development paradigms to enhance software development and hence, improve overall software quality. Fundamental ideas underlying aspects and aspect-oriented software development The notion behind aspects is to deal with the issue of tangling and scattering. According to Ian Somerville (2009), tangling occurs when a module in a system includes code that implements different system requirements and scattering occurs when implementation of a single concern (logical requirement or set of requirements) is scattered across several components in a program. What an Aspect Is Aspect is an abstraction which implements a concern. Aspects are completely specification of where it should be executed. Unlike other abstractions like methods, you cannot tell by examining methods where it will be called from because there is clear separation between the definition and of the abstraction and its use. With Aspects, includes a statement that defines where the aspect will be woven into the program. This statement is known as a pinpoint. Below is an example of a pinpoint before: call public void update. This implies that before the execution of many method whose starts with update, followed by any other sequence of characters, the code in the aspect after the induct definition should be executed. The wildcat matches any string characters that are allowed in the identifiers. The code to be executed is known as the advice and is implementation of the cross-cutting concern. In an example below of an aspect authentication (let's say for every change of attributes in a payroll system requires authentication), the advice gets a password from person requesting the change and checks that it matches the password of currently logged -in user. If not user is logged out and update does not proceed. Pinpoint: defines specific program events with which advice should be associated (I. E. , woven into a program at appropriate Join points) Events may be method calls/ returns, accessing data, exceptions, etc. Weaving: incorporation of advice code into the program (via source code preprocessing, link-time weaving, or execution time weaving). Why Separation of Concerns a good guiding principle for Software Development Separation of concerns is a key principle of software design and implementation. Concerns reflect the system requirements and the priorities of the system stakeholders. Some examples of concerns are performance, security, specific categorized in several types. Functional concerns, quality of service concerns, Policy concerns, System concerns and Organizational concerns. - Functional: related to specific functionality to be included in a system. - Quality of service: related to the nonfunctional behavior of a system (e. G. , performance, reliability, availability). - System: related to attributes of the system as a whole (e. G. , maintainability, configurability). - Organizational: related to organizational goals and priorities (e. G. , staying within budget, using existing software assets). In other areas concerns has been categorized according to different areas of interest or properties I. E. High level implies security and quality of service, Caching and buffering are Low level while Functional includes features, business rules and Non Functional (systematic) implies synchronization, transaction management. 
By reflecting the separation of concerns in a program, there is clear traceability from requirements to implementation. The principle of separation of concerns states that software should be organized so that each program element does one thing and one thing only. In this case it means each program element should be understandable without reference to other elements. Program abstractions (subroutines, procedures, objects, etc.) support the separation of concerns. Core concerns relate to a system's primary purpose and are normally localized within separate procedures, objects, etc., while other concerns tend to scatter across and cut through multiple elements. These cross-cutting concerns are managed by aspects, since they cannot otherwise be localized and would cause problems when changes are required, due to tangling and scattering. Separation of concerns also keeps the dependency between aspects and components modular. For instance, suppose we maintain a system that manages payroll and personnel functions in our organization, and there is a new requirement to create a log of all changes to an employee's data made by management. Such changes would include changes to pay, the number of deductions, raises, the employee's personal data and a mass of other information associated with the employee. This implies that several pieces of code would require changes. That process could be tedious, and you might end up forgetting to change some of the code, or not understanding each and every piece of it. With aspects you would deal with a single element only, so there is no redundancy of multiple pieces of code doing the same thing: one logging aspect could be woven in so that it executes whenever any of the update methods is called (a sketch of such an aspect is given below). In requirements engineering there is a need to identify requirements for the core system and the requirements for the system extensions. Viewpoints are a way to separate the concerns of different stakeholders into core and secondary concerns. Each viewpoint represents the requirements of a related group of stakeholders. The requirements are organized according to stakeholder viewpoint and then analysed to discover related requirements that appear in all or most viewpoints. These represent the core functionality of the system. Other requirements may be specific to a single viewpoint; these can then be implemented as extensions to the core functionality. These requirements (secondary functional requirements) often reflect the needs of that viewpoint and may not be shared by other viewpoints. In addition, there are non-functional requirements that are cross-cutting concerns. These generate requirements that apply to some or all viewpoints, for instance requirements for security, performance and cost. Software Design Aspect-oriented design (AOD) is the process of designing a system that makes use of aspects to implement the cross-cutting concerns and extensions that are identified during the requirements engineering process. AOD focuses on the explicit representation of cross-cutting concerns using adequate design languages. AOD languages consist of some way to specify aspects, how aspects are to be composed, and a set of well-defined composition semantics to describe the details of how aspects are to be integrated (Chitchyan, Rashid, Sawyer, Garcia, Pinto Alarcón, Bakker, Tekinerdogan, Clarke and Jackson, 2005). As in object orientation, there are several aspect-oriented extensions to the UML design language to represent aspect-oriented concepts at the design level.
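Returning to the payroll logging example above, the cross-cutting logging concern might be captured in a single AspectJ-style aspect along these lines. Employee, its update methods and the Logger class are all assumed names used purely for illustration:

public aspect UpdateLogging {
    // Pointcut: any call to a public void method of Employee whose name starts
    // with "update"; the target object is bound to the parameter e
    pointcut employeeUpdate(Employee e):
        call(public void Employee.update*(..)) && target(e);

    // Advice: record the change once the update has completed normally
    after(Employee e) returning: employeeUpdate(e) {
        Logger.log("Employee record changed: " + e.getId()
                   + " at " + System.currentTimeMillis());
    }
}

Because the logging code lives in one place, a change to the logging policy touches only this aspect, not the many update methods scattered through the payroll and personnel modules.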
One of these UML extensions is ATOM. AOD in UML requires a means of modeling aspects using UML stereotypes, together with an approach for specifying the join points at which the aspect advice is to be composed with the core system. The high-level statement of requirements provides a basis for identifying some system extensions that may be implemented as aspects. Developing these in more detail, to identify further extensions and to understand the functionality required, involves identifying a set of use cases associated with each viewpoint. Each use case may represent an aspect, and extension use cases naturally fit the core-plus-extensions architectural model of the system.

Aspect-Oriented Design Process

Figure 1 illustrates the design activities of a generic aspect-oriented design process:
- Core system design: you design the system architecture to support the core functionality of the system.
- Aspect identification and design: starting with the extensions identified in the system requirements, you analyze them to see whether they are aspects in themselves or whether they should be broken down into several aspects.
- Composition design: at this stage, you analyze the core system and aspect designs to discover where the aspects should be composed with the core system. Essentially, you are identifying the join points in the program at which aspects will be woven.
- Conflict analysis and resolution: conflicts occur when there is a pointcut clash, with different aspects specifying that they should be composed at the same point in the program.
- Name design: careful naming is essential to avoid the problem of accidental pointcuts. These occur when, at some program join point, a name accidentally matches a pointcut pattern, so the advice is unintentionally applied at that point (a short sketch of this problem appears at the end of this section).

The goal of aspect-oriented programming is to provide an advanced modularization scheme that separates the core functionality of a software system from the system-wide concerns that cut across the implementation of this core functionality. AOP must address both what the programmer can express and how the computer system will realize the program. In a well-designed AOP system, the mechanisms are conceptually straightforward and have efficient implementations.

Join Point Model

A join point model defines the kinds of join points available and how they are accessed and used. The model is specific to each aspect-oriented programming language, for instance AspectJ. In AspectJ, join points are selected by grouping them into pointcuts. A pointcut is a predicate that matches join points: a relation from join points to Boolean values, where the domain of the relation is the set of all possible join points.

Advantages and Disadvantages of AOP

AOP promotes clear design and reusability by enforcing the principles of abstraction and separation of concerns. AOP explicitly promotes separation of concerns, unlike earlier development paradigms. This separation provides a cleaner assignment of responsibilities, higher modularization and easier system evolution, and should thus lead to software systems that are easier to maintain. The approach is to collect scattered concerns into compact structural units, namely the aspects. On the other hand, AOP cannot be elegantly applied to every possible situation.

Validation and verification is the process of demonstrating that a program both meets the real needs of its stakeholders and meets its specification. Validation, or testing, is used to discover defects in the program or to demonstrate that the program meets its requirements.
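The accidental-pointcut problem mentioned under name design, and the difficulty of telling from the source code alone where advice will run, can be illustrated with a small AspectJ-style sketch. All names here (AuditAspect, AuditLog, DashboardView) are hypothetical, chosen only for illustration.

public aspect AuditAspect {
    // Intended to match business methods that update persistent employee records
    pointcut auditedUpdate():
        call(public void *.update*(..));

    before(): auditedUpdate() {
        AuditLog.record(thisJoinPoint.getSignature().toLongString());   // hypothetical logger
    }
}

class DashboardView {
    // Accidental match: this purely cosmetic method also starts with "update",
    // so the audit advice above is silently woven in front of it as well.
    public void updateScreenLayout() {
        // redraw widgets; no persistent data is touched
    }
}

Nothing in DashboardView itself reveals that advice has been attached to it, which is one reason why inspection and white-box testing of aspect-oriented programs, discussed next, are harder than for conventional programs.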
Static verification techniques focus on manual or automated analysis of the source code. Like any other system, aspect-oriented systems can be tested as black boxes, using the specification to derive the tests; however, inspection based on the program source code is problematic, and aspects introduce additional testing difficulties (Ian Sommerville, 2006).

Testing Problems with Aspects

To inspect a program in a conventional language effectively, you should be able to read it from left to right and top to bottom. Aspects make this difficult, as the program becomes a web rather than a sequential document. One cannot tell from the source code where an aspect will be woven and executed, and flattening an aspect-oriented program for reading is practically impossible.

Challenges with Aspect-Oriented Systems

One of the limitations of AOP is that it is not supported by default on any mainstream programming platform. Although it seems to be gaining popularity, its implementation has been undertaken by third parties as extensions to development frameworks. This has resulted in some disparity in the features being implemented, since some implementations support only specific features, making it difficult to use such frameworks in some situations and creating some confusion over the features. AOP programs can be "black-box tested" using the requirements to design the tests, but program inspections and "white-box testing" can be problematic, since you cannot always tell from the source code alone where an aspect will be woven and executed.

Recommendations

Adopting aspect-oriented software development will reduce repetition of code, and component maintenance and reuse have a great impact on the company. In terms of cost, the company can determine whether or not its systems are easy to maintain. Using other development paradigms can be cumbersome, increasing tangling and scattering. System performance is also affected when there is more code doing the same thing; AOSD concepts reduce this redundancy and can improve system performance. Both functional and non-functional concerns are dealt with in AOSD. With respect to security, design flaws and code errors or bugs are among the causes of security flaws in software. Compared with object-oriented development alone, the AOSD approach makes software development easier, with the separation of concerns leading to modularization and reuse. A short sketch of how a single aspect removes repeated code follows.
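As a closing illustration of the redundancy argument above, here is a minimal AspectJ-style sketch of the change-logging requirement from the payroll example discussed earlier. The names (Employee, its set* methods, ChangeLog) are assumptions made for the sketch, not part of any real payroll system.

public aspect ChangeLoggingAspect {

    // Matches every public setter on Employee: salary, deductions, personal data, etc.
    pointcut employeeChange(Employee e):
        call(public void Employee.set*(..)) && target(e);

    // One piece of advice replaces log statements that would otherwise be
    // scattered across every module that modifies employee data.
    after(Employee e) returning: employeeChange(e) {
        ChangeLog.record(e.getId(),                                   // hypothetical logger
                         thisJoinPoint.getSignature().getName(),
                         System.currentTimeMillis());
    }
}

If the logging requirement changes, only this aspect has to be edited; the payroll and personnel modules themselves remain untouched.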
Between freshwater and saltwater fishing, there are thousands of species of fish anglers catch every year. Some of them are plentiful, like carp, crappie, mackerel, or redfish. Some species are pretty rare, and therefore more prized by anglers. Rare Fish Caught by Anglers: - Golden Trout - Apache Trout - Blue Marlin - Atlantic Goliath Grouper Golden trout, also called California golden trout, is one of the rarest fish in the US, alongside its cousin palomino trout (a hybrid between West Virginia golden trout and rainbow trout.) Once upon a time, you could catch golden trout on a stretch of 450 miles up the South Fork Kern River and its tributaries. Now it is native to two watersheds in high altitudes in Sierra Nevada Mountains. To prevent the extinction of golden trout, the US Fish and Wildlife Service introduced the fish in Wyoming, Washington, and Idaho’s lakes and rivers. Many anglers consider golden trout one of the most beautiful freshwater fish you can catch in the US. Golden trout is bright yellow-gold color, with the dorsal side often coppery-olive or coppery-green, and deep red belly. It has bright red or pink horizontal stripes with around ten dark vertical marks on both sides. The dorsal, lateral, and anal fins have one dark stripe each and white tips. The striking colors of golden trout earned it a spot as California’s official state fish. The golden trout grows between 6 – 12 inches long but can grow to around 15 inches in good conditions. Golden trout usually weighs about 2 – 4 lbs. The largest recorded specimen was 11 lbs. Golden trout typically live 7 – 9 years. The best time to catch golden trout is early morning or late evening. You can try either ultra-light tackle with spoons, spinners, and live bait or fly rod with long leaders, medium-weight lines, and caddisflies or midges. The Apache trout is a state fish of Arizona, where are its native waters. It is one of the only two, beside Gila trout, native trout species in Arizona, and originates from the White, Black, and Little Colorado rivers in high altitudes of 5900 feet. In the 1960s, apache trout was nearing extinction. It used to live on a stretch of 600 miles of watershed in the White Mountains, but the range was reduced to meager 30 miles in a short time. It was one of the first fish on the list of endangered species after the 1969 Endangered Species Act. After implementing recovery actions and performing multiple analytics, decades later, this trout is not as rare anymore but still listed as threatened species. Now the biggest danger is crossbreeding with cutthroat trout. As for Gila trout, after down-listing it from endangered to threatened in 2006, the US Fish and Wildlife Services decided to open a limited fishing season in 2011 in Arizona and New Mexico. Apache trout usually grows about 10 – 20 inches long and weighs between 1 – 6 lbs. The record size of apache trout is almost 23 inches long and just over 5 lbs. The Apache trout is golden-yellow in color with a dark olive dorsal side and golden belly. It has dark spots evenly distributed across its back, sometimes reaching below the lateral line and on the fins. Apache trout has very characteristical spots on its eyes, two on either side of each pupil, creating a stripe across its eye. The Gila trout, as closely related to Apache trout and by many anglers thought to be the same species, shares with it many features. The only distinctions between the two trout are spots on the eyes and body. The best chance of catching Apache trout is by using a fly rod. 
The most successful wet fly patterns are pheasant tail nymphs, scuds, or caddis. If you want to try dry flies, your best bet is a mayfly, adult stonefly, or mosquito. Sturgeon is a common name for 27 species of fish in the Acipenseridae family. The native range of sturgeon is relatively wide. Depending on the species, they live in subtropical, temperate, and sub-Arctic rivers, lakes, and coastlines in Europe, Asia, and North America. Although there are multiple species of sturgeon, they suffer from overfishing, poaching, and destruction of their habitat. A few of the species are considered extinct, and most of the remaining species are endangered. Sturgeons live very long, around 50 – 60 years, and reach maturity very late, at about 15 – 20 years of age. They also don't spawn every year, and sometimes not even for a few years in a row, if conditions are unfavorable. Considering all the facts, research data indicates that more than 85% of sturgeon species are at risk of extinction. That makes them the most critically endangered group of animal species in the world. Sturgeons have an elongated, spindle-like, scaleless body. They have five lateral rows of bony plates that create an armor-like look. A few of the species can grow large, usually 7 – 12 ft long. The biggest sturgeon caught in recorded history was a beluga sturgeon in 1827, and it measured 24 ft with a weight of almost 3,500 lbs. Depending on the species, some sturgeons live only in freshwater habitats, others primarily in saltwater coastal areas. On rare occasions, sturgeons jump out of the water. Considering they can reach quite large sizes, this can be dangerous. There have been a few serious accidents when a sturgeon landed in a boat. Despite low numbers of sturgeons, there is still a chance of catching a few within their fishing season on a purely catch-and-release basis. In the US, there are eight species of sturgeon: Atlantic, shortnose, lake, shovelnose, pallid, Alabama, white, and green sturgeon. The Alabama sturgeon is extremely rare, with no numbers known; the last recorded Alabama sturgeon was caught in 1997. Your best bet to catch sturgeon is to move over deep holes and runs. That's where the sturgeons feed. You should use lamprey or smelt if you want to be successful. Good baits for sturgeon are also shrimp, shad, crawfish, and squid. Make sure your tackle is heavy. Sturgeons are known for putting up an unforgettable fight. The blue marlin is native to tropical and temperate waters of the Atlantic, Pacific, and Indian Oceans. Despite having only a few natural predators, like killer whales and sharks, blue marlin numbers are declining every year due to unsustainable fishing. It is considered a threatened species by the IUCN. The blue marlin is one of the biggest and fastest fish in the ocean, reaching a size of 14 ft and a weight of nearly 2,000 lbs, with females usually four times the size of males. The blue marlin has a striking blue color on the dorsal side and a silvery-white belly. It has a high dorsal fin and a spear-shaped, sharp upper jaw, which is used to stun and wound its prey. Blue marlin spend most of their lives in the open ocean, migrating for thousands of miles following warm ocean currents. They prefer warm surface waters, where they feed on tuna and mackerel, but are known to dive for squid. Fishing for blue marlin is considered a fantastic experience due to the fight they put up when hooked. 
Blue marlin, one of the world’s best game fishes, can be caught on artificial lures with a skirt, natural or live bait, like skipjack tuna. The best method is usually trolling. Atlantic Goliath Grouper Atlantic goliath grouper, or itajara, is a large saltwater fish living in artificial and coral reefs. It takes a long time for the Atlantic goliath grouper to mature, and while it grows, there is a multitude of predators it needs to avoid, including humans. All this contributes to rather low numbers of specimens. It led to IUCN considering the goliath grouper a vulnerable species. Goliath grouper can live in brackish waters, canals, and mangrove swamps, which is atypical behavior for grouper fish. It grows to a large size of up to 8.2 ft and can weigh around 800 lbs. The Atlantic goliath grouper is grey, greenish, or brownish-yellow in color with small dark spots covering its fins, rather big head, and dorsal part of its elongated body. Some specimens have few dark vertical stripes. Goliath grouper usually lives around 35 – 37 years in favorable conditions. Since 1990, there is a ban on the harvest and possession of Atlantic goliath grouper in Florida, but that doesn’t prevent you from catching and releasing this fish. While fishing for goliath grouper, you should always proceed with caution. Caught fish must be released immediately, alive and unharmed, and left in the water during release. Sawfish, also known under the name of carpenter sharks, are fish from the rays family. They live in tropical and subtropical waters worldwide, usually found in coastal and brackish waters, but can adapt to freshwater. There are five species of sawfish existing today, two are endangered, and the other three are critically endangered. The reason for such low numbers of specimens could be their slow breeding ratio, habitat loss, on top of overfishing, and poaching for their fins, teeth, and saw. The sawfish has a shark-like body, flat underside, a flat head ending in a long, saw-like rostrum with a row of teeth on either side. The rostrum is usually a quarter or one-third of the body length, and its most recognizable feature of sawfish, giving it its name. Sawfish has two distinct dorsal fins, pectoral and pelvic fins resembling wings, and tail with upper lobe longer than the lower one. Depending on the species, sawfish reach 10 – 25 ft long and weigh as much as 1300 lbs. The lifespan of wild sawfish is unknown, but captive specimens live between 35 – 50 years old, depending on the species, except for narrow sawfish, which reached only nine years. There is not much data on a few of the species of sawfish, and Florida Fish and Wildlife Conservation is asking for help with data collection. If you manage to catch a sawfish while fishing for other species, they ask you to contact them with all possible details, like the date, location, size, and depth, amongst other relevant information.
The things that are Caesar’s THE PHARISEES expected to hang Jesus on the horns of a dilemma—God or Caesar? Sacred or secular? Or in our terms, church or state? They inquired: “Is it lawful to pay taxes to Caesar or not?” After examining a coin bearing the emperor’s image, Jesus resolved the dilemma: “Give, then, to Caesar the things that are Caesar’s, and to God the things that are God’s” (Matt. 22:17–21). In just a few words, Jesus distinguished God and Caesar, yet commanded obedience to them both. [Forty Martyrs of Sebaste. Wikimedia: Andreas Praefke] After Jesus’ death and Resurrection, as Christianity moved into the uttermost parts of the empire, the church wrestled with its ultimate allegiance to God and its lesser loyalty to the state. What did it mean for a Christian to be both a resident of the Roman Empire and a citizen of the Kingdom of God? The apostles gave different answers at different times. During the early decades of church expansion, Roman authorities viewed Christianity as a sub-sect of Judaism, which was a legal religion. In 57 AD, basking in the protection of the civil government, Paul wrote to instruct the church in Rome: “Let everyone submit to the governing authorities, since there is no authority except from God, and the authorities that exist are instituted by God” (Rom. 13:1; 1 Tim. 2:1–2; Tit. 3:1). Even during the unsettled years before Nero’s persecution, Peter sent a letter from Rome to a group of exiles in Asia Minor (now Turkey), urging them: “Fear God. Honor the Emperor” (1 Pet. 2:17). In spite of these expressions of loyalty, both Peter and Paul are traditionally said to have died martyrs’ deaths during Nero’s persecution from 64 to 68. About three decades later, Emperor Domitian instigated the second round of imperial persecution, most severe in Rome and in Asia Minor. The apostle John fell victim and was exiled to Patmos, an island off the coast of Ephesus. During these troubled times, he recorded the Book of Revelation, expressing a much different attitude toward government. John used apocalyptic language that could be interpreted to refer to the Roman Empire, or to any government opposed to God. In Revelation John saw such governing authority as a beast that blasphemes against God and makes war on his people (Rev. 13:1–10), or a harlot from which “a voice from heaven said: ‘Come out of her, my people’” (Rev. 18:4). Paul advocated for submission to the state; John for separation from it. The early church maintained these dual views of the state as expressed by the apostles. In the year of Domitian’s death, Bishop Clement of Rome wrote a letter to the Corinthian church. He began with a reference to the recent persecution: “the sudden and successive calamitous events which have happened to us.” In the same letter, however, he modeled for the Corinthian Christians a prayer for the government: Make us obedient both to your almighty and glorious name and to all who rule and govern us on earth. For you, Master, in your supreme and inexpressible might have given them their sovereign authority that we may know the honor and glory given to them by you and be subject to them, in nothing resisting your will. Grant to them, Lord, health, peace, harmony, and security that they may administer the government you have given them without offense. Honoring the emperor Clement’s prayer for the governing authorities, expressed even in the context of persecution, exemplifies most Christians’ attitudes during the second and third centuries of the church. 
Apologists writing in defense of the faith often insisted that Christians prove their loyalty to the government by praying for their leaders while worshiping only God. Justin Martyr addressed his First Apology to Emperor Antoninus Pius, the Roman senate, and the people: “Therefore, we adore only God, but in other things we gladly serve you, acknowledging you as emperors and sovereigns, praying that along with your royal power you may be endowed too with sound judgment.” And Theophilus of Antioch wrote in a letter to the pagan Autolycus: “Therefore, I honor the emperor, not indeed worshiping him but praying for him.” Even the fiery-tempered Tertullian of Carthage claimed that Christians served the emperor better than others did because “our God has appointed him [the emperor]. Since he is my emperor, I take greater care of his welfare . . . because I pray for it to one who can grant it.” What was the attitude of early Christians toward civil servants and their own possible public service? Much of what we find in the New Testament and among the church fathers’ writings deals generally with the emperor and governing authorities. Occasionally, however, we read reports of specific encounters involving individual Christians. The New Testament writers record several interactions with governmental representatives—Roman centurions and other soldiers; tax collectors, who collaborated with the Roman occupiers; a royal official from Herod Antipas’s court; proconsuls, including Sergius Paulus of Cyprus, who believed Paul’s and Barnabas’s preaching; imperial guards in Rome, who heard the Gospel from Paul; governors; and even kings. Some of these encounters resulted in conversions to Christianity, but even then none were instructed to quit their jobs except Levi (Matthew), who left his tax booth to follow Jesus. Even tax collectors and soldiers who responded to John the Baptist’s call to repentance were not told to resign, but only to perform their duties honestly. Nonetheless in early church writings, we find no evidence for Christians in the military prior to 170 or in government service until even later. In fact catechumens applying for baptism were interrogated about their service in the military or the government. In the Apostolic Tradition, an early third-century church manual, Hippolytus of Rome outlined restrictions on occupations for Christians: A soldier who is in authority must be told not to execute men; if he should be ordered to do it, he shall not do it. He must be told not to take the military oath. If he will not agree, let him be rejected [from joining the church]. A military governor or a magistrate of a city who wears the purple, either let him desist or let him be rejected. If a catechumen or a baptized Christian wishes to become a solder, let him be cast out. For he has despised God. Taking the sword from soldiers During the several-years-long process of catechism to prepare for baptism, Apostolic Tradition explains that baptismal candidates agree to limitations. Soldiers and magistrates were rejected unless they resigned from their positions. An exception was usually made for soldiers who served as police or during peacetime. Tertullian, however, made no exceptions, writing in On Idolatry: “But how will a Christian war? Indeed how will he serve even in peace without a sword, which the Lord has taken away? . . . 
The Lord, in disarming Peter, unbelted every soldier.” This absence of Christians from public service provoked criticism from their detractors; such civic duty was important in Roman society. The pagan Caecilius complained to the Christian Octavius that Christians “do not understand their civic duty.” Celsus, a Greek philosopher and opponent of Christianity, insisted that Christians should “accept public office in our country”; otherwise, they were shirking their duties to society and neglecting their obligation to protect the empire while receiving its benefits. Origen of Alexandria, responding in Against Celsus, defended his fellow Christians on the basis of their higher calling: “But they keep themselves for a more divine and necessary service in the church of God for the sake of the salvation of men. Here it is both necessary and right for them to be leaders and to be concerned about all men, both those who are with the Church . . . and those who appear to be outside it.” Tertullian agreed: “We have no pressing inducement to take part in your public meetings. Nor is there anything more entirely foreign to us than affairs of state.” Early Christian apologists put forward many reasons for Christians not to perform military service. First the law of Christ called for Christians to “beat their swords into plows and their spears into pruning knives” (Isa. 2:4) and to love their enemies. On this Justin, Irenaeus, Tertullian, and Origen all agreed. Second, as Tertullian noted, Christians would be forced to participate in Roman idolatry and oaths to the emperor, including emperor worship. Third, said Origen, “The more pious a man is, the more effective he is in helping the emperors—more so than the soldiers who go out into the lines and kill all the enemy troops that they can.” Spiritual soldiers take up the full armor of God, he said, and engage in prayer on behalf of all in authority. Indeed Christians composed “a special army of piety through our intercessions to God.” Subscribe now to get future print issues in your mailbox (donation requested but not required). In reality some Christians did serve in the military. In 173 the Thundering Legion included those recruited from the strongly Christian region of Armenia. In his Ecclesiastical History, Eusebius wrote that during a campaign on the frontier of the Danube, the Roman army, under the leadership of Emperor Marcus Aurelius, suffered from drought, whereas their enemies had ample supplies of water. After Christian soldiers prayed for rain, not only did rain refresh the Romans, but the accompanying thunder and lightning frightened their opponents. Tertullian passed along this same story as part of his defense of Christianity, but he still disapproved of Christians in the military. No baptized Christian is able to enlist, he said, and anyone already in military service must abandon the army at the time of baptism: “There is no agreement between the divine and the human oath, the standard of Christ and the standard of the devil, the camp of light and the camp of darkness.” Nonetheless even his protests are evidence that Christians served in the army at the turn of the third century. Fighting for Christ Many of these Christian soldiers, however, suffered persecution and martyrdom. During Decius’s persecution in 250, Bishop Cyprian of Carthage related the story of two soldiers who were martyred. 
In 298 a centurion named Marcellus refused to worship the Roman gods, renounced his position in the army, and at his trial, testified: “It is not fitting that a Christian, who fights for Christ his Lord, should fight for the armies of this world.” During the prelude to the great persecution of Diocletian and Galerius, the first to suffer were Christian soldiers, Eusebius said—presumably because their commitment to the empire was questioned. One of the great stories of persecution among Christian soldiers is the Acts of the 40 Martyrs of Sebaste. The Edict of Milan had supposedly ended persecution in 314, but Emperor Licinius reneged on this agreement and attempted to purge his army of Christians. Evidently he feared their loyalty to Constantine, who had granted them toleration. Licinius’s edict was eventually delivered to the Twelfth Legion, successors of the Thundering Legion and now stationed in Sebaste (in modern Turkey). In 320 this famed legion still included 40 committed Christians. When they refused to recant their faith, the governor conceived a torturous punishment that he hoped would break their defiance: he ordered them to strip and to stand naked upon an icy lake until they relented. Throughout the night the Christians encouraged each other to remain faithful and to maintain the sacred number of 40. Sadly one soldier relented and left the lake to seek refuge in a heated tent on the shore. But then a guard on the shore decided to confess his faith in Jesus Christ, to join the Christians in their suffering, and to take the place of the deserter. He stripped off his clothes and confessed, “I am a Christian!” Thus, the story relates, God answered the martyrs’ prayers that their number would be complete. By the next day, all 40 Christian soldiers had died—but each martyr had earned the crown of life. From governor to bishop While unknown numbers of Christians enrolled in military service before 314, few reports tell of public service among Christians. In 278 Bishop Paul of Samosata also held the post of civil magistrate, but Eusebius criticized him for his arrogance and pomp, not to mention his heretical views. Evidently public jobs were further out of reach for Christians than military service was. This situation, of course, completely reversed with the conversion of Emperor Constantine. Upon achieving sole rule as emperor in 324, he set upon the task of Christianizing the Roman Empire. He appointed Christians to public offices and took many as his personal advisers, opening government jobs to Christians. He ordered all soldiers to worship the supreme God on Sundays. Whatever he meant by that decree or however it was interpreted by soldiers, he had legitimized the service of Christians in the army and the magistracy. Perhaps the story that best illustrates the cooperation of church and state in the fourth-century empire is that of Ambrose, bishop of Milan. Ambrose began his public career as governor of Milan, and his ambitions were political. But in 373 the death of the Arian bishop Auxentius threatened the peace of his city, which was divided over Arianism and Nicene orthodoxy. Ambrose decided to preside over the election of Auxentius’s successor. Surprisingly a child in the crowd began to cry out, “Ambrose, bishop!” The crowd took up the chant. Ambrose had no intention of accepting an ecclesiastical position, but Emperor Gratian insisted that his secular governor would now serve the empire best as bishop of Milan. 
It is ironic that Ambrose, who was not yet a church member, was selected as the best choice for bishop. At the time he was only a catechumen, so he submitted to baptism, ordination, and consecration as bishop all in eight days! Church and state, however, did not always harmonize. When Emperor Theodosius I commanded the slaughter of 7,000 Thessalonians, Ambrose condemned him and demanded clear signs of his repentance. The next time the emperor appeared at the church in Milan, the bishop met him at the door and refused him entrance: “Stand back! A man such as you, defiled by sin, with hands covered by the blood of injustice, is unworthy, without repentance, to enter this sacred place, and to partake of holy communion.” Theodosius made his contrition publicly; in this case, the state yielded to the church. In 380 that same emperor completed the interlocking of church and state by declaring orthodox Christianity the official religion of the Roman Empire. Two generations later his grandson, Theodosius II, instituted a law that permitted only Christians to serve in the military, thereby expecting divine favor to rest upon the imperial armies. The distinction between “the things that are Caesar’s” and “the things that are God’s” established by Jesus and the apostles had now become less clear. Today we still look back to the complicated legacy of the early church as we try to obey God and Caesar, to live as citizens of heaven while residing in the world. This article is from Christian History magazine #124, Faith in the City. By Rex D. Butler. [Christian History originally published this article in Christian History Issue #124 in 2017.] Rex D. Butler is professor of church history and patristics at New Orleans Baptist Theological Seminary and the author of The New Prophecy and “New Visions”: Evidence of Montanism in the Passion of Perpetua and Felicitas.
The Wildlife Hotline doesn’t get many calls about turtles or frogs on an average day. No one is calling the hotline with “Help! There’s a turtle in my attic.” or “These frogs keep getting in my trash.” Normally, frogs and turtles don’t cause us much trouble. However, there is a time period in early summer each year when turtles come in to rehab clinics in droves. This is the season when turtles try to cross roadways to get to their female counterparts. Year after year we are amazed at how many kindhearted people will stop to assist a turtle in this endeavor. Most of us, as kids, all picked up a turtle or a frog at some point. If you didn’t, your kids will bring them in the house when you least expect it. Our resident turtles and frogs tend to keep to themselves, but just in case, here’s some information about the calls we sometimes get. Turtles on the Road This behavior happens every summer and it is usually our native box turtles, or on occasion a red eared slider turtle. You’re driving down the road and this little shape in the distance slowly becomes a turtle, desperately trying to cross the road, oblivious to oncoming traffic. Turtles often make this perilous journey to get to a good, sunny location with loose soil in which to lay eggs, and to return back to familiar territory—be it a woodland, pond or burrow. It is in just this situation that so many turtles lose their status as wild animals and are consigned to an unnatural, and unnaturally short, life in a back yard. By all means, help that turtle cross the road in the direction she (or he) was heading, if you can do so safely. But then leave her in the wild where she belongs. The collection of turtles by passersby seriously contributes to the ongoing population declines in many species. Turtles and tortoises are particularly vulnerable to collecting, since they are slow-moving and generally non-aggressive. Likewise, their populations are vulnerable as well. As is typical of long-lived animals, turtles are slow to sexually mature. They lay relatively few eggs, and mortality of eggs and hatchlings is frequently very high. In addition, their habitat is increasingly fractured by roads and carved up into housing developments and shopping centers, causing local extinctions. Thus every turtle who survives to adulthood is critical to his population. Turtles are said to make good pets, yet they have specific dietary and habitat requirements and can pass diseases, such as salmonellosis, to humans. What’s more, their attempts to escape from backyards and return to familiar territory puts them at tremendous risk of being crushed in the road. In addition to all of this, all of the species of turtles in the state of Missouri are legally protected except for the common snapping turtle. (These are considered game, but only hunted with approved methods – See MDC) This means that picking up a turtle and taking it home is not only hurtful to the species, but is also ILLEGAL. You can be fined for removing the turtle from his original area and relocating him to another. It is perfectly legal to help him cross the road, or even walk him over to a more wooded area, but that’s about it! Instead of catching and making turtles your pets, learn about our native turtles and support them living in the wild. There are many places across Missouri and Illinois where you can go to see turtles in the wild. Take your children to a nature center to see them in their natural habitat instead of bringing one home. 
Some information from Lakeside Nature Center on Box Turtles. If you find a box turtle, the first thing to do is determine whether or not it is injured. If it is, call the hotline or a wildlife rehabber immediately. Put it in a cardboard box, close the lid so it's dark, and leave it in a quiet area so it's not overly stressed until you can get it to someone. Rehabbers can assist with turtle injuries even when they look terribly bad. You'd be surprised at what a turtle can survive. You will not get the animal back, but they will ask where it was found so they can have it released to the same location once it has healed up. If you've determined that the turtle is not injured, next determine if it is indeed a native box turtle. If it's a species that is native to your area, then by all means release it back to the area where it was found. If it was crossing a road, put it in the closest woods in the direction it was heading. If it was not headed to the woods, put it there anyway; it's safer than the roadway. DO NOT TAKE IT HOME. DO NOT RELOCATE the turtle. Box turtles have a homing instinct and they will try to get back to the area they came from. If you move it far from its home, it will likely get killed trying to get back, so leave it in the area where it was found; do not bring it to a nicer park.

Kids Catching Frogs & Toads

Will touching them give you warts? Nope, not really. Toads and frogs are not going to give you warts, or really any other disease either. There are some species of frogs and toads that can secrete a foul-tasting substance out of their pores to trick predators into not eating them. Plus, there are other toads that can produce a secretion that burns the eyes and mouth of predators – including us! The thing to remember here is that if you or the kids are handling frogs, toads, or really any animal for that matter, please practice proper hygiene. Wash your hands! Don't let family pets lick, sniff, or play with wild animals, and keep an eye on the kids so that they don't decide to go ahead and try out the "Kiss a Frog" idea! The toads and frogs native to Missouri & Illinois are a valuable part of our outdoor heritage. Most people probably do not give them much thought, but we need these amphibians to control destructive insects and to add their voices to the sounds of spring and summer nights. Just hearing or seeing them adds to our enjoyment of the outdoors. If your home or property is home to frogs, consider yourself lucky. Many frog species are on the decline due to loss of habitat and poor soil quality. There are a few things that you can do to keep the frogs in your area safe and healthy: If you regularly see frogs around your home, be careful when you close windows and doors. Check that there are no frogs in the way before you shut that door or window. Herbicides and insecticides can be deadly to frogs and tadpoles. Don't use them. Instead, let the spiders, geckos and the frogs themselves kill those pesky bugs. Teach your family members about weeds and how to identify them by letting them help you pull out those weeds by hand instead of using chemicals. Many people keep dogs for security as well as companionship, but dogs are surprisingly good at finding and injuring frogs. Control your dog and teach it not to attack or disturb wildlife. Many cat owners insist that their cool and aloof feline doesn't attack any wildlife, but their neighbors often witness the truth. Keep your cat indoors at night where it can keep YOU company instead of the 'locals'. Even more importantly, WORM your cat regularly. 
One of the problems the sick frogs are having is severe parasite infestation. The worst parasite is a tapeworm called Spirometra erinaceii. The immature worm can live in many different host animals but, according to researchers, it only reproduces in ONE animal: the cat. We now know that cats can kill frogs even if they never come within a car length of each other. The problem is the cat’s feces. The tapeworm breeds inside the cat and the huge number of eggs are deposited in the faeces. From there, the eggs are washed into waterways or picked up by insects which are then eaten by frogs. Be on the lookout for any frogs you may see in your yard or elsewhere which might be injured or sick. A frog with lumps, ulcers or holes in the skin, blotchy colours (when the skin is normally a solid colour), difficulty moving, sitting in the sun during the day, emaciated or bleeding needs to be examined right away. Keep them in a clean ice cream container (with a secure lid with airholes punched through it) with a small amount of water and dirt, or grass, leaves, twigs in the bottom, for the frog to sit or lie on. Do not use so much water that the frog has to swim around. They do not need that much water. It should be enough water to cover their feet a bit, but not swim in. Keep the container in a very warm place away from family pets until you can reach an expert to help. Contact the hotline @ 1-855-WILD-HELP or a local rehabber right away if you do see a frog which might have a problem. Water and shelter are crucial for frogs and the Midwestern states are losing large amounts of both. Vegetate your yard as much as possible and have a couple bird baths in shady spots so frogs have water during the dry season. Keep a compost pile in a corner of the yard to attract bugs – insects are in short supply during the dry season which causes ‘environmental stress’ which in turn causes the frog to lose its resistance to diseases. Most importantly, if a frog is uninjured and not in need of help, leave the frog alone! Do not try to relocate the frog to a ‘better’ area. Better to you might not be better to the frog. Don’t let your kids bring the frog inside to make a pet. If the kids want a pet frog, go to a pet store and get one with all of the correct supplies. Our native frogs are not meant to be pet frogs. Please don’t make them that. Tadpoles found when emptying a bird bath or pond From time to time frogs will lay their eggs in a spot that wasn’t the greatest choice. But the frog didn’t know that you were going to empty that pond or bird bath at the time, so it seemed like a good idea. If at all possible, leave them be and empty the water source after the tadpoles have become frogs and move on. This time period is extremely long though. Eggs can take from 6-12 weeks to go from egg to tadpole, then it can be an additional 6-8 weeks or 6-8 months before they become frogs. It all depends on temperature, and what kind of frogs they are going to be. If you need to empty the water source now and can’t wait out the tadpoles, you can transfer them to a aquarium or fishbowl, kiddie pool, even a plastic water tight plastic tote or bucket. Use as much of the water as possible from the original source, and don’t add any tap water. You can add tap water later if you need to, but you will have to leave the water out in the sun for 5-7 days before it is usable. The sun will eliminate the chlorine from the water, but that takes time. 
If you don’t have that much time, you can buy de-chlorinating drops at your local fish-carrying pet store. But at least leave the water out overnight, even after using the droplets. Even a little chlorine is deadly to tadpoles. Just in case, it is always a good idea to keep a little de-chlorinated water on hand. Place your new tadpole farm in a warm place away from predators like birds. A garage or shed usually works just fine. A place in the yard during spring or summer is fine as well, just make sure you provide at least 3/4 of the container in the shade. Now we have to feed them every few days. The simplest food for tadpoles is lettuce. Buy the dark green leafy kind though, not iceberg lettuce. Kale, mustard greens, romaine lettuce are all just fine. Boil the lettuce for 10-15 minutes and then drain it. Rinse with cold water and chop it up some. Lay it all flat and put in a ziploc bag in the freezer. Every day you can grab a pinch or so out of the freezer, or more depending on how many tadpoles you have. Remember though, too much food will get the water all dirty, and too little will make the tadpoles get nutty and go after each other. If your water gets dirty really fast, slow down on the feeding…and be sure to replace the dirty water with some fresh spare water. When the tadpoles start getting close to developing legs, they will need some sort of perch so they can get out of the water. Floating water lily leaves and branches are ideal, but you can also create ledges using stones or even tilting slopes of plastic in tanks. The tilt of the ledge may be important depending on what type of frog you have. Young tree frogs can climb smooth vertical surfaces such as the plastic pond liners and glass, but the ground dwelling frogs will need a rough slope when the time comes to climb out of the water. At this point, if they aren’t big enough to eat crickets but are too large to eat lettuce, you can try starting them off with small insects. A good substitute is bloodworms (live is best) which are usually found in pet stores that carry fish. You can try feeding them to the frogs by taking the lid of a jar and turning it upside down. Fill the cap with a bit of warmish water and lay a bunch of the gross wiggley worms in and usually the frogs will find them. Or you can put the worms directly into their water. Also, in addition to crickets and meal worms, in the froglet/young frog stage, aphids (super tiny little bugs) are a good food source. They are easily found on a dandelion, so just snip off a stem and place it in the water, and the tadpoles have a feast! Once your tadpoles really start looking more like frogs, and are spending a lot more time on top of the water instead of in the water, it is time to start thinking about release. If you weren’t rearing the tadpoles outdoors, you’ll need to move them outdoors now. Somewhere near a garden, or moist area of vegetation is best. Keep the area or garden well watered and well vegetated. Young frogs will need a lot of ground cover to hide. There is not much point in rearing frogs in a totally hostile environment. As long as your container is quite full with water and has plenty of floaty spots for the frogs to perch, it will be no time at all before they start jumping right out of your container. They will release themselves when they are ready. All you have to do is keep feeding and caring for them until they have all moved on. Don’t just pour it all out and force them to leave. They know when they are ready. 
Just make sure that they CAN get out when they are ready and they will leave. Congratulations! You have saved many little precious lives, and if you have kids, this can be a really rewarding educational experience to help children understand how we can help our environment and the animals that live in it. As always, if you are in need of more assistance, or just want to discuss your situation please feel free to call the Wildlife Hotline @ 1-855-WILD-HELP to speak with a wildlife specialist.
At a Glance Enjoy a visit to historic Havre De Grace, our name says it all, “Harbor of Grace”. As early as the 1620’s this area was recorded on nautical charts and in short histories about the upper Chesapeake Bay and the large Susquehanna River which in the Indian language meant “river of islands”. The large island located under the Thomas J. Hatem Bridge was part of a land grant given by King James I of England. It is named Garret Island in honor of a former president of the Baltimore and Ohio Railroad. HAVRE DE GRACE, MARYLAND 7 Day Weather Forecast During the Revolutionary War, this small hamlet was visited several times by General Lafayette. He mentioned that the area reminded him of the French seaport, Le Havre. Hence, our town derived its lovely name “Harbor of Grace”. The town was incorporated in 1785. You will experience museums devoted to celebrating our Waterman’s way of life. Those with an eye for architecture will appreciate the many fine Victorian homes here. In 1791, Havre De Grace narrowly lost out to Washington, D.C., as the nation’s capitol. As a result of that near brush with fate, you will find many streets such as Union, Congress, Washington, Lafayette, Adams, etc. that bear the names of noble revolutionary leaders and ideals. A scant few years later, during the War of 1812, the British again sailed up the Chesapeake Bay. After laying siege to Washington, D.C., burning the White House, and having been held at bay by the patriots in Baltimore, they proceeded to Havre De Grace. Most of the citizens fled in fear, but Lt. John O’Neill single-handedly defended the town. He was wounded, captured, and imprisoned on the British ship Maidstone. The town was sacked and burned, with only two houses and St. John’s Episcopal Church spared. O’Neill’s fifteen year old daughter, Matilda, pleaded with the Admiral of the Fleet for her father’s life. Admiral Cockburn was so impressed by the girl’s bravery that he released O’Neill unharmed, and rewarded Matilda by giving her his gold snuff box and sword. One of the most famous horse race tracks, the Graw, was in operation from 1912 to 1950. In its heyday, trains brought passengers direct from the surrounding metropolitan areas, and the jockeys voted it the best track in the country. Today, it is home to the Maryland National Guard. Havre De Grace is located in northeastern Maryland in Harford County at the confluence of the Susquehanna River and the Chesapeake Bay. Situated approximately 39 miles northeast of Baltimore and 45 miles south of Philadelphia, it is easily accessible from I-95, Via Exit 89, MD 155 and US 40. Havre De Grace is a wonderful place to visit. We know you’ll want to stay the weekend. Places to Stay Bed & Breakfast Inns through out the town of Havre De Grace The Currier House is located on 800 South Market Street, 1-800-827-2889/410-939-7886. Spencer Silver Mansion is located on 200 South Union Avenue. This structure and the Seneca Mansion are two large scale historic houses built as private residences. Seneca Mansion is the only “High Victorian” stone mansion in the city. It contains numerous architectural embellishments such as a two-storied bay window, a tower, four gables, a dormer and a variety of window shapes and placements. 1-800-780-1485/410-939-1097. The Vandiver Inn is located on 301 South Union Street, 1-800-245-1655 or 410-939-5200. The Old Chesapeake Hotel offers many beautiful guest suites. 
The Old Chesapeake Hotel is located on 400 North Union Avenue in Havre De Grace for more information call: 410-939-5440 or visit their web site:www.oldchesapeakehotel.com These Bed & Breakfast Inns offer excellent accommodations and are within walking distance to Antique Row. Places to Eat You will find many fine restaurants that offer fresh seafood. Ken’s Steak & Rib House provides casual dining in an elegant atmosphere. Ken’s is located on 400 North Union Avenue. For reservations call 410-939-5440. Price’s Seafood is well known for their genuine steamed crabs and is located on 654 Water Street. Call ahead for reservations 410-939-2782. Another town favorite is Coakley’s Pub located on 406 St. John Street. Where the crab cakes are famous! For reservations call 410-939-8888. You’ll also find most of the fast food restaurants (McDonalds, Burger King, Pizza Hut, etc.) located on US 40. Places to Go Havre De Grace has many places of interest: Skipjack Martha Lewis Right in Harford County lays a unique floating History Museum! That’s right the Skipjack Martha Lewis, docked in Havre de Grace, Maryland is a floating History Museum. The Martha Lewis is over 50 years old and continues to dredge for oysters today just as they did in the early 1900's. While visiting you can step back in time, learn about the rich heritage and cultural that is fading from the Chesapeake Bay at an alarming rate. With just about 12 Skipjacks remaining on the Chesapeake, the Martha Lewis is the last one to still dredge for oysters under sail. Truly the only way to fully appreciate the Maritime History of the Chesapeake Bay is to experience it from the decks of the last remaining vessel that continues to work under sail in North America. Bring your family, friends, students; walk the deck, raise the sails and make history come alive! We offer many different cruises in which you can participate. Become a part of history and support the Skipjack Martha Lewis, a floating History Museum in Havre de Grace. For information please visit our web site: www.skipjackmarthalewis.org, or call 410-939-4078 The Gallery RoCa of Fine Arts & Accessories, LLC. The cosmopolitan gallery features original oils, water media, sculpture, and other three dimensional works of fine art. When you enter the gallery RoCa you literally feel as though you've stepped into a new era of small town America and situated on the first floor of a traditional 1896 storefront with 3,100 sq. feet of original fine works of art. Also enjoy First Friday musical performances by professional classical musicians. The Gallery RoCa is located at 220 N. Washington Street in Havre de Grace. Hours for the Gallery RoCa are Monday-Saturday 11am-7PM and Sun 12PM-5PM for more information call 410-939-6182 The Decoy Museum was opened in 1986 with its collection of prized hand carved decoys and other memorabilia of “gunning on the flats”. Havre De Grace became known as the “Decoy Capitol of the World,” because so many of the master decoy carvers live, lived, studied, or were affiliated with this town through out the years. Visit the Havre De Grace Decoy Museum located on the banks of the historic Susquehanna Flats. The Decoy Museum houses one of the finest collections of decoys from the Chesapeake Bay and from around the country. The Havre De Grace Decoy Museum is open to the public seven days a week, 361 days a year, from 11:00 a.m. to 4:00 p.m. 
For information call 410-939-3739 or visit their web site: http://www.decoymuseum.com/ The Concord Point Lighthouse is one of the oldest lighthouses in continual operation on the East Coast. Located at the foot of Lafayette Street, the point where the Susquehanna River becomes the Chesapeake Bay. The lighthouse was built in 1827 with Lt. John O’Neill as the light keeper. This position was maintained by the O’Neill descendants until it was automated in recent years. This was one of eight lighthouses built to coincide with the opening of the Chesapeake and Delaware Canal linking the Chesapeake and Delaware Bays. The lighthouse was in continuous operation for over 150 years. On the water side you can see one of the cannons used in the defense of Havre De Grace on May 3, 1813. The Lighthouse has been restored and is open from April through October, Saturday and Sunday from 1:00 p.m. to 5:00 p.m. Call for special times for larger groups – 410-939-1498 or visit their web site: http://www.cheslights.org/heritage/concord.htm The Susquehanna Museum of Havre De Grace is a restored 1836 lock house on the Susquehanna River. This building served as a home for the lock tender and a canal office for collecting tolls for vessels headed north toward Pennsylvania. The museum offers a display of Havre De Grace History and is located on Erie and Conesto Streets. Open from May through October, Friday, Saturday, and Sunday from 1:00 p.m. to 5:00 p.m. Call for more information for group tours at 410-939-5780. For the golfer there is Bulle Rock’s 18 hole world class Pete Dye golf course open to the public and hosting The McDonalds LPGA Championship presented by Coca Cola Tournament dates for 2006: June 5-11 for more information call 410-939-8465 Website: www.harfordgolf.com The Lantern Queen River City Trading, LLC presents The Lantern Queen. Cruise and dine on an authentic paddle boat. The Lantern Queen was built in LaCross, Wisconsin in 1983 by LaCross Boat Works, one of sixteen similar vessels. The boat was named the Far West. The Far West traveled up the Missouri River to Yankton, South Dakota. It operated for ten years as a dinner cruise vessel. In 1994 it was purchased by a gentleman that took if to Englewood, Florida and renamed her "The Lantern Queen" to operate with his restaurant called The Ships Lantern. In 1996 The Lantern Queen sailed to Philadelphia arriving in June. On the 1st of July the Queen got hung up on a pile and sunk at Penns Landing in Philadelphia. Captain Jack Morey was the salvage master and raised The Lantern Queen. In 2007 The Lantern Queen was renovated by River City Trading, LLC and returned to Havre de Grace, where she is cruising year round. For more information and reservations call 410-939-1468.Several boat marinas and five city waterfront parks dot the shoreline of Havre De Grace. At Tydings Park at the foot of Union Avenue, the view is spectacular. Please stroll the streets, visit our many unique shops, meet the friendly folks here, and enjoy your visit in the “City by the Bay”. Early (1780 to 1830) This period represents the era of a sleepy fishing village. This group contains only one structure, Rogers House, but several others are strongly suspect. The British burning of Havre De Grace in 1813 caused severe damage to 60% of the existing houses. The Canal Era (1830-1850) This period when Havre De Grace flourished economically provides one of the most interesting collections of structures in the district. 
Dozens of structures still remain that were built in this period, which marked the coming of the railroad, the beginning of the modern northeast corridor, and the completion of the new Susquehanna and Tidewater Canal on the western shore of the river. Although some new industry began to arise in the city during this period, the demise of canals and the lingering effects of the Civil War undoubtedly cooled the activity of the previous period. However, a sufficiently large number of structures have survived. The late industrial age brought a resurgence of prosperity to what was now, officially, a city. One of the major businesses of the time was the sawing of ice every winter out of the Susquehanna for icehouses all over the region. There is a series of interesting photographs of this antique activity in the Lock House Museum. Another major business of the city was commercial fishing. Prior to the completion of the Conowingo Dam in 1928, Havre De Grace was known as the "Shad Capital of the World." The recently constructed fish ladders at the dam are part of a long-overdue attempt to restore this fishery.

The Thomas Hopkins House (1893) and the Harrison Hopkins House (1868)
On the SE corner of Union Avenue and Green Street stands the elaborate, two-chimney Thomas Hopkins (of "Johns Hopkins Hospital" fame) House. Directly across the street (on the SW corner) is Harrison Hopkins's home. It has been noted that the design of 226 N. Union is "an example of the highly eclectic, even eccentric styles that became popular after the Civil War."

St. John's Episcopal Church (1809)
Located at the intersection of Congress and Union, this is the city's oldest church and one of the oldest surviving structures in the city. The building is remarkable for its Flemish bond brick walls, its well-executed round-arched windows, and its simple, early 19th-century appearance.

The O'Neill House (1865)
Located on Washington Street, portions of the structure appear to date back to as early as 1814. The structure has considerable historic significance to Havre De Grace, since the property was in the O'Neill family for 158 years. John O'Neill, the original owner, is known as "the defender of Havre De Grace" for his solitary attempt to thwart the British attack on the town in 1813.

Havre De Grace is visitor friendly, and getting around town is quite simple. Visit any business in town and pick up a free copy of "Havre de Grace Magazine" (formerly "Lockhouse to Lighthouse …and all 'round town"), a day tripper's guide to Havre De Grace, MD; it will help you to find the events of the season and also includes a detailed map of Havre de Grace. Take time to visit Discover Harford County Tourism for more information on events and happenings in the area. Website: www.harfordmd.com. For information call: 1-800-597-2649, 410-272-2325, or 410-575-7278.

Need more? Try these links for additional information about Havre De Grace, Maryland:
The I-95 Exit Information Guide ("Flat out, the single best website for auto travelers on the Net," Yahoo's Internet Life Magazine)
Havre De Grace: Unique on the Chesapeake! (the Havre de Grace Office of Tourism & Visitor Center's web site)
Discover Harford County (Harford County Visitors Guide)
The New York Times, April 9, 2021

Evidence is mounting that a tiny subatomic particle seems to be disobeying the known laws of physics, scientists announced Wednesday, a finding that would open a vast and tantalizing hole in our understanding of the universe. The result, physicists say, suggests that there are forms of matter and energy vital to the nature and evolution of the cosmos that are not yet known to science. "This is our Mars rover landing moment," said Chris Polly, a physicist at the Fermi National Accelerator Laboratory, or Fermilab, in Batavia, Illinois, who has been working toward this finding for most of his career. The particle célèbre is the muon, which is akin to an electron but far heavier and is an integral element of the cosmos. Polly and his colleagues — an international team of 200 physicists from seven countries — found that muons did not behave as predicted when shot through an intense magnetic field at Fermilab. The aberrant behavior poses a firm challenge to the Standard Model, the suite of equations that enumerates the fundamental particles in the universe (17, at last count) and how they interact. "This is strong evidence that the muon is sensitive to something that is not in our best theory," said Renee Fatemi, a physicist at the University of Kentucky. The results, the first from an experiment called Muon g-2, agreed with similar experiments at the Brookhaven National Laboratory in 2001 that have teased physicists ever since. At a virtual seminar and news conference Wednesday, Polly pointed to a graph displaying white space where the Fermilab findings deviated from the theoretical prediction. "We can say with fairly high confidence, there must be something contributing to this white space," he said. "What monsters might be lurking there?" "Today is an extraordinary day, long awaited not only by us but by the whole international physics community," Graziano Venanzoni, a spokesperson for the collaboration and a physicist at the Italian National Institute for Nuclear Physics, said in a statement issued by Fermilab. The results are also being published in a set of papers submitted to several peer-reviewed journals. The measurements have about one chance in 40,000 of being a fluke, the scientists reported, well short of the gold standard needed to claim an official discovery by physics standards. Promising signals disappear all the time in science, but more data are on the way. Wednesday's results represent only 6% of the total data the muon experiment is expected to garner in the coming years. For decades, physicists have relied on and have been bound by the Standard Model, which successfully explains the results of high-energy particle experiments in places like CERN's Large Hadron Collider. But the model leaves many deep questions about the universe unanswered. Most physicists believe that a rich trove of new physics waits to be found, if only they could see deeper and further. The additional data from the Fermilab experiment could provide a major boost to scientists eager to build the next generation of expensive particle accelerators. It might also lead, in time, to explanations for the kinds of cosmic mysteries that have long preoccupied our lonely species. What exactly is dark matter, the unseen stuff that astronomers say makes up one-quarter of the universe by mass? Indeed, why is there matter in the universe at all? On Twitter, physicists responded to Wednesday's announcement with a mixture of enthusiasm and caution.
"Of course the possibility exists that it's new physics," Sabine Hossenfelder, a physicist at the Frankfurt Institute for Advanced Study, said. "But I wouldn't bet on it." Marcela Carena, head of theoretical physics at Fermilab, who was not part of the experiment, said, "I'm very excited. I feel like this tiny wobble may shake the foundations of what we thought we knew." Muons are an unlikely particle to hold center stage in physics. Sometimes called "fat electrons," they resemble the familiar elementary particles that power our batteries, lights and computers and whiz around the nuclei of atoms; they have a negative electrical charge, and they have a property called spin, which makes them behave like tiny magnets. But they are 207 times as massive as their better-known cousins. They are also unstable, decaying radioactively into electrons and superlightweight particles called neutrinos in 2.2 millionths of a second. What part muons play in the overall pattern of the cosmos is still a puzzle. Muons owe their current fame to a quirk of quantum mechanics, the nonintuitive rules that underlie the atomic realm. Among other things, quantum theory holds that empty space is not really empty but is in fact boiling with "virtual" particles that flit in and out of existence. "You might think that it's possible for a particle to be alone in the world," Polly said in a biographical statement posted by Fermilab. "But in fact, it's not lonely at all. Because of the quantum world, we know every particle is surrounded by an entourage of other particles." This entourage influences the behavior of existing particles, including a property of the muon called its magnetic moment, represented in equations by a factor called g. According to a formula derived in 1928 by Paul Dirac, the English theoretical physicist and a founder of quantum theory, the g factor of a lone muon should be 2. But muons are not alone, so the formula must be corrected for the quantum buzz arising from all the other potential particles in the universe. That leads the factor g for the muon to be more than 2, hence the name of the experiment: Muon g-2. The extent to which g-2 deviates from theoretical predictions is one indication of how much is still unknown about the universe — how many monsters, as Polly put it, are lurking in the dark for physicists to discover. In 1998 physicists at Brookhaven, including Polly, who was then a graduate student, set out to explore this cosmic ignorance by actually measuring g-2 and comparing it to predictions. In the experiment, an accelerator called the Alternating Gradient Synchrotron created beams of muons and sent them into a 50-foot-wide storage ring, a giant racetrack controlled by superconducting magnets. The value of g they obtained disagreed with the Standard Model's prediction by enough to excite the imaginations of physicists — but without enough certainty to claim a solid discovery. Moreover, experts could not agree on the Standard Model's exact prediction, further muddying hopeful waters. Lacking money to redo the experiment, Brookhaven retired the 50-foot muon storage ring in 2001. The universe was left hanging.

The Big Move

At Fermilab, a new campus devoted to studying muons was being built. "That opened up a world of possibility," Polly recalled in his biographical article. By this time, Polly was working at Fermilab; he urged the lab to redo the g-2 experiment there. They put him in charge. To conduct the experiment, however, they needed the 50-foot magnet racetrack from Brookhaven.
And so in 2013, the magnet went on a 3,200-mile odyssey, mostly by barge, down the Eastern Seaboard, around Florida and up the Mississippi River, then by truck across Illinois to Batavia, home of Fermilab. The magnet resembled a flying saucer, and it drew attention as it was driven south across Long Island at 10 mph. "I walked along and talked to people about the science we were doing," Polly wrote. "It stayed over one night in a Costco parking lot. Well over a thousand people came out to see it and hear about the science." The experiment started up in 2018 with a more intense muon beam and the goal of compiling 20 times as much data as the Brookhaven version. Meanwhile, in 2020 a group of 170 experts known as the Muon g-2 Theory Initiative published a new consensus theoretical value of the muon's magnetic moment, based on three years of workshops and calculations using the Standard Model. That answer reinforced the original discrepancy reported by Brookhaven.

Into the Dark

The team had to accommodate another wrinkle. To avoid human bias — and to prevent any fudging — the experimenters engaged in a practice, called blinding, that is common to big experiments. In this case, the master clock that keeps track of the muons' wobble had been set to a rate unknown to the researchers. The figure was sealed in envelopes locked in the offices at Fermilab and the University of Washington in Seattle. In a ceremony Feb. 25 that was recorded on video and watched around the world on Zoom, Polly opened the Fermilab envelope, and David Hertzog from the University of Washington opened the Seattle envelope. The number inside was entered into a spreadsheet, providing a key to all the data, and the result popped out to a chorus of wows. "That really led to a really exciting moment, because nobody on the collaboration knew the answer until the same moment," said Saskia Charity, a Fermilab postdoctoral fellow who has been working remotely from Liverpool, England, during the pandemic. There was pride that they had managed to perform such a hard measurement and then joy that the results matched those from Brookhaven. "This seems to be a confirmation that Brookhaven was not a fluke," Carena, the theorist, said. "They have a real chance to break the Standard Model." Physicists say the anomaly has given them ideas for how to search for new particles. Among them are particles lightweight enough to be within the grasp of the Large Hadron Collider or its projected successor. Indeed, some might already have been recorded but are so rare that they have not yet emerged from the blizzard of data recorded by the instrument. Another candidate called the Z-prime could shed light on some puzzles in the Big Bang, according to Gordan Krnjaic, a cosmologist at Fermilab. The g-2 result, he said in an email, could set the agenda for physics in the next generation. "If the central value of the observed anomaly stays fixed, the new particles can't hide forever," he said. "We will learn a great deal more about fundamental physics going forward." Dennis Overbye. c. 2021 The New York Times Company
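A short editorial aside on two numbers quoted in the article above: the "one chance in 40,000" figure and the discovery "gold standard" can be translated into the sigma values physicists use, and the statement that g is "more than 2" can be illustrated with the leading quantum correction, the Schwinger term alpha/(2*pi). The sketch below is an illustration only, not part of the Times article or of the experiment's analysis; the one-sided Gaussian tail convention and the truncation to the leading-order term are simplifying assumptions.

```python
# Illustrative aside (not from the article): put the quoted probabilities and
# the "g is more than 2" statement into rough numbers.
import math
from scipy.stats import norm

# 1) Statistical significance: convert tail probabilities to sigma,
#    using the one-sided Gaussian convention common in particle physics.
p_reported = 1 / 40_000          # the reported chance the result is a fluke
p_discovery = 2.87e-7            # tail probability of the 5-sigma "gold standard"
print(f"1 in 40,000 corresponds to about {norm.isf(p_reported):.1f} sigma")
print(f"5 sigma corresponds to about 1 in {1 / p_discovery:,.0f}")

# 2) Why g exceeds Dirac's value of 2: the leading (Schwinger) correction to the
#    anomaly a = (g - 2) / 2 is alpha / (2*pi); higher-order terms are omitted here.
alpha = 1 / 137.035999           # fine-structure constant (approximate)
a_leading = alpha / (2 * math.pi)
print(f"leading-order anomaly a is about {a_leading:.6f}, so g is about {2 * (1 + a_leading):.6f}")
```

The experiment's significance rests on the tiny difference between the measured and the predicted anomaly, evaluated far beyond this leading term.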
The Delaware River Port Authority (DRPA) was created nearly one hundred years ago as a bi-state commission for the purpose of building a single toll bridge. By the 1930s regional leaders had started to envision a larger maritime role for their new agency, but efforts to broaden its powers to include port operations were repeatedly thwarted. The DRPA continued to grow into a major regional transportation agency, making major investments in infrastructure and gaining significant expertise in bridge and commuter rail operations. A 1992 compact amendment gave the DRPA two important new mandates–port unification and economic development–but despite the best intentions of policy makers, the implementation of both mandates proved to be difficult. The agency plunged confidently into economic development with mixed and sometimes controversial results, while the goal of a unified port proved to be a reach too far.

Regional leaders in the Delaware Valley began discussing the idea of building a bridge that would span the Delaware River and connect Philadelphia and Camden as far back as the early 1800s. Their vision took nearly a century to realize, but finally, in 1912 and 1917, the New Jersey and Pennsylvania legislatures created a pair of commissions for the purpose of jointly building, operating, and owning a single toll bridge. Construction on what would briefly become the world's longest suspension bridge started in 1922, and in 1926 the Delaware River Bridge–renamed the Benjamin Franklin Bridge in 1956–was opened to traffic. The 1931 creation of the Delaware River Joint Commission formalized the agreement between the two states while expanding the scope of operations of their new agency. The commission would now be responsible for planning and providing future bridges and passenger rail service across the river and for promoting passenger and freight commerce on the Delaware River, as "a highway of commerce between Philadelphia and Camden and the sea." With this last duty, legislators intended to give their new bi-state agency the responsibility for unifying the region's fragmented system of ports. Five times between 1931 and 1952, legislators amended the commission's enabling act in efforts to expand its duties to include port operations. The draft legislation for each amendment granted the commission the power to acquire, build, own, and operate maritime cargo facilities, but every time powerful private interests successfully lobbied to curtail these powers and weaken the final legislation. In 1935 the United States government, through an act of Congress, recognized the agreement between the states as a federal "compact," and the 1952 compact amendment optimistically re-christened the organization the "Delaware River Port Authority." As with previous amendments, however, the power to acquire port facilities was stripped from the final legislation.

Walt Whitman Bridge Opens

Following the 1952 compact amendment, planning for bridges and commuter rail continued, and in 1957 the Walt Whitman Bridge opened to the south of the Ben Franklin Bridge. The Port Authority Transit Corporation or "PATCO Speedline" trains began to operate in 1969, providing a commuter rail line running from the southern New Jersey suburbs across the Benjamin Franklin Bridge and into Center City Philadelphia. The Commodore Barry Bridge opened in 1974, south of the Walt Whitman Bridge, and a final bridge, the Betsy Ross, opened in 1976, to the north of the Benjamin Franklin Bridge.
In 1992, the compact was amended one more time, finally making port unification a true mandate of the DRPA by granting it the power to acquire the two state-chartered port agencies on the Delaware River–the Philadelphia Regional Port Authority and the South Jersey Port Corporation. But over the next several years, entrenched interests at the two public port authorities succeeded in blocking the DRPA's acquisition plans, and by 1998 port reunification was dead. The 1992 amendment, however, also granted the DRPA the power to engage in "economic development," broadly defined. After the failure of port reunification, this power–which was added as an afterthought–became central to the DRPA's mission.

By 2013, nearly a century after its creation, the DRPA was still a port authority in name only. Maritime operations included a seasonal cruise terminal, a small multimodal cargo yard, and a ferry service, but otherwise the DRPA remained almost entirely a bridge and commuter rail operation. Together, the four toll bridges and the PATCO Speedline served as an integrated transportation system that carried workers from the New Jersey suburbs to their jobs in Philadelphia in the morning and back to their homes at night. In 2010, the DRPA earned $275 million in operating revenue and incurred expenses of $202 million for a net operating income of $73 million. More important, a full ninety percent of operating revenue came from bridge tolls. An additional nine percent came from PATCO fares, while maritime operations accounted for a scant one-tenth of one percent of operating revenue.

Tolls as Revenue Source

Like most governments, the DRPA effectively operates as a monopoly, so its bridge tolls are both a large and dependable revenue source and its bonds are a low-risk investment, together giving the agency enormous borrowing power. In 2010 the DRPA had about $1.4 billion in outstanding debt, backed primarily by future toll revenues. More important, periodic toll increases had a big effect on the agency's borrowing capacity, and the unique institutional character of the DRPA as a bi-state public authority influenced the allocation of funds that flowed from these increases. Unlike a municipality that provides a wide variety of tax-funded services and facilities, the DRPA has operated as a "public authority," a specific type of government created by legislators to provide a single service or facility, such as a highway, airport, maritime cargo port, or bridge. Authorities typically pay for these facilities with proceeds from the sale of "revenue bonds" that are backed by future rents, tolls, or other charges that will be paid by the people who use the facility. Because authorities typically do not rely on tax revenues, they can skirt public review of their projects, unlike municipalities that must seek voter approval of bonding bills to avoid claims of taxation without representation. Authorities are governed by appointed boards or commissions rather than by elected officials, so their leadership typically is less sensitive to political pressures. Together, appointed leadership and the lack of a need to seek voter approval for projects make authorities more politically insulated than other units of government. But the DRPA is not just any authority; rather, in 2013 it was one of only three in the United States that enjoyed a "bi-state" jurisdiction, with a service area of 5,840 square miles that included five counties in southeastern Pennsylvania and eight counties in southern New Jersey.
A commission of sixteen leads the DRPA, eight commissioners from each state, all but two appointed by the two governors. This division of the commission into two equal delegations ensures an unusual level of internal stability, but it also virtually guarantees ongoing stress between the two delegations that stems from their different views of the DRPA's purpose and duties. These differences of opinion reflect the contrasting geographies and constituencies that the two delegations represent: suburban New Jersey bedroom communities filled with workers interested in a cheap commute to the city, and densely developed Philadelphia, whose politicians have long thought that low tolls and fares promote the flight of businesses and residents to the suburbs. Prior to commission meetings, the two delegations meet separately, in closed "executive" session, where they conduct most of their business out of the public's eye. Thus the public meetings of the full commission usually lack controversy or meaningful debate and serve instead as perfunctory events where agreements hashed out in private are merely finalized in public. The same process has typically been used to make spending decisions.

Borrowing Power Grows With Tolls

Because the governors of the two states are term-limited and gubernatorial politics are central to raising tolls and fares, the DRPA usually does so only every eight years. But this puts stress on the agency's operating budget because revenues remain flat year over year while annual operating costs continue to increase because of inflation. More important, new debt cannot be issued until there is a new source of future revenues to back it. When tolls do go up, however, the impact on borrowing can be huge. As a rule of thumb, each of the four times between 1992 and 2012 that the DRPA raised tolls on its four bridges by one dollar, the agency's bonding capacity–the amount it could borrow–rose by about a half billion dollars. That meant that the commission's next job was to decide how and where to spend about $500 million on economic development projects. The method was simple. The two delegations agreed to split the funds equally, half going to either side of the river. Then each delegation proposed a list of projects and values that added up to half the total amount of funds, and the commission voted and approved all of the projects.

Between 1992 and 2001 alone the DRPA's debt grew more than fivefold, from $250 million to $1.4 billion. During this period approximately $443 million in toll-backed proceeds from the sale of "Port Project Development Bonds" (PPDBs) were invested in economic development projects worth a total of $4.4 billion. Some of these funds were spent on job creation projects, including location subsidies to lure private shipbuilder Kvaerner to the Philadelphia Naval Shipyard after it closed in 1995, patents for "FastShip" technology, and a charter school and expansion projects for several private manufacturing companies in Camden. Other funds were spent on waterfront redevelopment projects on both sides of the river, including a ballpark, aquarium expansion, and a new DRPA headquarters building on the New Jersey riverfront; a new performing arts center, a new museum dedicated to the U.S. Constitution, improvements to a science museum, and an aborted riverfront entertainment center on the Philadelphia side; and an aerial tram connecting the two cities across the river that remained uncompleted over a decade after its foundations were poured in 2000 at a cost of $10 million.
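The rule of thumb quoted above, that a one-dollar toll increase supports roughly half a billion dollars of new borrowing, is essentially an annuity calculation: the extra annual toll revenue is capitalized over the life of the bonds. The figures in the sketch below (traffic volume, coverage ratio, interest rate, and term) are hypothetical round numbers chosen only to show the arithmetic; they are not DRPA data.

```python
# Hypothetical sketch of how a $1 toll increase translates into bonding capacity.
# Every input below is an assumption for illustration, not an actual DRPA figure.
annual_tolled_crossings = 50_000_000   # assumed tolled crossings per year across four bridges
toll_increase = 1.00                   # dollars per crossing
coverage_ratio = 1.5                   # assumed debt-service coverage required by bondholders
rate = 0.05                            # assumed bond interest rate
years = 30                             # assumed bond term

extra_revenue = annual_tolled_crossings * toll_increase
pledgeable = extra_revenue / coverage_ratio
# Present value of a level annuity: PV = C * (1 - (1 + r)^-n) / r
capacity = pledgeable * (1 - (1 + rate) ** -years) / rate
print(f"Extra annual toll revenue: ${extra_revenue:,.0f}")
print(f"Supportable new debt:      ${capacity:,.0f}")   # on the order of $500 million
```

Under these made-up assumptions the answer lands near the half-billion-dollar figure; different traffic counts, rates, or coverage requirements would move it up or down.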
Subsidizing a Shipbuilder

Another example is the DRPA's $50 million contribution–and one of its largest grants–to the nearly half billion dollars in government subsidies to Kvaerner. Despite being politically popular with Democrats and labor unions in Philadelphia, Kvaerner was used as an example in a 1998 Time Magazine cover story about corporate welfare, in which the authors calculated that each new job at the Philadelphia Naval Shipyard cost $323,000 in subsidies to create. In a 2000 performance audit, Democratic State Auditor General Robert Casey found that the location subsidies provided to Kvaerner by the state and other agencies grossly over-subsidized Kvaerner, which made only a relatively modest investment in the facility and bore very little risk.

The DRPA's challenges with economic development projects stem from a simple but important disconnect: because the bridges generate substantial surpluses, the DRPA has been able to provide financial support in the form of loans, forgivable loans, and grants to projects that did not need to perform economically because their debt was backed by bridge tolls rather than by project income. This presented a two-edged sword: while it allowed for subsidizing worthy projects and initiatives, it also made it easier for commissioners to allocate money to questionable projects such as the tram and Kvaerner, because the funds would be repaid whether or not the project was economically successful. At the policy level, the DRPA's spending decisions raised a recurring equity question, as the commuters who were required to pay higher and higher tolls and fares saw their money spent on facilities and programs that they might never use or benefit from and that were completely unrelated to the bridges. The lack of adequate due diligence and controls to assure sound economic performance left the door open for poorly vetted pet projects that could and sometimes did embarrass the agency, cause needless distraction, and waste money that could have been spent on more worthwhile endeavors.

Infusions of huge amounts of cash every four or eight years, the lack of a connection between project funding and economic performance, and the perception of increased political insulation on the part of commissioners have also led to instances of inside dealing, cronyism, graft, and corruption. A remarkable case was that of DRPA Commissioner and Pennsylvania State Senator Vincent Fumo, who created a $40 million economic development fund fueled by toll revenues and directed at initiatives on the Pennsylvania side of the River. The supposed purpose of the fund was to offset disparities in spending between the two states resulting from cheap fares and tolls that benefited New Jersey. Instead, Fumo and a handful of close associates quietly spent all of the funds in 1999 with little oversight and on projects of direct personal interest to the senator. In 2009, Fumo was convicted in federal court of 137 charges of corruption and sentenced to 55 months in prison for the illicit spending of public funds, including those of the DRPA.

By 2013, the DRPA had become a large and mature regional transportation agency successfully serving a densely populated region, but the divisions between its constituencies, governance structure, and bi-state jurisdiction promoted conflicting ideas about its purposes.
With its future as a port authority out of the question, the DRPA’s dependable stream of toll revenues continued to support the new economic development powers that it was still learning to wield. Together, these unique tensions, powers, and limits continued to influence the operations, investment decisions, and evolution of DRPA nearly one hundred years after its creation. Peter Hendee Brown is an architect, planner, and urban development consultant based in the Twin Cities. He teaches private sector real estate development at the University of Minnesota and is the author of America’s Waterfront Revival: Port Authorities and Urban Redevelopment. Before moving to Minneapolis in 2003, he lived for seventeen years in Philadelphia, where he practiced architecture and worked in Philadelphia city government, serving in the administration of Mayor Edward G. Rendell. (Information current at date of publication.) Copyright 2013, Rutgers University Barlett, Donald L. and James B. Steele. “Corporate Welfare.” Time, November 9, 1998. Brown, Peter Hendee. America’s Waterfront Revival: Port Authorities and Urban Redevelopment. Philadelphia: The University of Pennsylvania Press, 2009. Casey, Robert P., Auditor General, Commonwealth of Pennsylvania. Performance Audit of Commonwealth Spending For the Kvaerner-Philadelphia Naval Shipyard Project Through January 14, 2000. Harrisburg, Pa., August 9, 2000. Delaware River Port Authority. Delaware River Port Authority of Pennsylvania and New Jersey; Comprehensive Annual Financial Report for the Year Ended December 31, 2011. Camden, N.J.: The Delaware River Port Authority, 2012. Places to Visit Benjamin Franklin Bridge, I-676 between Philadelphia and Camden, N.J. Delaware River Port Authority, 2 Riverside Drive, Camden, N.J.
Electrical properties of rocks and the geoelectrical resistivity method are discussed in this chapter, in which the results of an electrical survey over the sedimentary terrain of the central zone of Panama (Central America) are presented. This study therefore includes (i) a petrophysical study with the aim of relating electrical resistivity values to volumetric water contents, (ii) an electrical resistivity imaging (2D inversion), and (iii) an electrical sounding (1D inversion) for detecting the water table and its corresponding stratigraphy and variation with time. Two datasets for these last methods have been developed with the aim of monitoring the percentage changes in model resistivity. Petrophysical tests show good fits between resistivity and volumetric water content and known parameters for rocks and soils. 1D and 2D inversions show significant agreement with the stratigraphic information obtained from a borehole and strong changes caused by the rainy season in this tropical zone.

Keywords: electrical sounding, electrical resistivity imaging, sedimentary rocks, geophysical inversion, time-lapse imaging

In geophysical studies, the resistivity method can be used in fault zone detection and stratigraphic characterization, in hydrology for tracing water transport during a given period of irrigation studies, and for archeological and agricultural purposes. Resistivity is controlled by water content, soil texture and its geochemical properties, lithology, organic matter content, and thermodynamic parameters. The electrical properties of the materials that make up part of the outermost layers of the crust can be studied either electrically or electromagnetically from the response produced by the flow of electrical current in the subsurface. Geoelectrical methods take into account these electrical and electromagnetic aspects, whose physical parameters, such as electrical current, electrical potential, and electromagnetic fields, can be measured naturally or artificially. In 1830 a self-potential method based on the natural electrical response of the subsurface was used. In that work, low-intensity electrical currents generated by some minerals were identified. Later, this methodology underwent certain changes in terms of using a natural source, and Schlumberger, during the second decade of the last century, decided to use artificial sources by injecting electrical current into the subsoil.

The electrical resistivity of rocks is a physical property characterized by very large variations in its values; most rocks and soils can be classified as highly resistive or insulating, and only metallic minerals and some of their salts can be classified as conductors. There are three ways in which electrical current can propagate through the subsurface: ohmic or electronic, electrolytic, and dielectric. The first is related to the normal type of flow of charges through materials with free electrons, such as metals. For electrolytic conduction, almost all soils and rocks have pores that could be saturated with water; thus, for those types of soils and rocks that have high ranges of electrical resistivity, the circulation of electrical current is carried out exclusively through electrolytic conduction due to the presence of water contained in the pores and fissures of the material. This means that the value of the electrical resistivity depends on the concentration, degree of dissociation, and mobility of ions.
Electrolytic conduction is produced by the slow movement of the ions within the electrolyte; therefore, the rocks are electrolytic conductors where the flow of electrical charge occurs through the conduction of ions. Dielectric conduction occurs only in materials with high electrical resistivity (insulators). In this class of materials, the electrons can experience a slight displacement with respect to their atomic nucleus in the presence of a variable external electric field.

Geoelectrical methods include a wide variety of techniques that are adapted to the objectives of the investigation, the dimensions and topography of the area of interest, and the electrical properties of the soil and rocks that make up the study area, and whether these properties undergo large variations. Techniques such as self-potential, telluric and magnetotelluric methods, electrical resistivity (which we will deal with in more detail in this chapter), electromagnetism, and induced polarization allow a rapid measurement of the electrical properties of the soil, such as electrical resistivity or its inverse, electrical conductivity. These noninvasive techniques essentially involve the interpretation of these physical parameters of the soil, which quantify the degree of difficulty or ease with which a certain volume of soil responds to the passing of electric charges; for more details about these methods, see [1, 3, 4, 5, 6, 7].

The electrical resistivity method is one of the most common geoelectrical methods for the prior evaluation of soil in civil, environmental, archeological, geological, and agricultural projects. Its noninvasive nature and rapid data acquisition make this method an inexpensive and effective tool in the detailed evaluation of soil. The determination of the geochemical and geophysical properties of soil is therefore essential to the development of civil and agricultural engineering projects. In archeology, for example, the resistivity method constitutes an additional tool of remarkable value when evaluating in advance the presence and/or absence of buried archeological features, thus optimizing resources and time spent in the field, with significant economic impact. Conventional methods of soil analysis directly affect the soil because the samples must be taken and analyzed in a laboratory. Geoelectrical methods have been used extensively in groundwater studies and stratigraphic characterization. Several authors have carried out laboratory studies of samples using petrophysical relationships in which the volumetric water content is obtained from measurements of dry bulk density and gravimetric water content; see, for example, [9, 10] in leachate recirculation studies, [11, 12] for root-zone moisture interactions and watershed characterization, and in rainfall simulations. This chapter gives a short description of the electrical properties of rocks, the basic principles of the geoelectrical resistivity method, and a case study of sedimentary rocks of the central zone of Panama (Central America) that includes petrophysical soil analysis and 1D and 2D inversion methodology.
This study has been developed with two aims: (i) to obtain a relationship between electrical resistivity and volumetric water content and its correlation with the empirical equation of Archie's law, and (ii) to define 1D and 2D electrical models for two datasets obtained in different seasons (dry and rainy), relate the results to the stratigraphy, and monitor the percentage changes of the calculated resistivity values.

2. Study area and geology

The study area is located in an open test zone of the extension of the Technological University of Panama, 19 km east-northeast of Panama City in the central zone of the Isthmus of Panama, Central America; see Figure 1(a). Panama has a tropical maritime climate with a hot, humid rainy season (May to December) and a short dry season (January to May). The transition from the end of the dry season to the beginning of the rainy season is linked with the disappearance of the trade winds. According to [15, 16, 17, 18], the study area is characterized by a dense sequence of sediments and volcanic rocks. The site is influenced by the geological elements of the Panama Formation (marine facies) of early to late Oligocene age; these elements consist of tuffaceous sandstone, tuffaceous siltstone, and algal and foraminiferal limestone. Figure 1(b) shows the geological map of the study area and environs.

3.1. Site layout and profile

To obtain a distribution of electrical resistivity values in the lateral and vertical directions, and its variation over a period of three and a half months, we defined a North-South profile 47 m long; this profile passes over a borehole drilled in 2011, equipped with a piezometer to monitor groundwater dynamics linked with the dry and wet seasons. Figure 2(a) and (b) show the area with the profile, electrical sounding, and borehole positions, and Figure 2(c) shows a geotechnical scheme of the borehole.

3.2. Petrophysical relationship

A total of five soil samples were collected from the site to a depth of 20 cm. To obtain a relationship between resistivity and volumetric water content, we used the ASTM standard G57-06, where the samples are homogenized inside a box of insulating material, as shown in Figure 3. In this box, two metal plates of equal surface (S) are placed; we connected these plates to the current source of the resistivity meter; see Figure 3. On the surface of the soil sample, two metal pins are inserted, separated by a fixed distance.

3.3. Electrical sounding and 2D electrical resistivity imaging acquisition and processing

The electrical resistivity methods generate three-dimensional patterns of electric current and electric potential flow within the subsurface. In the case of two electrodes inserted in the surface of a homogeneous and isotropic half-space and separated by a short distance, it is possible to see a symmetrical pattern in the equipotential lines and in the electric flow lines; this means that at any point in the vicinity of the system, the electrical potential is affected by both current electrodes. In the case of an inhomogeneous medium, the measurements of the electrical resistivity of the subsurface tend to change when the set of four electrodes, or quadrupole, is moved along a profile. Another important aspect is that the value of electrical resistivity obtained in this way depends on the geometrical configuration of the electrodes and not on the intensity of the electric current.
Therefore, the value obtained corresponds to a kind of average resistivity of the subsurface, from which we obtain the apparent electrical resistivity (ρa).

For each of the linear arrays of Figure 4, the record of the apparent electrical resistivity value of the subsoil is taken at the center of the internal electrodes; the measurement point is located at the center of the four electrodes. These quadrupole arrays allow the development of several modalities, which are closely related to the objectives of the research. In this work we used electrical sounding and 2D electrical resistivity imaging.

The first method consists of keeping the position of the potential electrodes fixed (1 m apart for this study) and moving the current electrodes outward in steps of 1 m. This procedure, illustrated in Figure 5(a), allows defining a tabular model of the subsurface based on the geometrical distribution of the strata that have different electrical properties. The apparent resistivity value corresponding to each distance AB/2 is plotted logarithmically, resulting in a sounding curve; subsequently, by solving the 1D inverse problem, this dataset is fitted to a curve that corresponds to a number of layers with their respective calculated electrical resistivities and thicknesses. The aim of the inverse problem is to reconstruct a model from the apparent electrical resistivity values. Two resistivity datasets were collected using a Schlumberger electrode configuration, on the 16th of February, 2012, and the 31st of May, 2012.

The second method consists of obtaining a high-resolution 2D image of the distribution of electrical resistivity both laterally and vertically. The process consists of obtaining a set of apparent electrical resistivity values through a finite number of electrodes aligned along a profile with a constant distance between them (1 m for this study). The data are obtained by varying the distances between the pairs of transmitter-receiver electrodes by multiples of the unit electrode spacing with a computer-controlled multielectrode system. Figure 5(b) shows the electrode locations along the profile and the measured points. Measurements (for electrical sounding and 2D electrical resistivity imaging) were performed with a Syscal R1 Switch-48 (IRIS Instruments), in a simple mode for the first and a multielectrode mode for the second. Regarding the acquisition settings, the maximum allowed standard deviation of the measurement was fixed at 1%; the minimum and maximum number of stacks per measurement and the current injection time per cycle were fixed at 3, 6, and 500 ms, respectively.

To obtain a realistic 2D image of the electrical resistivity distribution in the soil, we used a cell-based inversion method; this method subdivides the subsurface into a number of rectangular cells whose positions and sizes can be fixed. The aim is to use an inversion algorithm to calculate the electrical resistivity of the cells that provides a model response that agrees with the apparent electrical resistivity values obtained in the field. In this study we used the regularized least-squares optimization method [20, 21, 22]. This optimization method has two different constraints: the smoothness-constrained method and the robust method; the first is used when the subsurface exhibits a smooth variation in resistivity distribution and the second in regions that are piecewise constant and separated by sharp boundaries [20, 24].
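The two conversions behind the measurements described in Sections 3.2 and 3.3, from a soil-box reading to resistivity and from a four-electrode field reading to apparent resistivity, can be sketched as follows. This is a minimal illustration under the stated assumptions: the box dimensions, electrode positions, and voltage/current readings are placeholder values, not data from this survey.

```python
# Minimal sketch of the two resistivity conversions used in Sections 3.2 and 3.3.
# All numerical readings below are placeholders, not measurements from this study.
import math

def soil_box_resistivity(voltage_v, current_a, cross_section_m2, pin_spacing_m):
    """ASTM G57-style soil box: rho = R * A / L, with R = V / I the measured
    resistance, A the box cross-section carrying current, L the inner-pin spacing."""
    return (voltage_v / current_a) * cross_section_m2 / pin_spacing_m

def geometric_factor(ax, bx, mx, nx):
    """k = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN) for collinear electrodes on a
    half-space (A, B inject current; M, N measure potential)."""
    am, bm = abs(mx - ax), abs(mx - bx)
    an, bn = abs(nx - ax), abs(nx - bx)
    return 2 * math.pi / (1 / am - 1 / bm - 1 / an + 1 / bn)

def apparent_resistivity(delta_v, current, k):
    """rho_a = k * dV / I, in ohm.m."""
    return k * delta_v / current

# Laboratory example: 0.75 V across pins 0.10 m apart at 2 mA,
# box cross-section 0.004 m^2, giving about 15 ohm.m.
rho_lab = soil_box_resistivity(0.75, 0.002, 0.004, 0.10)

# Field example: Schlumberger-style spread with MN = 1 m and AB/2 = 5 m,
# reading 24 mV at 100 mA, giving about 19 ohm.m.
k = geometric_factor(ax=-5.0, bx=5.0, mx=-0.5, nx=0.5)
rho_field = apparent_resistivity(0.024, 0.100, k)

print(f"soil box: {rho_lab:.1f} ohm.m")
print(f"field: k = {k:.1f} m, apparent resistivity = {rho_field:.1f} ohm.m")
```

The same geometric-factor function covers Wenner, Schlumberger, and Wenner-Schlumberger spreads, since only the electrode coordinates change.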
As in the electrical sounding, two resistivity datasets were collected for the electrical resistivity imaging, using a Wenner-Schlumberger array, on the 16th of February, 2012, and the 31st of May, 2012.

3.4. Time-lapse inversion

To monitor the changes in subsurface resistivity values during the period defined in the study area, we used the Res2Dinv inversion software (Geotomo); the time-lapse datasets were interpreted with the time-lapse method implemented in this software, in which the inversion model of the initial dataset is used as a reference model in the inversion of the later time-lapse datasets. For our first dataset, we used the robust method; regarding the other inversion parameters, we used an initial damping factor of 0.15, a minimum damping factor of 0.030, and a simultaneous inversion.

4. Results and interpretation

4.1. Resistivity: volumetric water content derived from soil samples

Figure 6 presents a plot of electrical resistivity versus the volumetric water content of the soil samples obtained in the surveyed area. The fit was done using a power function with a good coefficient of determination.

4.2. Electrical sounding

Figure 7(a) represents the two datasets obtained with a Schlumberger array in the given periods; subsequently, by solving the 1D inverse problem, these datasets were fitted to curves that correspond to a number of layers with their respective calculated electrical resistivities and thicknesses. After solving the inverse problem for each dataset, the errors obtained were not greater than 2.1%. Figure 7(b) shows a three-layer model for each test. In both cases, the resolution of the inverse problem suggests the existence of a first layer of 14.5–19.7 Ω.m with a variation of thickness from 0.6 m to 1.6 m; this effect is linked to the change from the dry to the rainy season. The water-table level obtained from a piezometer varied between 1.57 and 0.61 m on those dates. This layer is followed by a second layer of 8.9 and 9.6 Ω.m and 5.4 and 6.4 m thick for each season, respectively. Finally, there is a last layer with 16.2 and 16.5 Ω.m; the results for this last layer do not show significant changes in its electrical properties and thickness. In accordance with the borehole at the site, the first two layers are linked to weathered and fractured sedimentary rock, while the last layer reported for both analyses is linked to hard sedimentary rock.

4.3. Electrical resistivity imaging and time-lapse results

Figure 8(a) and (b) show the results of the inverse problem solution; in these electrical tomographies, it is possible to identify a first horizon related to weathered rocks and clay (13–27 Ω.m) with tones in brown, red, and yellow. The changes in calculated resistivity values are related to the beginning of the rainy season; saturation of surface horizons can produce a decrease in calculated resistivity values. At depth, it is possible to identify a low-resistivity (6–13 Ω.m) horizon in the result of Figure 8(a). However, these low values are also revealed at shallow depth; see Figure 8(b). In Figure 8(c), the high negative percentage changes are linked to the increase of water content in the subsoil produced by the rains which occurred on May 31, 2012. At depth, the percentage changes are close to 0. Positive percentage changes in model resistivity are related to inversion artifacts. It is possible that these unrealistic changes are linked to the removal of the electrodes after the first test or to the inversion scheme used in this analysis.
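The two post-processing steps reported in this section, fitting a power function to the soil-sample data (Figure 6) and mapping percentage changes in model resistivity between the two dates (Figure 8(c)), can be sketched as below. The arrays hold made-up placeholder values, not the data behind those figures.

```python
# Sketch of the post-processing described above; all values are placeholders.
import numpy as np

# (1) Power-function (Archie-type) fit, rho = a * theta**b,
#     done as a straight-line fit in log-log space.
theta = np.array([0.10, 0.15, 0.20, 0.30, 0.40])   # volumetric water content
rho = np.array([60.0, 38.0, 27.0, 16.0, 11.0])     # resistivity, ohm.m (hypothetical)
b, log_a = np.polyfit(np.log(theta), np.log(rho), 1)
print(f"fit: rho = {np.exp(log_a):.1f} * theta^({b:.2f})")

# (2) Time-lapse change: percentage difference of each model cell between
#     the first (dry season) and second (rainy season) inversions.
rho_feb = np.array([19.0, 14.0, 9.0, 16.2])   # calculated resistivities, 16 Feb 2012
rho_may = np.array([13.0,  9.5, 8.8, 16.5])   # calculated resistivities, 31 May 2012
percent_change = 100.0 * (rho_may - rho_feb) / rho_feb
print(percent_change)  # negative values indicate wetter, more conductive ground
```

In the real workflow the percentage-change map is computed cell by cell over the whole 2D inversion grid rather than over a short vector, but the arithmetic is the same.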
5. Conclusions

The results of this study show the value of the petrophysical study of soil samples in understanding the power-law relationship between the electrical properties of rocks and their volumetric water content. These relationships can help in understanding the evolution of vadose-zone moisture in response to seasonal changes in the tropics. Electrical sounding and electrical resistivity imaging are useful tools not only for monitoring changes in the physical properties of these kinds of soils but also for associating the different types of soils and rocks with their electrical properties. We have seen the agreement of these results with the borehole at the site. The strong negative percentage variation in calculated resistivity values in the surveyed area reflects the important seasonal changes occurring in the tropics, where these negative values are related to the superficial infiltration produced by the rainfall during the transitional season (dry to rainy). The positive percentage changes in model resistivity can be associated with artifacts, linked to the inversion method used or to the removal of the electrodes after each test.

I am grateful to the Technological University of Panama and the National Research System of SENACYT for all their support. I would like to thank the reviewers for their constructive observations.

Conflict of interest

The author discloses no potential conflicts of interest.
The term mindfulness is becoming more well-known with the growing interest in meditation. But what is the actual definition of mindfulness? And how do you effectively define a concept metaphysically interwoven through multiple cultures over time? Gelong Thubten has a wonderful way to define mindfulness. He does so by bringing attention to the fact that mindfulness and meditation are about turning your brain ON to experience the moment on a deeper level, rather than turning it OFF to escape reality. Gelong Thubten also mentions that mindful behavior helps us better understand the way our brains work so that we can learn how to tame and understand them for better control, instead of allowing our thoughts to control us.

In this article, you'll learn how to:
- Implement mindfulness with daily habits
- Meditate from an honest approach
- Improve meditation techniques with a few powerful concepts to build deeper awareness

What Is Mindfulness?
The definition of mindfulness can be perceived as becoming better attuned to the internal and external environments coexisting around you. It can be perceived as building compassion for how someone else may be feeling in THEIR shoes. Mindfulness can be perceived as listening deeper into your intuition and internal wisdom. Mindfulness can also be described as observing the world around you from a neutral platform of perception, by patiently watching the trees sway, the birds play, and the river effortlessly flow away. The most essential part of defining mindfulness is experiencing the phenomenon through your own lens of perception, so that your own interpretation can build and prosper in real time. What does mindfulness mean for you? To you? And for the world you experience? How does one feel when experiencing a mindful response or recognition? These mindful moments become your platform for a deeper comprehension of your own human experience, while still sharing commonalities with the world around you.

What Is Mindfulness Meditation?
A very common interpretation of meditation is that it is doing nothing at all. "If I want to rest and do nothing, I will just take a nap or go to sleep." In actuality, doing nothing is still doing something. BUT, meditation most certainly is not doing nothing. The world revolves around the perceptions we place on it under any given context. Perception is everything. Everything. How do we challenge the position of our current perception? Continually introduce new information to be processed by the mind, body, and soul. Mindfulness meditation thus becomes an opportunity to more deeply understand emotions, scenarios, fears, desires, happiness, habitual patterns, blissfulness, dreams, world constructs, social media, family, breathing, body alignment, and strengths, or even to amplify appreciation for yourself and the world around you. When one begins to look at mindfulness or meditation as an opportunity for processing information, the doors open for development throughout the entire mansion of the mind. Some healthy attributes that build and develop through mindfulness meditation include compassion, appreciation, understanding, attention, focus, acceptance, patience, and internal wisdom.

5 Questions To Ask Yourself Before, During, And After Meditation
These questions can be asked before, during, or after meditation:
- How will you choose to perceive this moment?
- How else could you perceive this moment?
- Which option would lead to a better perception of life as a whole?
- Which option would lead to a better understanding of the world as a whole?
- Where could these thoughts be coming from? Memories? Expectations? Love? Fear? Or are they being influenced by an outside source?

Two Myths About Mindfulness Dispelled
Professional meditation teacher Gelong Thubten has developed a wonderful perspective on meditation through years of practice and personal experience. He has come across two common myths of mindfulness that should be recognized.

Myth 1: Meditation is meant to escape from reality and turn off the mind. In fact, it is more about developing consciousness and awareness of the moment you are experiencing and becoming more engaged in the present moment.

Myth 2: Meditation means to clear your mind or empty your mind of your thoughts (which is pointless and impossible). In fact, it is more about focusing on the mind. You are not trying to silence the mind; you are trying to understand it.

Genuine joy is when your mind experiences peace from within. – Gelong Thubten

Applying these concepts to the way one interprets the idea of meditation can immediately alter the way one approaches a mindfulness meditation session. Limiting expectations of the desired outcome will also be an important tool for progression through mindfulness meditation.

How To Meditate
When learning how to meditate, it is helpful to recognize two major concepts:

1. Every single moment in life is a proper moment for mindfulness meditation. There does not have to be a specific morning or evening routine in order to become mindful; although routines may help some people create mindful habits more efficiently, they are not necessary for the path towards mindfulness. Listening to the environment while stuck in traffic, paying attention to your body and breath while in the office, or adjusting your posture for better alignment are all moments of mindful behavior. Absorb these moments in a conscious manner and mindfulness will become more present within your life.

2. Begin listening, feeling, absorbing, acknowledging, comprehending, learning, or observing anything you possibly can during any moment. Become the silent observer of your environment. There is nothing special needed to begin; you just have to slowly hone your attention, intention, and perception into the moment of the now.

Meditation Techniques By Gelong Thubten
Before getting started, be sure to set a clear intention to stay on the path towards building compassion, understanding, acceptance, forgiveness, joy, and wisdom. Also, start your day by slowing down time. Sit up. Ground yourself. Breathe. Focus on your body. Be clear. Be present. Then, let your day start from there. Getting out of your thoughts and into your body for even a moment builds the habit. Below are some different types of meditations to try out and some guidelines to follow with each:

Compassion And Forgiveness Meditation
- Set an intention on the growth of compassion and benefit for others.
- Think about people and their pain and suffering to obtain a different perspective.
- Consciously develop empathy for others and a wish to help them. Let the wish become a habit through development.
- Throughout the day, think about how you can benefit others, or how you can forgive others, actively putting your mind in a compassionate state so that it can become a habit.
- Mantra: "I am doing this to bring more peace to my mind and more peace to the world."

Love And Kindness Meditation
- Wish for the people around you to be happy, to experience happiness and pleasure to their utmost levels.
- Start by feeling genuinely happy for someone close to you, then broaden your spectrum to strangers and people you have a problem with to activate the habit deeper, making it stronger and stronger. Eventually, work towards wishing happiness upon the entire planet, the world, and other living beings.

Forgiveness
- Frees us and frees others.
- Trains you to think in a different way.
- Put yourself in another person's shoes, changing positions and feeling how they feel, for better understanding and to build compassion.
- "You don't have to push forgiveness, you have to change your perspective, and forgiveness will naturally arise." – Gelong Thubten

Joy
- For most people, joy is dependent on external conditions. We imagine that joy will come to us when everything goes right. Since joy comes from the outside for most people, this creates an instability in happiness, because outside things are impermanent and unstable.
- Express daily moments of gratitude.
- Look at the world around you and express how fortunate you are, even for the difficulties of life, because they teach us how to become better.
- Train through a conscious development of moments of gratitude throughout the day; just through thinking, you are shifting your mind into a more joyful state.
- "Genuine joy is when your mind experiences peace from within." – Gelong Thubten

Wisdom (Result From Meditation)
- Wisdom comes when you start to listen to your heart instead of your head.
- Meditation helps you start to listen to the deeper part of your mind, your intuition, or the "deep knowing." If you come from that place of perception, you make wise choices and have a wiser outlook on reality and your relationships.
- Intuition is building a deep trust with your internal wisdom, your own inner voice.
- We all want to be happy and free of suffering.
- Intuition becomes internal wisdom.
- Wisdom is intuition, wisdom is insight, a deep knowing about reality, and then making choices from that deeper place within yourself.
- We are able to build compassion and understanding by simply switching our perception into another point of view.

Let Your Journey Of Mindfulness Begin
There is no right or wrong way to begin meditation, because we are all on different journeys of operation, understanding, and experience. The fact that someone is even willing to take a moment to develop a better understanding of the world around them (and the world inside of them) shows the depth of their character and willingness to learn. These two qualities alone put somebody on the proper path for healthy development through mindfulness and meditation. The mindfulness exercises stated above will serve a great benefit to the development of mindfulness meditation techniques and habits. The journey of mindfulness meditation is one that requires patience, pattern recognition, pushing through barriers, and acknowledgment of where your foundation truly is, not where we wish for it to be. Build from an honest foundation, put in the patience, and implement mindful moments throughout your day. With this, you will be well on your way to a deeper appreciation for the beautiful world around you, and around us. Good luck on your journey!
You are already everything you could ever wish to be; it simply becomes being.
<urn:uuid:07965e72-edc7-43c1-9397-b04ca405ced6>
CC-MAIN-2021-43
https://blog.mindvalley.com/what-is-mindfulness/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00230.warc.gz
en
0.92834
2,098
3.4375
3
fresh water (n.): water that is not salty.

Fresh Water (n.) (MeSH): water containing no significant amounts of salts, such as water from rivers and lakes.

freshwater (adj.): relating to or living in or consisting of water that is not salty: "freshwater fish", "freshwater lakes".

Fresh-water (a.) 1. Of, pertaining to, or living in, water which is not salty; as, fresh-water geological deposits; a fresh-water fish; fresh-water mussels. 2. Accustomed to sail on fresh water only; unskilled as a seaman; as, a fresh-water sailor. 3. Unskilled; raw. [Colloq.] "Fresh-water soldiers." Knolles.

Fresh water is naturally occurring water on the Earth's surface in ice sheets, ice caps, glaciers, bogs, ponds, lakes, rivers and streams, and underground as groundwater in aquifers and underground streams. Fresh water is generally characterized by having low concentrations of dissolved salts and other total dissolved solids. The term specifically excludes seawater and brackish water, although it does include mineral-rich waters such as chalybeate springs. The term "sweet water" has been used to describe fresh water in contrast to salt water.

Scientifically, freshwater habitats are divided into lentic systems, which are the still waters including ponds, lakes, swamps and mires; lotic systems, which are running waters; and groundwater, which flows in rocks and aquifers. There is, in addition, a zone which bridges groundwater and lotic systems: the hyporheic zone, which underlies many larger rivers and can contain substantially more water than is seen in the open channel. It may also be in direct contact with the underlying underground water.

The source of almost all fresh water is precipitation from the atmosphere, in the form of mist, rain and snow. Fresh water falling as mist, rain or snow contains materials dissolved from the atmosphere and material from the sea and land over which the rain-bearing clouds have traveled. In industrialized areas rain is typically acidic because of dissolved oxides of sulfur and nitrogen formed from the burning of fossil fuels in cars, factories, trains and aircraft and from the atmospheric emissions of industry. In some cases this acid rain results in pollution of lakes and rivers. In coastal areas fresh water may contain significant concentrations of salts derived from the sea if windy conditions have lifted drops of seawater into the rain-bearing clouds. This can give rise to elevated concentrations of sodium, chloride, magnesium and sulfate as well as many other compounds in smaller concentrations.
In desert areas, or areas with impoverished or dusty soils, rain-bearing winds can pick up sand and dust, and this can be deposited elsewhere in precipitation, causing the freshwater flow to be measurably contaminated both by insoluble solids and by the soluble components of those soils. Significant quantities of iron may be transported in this way, including the well-documented transfer of iron-rich rainfall falling in Brazil derived from sandstorms in the Sahara in North Africa.

Water is a critical issue for the survival of all living organisms. Some can use salt water, but many organisms, including the great majority of higher plants and most mammals, must have access to fresh water to live. Some terrestrial mammals, especially desert rodents, appear to survive without drinking, but they do generate water through the metabolism of cereal seeds, and they also have mechanisms to conserve water to the maximum degree.

Out of all the water on Earth, only 2.75 percent is fresh water, including 2.05 percent frozen in glaciers, 0.68 percent as groundwater and 0.011 percent as surface water in lakes and rivers. Freshwater lakes, most notably Lake Baikal in Russia and the Great Lakes in North America, contain seven-eighths of this fresh surface water. Swamps have most of the balance, with only a small amount in rivers, most notably the Amazon River. The atmosphere contains 0.04% water. In areas with no fresh water on the ground surface, fresh water derived from precipitation may, because of its lower density, overlie saline ground water in lenses or layers. Most of the world's fresh water is frozen in ice sheets. Many areas suffer from a lack of fresh water distribution, such as deserts and, less commonly known, Florida in the US.

Water salinity based on dissolved salts:
| Fresh water | Brackish water | Saline water | Brine |
| < 0.05% | 0.05% – 3% | 3% – 5% | > 5% |

Fresh water creates a hypotonic environment for aquatic organisms. This is problematic for some organisms with pervious skins or with gill membranes, whose cell membranes may burst if excess water is not excreted. Some protists accomplish this using contractile vacuoles, while freshwater fish excrete excess water via the kidney. Although most aquatic organisms have a limited ability to regulate their osmotic balance and therefore can only live within a narrow range of salinity, diadromous fish have the ability to migrate between fresh water and saline water bodies. During these migrations they undergo changes to adapt to the surroundings of the changed salinities; these processes are hormonally controlled. The eel (Anguilla anguilla) uses the hormone prolactin, while in salmon (Salmo salar) the hormone cortisol plays a key role during this process. Many sea birds have special glands at the base of the bill through which excess salt is excreted. Similarly, the marine iguanas on the Galápagos Islands excrete excess salt through a nasal gland, and they sneeze out a very salty excretion.

An important concern for hydrological ecosystems is securing minimum streamflow, especially preserving and restoring instream water allocations. Fresh water is an important natural resource necessary for the survival of all ecosystems. The use of water by humans for activities such as irrigation and industrial applications can have adverse impacts on downstream ecosystems. Chemical contamination of fresh water can also seriously damage ecosystems. Pollution from human activity, including oil spills, also presents a problem for freshwater resources.
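The salinity bands in the table above amount to a simple classification rule. As an illustration only (not part of the original entry), here is a minimal Python sketch applying those cut-offs; the function name and example values are assumptions added for clarity:

```python
def classify_water(dissolved_salts_percent: float) -> str:
    """Classify water by percentage of dissolved salts, using the thresholds in the table above."""
    if dissolved_salts_percent < 0.05:
        return "fresh water"      # < 0.05%
    elif dissolved_salts_percent <= 3.0:
        return "brackish water"   # 0.05% - 3%
    elif dissolved_salts_percent <= 5.0:
        return "saline water"     # 3% - 5%
    else:
        return "brine"            # > 5%

# Example: typical seawater holds roughly 3.5% dissolved salts, so it falls in the
# "saline water" band, while most river water counts as "fresh water".
print(classify_water(0.01))  # fresh water
print(classify_water(3.5))   # saline water
```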
The largest petroleum spill that has ever occurred in fresh water was caused by a Royal Dutch Shell tank ship in Magdalena, Argentina, on January 15, 1999, polluting the environment, drinkable water, plants and animals. Fresh and unpolluted water accounts for 0.003% of total water available globally.

Changing the landscape for agricultural use has a great effect on the flow of fresh water. Changes in the landscape through the removal of trees and soils alter the flow of fresh water in the local environment and also affect the freshwater cycle. As a result, more fresh water is stored in the soil, which benefits agriculture. However, since agriculture is the human activity that consumes the most fresh water, this can put a severe strain on local freshwater resources, resulting in the destruction of local ecosystems. In Australia, over-abstraction of fresh water for intensive irrigation activities has caused 33% of the land area to be at risk of salination. With regard to agriculture, the World Bank targets food production and water management as an increasingly global issue that will foster debate.

Fresh water is a renewable and changeable, but limited, natural resource. Fresh water can only be renewed through the process of the water cycle, in which water from seas, lakes, rivers and dams evaporates, forms clouds, and returns to water sources as precipitation. However, if more fresh water is consumed through human activities than is restored by nature, the result is that the quantity of fresh water available in lakes, rivers, dams and underground waters is reduced, which can cause serious damage to the surrounding environment. Fresh water withdrawal is the quantity of water removed from available sources for use for any purpose. Water drawn off is not necessarily entirely consumed, and some portion may be returned for further use downstream.

There are many causes of the apparent decrease in our fresh water supply. Principal amongst these are the increase in population through increasing life expectancy, the increase in per capita water use, and the desire of many people to live in warm climates that have naturally low levels of freshwater resources. Climate change is also likely to change the availability and distribution of fresh water across the planet: "If global warming continues to melt glaciers in the polar regions, as expected, the supply of fresh water may actually decrease. First, fresh water from the melting glaciers will mingle with salt water in the oceans and become too salty to drink. Second, the increased ocean volume will cause sea levels to rise, contaminating freshwater sources along coastal regions with seawater." The World Bank adds that the response by freshwater ecosystems to a changing climate can be described in terms of three interrelated components: water quality, water quantity or volume, and water timing. A change in one often leads to shifts in the others as well. Water pollution and subsequent eutrophication also reduce the availability of fresh water.

With one in eight people in the world not having access to safe water, it is important to use this resource in a prudent manner. Making the best use of water on a local basis probably provides the best solution. Local communities need to plan their use of fresh water and should be made aware of how certain crops and animals use water. As a guide, the following tables provide some indicators.

Table 1. Recommended basic water requirements for human needs
| Activity | Minimum litres per day | Range litres per day |
| Cooking and Kitchen | 10 | 10-50 |

Table 2. Water requirements of different classes of livestock
| Animal | Average gallons per day | Range gallons per day | Average in litres |

Table 3. Approximate values of seasonal crop water needs
| Crop | Crop water needs (mm / total growing period) |
<urn:uuid:297506c6-0b01-4035-971f-988d968f3761>
CC-MAIN-2021-43
http://dictionnaire.sensagent.leparisien.fr/Fresh_water/en-en/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585280.84/warc/CC-MAIN-20211019171139-20211019201139-00470.warc.gz
en
0.778992
2,892
3.328125
3
Every morning on my way into work in a small, beautiful Middle Tennessee town, I pass an official state historical marker that reads, "Forrest Rested Here." Nathan Bedford Forrest is venerated in large circles of the two-county span that constitutes my stomping ground. Many times, I have contemplated this as I drive from the flat farming country of one county to the slightly mountainous terrain of the next. Why is he famous here? This, I reason, is because he was a native Tennessean, and he became world-famous as a cavalry commander during the Civil War. But I feel there is something additional, something I haven't quite put my finger on. There is more to the feeling for Forrest here than pride. I ponder this, too. I know that Forrest is controversial. As a child, I remember a battle over whether to remove a statue of Forrest from the campus of Middle Tennessee State University. I later learned the reason for the storm swirling around his legacy: his troops were responsible for the slaughter of Federal African American troops who were attempting to surrender to Confederates at the Battle of Fort Pillow. Historians differ on Forrest's involvement, with beliefs ranging from Forrest giving the order to kill and actively participating, to Forrest being outside the gates and unaware of what was taking place, to Forrest giving the order for the massacre to be halted and doing everything in his power to stop it. He is also widely believed to be a founding member of the Ku Klux Klan, with controversies existing over the level and purposes of his involvement in this, too. I have not researched enough to give my own opinion, but needless to say, it crosses my mind that there are less controversial figures to note with a historical marker. Then I decided as part of my Civil War research to read a biography of every major military figure, North and South. When I made it to Forrest, I thought maybe I could get some elucidation on the matter of the respect for him here. I never expected to find such a direct answer. The two-county span that constitutes my home is rather unheard-of in the grand scheme of the Civil War. It is usually left out of histories of the era. Not so in a biography of Nathan Bedford Forrest. I was stunned to see the two county seats mentioned again and again, one of them in a very harrowing way. Federal (Union) troops moved through as they marched on toward a much larger city. In the smaller of the two towns, they rounded up every civilian (non-combatant) male in the town, arrested them, and took them nineteen miles away to the larger city, where they imprisoned them. Given that most able-bodied men were in the army, these were mostly old men and little boys. Forrest is quoted as saying that when he arrived in the wake of this takeover scene, the women of the town were "buzzing like hornets." At first, I thought this was a sexist comment about "noisy" women. And then I stopped and thought about how one would feel if the old men and little boys of one's family were marched away and locked into jail by an invading army. Buzzing probably isn't too dramatic a word. This was a rounding up of civilian males reminiscent of Pharaoh, with the purpose being what—extermination? It very nearly happened. Forrest rode into town, apparently asking what the commotion was. Upon being told, he promised the women that the men would be back with them by nightfall the next day. And they were. That was the magic of Forrest for Middle Tennessee. Desperation knocked; he found a solution.
His troops rode into the larger city, where they seized the town. Seeing that the matter was hopeless, a Union soldier set fire to the building where the men and boys were imprisoned, in an attempt to burn them alive. The fire was put out by Forrest’s troops, and he collected the men and boys and returned them home. As I am reading this almost fantastical story, I think: Why don’t I know this? And just like that, my mind travels back through the years, and I realize: I do know this. I am taken to a summer when I was a small child, and my mom and I were in the larger city shopping. A history teacher, she never let an opportunity pass for learning. She said, “Do you see that building? All of the men of [the small town] were rounded up during the Civil War and imprisoned there. They were held there until Nathan Bedford Forrest’s cavalry released them.” Just the simple historical facts, yet they stayed in my mind’s recesses all of those years. And I realize: We all know this. Whether every person knows the facts, this feeling has been handed down through local history just enough that it has left an impression. It is not general glory-heaping on a famous person. It is not worship of a cause or a controversial character because of something he came to symbolize. This is personal. He saved their men. Nothing more, nothing less. This seems profound, somehow. I think of our current controversy over historical markers and statues and am deeply affected because I grasp all sides of the arguments and cannot think of perfect solutions. For the life of me, I cannot formulate a succinct answer when someone asks me how I feel about removing statues. Many see historical memorials as honoring the person or event, and certainly many of them were put up for that purpose. I have seen certain statues and markers that put off a worshipful vibe and others that are more of just a general notation of history. (If you read the words even of the Forrest marker, it is more of a notation.) Even that question—honor or notation—is riddled with pitfalls, and again I have no answers. I tend to a slight revolutionary streak that sides with philosophers who say things like, “No generation has any right to bind the next to its precepts!” And I think: if we want to take them down, the argument “they have to stay because they’ve always been there” is just not good enough. But then I think of the people who raised money in centuries or decades past, of the artists who crafted the monuments, of the cities who have come to think of them as part of their city signature, and I reserve judgment. A lot of times, the argument for the side of keeping statues is that the commemoration goes beyond the person, and what he or she did, to a value that is worth upholding. An example of this would be a Thomas Jefferson statue representing not his personal record as a slaveholder, but the founding principles of the country that are sacred, such as self-government, liberty, and the pursuit of happiness. The fear is that if we remove the statues, we no longer stand for those things. However, there are deep feelings by those who look at certain statues and see instead a person, an event, or a cause that was harmful to their ancestors. Of harm that still resonates today. That still hurts today. And lest you think anyone is immune from such feelings, picture your least favorite historical figure, then picture yourself standing in front of a statue of that person. How would you feel? For a test-drive, how about: Hitler? Mao Tse-tung? 
Or what if your ancestors were killed by the Ku Klux Klan and you are standing in front of a statue of Nathan Bedford Forrest, whose name is irrevocably tied to that organization? Or a young man in my sister's graduate class in South Carolina who recounted what it was like for his grandmother, whose ancestors had lost everything at the hands of General Sherman, to stand in front of his statue in New York City. Or a Native American standing in front of a statue of one of the myriad generals who made war on your ancestors? For these and others, it feels equivalent to standing in front of a statue to a murderer. I discussed feelings toward historical figures in a separate post, and I stand by the arguments made in that post. I am not speaking as to the personal character of any person in this post, but as to the separate issue of public historical commemoration, which has connotations outside of history itself. There is a feeling that statues send a message about what we value and don't value, as well as about the legacy we are passing down to future generations. And so the argument does go deeper than just choices on aesthetic display, or liking/disliking someone. Maybe the people who say we need statues to virtues are right. Certainly virtues are less complicated than any human who has lived. Except…it works for the Statue of Liberty. I can't imagine that working many more times. There is something unique and interesting about expressing art in the form of historical, human figures. But then, which ones? I have heard legitimate, good historians say some of the most baffling things about this. This person, because he didn't own slaves. My response: Yes, but he didn't believe in equality for women. Not Confederates because they fought against the United States government in an act of treason. But we can leave the Revolutionary War figures because they built the nation. My response: Yes, but they enslaved people. You see the trouble with starting to pick and choose? Then a historian will throw out a non-controversial figure and say, Why not this person? He/she never hurt anybody. And I think: Look hard enough; they did. Often these conversations themselves are frankly overblown. Statues are inanimate objects, after all. In the grand scheme of things the statues themselves do not speak to what is in the hearts and minds of people, nor can they physically harm us. This post was written before the recent humanitarian crisis reminded us of what oppression and fear look like for many around the world… And I don't want to post this without acknowledging that when statue conversations reach a fever pitch of life-or-death intensity…we have lost the thread. To the extent conversations center around the public interest in the message being sent or the interests of all people concerned, along with the justifiable feelings of many, they are reasonable and productive. To the extent they descend into very emotional condemnation (or defense) of long-dead people, the conversations would benefit from the incremental analysis to deconstruct emotion surrounding public historical figures that I suggested in the last post. I don't like historical demonization. I question whether we understand the human condition at all if we don't grasp the depth of our own personal sins and negative capabilities, as though as long as we haven't done a handful of things that make up our current issues, we have it right. It seems too easy to look back and point the finger, much easier than looking inwardly.
I also do not like blind historical adulation. We run the risk of becoming insensitive if we cannot put ourselves in someone else's shoes and see that, while a past figure may be someone we admire and want to honor for various reasons, there are reasons others might feel differently. Both sides of the coin unfortunately sometimes betray a total lack of empathy for the other. Often the arguments do not consider context and use instead one aspect only to make a point, which, while it might be the most important point to one person, does not consider the whole story or aspects that might be important to others. The closest I have gotten to hearing any real solution on the subject is the suggestion by some historians for local governments to form coalitions to research and then make thoughtful, reasoned decisions about historical markers and monuments, present and future. And that will work…until the very next generation has different ideas. The closest I myself can get to expressing an opinion is saying that these things shouldn't be decided by mobs in the heat of protest. I can imagine scenarios in which people cannot get things accomplished through the ordinary courses of government and peaceful demonstration. I can picture situations in which the majority decides to leave something that feels, to some people, to be a huge deviation from morality. And yet, to the extent the subject is not one of a violation of personal liberty or civil rights under the law, we do benefit by living in a peaceful, orderly society, and by submitting to the democratic, elected processes of government, even if that sometimes means we lose. So where do we come down on the issue of statues? Throughout history, statues coming down have usually meant some sort of revolution is brewing. Some of them have been good for democracy. Some of them have been bad. That being the case, I do want to acknowledge that there are deep ramifications to this debate long-term. A disagreement over whether something is going to signal the end or beginning of something that goes to the core of freedom is an emotional one. Both sides of the debate do have a radical wing that is just not good for the country. But for the largest portion of people in this debate, for now, I do not believe either side is feeling quite so radical. Both sides can stray into the territory of high emotionalism, yes. But at their core, there are good, valid arguments for both that center on admirable values. And I do not think there is any way to reconcile that perfectly. But I think "Forrest Rested Here" has something to teach us. Not everything is cut and dried. There are complex histories, backstories, and emotions that often have nothing to do with what things might, on their surface, seem to be. A desire to remove a statue could be kindled by deep emotions based on lengthy history. A desire to place a historical marker could similarly be more complicated than the things to which we boil it down today. Historical memory is more complicated, too. Everyone has their story. These stories are complex at every level. And once that acknowledgement is made, the urgency, on any side of the matter, is already defused by half, because the urgency is driven by perceived hatred from an opposing side. But I would posit that hatred has very little to do with the valid feelings that most, for or against historical notation, can bring to the table.
Cover Photo Credit: New York Times. In-text Photo by Tara Cowan
<urn:uuid:b390120d-3d2a-40e6-9796-fe7896b45d7d>
CC-MAIN-2021-43
https://teaandrebellion.com/category/ask-the-historian/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00350.warc.gz
en
0.975119
2,975
3.0625
3
St. Robert Southwell, Jesuit priest and martyr, was hanged, drawn and quartered on February 21, 1595. To commemorate the anniversary and to celebrate the legacy of this great Catholic saint and poet, Joseph Pearce was interviewed by Jan Franczak for the Polish journal, PCh24.pl. This is the interview's first publication in English. Franczak: Robert Southwell (1561-1595) was a poet, a Jesuit missionary in his own country, a martyr, a saint and Shakespeare's distant cousin (the last fact seems not to be mentioned at all, or rarely mentioned, if I'm not mistaken). Which of these roles was the most important and why? Pearce: Strictly speaking, the fact that he was a martyr is the most important because it opened the gates of heaven, leading to Southwell's canonization as one of the Forty Martyrs of England and Wales. The fact that he was a poet, and a very fine poet, is important because Southwell exerted a considerable literary influence upon Shakespeare. This is more important than the fact that he was Shakespeare's distant cousin. Franczak: Let's then begin with Southwell's martyrdom. To put our readers in the picture, could we briefly describe the situation of Catholics in England at that time, particularly the situation of Catholic priests? Pearce: Southwell, a contemporary of Shakespeare, lived the entirety of his life during the long and brutal reign of Queen Elizabeth I. During the reign of "Bloody Bess", it was a crime to be a Catholic priest or to shelter a Catholic priest from the authorities; and it was not merely a crime but a crime punishable by death. Robert Southwell went into exile in order to study for the priesthood. When he returned to England as a Jesuit missionary priest to minister to England's persecuted Catholics, he knew that he would face torture and death if he were caught. Franczak: Reading memoirs by John Gerard or William Weston, I couldn't stop thinking how brave these men were. All of them seemed to be aware of the possibility of one of the most gruesome deaths they could face. And yet they were ready to face it for reasons that must seem incomprehensible to many readers today, especially if these readers have been taught by some particular Catholic priests. Could you say something about the courage of these men and the sort of death they faced, if arrested? Pearce: You are correct that the courage of these holy priests is astonishing. The usual death sentence passed on those convicted of being a priest in Elizabeth's England was to be hanged, drawn and quartered. This involved the priest being hanged by a noose, then cut down while he was still alive; then, while the priest was still conscious, he was castrated, after which he was cut open so that his vital organs could be removed, one by one, the last of which was the heart. These were then thrown on a fire. The priest, now mercifully dead, was then decapitated and his body cut into quarters. The decapitated head and the pieces of the priest's body were then displayed in prominent places as a gruesome warning to England's Catholics of the punishment that would be inflicted upon priests. The fact that many young men still went abroad to study for the priesthood with the intention of returning to England to minister to the Faithful says a great deal about the strength of their faith, in addition to the depth of their courage.
Franczak: You called Southwell in your book Shakespeare on Love: Seeing the Catholic Presence in Romeo and Juliet “the most famous and feared Jesuit in England.” What was he especially famous for at that time and why was he “feared”? Pearce: Robert Southwell was famous for his poetry in defence of the Faith, and for his polemical prose. We need to remember that in the 1580s and 90s, poets were the bestselling writers. The age of the novel was in the future. Everyone read poetry. Southwell’s verse was widely known and widely read, even by his enemies. It seems that the queen herself was familiar with his poetry. The power of his voice, coupled with the fact that he was a Jesuit outlaw, known to be in England but managing for several years to stay one step ahead of Elizabeth’s spy network and her priest-hunters, meant that he became a sort of Robin Hood or Scarlet Pimpernel figure in the eyes of the public, especially in the eyes of the Catholic population. Franczak: The story of those “Jesuit outlaws” is really fascinating. Personally I think that for example John Gerard’s adventures are better than any James Bond movie, first of all because they are true. But I guess with the present atmosphere in Hollywood we won’t see any movie based on his captivating memoirs any time soon. Do we have any accounts of Robert Southwell’s equally dramatic adventures? Pearce: Due to the tyrannical nature of the times, those who were trying to elude the power of the state did not leave a paper trail, much as dissidents in the Soviet Union or in Poland during the communist era did not leave a paper trail. There is, therefore, little documentary evidence apart from that offered in the accounts of Frs. Gerard and Weston that you’ve already mentioned. We know from these accounts of some of his movements following his arrival in England in 1586 until his arrest six years later. The very fact that Southwell managed to avoid the priest-hunters and the spies for such a long period of time is itself astonishing, especially as he seems to have been based in London, the very heart of the beast and under the government’s very nose. We know of narrow escapes when houses were raided and of his hiding in priest-holes while homes were searched. There is also a great deal of circumstantial evidence to suggest that Southwell knew William Shakespeare, and it’s not beyond the realm of possibility that Southwell might have been Shakespeare’s confessor. We know that he was the confessor of the Earl of Southampton, Shakespeare’s patron. There is also undeniable textual evidence to illustrate Southwell’s influence on some of Shakespeare’s finest writing, such as Hamlet, King Lear, Romeo and Juliet and The Merchant of Venice, as well as on Shakespeare’s early poems, Venus and Adonis and The Rape of Lucrece. Franczak: Fr. John Gerard managed to make a daring escape from the Tower of London in 1597. However Robert Southwell was caught and after tortures sentenced to death and executed on February 21, 1595. But that gruesome execution didn’t go exactly as it had been planned. Both his imprisonment and the tortures he suffered and finally his execution showed his courage and deep faith. We know from some accounts that the crowd didn’t even shout “Traitor!” which was normal in that case. What happened? Pearce: Following his arrest, after eluding capture for six years, Southwell would face three years of brutal torture, never once divulging information to his torturers. 
His astonishing resilience and courage earned him the grudging respect of one of those who witnessed his excruciating suffering. "They boast about the heroes of antiquity," wrote Robert Cecil, the son of Lord Burghley (William Cecil), Elizabeth's chief minister, "but we have a new torture which it is not possible for a man to bear. And yet I have seen Robert Southwell hanging by it, still as a tree trunk, and no one able to drag one word from his mouth." The same courage was present at the execution, especially in his words from the scaffold. Standing in the cart, beneath the gibbet and with the noose around his neck, he made the sign of the cross and recited a passage from Romans, chapter nine. When the sheriff tried to interrupt him, those in the crowd, many of whom were sympathetic to the Jesuit's plight, shouted that he should be allowed to speak. He confessed that he was a Jesuit priest and prayed for the salvation of the Queen and his country. As the cart was drawn away, he commended his soul to God in the same words that Christ had used from the Cross: In manus tuas … (Into your hands Lord I commend my spirit.) As he hung in the noose, some onlookers pushed forward and tugged at his legs to hasten his death before he could be cut down and disemboweled alive. Southwell was thirty-three years old, the same age as Christ at the time of his Crucifixion. Franczak: You mentioned that Southwell was famous for his poetry in his times. He is counted among the group of poets called "metaphysical poets". What is his place in the history of English literature? His influence on Shakespeare must for sure place him very high, mustn't it? On the other hand some of his poems even made their way to pop-culture. "The Burning Babe", one of his best-known poems, was recorded as a song by Sting for example. Pearce: Although "The Burning Babe" is the most popular of Southwell's poems, and the one most often included in anthologies, he wrote several other poems of considerable merit. I include eleven of his poems in the anthology I edited, which is entitled Poems Every Catholic Should Know. At the time of his death, his poetry was widely known and widely read, even by his enemies. As Gary M. Bouchard shows in Southwell's Sphere: The Influence of England's Secret Poet, Southwell would influence many of the greatest poets in the English language, including Shakespeare, most notably, but also Michael Drayton, Edmund Spenser, John Donne, George Herbert, Richard Crashaw, and Gerard Manley Hopkins. The famous graveyard scene in Hamlet is influenced by Southwell's "Upon the Image of Death" and Lear's powerful speech in which the contrite Lear says to Cordelia that they should be "God's spies" is an intertextual engagement with Southwell's poem, "Decease Release". The foregoing illustrates that St. Robert Southwell should not be revered solely as a Catholic martyr but also respected as one of the most important English poets. Franczak: You have mentioned Southwell's influence on Shakespeare. You also wrote about it in your three books on the Bard. In the third of them, Shakespeare on Love, you even dedicated a separate section especially to Robert Southwell. It seems to me that two of Shakespeare's plays, where you track these intertextual references, are particularly misunderstood and misinterpreted: Romeo and Juliet and The Merchant of Venice.
Could we say that noticing these references to Southwell's poems in both of these plays (apart from other works by Shakespeare) enables us to fully appreciate the depth of them, to understand the hidden meaning (at least hidden to most of the modern critics), to open the right casket, so to speak? Pearce: Absolutely. Shakespeare's intertextual referencing of the works of Southwell enables us to understand Shakespeare's specifically Catholic approach to the plays. It's as if seeing the intertextuality enables us to see the plays through Shakespeare's eyes. Take, for instance, Portia's words after the Prince of Aragon's failure in the test of the caskets: "Thus hath the candle sing'd the moth." (2.9.78) And compare it to lines from Southwell's "Lewd Love is Losse": "So long the flie doth dallie with the flame,/Untill his singed wings doe force his fall." Not only does the phraseology suggest Shakespeare's indebtedness to Southwell but the very title of the poem from which the phrase is extracted suggests a connection to Shakespeare's theme that lewd love is loss. Aragon's love is lewdly self-interested and his choice leads to the loss of his hopes to marry Portia. Shakespeare is not simply taking lines from Southwell, he is apparently taking his very theme from him. It is also intriguing that an expression ascribed by the Oxford English Dictionary to Shakespeare's coinage was actually coined originally by Southwell, to whom Shakespeare was presumably indebted. The phrase is Shylock's "a wilderness of monkeys" (subsequent to "a wilderness of Tygers" in Titus Andronicus), which owed its original source to Southwell's "a wilderness of serpents" in his Epistle unto his Father. There is not sufficient space to give further examples in an interview of this length but Shakespeare's intertextual "borrowing" from Southwell is extensive in plays as diverse in theme as Romeo and Juliet, Hamlet and King Lear. Those wishing to explore further are invited to read my books on Shakespeare in which I discuss this "Jesuit connection" in much more detail. Franczak: And the last question, about The Merchant of Venice. Usually critics mention the tragic fate of Doctor Roderigo Lopez, which was supposed to inspire Shakespeare's drama, but I don't remember anyone apart from you mentioning Robert Southwell while describing the origin of the play. You say that The Merchant of Venice was written "shortly after Southwell's execution … or during the period in which the Jesuit was being tortured repeatedly by Richard Topcliffe, Elizabeth's sadistic chief interrogator." And you add that "it should not surprise us therefore that we see Southwell's shadow, or shade in Shakespeare's play." You also suggest that both the figure of Bassanio and that of Antonio refer to the suffering of Robert Southwell. To sum up, could you briefly explain your interpretation to our readers who unfortunately still have no access to Polish translations of your books on Shakespeare? Pearce: The key point to remember is that life in Elizabethan England was similar in many ways to life under communism. Dissident opinion was suppressed. It was, for instance, illegal in Shakespeare's time for plays to comment upon contemporary religious or political issues. In such circumstances, great ingenuity was needed. Shakespeare found many ways of attacking the Puritans, who were not only enemies of the Church but enemies of the theatre. In The Merchant of Venice, Shylock's primary role is as a usurer.
A close reading of the play shows that those who attack him do so far more for his usury than for his religious faith. There were no Jewish usurers in England in Shakespeare's time, the Jews having been expelled from England three hundred years earlier. The usurers in England in Shakespeare's time were the Puritans. Whereas the Catholic Church still condemned usury, John Calvin had permitted usury and his disciple, Salmasius, had codified the rules by which interest-bearing loans were permissible. Thomas Cartwright, a contemporary of Shakespeare and one of the leading Calvinists in England, followed the teaching of Calvin and Salmasius and was consequently condemned for his defense of usury. This was a hot topic in Shakespeare's day, and one which divided people on religious lines. This being so, it is clear that Shakespeare's audience would have perceived Shylock allegorically as a Puritan, thereby enabling Shakespeare to condemn both Puritanism and usury, while circumventing the law banning the discussion of contemporary religion and politics. By extension, Shakespeare's audience would also have seen Shylock's desire to take Antonio's life as a thinly-veiled depiction of those Puritans who sought the life of England's Jesuit missionaries, including St. Robert Southwell. In this, as in so much else in Shakespeare, it is necessary to understand the tyrannical times in which the plays were written in order to see beyond the surface to the deeper Catholic elements. As the title of my second book on Shakespeare illustrates, it is necessary to learn to see "through Shakespeare's eyes".
<urn:uuid:13a264e3-e767-414c-a408-7dbd900702c2>
CC-MAIN-2021-43
https://catholicmasses.org/the-jesuit-martyr-who-inspired-shakespeare/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587908.20/warc/CC-MAIN-20211026134839-20211026164839-00030.warc.gz
en
0.978725
3,502
3
3
By Iris Saar, M.Sc Exercise Science candidate, ACSM CPT, RRCA

Athletic performance in extreme environments

Competitive athletic events are the apex of many athletes' professional careers. Developed countries begin cultivating their athletes from a young age through community-based recreational sports programs and, following the school years, at the collegiate level. Out of the 600,097 male high school athletes, fewer than 5% go on to compete at the collegiate (NCAA, National Collegiate Athletic Association) level and about 1.9% reach Division I participation. Female high schoolers show a slightly higher fraction, with about 6.1% reaching the collegiate level and 2.7% Division I participation (Smeyers, 2019). Those data demonstrate the fierce competition in the US to become a competitive athlete, and the accompanying intrinsic incentives to engage in high-volume training under demanding environmental conditions. It is a lengthy process: athletes are required and motivated to compete in various events throughout a season, often traveling globally and being exposed to ambient conditions different from those of their home training base. This blog post reviews strategies to prepare an athlete for events held in extreme environments.

Extreme conditions often include altitude, heat or cold, and challenge the athlete to compete under low oxygen tension (hypoxia) or at risk of heat or cold injuries (Powers & Howley, 2019). Studies have shown that pre-adjustment might help to enhance performance through early adaptation, or acclimation, to the target race environment; repeated exposure during training to hot conditions (≥25º Celsius) facilitates heat acclimation (HA). This enhances performance through adaptations of the cardiovascular and circulatory systems, increased plasma volume and other factors discussed later (Pryor, 2019; Powers & Howley, 2019).

One athletic event is examined retrospectively in this post: the 2019 World Athletics women's championship marathon, held on September 27, 2019. The race, organized and governed by World Athletics (formerly IAAF, or International Amateur Athletic Federation), was held in Doha, Qatar. The country is geographically located on the west coast of the Persian Gulf. The topography is mainly desert flat lands, and the climate during the summer is hot and humid. The mean relative humidity in the month of September, when the race was held, is among the highest of the year, averaging 61% (Qatar Meteorology Department, access date April 15, 2020). Temperatures during the summer months (June through September) can reach 122º Fahrenheit, or 50º Celsius (Crystal & Anthony, 2020). Not only is the climate extremely hot, but the overall air quality is unhealthy due to sand dust blowing from dunes into populated areas, affecting residents' respiratory health and causing or worsening asthma, bronchitis and pneumonia (Teather, 2013). The criteria for selecting a physical host location for world championship athletic events are based on many factors, some of which are performance-based. World Athletics has a pre-defined bidding application process, and host sites are evaluated and selected based on compliance with the terms set forth in the IAAF manual (IAAF Bidding Rules, in force from 1 January 2019). Interestingly, the manual does include a risk assessment clause which ranges from 1 (low likelihood of risk) to 5 (almost certain risk is likely).
A retrospective analysis of the environmental conditions in Doha 2019 may suggest that adjustment is needed for some of the 30 risk factors involved in the risk assessment clause of the manual. In order to compete in a world championship marathon, elite female marathon runners from across the globe are required to pre-qualify and maintain a consistent record of high athletic performance in the period prior to the event. The 2019 race is analyzed here due to the high number of athletes who did not finish (DNF) because of the extreme environmental conditions. The official IAAF time results, issued on September 28th, 2019 by Seiko (World Athletics championships, September 2019), record a temperature of 32º Celsius and 74% relative humidity at the beginning of the race. The start time itself was unprecedented, set for 11:59 pm in an effort to race at slightly cooler temperatures. The number of runners who did not finish (DNF) was atypically high at 30, compared with the 40 who managed to cross the finish line. With an almost equal divide between finishers and DNF athletes, the significance of preparing an athlete to race in extreme conditions becomes apparent.

Acclimating an athlete may be achieved in two concurrent settings. Acclimation is adaptation induced in laboratory settings, in closed climate chambers, whereas acclimatization is "repeated exposure to [a] stressful environment" (Powers & Howley, 2019), i.e., in field settings. To follow the principle of specificity in training, acclimatization done outdoors resembles the environment of race day and is commonly used in athletic preparation. The end product is a higher tolerance to heat loads and enhanced performance under hot conditions, as time to exhaustion increases. The physiological factors improved by heat acclimation (HA) are various and include:

- Lower heart rate at work: a combined end product of increases in cardiac output, the athlete's Vo2max and functional power output. Blood lactate accumulation is reduced and the lactate turning point is elevated post-acclimation.
- Better thermoregulation (lower core temperature): HA improves thermoregulation through afferent feedback sent to the brain control centers, which in turn lower core temperature and regulate blood flow to the periphery (skin surface). Exercise in the heat increases the body water mass, which contributes to thermoregulation as well (Périard, 2015).
- Increased plasma volume: once achieved after acclimation, it counterbalances the drop in stroke volume, which tends to fall under heat conditions when plasma volume is low.
- Earlier onset of sweating and increased sweat rate: evaporation, a main source of cooling, acts as an efficient cooling mechanism, increasing to up to three times its earlier capacity.
- Reduced salt loss in sweat: fluid-regulating hormones inhibit the loss of electrolytes such as sodium and chloride in sweat, supporting skeletal muscle metabolism.
- Reduced blood flow to the skin: enabling more oxygenated blood to reach the muscles and allow for contractions.
- Heat shock proteins: after exposure to heat, stress proteins are synthesized to protect cells from heat-induced damage.

Elements in training

From the broader perspective of coaching, the time of year and location of the selected target race are considered. Based upon the retrospective review of the Doha marathon, athletic preparation (coaching) should begin with the macrocycle focusing on developing aerobic capabilities, about 6 months prior to the event.
The aerobic focus stems from two main factors. First, extreme heat is detrimental to aerobic-based cellular processes such as aerobic glycolysis, so training to improve aerobic capacity might offset the predicted damage. Second, the marathon being trained for is a long-duration event relying almost solely on aerobic energy production and requires a strong aerobic base (Powers & Howley, 2019). The athlete in this hypothetical discussion is a well-trained, healthy female, 35 years of age and injury free at the onset of the training plan. Her Vo2max is currently 45 ml/kg/min and her nutrition incorporates carbohydrates, fat and proteins. Current weekly mileage is 50 miles.

The building blocks of the athletic preparation are all represented during the 24-week training plan. The mesocycle spans 4 weeks and the microcycle is 7 days in length. The following elements relate to the 7-day microcycle; note that all elements are greatly affected by heat and therefore should be central to the athletic preparation.

Intensity: speed work is prescribed 1x per cycle. This is a moderate to intermediate length event and is approximately 80% aerobic. Example: mile repeats at 85% Vo2max. Intensity levels above 40% of Vo2max are necessary to trigger HA (Powers & Howley, 2019).

Duration: marathon training requires long training bouts. Run sessions over 90 minutes and up to three hours are prescribed 1x per cycle, increasing gradually and linearly over the microcycles.

Frequency of heat exposures: the athlete is exposed to heat and relative humidity for a minimum of five run sessions per cycle during the base-building phase (a schematic example of such a week is sketched below). This may increase to double sessions a day once the base has been achieved and at least three weeks prior to the tapering phase. Interval running sessions can be used, with recovery time measured as an index of HA. As HA can be reached within 7-14 days, the initial base build is the most advantageous time in which to place it.

Environmental conditions: if possible, relocate the athlete to a climate similar to the race environment. Athletes who reside in colder, dry environments might have to employ laboratory settings to increase ambient heat and humidity, or use rubber suits for shorter runs. Pre-planning of the athletic preparation will lay out the optimal time for the athlete to build base, develop and peak. In the northeast, the months of July and August are hot and humid and may be an ideal training environment for HA.

Laboratory and field training: a laboratory, or "physiologically compensable environment" (Smoljanić et al., 2014), allows for deliberate, controlled hyperthermia. This cautious HA can fine-tune aerobic improvement and be used for detection and monitoring of physiological factors such as Vo2max, cardiac output, ventilatory rate and more. Field training, which can be utilized concurrently with lab sessions, is an independent training environment which poses uncontrolled heat exposures for the athlete. Field training will, however, prepare the athlete better, as it is more specific to the race environment the athlete is to compete in. Additionally, it was found that running economy, and not necessarily aerobic fitness, independently altered thermoregulatory responses when running in closed chambers (on a treadmill) (Smoljanić et al., 2014). This further emphasizes the significance of field training if aerobic fitness is key in the training plan.
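To make the weekly structure above concrete, here is a minimal sketch of one possible 7-day heat-exposure microcycle for the base-building phase. It is an illustration only, not a prescription from the cited sources; the session names, durations, and day assignments are assumptions chosen to satisfy the frequency, intensity, and duration guidelines described above.

```python
# Hypothetical 7-day microcycle: one speed session, one long run, and at least
# five heat-exposed runs per cycle, per the guidelines above. All values are
# illustrative assumptions, not prescriptions from the cited sources.
microcycle = [
    {"day": "Mon", "session": "easy run",                    "minutes": 60,  "heat_exposed": True},
    {"day": "Tue", "session": "mile repeats at ~85% Vo2max", "minutes": 70,  "heat_exposed": True},
    {"day": "Wed", "session": "recovery run",                "minutes": 45,  "heat_exposed": True},
    {"day": "Thu", "session": "moderate run",                "minutes": 60,  "heat_exposed": True},
    {"day": "Fri", "session": "rest",                        "minutes": 0,   "heat_exposed": False},
    {"day": "Sat", "session": "long run",                    "minutes": 120, "heat_exposed": True},
    {"day": "Sun", "session": "easy run",                    "minutes": 50,  "heat_exposed": False},
]

# Check the frequency guideline from the text: at least five heat exposures per cycle.
heat_sessions = sum(1 for s in microcycle if s["heat_exposed"])
assert heat_sessions >= 5, "guideline above: at least five heat exposures per cycle"
print(f"{heat_sessions} heat-exposed sessions in this microcycle")
```

Week to week, the long-run duration and the number or length of heat exposures would be nudged upward, mirroring the linear, gradual progression over the microcycles described above.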
Strategies for fatigue management

Exercise in the heat impairs performance and shortens the time to fatigue, especially in long events such as the marathon. As HA develops over the 7-14 day period, it continues to be supported by the training plan, preparing the athlete to compete in the extreme environment. Other than HA, which is the major component for handling fatigue in extreme conditions and was discussed earlier, several other strategies to manage fatigue are advised.

First, hydration is critical due to the hot ambient conditions the athlete is training in. It is necessary to pre-hydrate and to continue hydrating throughout the exercise bout and after. To replenish sodium and chloride lost due to the high evaporation rate, the athlete's body weight is measured before and after training to determine the amount of fluid required for replenishment (Powers & Howley, 2019). Second, nutrition should include an adequate amount of carbohydrates (CHO) to provide the energy required for the higher intensities (over 60% of Vo2max) at which marathon training is performed. Third, increasing Vo2max itself during the training plan will decrease the relative intensity at which the athlete competes on race day, increasing the time to fatigue. Behavioral strategies such as increased motivation can arouse the central nervous system, as more motor units are recruited and fatigue is delayed due to ongoing cross-bridge activity (Powers & Howley, 2019). Attentive coaching can be significant at this point of the athletic preparation.

The Doha women's championship marathon had a high rate of DNF athletes. The environmental conditions were extremely hot and humid. While the organizers adjusted the race conditions by starting the race at 11:50pm and providing water-soaked sponges, additional water stations and nutrition for the athletes, almost 50% of the field did not finish and some suffered heat injuries. It is important for the coach to research their athletes' goal race from an environmental perspective. Proper HA as a component of the training plan can establish optimal physiological adaptation and significantly assist in racing under extreme conditions such as those of the Doha race. Nonetheless, athletes should be viewed individually and might not all respond to HA in an efficient and timely manner; they should be evaluated just prior to the race so they can safely DNS ("did not start") if their condition poses a higher risk of injury.

Crystal, Jill Ann, and John Duke Anthony. "Qatar." Encyclopedia Britannica, Encyclopedia Britannica, Inc., 11 Apr. 2020. Retrieved from: www.britannica.com/place/Qatar.
World Athletics. Rules and regulations, IAAF Bidding Rules. Access date April 15th, 2020. Retrieved from: https://site.uat.aws.worldathletics.org/about-iaaf/documents/rules-regulations
Périard, J. D., Racinais, S., & Sawka, M. N. (2015). Adaptations and mechanisms of human heat acclimation: applications for competitive athletes and sports. Scandinavian Journal of Medicine & Science in Sports, 25, 20-38.
Powers, S. K., & Howley, E. T. (2019). Exercise physiology: Theory and application to fitness and performance (10th ed.). McGraw-Hill.
Pryor, J. L., Johnson, E. C., Roberts, W. O., & Pryor, R. R. (2019). Application of evidence-based recommendations for heat acclimation: individual and team sport perspectives. Temperature, 6(1), 37-49.
Civil Aviation Authority. (n.d.). Qatar Meteorology Department. https://qweather.gov.qa/ClimateInfo.aspx
NCAA. (n.d.). Estimated probability of competing in college athletics. http://www.ncaa.org/about/resources/research/estimated-probability-competing-college-athletics
Smoljanić, J., Morris, N. B., Dervis, S., & Jay, O. (2014).
Smoljanić, J., Morris, N. B., Dervis, S., & Jay, O. (2014). Running economy, not aerobic fitness, independently alters thermoregulatory responses during treadmill running. Journal of Applied Physiology, 117(12), 1451-1459.
Teather, K., Hogan, N., Critchley, K., Gibson, M., Craig, S., & Hill, J. (2013). Examining the links between air quality, climate change and respiratory health in Qatar. Avicenna, 2013(1), 9.
World Athletics Championships. (2019). Doha (QAT) marathon time results. Recorded by Seiko. https://media.aws.iaaf.org/competitiondocuments/pdf/6033/AT-MAR-W-f----.SL2.pdf
IASbaba's Daily Current Affairs – 17th December, 2016
TOPIC: General Studies 2
India and its neighbourhood – relations.
Bilateral, regional and global groupings and agreements involving India and/or affecting India's interests.
Effect of policies and politics of developed and developing countries on India's interests, Indian diaspora.
India Indonesia Relations
India and Indonesia have shared two millennia of close cultural and commercial contacts. The Hindu, Buddhist and later Muslim faiths travelled to Indonesia from the shores of India, and Indonesian folk art and drama are based on stories from the great epics of the Ramayana and the Mahabharata. The shared culture, colonial history and post-independence goals of political sovereignty, economic self-sufficiency and independent foreign policy have a unifying effect on bilateral relations. For a long time, however, the two nations kept each other out of focus while determining their foreign policies, even though they have had converging strategic interests. Even under the present governments, the nations have taken too long to reach out to each other. Both countries have now shown willingness and intent to build a strong relationship, with President Widodo's trip to India being the first presidential visit from Indonesia in nearly six years. The areas of common concern and interest, and the joint efforts made by both countries, are discussed below.
South China Sea
Neither India nor Indonesia agrees with China's aggressive stance on the South China Sea, and both want the dispute resolved by peaceful means and in accordance with international law such as UNCLOS. Neither country has a direct stake in the dispute, yet both are concerned about China's territorial expansion and its reluctance to abide by international laws and norms. India and Indonesia want to emerge as major maritime powers and to ensure a stable maritime order in the region. India's concerns lie in the security of the sea lanes of communication in the Indo-Pacific region, while Indonesia has been concerned about Chinese maritime intrusions near the Natuna islands and China's claim to include the island chain in its territorial maps; Indonesia considers the area part of its exclusive economic zone.
Terrorism and Security
The two countries are also moving towards cooperation in defence and security, which will help in combating terrorism and organized crime. They have issued a joint statement that condemns terrorism in all forms and emphasises "zero tolerance" towards it. The statement asks all nations to focus on the following:
- Eliminating terrorist safe havens and infrastructure,
- Disrupting terrorist networks and their financing channels, and
- Stopping cross-border terrorism.
It also called upon all countries to implement UN Security Council Resolution 1267 (banning militant groups and their leaders) and other resolutions designating terrorist entities. The two nations have also stressed the need to combat and eliminate illegal, unregulated and unreported fishing, and recognized transnational organized fisheries crime.
Defence and Security
India and Indonesia have been gradually enhancing their security and political ties through the strategic partnership agreement signed in 2005. This agreement also introduced the annual strategic dialogue between the two nations.
In 2006, the two countries ratified a defence cooperation agreement focussing on defence supplies, technology and joint projects. An extradition treaty and a mutual legal assistance treaty for gathering and exchanging information to enforce their laws have also been signed. Other important features of the relationship are joint naval exercises and patrols and regular port calls by their respective navies. India is also a major source of military hardware for Indonesia. India and Indonesia have also decided to give a major boost to their trade and investment ties by focusing on oil and gas, renewable energy, information technology and pharmaceuticals; bilateral trade is expected to grow fourfold over the next decade.
Cooperation between these two countries matters because of their strategic locations. Indonesia's location allows it to work effectively with India to secure the sea lanes of communication between Europe, the Middle East and South-East Asia; together, they control the entry point from the Bay of Bengal to the Strait of Malacca. The need of the hour is for the two nations to speed up the improvement of their ties. Even though the two countries share cultural and historical links, they have remained distant. One telling sign of the underdeveloped relationship was the lack of direct air connectivity between the two countries until this visit by the Indonesian President. Mr. Widodo's visit has helped India take another step in its "Act East" policy, promoting greater engagement and integration between India and South-East Asia.
Connecting the dots
What is the 'Act East' Policy of India? Discuss the importance of Indonesia for India with regard to this policy.
Discuss the importance of the India-Indonesia relationship and the steps that can be taken by both nations to improve it.
SCIENCE AND TECHNOLOGY
General Studies 3
Science and Technology – developments and their applications and effects in everyday life.
Achievements of Indians in science and technology; indigenization of technology and developing new technology.
Awareness in the fields of IT, Space.
General Studies 2
Government policies and interventions for development in various sectors and issues arising out of their design and implementation.
India and its neighbourhood – relations.
India's space diplomacy
India has vigorously expanded into space diplomacy as an instrument to extend its diplomatic clout and soft power and to further its geostrategic interests. This has the potential to enhance India's diplomatic relations with developed as well as developing countries. Let us look at India's space diplomacy record so far. Technological capabilities in outer space have long been used as an effective tool of foreign policy: the US, for instance, used its Landsat satellites to share data, and Russia included the Indian cosmonaut Rakesh Sharma in one of its manned space flights. India has established a long-standing space programme with over 50 years of space exploration. This is evident from the fact that India has some of the best remote sensing satellites in the world and has provisioned downlink capabilities from these satellites for a number of countries.
India also shares data with other countries and is part of international forums such as the United Nations Platform for Space-based Information for Disaster Management and Emergency Response (UNSPIDER). India has launched satellites for countries that do not have launch capabilities of their own, as well as for countries like France, Canada and even the USA, which find Indian services reliable and reasonably priced. India has thus taken excellent steps towards using space diplomacy, and more can be achieved given its capabilities for regional and global diplomacy.
India's space applications
ISRO is now supporting many new tools and governance applications, such as an alert system for unmanned railway crossings, identification of water sources, and pipeline safety monitoring. These can be used to improve people's living standards. Civil aviation, marine navigation, road transportation and disaster management are some of the areas that stand to benefit from the potential of IRNSS. Significantly, the INSAT communications and IRS earth observation constellations operated by ISRO are routinely harnessed for a wide range of purposes, including disaster warning, telemedicine and tele-education, crop forecasting, water resources monitoring and the mapping of natural resources. Indeed, India's experience in exploiting satellite technology to accelerate socio-economic development is of immense relevance to developing countries, including India's South Asian neighbours.
A peaceful and prosperous neighbourhood
Reaching out to the neighbours with expertise in space technology has become a new, vibrant mantra of the space diplomacy projected by the current government. The SAARC satellite, which is being spearheaded by ISRO, is considered an excellent example of the Indian policy of strengthening relations with the immediate neighbours. It aims to help the South Asian countries in India's neighbourhood fight poverty and illiteracy, advance scientifically, and open up opportunities for their youth. India has successfully launched seven IRNSS satellites, which will also help in regional navigation and provide an alternative to commercial navigation satellite services. India has now offered Bangladesh its expertise to build and launch its own domestic satellites.
South East Asia outreach
With a view to projecting its soft power through the sharing of its space expertise, India is looking at the possibility of setting up a ground station in Fiji that could ultimately serve as a hub for sharing space expertise with the Pacific island nations. ISRO already operates ground stations in Mauritius, Brunei and Indonesia to help track Indian satellites launched from the Satish Dhawan Space Centre. India has offered to share its space expertise with countries in the South-East Asian region, where China, Japan, Australia and the USA are competing to acquire a strategic edge. As part of its international cooperation programme, ISRO has offered to share its experience in using space technology for socio-economic development with ASEAN countries, which are also prone to natural disasters. Relevantly, the Department of Space (DOS) Annual Report for 2014-15 refers to a plan to set up a satellite data reception centre in Vietnam.
It says, "India is actively pursuing a proposal with ASEAN comprising Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, Singapore, Philippines, Thailand and Vietnam to establish a ground station in Vietnam to receive, process and use data from Indian satellites for a variety of applications including disaster management and support and also to provide training in space science, technology and applications".
China, along with Pakistan, Bangladesh and a number of other countries, has set up a regional partnership organization called the Asia-Pacific Space Cooperation Organization, which involves sharing data, establishing a space communication network and tracking space objects. China is helping to set up a space academy and satellite ground station alongside the launch of a telecommunications satellite for Sri Lanka, and Bangladesh and the Maldives are expected to pursue a similar path. Pakistan is expected to receive military-grade positioning and navigation signals from China's BeiDou system, and its Space and Upper Atmospheric Research Commission (SUPARCO) is building a remote sensing satellite expected to be launched in 2018 on a Chinese launch vehicle. India is thus facing tough competition from China in expanding its space diplomacy in the region. The SAARC satellite could serve as an instrument to blunt the edge of China's plan to strengthen space cooperation with South Asian countries, including the Maldives and Sri Lanka. India's plan to set up a state-of-the-art satellite monitoring station in Vietnam has also attracted Chinese ire: China sees the satellite data reception, tracking and telemetry station in Ho Chi Minh City as a "clear cut attempt to stir up trouble in the disputed South China Sea region" and is concerned that the link-up of ground stations would give India a significant advantage there.
India is considered a leader in societal applications of space technology. It can play a role in capacity building for other developing countries in the use of space technology to solve local problems of land, water, forests and crops, among others, as ISRO has successfully demonstrated. Technological capacity-based diplomacy may very well hold the key to deepening India's relationships both regionally and internationally. India's space prowess must be used effectively as a tool of diplomacy and foreign policy, not only for regional capacity building and collaboration with developing nations but also for enhancing India's role in a global framework. Thus, India should continue its efforts to extend its space diplomacy.
Connecting the dots:
What is space diplomacy? Critically evaluate India's position in furthering its diplomatic relations through space diplomacy.
This article was originally published by the Global Trade Review. In August last year, UK company Provenance announced a scheme to track tuna on the blockchain. In this pilot, Indonesian fishermen sent a text on the company’s blockchain-based app every time they successfully reeled one in. The fish was automatically registered as a digital asset that had been caught legally and sustainably. From an ethical point of view, tuna is doubly problematic for consumers. Yellowtail tuna is often eaten in sushi and sashimi dishes, yet it is an endangered species that should only be fished sustainably. Also, a lot of the tuna we consume is caught by slaves. Greenpeace describes working conditions aboard fishing vessels “as among the worst in the world, and that includes tuna boats”. Provenance’s pilot uses blockchain technology to eradicate those risks. For one, the blockchain is immutable – which is a fancy computer science way of saying “can’t be changed”. The digital certification stays with the tuna fish until the point of consumption, when it obviously ceases to exist. This immutability means the “digital fish” cannot be duplicated, counterfeited or tampered with: its provenance is guaranteed. And because blockchain allows data to be entered, shared and viewed across the supply chain, its journey from line to plate is transparent and visible. What may sound like a quirky science project is actually hugely important work. This was one of the early signs of how blockchain will change supply chains in the years to come. Why is it needed? Banks and companies are under huge pressure from consumers to meet sustainability standards. Regulators are clamping down on trade-based money laundering practices, with the Hong Kong government, for one, establishing ground rules for tackling this systemic problem. Blockchain technology can be used to ensure goods are both sustainable and authentic. But perhaps more practically, using blockchain along with existing tech such as radio frequency identification (RFID), the internet of things (IoT), smart devices and GPS can help satisfy operational problems that plague supply chains everywhere. “I would say that track and trace technology is at the heart of what we offer. It’s an immediate problem that companies encounter every day: they lose track of goods, and when they finally get a bill from the logistics provider there are lots of charges, and they’ve no idea where they came from,” says Rebecca Liao, vice-president of business development and strategy at Skuchain, a California-based company that builds blockchain solutions for supply chains. In the trade finance industry, the chatter around blockchain has rightly been in line with the modernisation of an antiquated industry. At the GTR Australia Trade Forum in Sydney in May, Westpac’s global head of trade, Adnan Ghani, listed the three criteria blockchain must meet if it is to reach critical mass: instantaneous transactions, reduction in fraud, and being cheaper than existing proprietary technologies. The sector is delivering a sea of proof of concepts, but critical mass is a dot on the horizon. Much greater progress has been made on the physical supply chain. Banks’ growing role in financing these supply chains exposes them to such developments. They will inevitably be taken along for the ride. Indeed, some are already onboard. 
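The core idea described here, an append-only record whose entries cannot be altered without detection, can be illustrated in a few lines of code. The sketch below is a deliberately simplified, hypothetical hash chain, not Provenance's actual platform; the vessel name and record fields are invented. Each catch record carries the hash of the previous record, so changing any earlier entry breaks every later link.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a catch record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_catch(chain: list, catch: dict) -> list:
    """Append a catch record, linking it to the hash of the previous entry."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"catch": catch, "prev_hash": prev}
    record["hash"] = record_hash({"catch": catch, "prev_hash": prev})
    chain.append(record)
    return chain

def verify(chain: list) -> bool:
    """Recompute every hash; tampering with any earlier record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev_hash"] != prev:
            return False
        if rec["hash"] != record_hash({"catch": rec["catch"], "prev_hash": rec["prev_hash"]}):
            return False
        prev = rec["hash"]
    return True

ledger = []
append_catch(ledger, {"vessel": "KM Sinar Laut", "species": "yellowfin tuna", "kg": 42})
append_catch(ledger, {"vessel": "KM Sinar Laut", "species": "yellowfin tuna", "kg": 35})
print(verify(ledger))            # True
ledger[0]["catch"]["kg"] = 400   # attempt to rewrite history
print(verify(ledger))            # False
```

A real deployment would distribute copies of this chain across many parties, which is what makes retroactive edits detectable in practice rather than just in principle.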
Also at the Sydney forum, Digby Bennett, regional sales director at China Systems, described a project which used blockchain to ensure the authenticity of halal goods in the Middle East’s Islamic banking sector. These goods, financed through sharia-compliant processes, must pass through stringent checks in order to meet requirements, Bennett said. This is an arduous task that must be inspected at every port on the supply chain. China Systems worked with Emirates Islamic Bank to write a blockchain solution that allows them to share this information with Islamic banks on a ledger, via the Dubai Central Bank. That way, they can see which goods are compliant, what financing has been issued and which banks are involved. A real-world need, met using blockchain technology. Thomas Verhagen, senior programme manager at the Cambridge Institute for Sustainability Leadership (CISL) believes blockchain’s role in passing environmental and sustainability standards information up the value chain will help banks clean up their own portfolios. He says: “In exchange for correct entry of such data, it will be possible to offer services to participants that are upstream in a value chain. An example of this could be providing valuable information, as well as financing, to smallholders in agricultural supply chains in exchange for correct data entry in the distributed transaction ledger of that supply chain.” In the supply chain this process is well underway, covering goods from household to luxury. We spoke to those creating the most interesting projects to date. In the coffee industry, the demands on buyers and suppliers are high. Skinny-jeaned hipsters from Dalston to Williamsburg insist on exotic blends that must be sourced sustainably. And for the 125 million people that make a living growing coffee, it is essential that they get a fair wage. With this in mind, US company Bext360 created a machine that grades the coffee beans grown on plantations in Africa, at farm level. The machine combines with a blockchain-based app developed by Silicon Valley company Stellar to connect farmers with an instantaneous marketplace, and more control over the price they receive. Stellar co-founder Brit Yonge explains how it works. “In the coffee market the beans themselves aren’t priced until later on in the supply chain. They’re collected from the farmer and sent to the market, and actually graded later on. Bext thought: ‘How can we price these beans earlier on, upstream in the supply chain, so growers are capturing the value they’re grading?’” The machine analyses the beans at the farm, and makes the weight and grade available to both potential buyers and sellers via the blockchain-powered app. The pair can then negotiate a fair price. But when Bext360 figured out how to grade the beans without taking them to market, they were faced with another problem: the transfer of value. “That’s where we came in,” Yonge says. “Stellar is a blockchain solution. In this scenario, it’s a patented protocol that allows any asset to be represented as a token. Banks are excited by this because they can represent virtual reality currency as a token and not have to deal with a digital asset like bitcoin. In this case Bext were creative, they saw you can have tokens representing different grades of coffee. As the machine is assessing the quality of the beans, they can issue these tokens that essentially represent an IOU to the farmer. That’s the last part of the problem: how do you actually represent the value? 
“We worked with Bext to allow them to issue these tokens. The app puts these transactions on the blockchain… You’re aware of whose beans have been assessed in whatever way, and that they’ve been paid, which is a problem in some markets where people are being robbed. You know that this person produces so many grade A beans and should be compensated as such.” With legislation such as the UK Slavery Act in enforcement, this type of solution could offer buyers and lenders reassurance that their supply chains are clean of such practices. It’s the kind of real-use scenario that excites a nascent industry. Collin Thompson, co-founder of Hong Kong-based blockchain company Intrepid Ventures, tells GTR : “These coffee beans are going to Starbucks anyway. It gets 10,000 of these small growers and they don’t know where they came from… What if there was an insurgency that took those beans over and it got into the supply chain? You want to be able to know how it got into the supply chain and if people are getting paid fairly.” Arguably the most ambitious efforts we encountered in researching this article were in the US cotton industry. As with most physically traded goods, the paperwork is arduous. “For a 40-container shipment of cotton, you’re talking a three-inch binder full of paper. And that’s a small shipment of cotton,” says Mark Pryor, CEO of The Seam, a commodities software company based in Memphis, Tennessee. Traceability is another pain point. In order to work with high-profile buyers such as Levi Strauss, H&M and Gap, cotton must meet standards set by the Better Cotton Initiative – a multinational non-profit that promotes the use of organic cotton. These two needs in mind, The Seam has been working on a blockchain solution that will, in effect, “kill two birds with one stone”. “We’re going to have a consortium-based blockchain initiative that will be available from field to fabric and everywhere in between. That’s why we came up with the ‘cotton blockchain’, not the ‘agri blockchain’. This is specific to cotton, and the nuances and idiosyncrasies that commodity presents. We’ve had a very positive response from all areas of that supply chain, from retailers, to spinners, banks, freight forwarders, to producers, gins, warehouses, merchants and everyone else to come together and work with us to make this a reality,” Pryor explains. Of course, this won’t be the first usage of blockchain in the cotton industry – last year, the first cross-border transaction between banks using blockchain technology took place on a shipment of 88 bales of cotton from the US to China, involving Commonwealth Bank of Australia, Wells Fargo and Skuchain. However, Pryor describes The Seam’s work as more of an ecosystem than a solution, given that it will include all areas of the supply chain. Three pilots are set to be launched, having successfully moved through the proof of concept stage: one for smart contracts, one for physical shipment of cotton, and one to enable retailers to track and trace the finished product. The company has been developing the blockchain ledger in house and will start the three-pronged pilot in July, where it will be used to track and trace a real deal. Pryor explains: “A lot of the pilots out there aren’t real. They take one little subset of supply chains and say they’re done on blockchain, but it really didn’t prove that the technology worked. It was something that was done in a room or maybe somebody took a piece of data and walked it across to another office. That’s not what we want to do. 
“We want to do a real contract. We’ve already agreed with the parties, it can’t be disclosed at this point. But they’re major merchants, textile mills and players along the way. We’re going to do a real contract trade in July. Then we’re looking at the export shipment and documentation to process flow and all that, a digitised ledger for verifying information along the supply chain, that will happen in January.” The timescale is dictated by the nature of the industry: cotton trades typically happen in July, and this will be a real trade, conducted alongside trades done using traditional processes. Shipment occurs after the harvest, between November and January. Finally, the retailer will have visibility back over the supply chain by February, at which point they will have received a cargo of cotton, tracked from the farm, along the blockchain. According to Paul Sam, who leads Deloitte’s fintech practice in China, a blockchain project reaches critical mass when two to three players in an ecosystem move first. “It’s like a snowball effect,” he tells GTR For the “cotton blockchain”, critical mass is something it already has. The Seam is partly owned by Cargill, Olam and Louis Dreyfus, three commodity trading giants. The initiative already involves the biggest cotton exporters in the world. The Seam is also talking with banks such as BNP Paribas and HSBC about getting involved. If the pilots go well, Pryor says, the support is already there to roll it out on a fully operational basis. “Agri companies have an existential need,” says Collin Thompson at Intrepid Ventures. “There’s a compliance issue. If a bank launders money for Pablo Escobar, they pay a US$1bn fine and that’s it. But if you have a problem in the supply chain, in the food area, the government will shut you down!” This “existential need” means food companies cannot afford to get it wrong when it comes to sourcing their goods. According to Rebecca Liao at Skuchain, blockchain can be applied to pretty much every product in the agri food space to ensure quality control. Skuchain has been working with companies looking to improve their traceability across the board, from dairy to fruit. In one case, Skuchain was approached by a commercial bank that finances commodity trading. “Their problem is that their internal policy says you can only offer [the set] commodities price for this good. Take avocados for example,” she explains. The bank is only able to offer the quoted commodities price for standard avocados, but what happens when the farmer says his avocados are organically grown, or they’re non-GMO. The bank has no way of verifying the provenance of these avocados. What, for example, happens if they got mixed up with a batch of non-organic avocados? “They need some sort of track and trace technology that will allow them to offer a higher price for premium avocados and stop losing out on deals,” Liao says – and so Skuchain built them a solution that bolts onto their existing Enterprise Resource Planning (ERP) or Electronic Data Interchange (EDI) systems. She explains: “The farmer would be the starting point. Using the Skuchain mobile app, the farmer would apply a code to the shipment of avocados. They scan that code using the mobile app. As soon as they do that they can get an encrypted POP code, which stands for proof of provenance, onto the blockchain ledger. That indicates to the ledger: now we have this unit that has been recorded onto the blockchain. “There can be as much data as you want associated with that POP code. 
So the farmer would say it’s non-GMO, organically grown, the temperature he stores the avocados after they’re picked, their spoilage, etc. This can be done manually or using sensors that can provide this information – sometimes these sensors are more accurate than people.” The POP code stays with the avocado throughout the supply chain, from the farmer, to the truck, to the shipping company, until it ends up as guacamole on a brunch plate in Singapore or Melbourne. Liao continues: “We have unitisation technology on the back of the POP code, which means they can be subdivided, aggregated onto a master POP code. There are several ways you can manipulate this piece of encryption, but they will continue to track the goods all the way from point of origin to the hands of the consumer.” Meanwhile, the only thing people involved see is a smartphone app. The blockchain is invisible, in the same way that most people won’t be aware that they’re using SMTP for email. This is one of the major advantages of distributed ledger technology: the investment is at the top of the supply chain. Everybody else accesses it through a smartphone or laptop, meaning its rollout to remote farms and plantations in, say, Indonesia or Tanzania, is relatively simple. Diamonds (and wine) Having spent the early part of her career working to make objects more traceable through RFID technology, Leanne Kemp knows a thing or two about supply chains. When she saw what people were doing with cryptocurrencies a few years back, solving issues around visibility, double spend and secure transfer of value, she had a “eureka” moment. “Because I hadn’t come from a payments background, I looked at the technology in cryptocurrencies and started applying this to an object instead of a piece of money,” she tells GTR . By applying the same principles used in bitcoin platforms to physical assets, she saw how blockchain could be used in the physical commodities space. Everledger was born, and now Kemp is one of the most respected authorities on blockchain in the world. Everledger is at the cutting edge of blockchain-based track and trace. The company has devised solutions that ensure the authenticity of high-value goods, such as diamonds, wine and fine art. Each of these markets faces crises of authentication, after being plagued by issues around counterfeiting and ethics. There are said to be more bottles of New Zealand wine on the market today than have ever been manufactured, while the problems of blood diamonds are well-documented. “The marriage between provenance and procurement is a natural evolution towards transparency. With transparency comes sustainability both at an ethical trade level and from a financial point. Now we’re seeing governments pass legislation to ensure transparency and sustainability is incumbent upon directors and companies in the local market in the UK – you can reference the Slavery Act as one of those pointers,” she says. Most notably, Everledger created a global digital registry for diamonds, powered by blockchain. The platform digitally certifies diamonds traced through the Kimberley Process – the global initiative established to stop conflict diamonds from entering the mainstream market. Diamonds may be “dumb objects” [aka those inanimate objects which are not smart – unlike modern mobile technology], but they lend themselves to such a solution better than, say, slabs of copper which are indistinguishable from one another. 
Kemp explains: "The beautiful thing about white diamonds is the perfect bedrock, we can incarnate the physical object into the blockchain. We can extract 40 metadata points around an identity, and actually digitally incarnate that. Also, the diamond industry has control points. It uses certain types of science and scanning to then give the opinion of the expert, but also match that with machine."
While we've seen how blockchain can be used in areas such as food and textiles, Kemp says these supply chains are "complex" and perhaps don't lend themselves as naturally to the technology as diamonds. "We're very disciplined about what we do, and applying it to items like art and antiquities, things that have generations, that need to consider provenance. We're not really too excited about things like fish, or tracking perishable items, we think that can be best served by other companies."
Finbarr Bermingham is the Asia Editor of Global Trade Review, covering trade, development, politics, and economics.
© Global Trade Review. This article was originally published here.
Information On Pigs Are You a Potential Potbellied Pig Parent? Get your information on pigs here! While pigs have held a place of high status in Chinese and Eastern Asian cultures for centuries, there is a certain stigma attached to the pig in America. Where did all the ridiculous sayings related to pigs originate? “Sweat like a hog.” (Pigs are incapable of sweating.) “Dirty as a pig.” (Pigs are very clean, and if given the opportunity, will only use one corner of their pen as a toilette.) “Stink like a pig.” (Pigs have absolutely no odor.) Now, I can relate to the saying “Eat like a pig.” Pigs really smack their lips and chew with their mouths open…in other words, they are totally food possessed. Having household pet pigs is really nothing new. I’m always amazed at how many people I meet who had a childhood experience with a runt pig raised in the house. Pigs are very sociable, adaptable, hearty, clean, and intelligent. Their personality and appearance simply beckon many of us to become personally involved. Some pig enthusiasts own elaborate pig paraphernalia collections, while others make a pig or two or three a part of their families and lives. Outlined below is pig-related information that will assist you in making an educated decision about becoming a pig parent. Pot-Bellied Pig Behavioral Characteristics Pigs are social by nature. In their natural habitat they live in a group and a pecking order is established and maintained by body and verbal pig language. If a pig is irritated, she may throw her head in a side swiping motion, or she may scream loudly. (A contented pig ouffs around making quiet, satisfied noises that are very pleasing.) The important thing to remember is that you need to establish yourself at the top of the social hierarchy in your home or your pig will determine that she is “top pig” and dictate the rules of the roost. There is nothing worse than a pushy pig! Because pot-bellied pigs are social creatures, they may become bored and restless when they are expected to spend inordinate amounts of time alone without either human or other animal interactions. Hence, you need to be creative in providing a pet pig with entertaining distractions. You may even decide to adopt a pair o’ pigs to ensure that you never have a bored or lonely pet. In my opinion, pot-bellied pigs have very advanced communication skills. Examples of vocal communication include the “grunting” a mother pig emits while feeding her young; “barking” that warns of impending danger; and “squealing” in anticipation of eating or indicating displeasure or pain. Some individual sounds are: “Aroo” that means “You aren’t getting me what I want fast enough.” “Ha ha ha,” a quiet, hot panting that indicates acquaintanceship, a sociable “hello.” What I call a filth noise (similar to the sound your Uncle Charlie makes when trying to cough something up) means piggy is really P.O.’ed. Happy pot-bellied pigs seldom display body postures, as most are related to maintaining one’s station on the social ladder. However, a spoiled, challenged, or unhappy pig may change her ear set, throw her head, face off, or click her jaws in response to an unpleasant situation or another animal invading her territory. Pot-bellied pigs are curious by nature. They spend hours rooting in the ground (if given the opportunity) or snurddling about your home with their nose to the carpet or floor seeking out any stray tidbits of food. 
Their inquisitive nature can be advantageous when it comes time to train, as pigs will maintain a high level of attention when stimulated with new ideas and, of course, the primary motivator…FOOD! Man rates the pig as the fifth most intelligent animal with man ranking first, followed by monkeys, dolphins, whales, and pigs. Pet pigs function by instinct, intuition, and memory. While they have no innate sense of right or wrong and have no conscience, they learn quickly and don’t forget what they master. You need to stay one step ahead of your pig or she will train you to do exactly what suits her fancy. Pet pigs are much like children. They find your weak spot and manipulate until they get their way. If you give a pig an inch, she will most certainly take many miles. However, it is this very intelligence that appeals to many who fancy pigs. You can indeed nurture a very rewarding and interactive relationship with a pig, as a pig will treat you like an equal if given the opportunity. Never underestimate the ability of a pig. Pet pigs are affectionate animals. They love companionship and body closeness. Many pig owners actually allow their pig to share the bed and maintain that a porcine sleeping partner is not only warm and cuddly, but doesn’t wiggle, squirm, or hog the bed. The pot-bellied pig is a very sturdy animal with short legs, a slightly swayed back, a pendulous belly, a short tail ending with a flowing switch, short, erect ears, and a snout that varies from short and stubby to long and elegant. A potbellied pig continues to grow for at least two to three years. Current belief is that the average purebred (not crossbred), healthy, mature, three-year-old pot-bellied pig can weigh from 60 to 175 pounds and measure from 13 to 26 inches in height, with the length being proportional to the height. Certainly, there are a few potbellies who will be smaller or larger than this normal range. The weight of a pot-bellied pig is deceiving because they are so hard-bodied. A pig who measures 14″ tall by 24″ long and weighs 60 lbs. takes up very little space (about half the dimensions of an ottoman) and is a manageable size for a house pet and travel companion. Compare this size pig to a 100 lb. German Shepherd who is taller and longer than a coffee table, with an extension (the tail) that is capable of knocking everything off the coffee table. Granted, pigs are not as agile as the traditional dog or cat pet. A pot-bellied pig may need a ramp to assist in stair climbing and getting in and out of a car, but this is a simple task to accomplish. The pot-bellied pig has a keen sense of smell. Reports are that a pig can smell odors that are twenty-five feet under the ground. They are used to unearth such culinary delicacies as truffles for our eating pleasure, as well as sniff out drugs for law enforcement purposes. While a pig has excellent hearing capability, she does not see very well. Pot-Bellied Pig Life Expectancy Potbellied pigs have only been in the United States since 1986 so it is difficult to determine an average life span. Estimates in this regard are between fifteen and thirty years. I would tend to go with the fifteen-year prediction. If a pot-bellied pig is allowed to exercise regularly, is not overfed, and is examined and vaccinated annually by a veterinarian, she should live to a ripe old age. Both adult size and longevity are directly related to how the pig is cared for. Of course, genetics also plays an important role, but management is of utmost importance. 
Impulse buying a pot-bellied pig (or any pet, for that matter) is a bad idea. You need to totally acquaint yourself with the nature of the pig and your responsibilities as a pet pig owner. The fact that you are reading this website means that you are serious about educating yourself about pet pigs. Take the time to familiarize yourself with all aspects of the potbellied pig prior to adoption. Finding a Reputable Breeder “You get what you pay for” is definitely true when it comes to buying a pot-bellied pig. Of course, price is an issue; but you must pay close attention to the health, conformation, and lineage of your prospective pet. You can buy an unregistered, mismanaged, unsocialized, crossbred, unhealthy pig from a bad breeder for very little money; or adopt a happy, healthy, socialized, registered pot-bellied pig from a reputable breeder. A reputable breeder is also a valuable resource if problems arise and for developing contacts with other pig people. You will be way ahead of the game if you choose the latter approach. What you save in vet bills and heartache will be well worth the initial investment in a properly bred and handled piglet. When shopping for a pot-bellied pig, do not buy one at a swap meet or out of the back of a van at the corner truck stop. You are just asking for trouble. I don’t recommend getting a pig from a pet store either unless they can supply appropriate food and support information as well as the pig’s litter registration paper indicating the breeder. Don’t get caught up in the moment. Here’s the picture. You’re holding a cute and cuddly, pot-bellied pig that is three weeks old who is being touted as everything you could hope for. You are not given the opportunity to see the parents or littermates. You are told the circumstances surrounding the young, pre-weaning age piglet you are snuggling. “The mother got sick and couldn’t nurse her babies.”..WHY? “The piglet wouldn’t nurse, so was taken away from the litter and bottle-fed.”…WHY? Be wary of these kinds of stories. I can guarantee you that heartache is just around the corner. There are many misconceptions and much fraudulent information about teacup, micro-mini, and pocket pigs. Pet pigs are like people. They have a genetic disposition to become a certain size; but what and how much they eat, management, and environment all play roles in determining their adult size. It is unrealistic to expect a pig to stay tiny and inhumane to underfeed a pig hoping to keep it small. Adopt a pig from a reputable breeder. Ask your veterinarian to recommend one in your area. Locate a breeder and visit their facility. Are the surroundings clean and neat? Does the breeder have a good rapport with her pigs? Are the pot-bellied pigs you see in large enough pens with shelter, shade, and water? Are you allowed to see the parents of the piglet you are considering? You must insist upon seeing the sire and dam because the size and temperament of your prospective piglet’s parents are true indicators of what you can expect of your pig-a-rooter. Has the pot-bellied pig you are considering been weaned for at least one week, socialized, neutered, litter-box trained, and learned how to live with human house mates? These are all important issues. What you should see at a good breeding facility, is happy, healthy, tractable breeding stock, a few weaned pigs in the house for pre-adoption training, a clean and healthy environment both inside and outside, and pigs who respond to the breeder eagerly and with obvious affection. 
Piggy Comes Home
So, you did all your homework, found a reputable breeder, picked out a healthy, sound pot-bellied pig of your dreams, and everything is copasetic. For the ride home, I definitely recommend that you kennel your piglet. Hopefully, the breeder has desensitized Miss Piggy to the travel carrier or it may be a scary trip. If you take the precaution of putting your pot-bellied pig in the kennel, you won't need to worry about potty accidents or a flying pig who could cause a car crash. A kennel-savvy pig makes a lot of sense for future fun outings or trips to the vet, so you might as well get started on the right foot with crate-training. You need to locate a veterinarian in your area who has experience with pot-bellied pigs or is willing to learn. Your breeder should be able to put you in touch with a good one. Don't put this off! Have your new pig examined by a vet within the first week to make sure she is in good health. This will also serve as an introduction of your new family member to your veterinarian. If an emergency should arise and you haven't established a relationship with a D.V.M., you are putting your pot-bellied pig in real danger. Please call me if you are unable to locate a vet, and I will try to assist you. For more information on my book, check out my resources page.
Owning a Pet Pig
As you can see, I list far more advantages than disadvantages; but bear in mind that I am a pig person from way back. Owning a pot-bellied pig is similar to being a parent. Patience and love are required and it is not a responsibility to be taken lightly.
Advantages:
- Long life span (12-20 years)
- Clean and odor-free
- Non-allergenic in most cases
- Very little shedding
- Quickly trained: litter box, tricks, harness, etc.
- Non-destructive, unlike a puppy
- Low maintenance: annual vet visit, low feed consumption
- Communicative, affectionate and intelligent
Disadvantages:
- You may not be zoned to own a pig
- You may not have a vet available who knows how to treat pot-bellied pigs
- Pigs can become spoiled and manipulative
- Pigs require a commitment of time and energy from their owners
Republic of Ghana
- Motto: Freedom and Justice
- Anthem: God Bless Our Homeland Ghana
- Capital (and largest city): Accra
- Independence from the United Kingdom; Republic: July 1, 1960
- Area, total: 238,535 km² (92,098 sq mi) (81st)
- Population (2010 estimate): 24,233,431 (49th)
- GDP (PPP, 2010 estimate): total $61.973 billion (72nd); per capita $2,930 (127th)
- Note: Estimates for this country explicitly take into account the effects of excess mortality due to AIDS; lower life expectancy, higher infant mortality and death rates, lower population and growth rates, and changes in the distribution of population by age and gender. (July 2005 est.)
Ghana, officially the Republic of Ghana, is a country in West Africa. It borders Côte d'Ivoire to the west, Burkina Faso to the north, Togo to the east, and the Gulf of Guinea to the south. The word "Ghana" means "Warrior King." It was inhabited in pre-colonial times by a number of ancient kingdoms, including the Ga Adangbes on the eastern coast, inland Ashanti kingdom and various Fante states along the coast and inland. Trade with European states flourished after contact with the Portuguese in the 15th century, and the British established a crown colony, Gold Coast, in 1874. Upon achieving independence from the United Kingdom in 1957, the name Ghana was chosen for the new nation to reflect the ancient Empire of Ghana that once extended throughout much of western Africa.
Ghana gained its independence from British colonial rule under the leadership of Kwame Nkrumah, the anti-colonial leader who served as the first president. Army officers dissatisfied with Nkrumah's dictatorial ways deposed him in 1966. Flight Lieutenant Jerry Rawlings, who claimed the presidency in 1981, led the country through a transition to a democratic state that culminated with an historic election in 2000 in which the people rejected Rawlings's handpicked successor by choosing John Agyekum Kufuor as the president. Kufuor was reelected in 2004 for a second four-year term. Ghana is a Republic with a unicameral Parliament dominated by two main parties—the New Patriotic Party and the National Democratic Congress.
Over the course of nearly four hundred years, forts along the coastline of today's Ghana provided departure points for millions of West Africans who were loaded onto ships as slaves destined for plantations in the New World. In an exemplary gesture of reconciliation as Ghana prepared to celebrate its fiftieth anniversary of independence in 2007, the nation offered an apology to the descendants of those slaves for the role of black slave catchers in that cruel history, inviting them to reconnect with their ancestors' homeland.
The earliest recorded site of probable human habitation within modern Ghana was about 10,000 B.C.E. Pottery dating from the Stone Age (4,000 B.C.E.) was found near the capital city, Accra. Starting in the late thirteenth century, Ghana was inhabited by a number of ancient kingdoms, including an inland kingdom within the Ashanti Confederacy and various Fante states along the coast. Trade with European states flourished after contact with the Portuguese in the fifteenth century. One of the chief exports of the region was human slaves, more than six million of whom were shipped to plantations in the Americas. Millions more died during the overland march from inland areas to the coast, while imprisoned before loading, and on the ships crossing the Atlantic.
The west coast of Africa became the principal source of slaves for the New World, overshadowing trading for gold. As other nations moved in to participate in this lucrative trade, the Portuguese were edged out. The British finally gained the dominant position and established a colony, known as Gold Coast, in 1874. Once the United Kingdom granted independence, the name Ghana was chosen for the new nation, a reference to an empire of earlier centuries. This name is mostly symbolic, as the ancient Empire of Ghana was located to the north and west of current-day Ghana. But the descendants of that ancient empire migrated south and east and currently reside in Ghana. After Kwame Nkrumah was overthrown in 1966, a series of coups ended with the ascension to power of Flight Lieutenant Jerry Rawlings in 1981. Rawlings suspended the constitution in 1981 and banned political parties. A new constitution, restoring multiparty politics, was approved in 1992, and Rawlings was elected in free elections (which the opposition boycotted) that year, and in 1996. The constitution prohibited him from running for a third term. President John Agyekum Kufuor was first elected in 2000, defeating the hand-picked successor to Rawlings. He was reelected in 2004 for a four-year term. The 2000 election marked the first peaceful transfer of power in Ghana's history. Ghana is a Republic consisting of a unicameral Parliament and dominated by two main parties—the New Patriotic Party and National Democratic Congress. The capital of Ghana is Accra, with a population of 1.9 million people. Ghana is divided into ten regions, which are then subdivided into a total of 138 districts. The regions are as follows: Well endowed with natural resources, Ghana has twice the per capita output of the poorer countries in West Africa. Even so, Ghana remains heavily dependent on international financial and technical assistance. It receives about one billion United States dollars per year in foreign aid, a figure that accounts for ten percent of its gross domestic product (GDP). As one of the world's poorest countries, it was granted total debt cancellation by the Group of Eight in 2005. In his inauguration speech in 2005, President Kufuor reconfirmed his government's commitment to government accountability, capacity building, agricultural development, and privatization. Although the British have been the traditional main source of external aid, in 2006 China promised about 66 million U.S. dollars to fund development projects as part of its drive to open export markets and secure energy and mineral supplies. Ghana is Africa's second biggest exporter of gold, after South Africa. Timber and cocoa (introduced by the British) are other major sources of foreign exchange. Tourism is also a major source of income. Ghana is considered a transit hub for heroin and cocaine in the illegal drugs trade. The domestic economy continues to revolve around subsistence agriculture, which accounts for 40 percent of GDP and employs 60 percent of the work force, mainly as small landholders. Ghana borders the Ivory Coast to the west, Burkina Faso to the north, Togo to the east, and the Atlantic Ocean to the south. It is located on the Gulf of Guinea, only a few degrees north of the Equator. The coastline is mostly a low, sandy shore backed by plains and scrub and intersected by several rivers and streams. A tropical rain forest belt, broken by heavily forested hills and many streams and rivers, extends northward from the shore. 
North of this belt, the land is covered by low bush, parklike savanna, and grassy plains. The climate of Ghana is largely the outcome of huge dry continental air masses of the Sahara (the "Harmattan") meeting warm humid maritime air masses from the south. Ghana is divided into two distinct climatic zones by the Kwahu plateau. To the north, there are two distinct seasons—hot dry days with temperatures reaching 88 °F (31 °C) and cool nights in the winter, and warm rainy days in the summer. Rainfall averages between 29 to 39 inches (750 and 1000 mm) annually. To the south of the Kwahu, there are four distinct seasons with varying amounts of rainfall and generally warm average temperatures from 79 °F to 84 °F (26 °C-29 °C). The rainfall here ranges from 49 to 85 inches (1250 to 2150 mm) annually. Lake Volta, the world's largest artificial lake, extends through large portions of eastern Ghana and is the result of the massive hydroelectric dam completed in 1965 on the Volta River. Ghana is mainly comprised of black Africans which includes almost all Ghanians at 99.8 percent of the population. It is largely a tribal society. The major tribes are; Akan (44 percent), Moshi-Dagomba (16 percent), Ewe (13 percent), and Ga (eight percent). Europeans and others make up the remaining 0.2 percent of the population, which was counted at more than 22 million people in the 2005 census. English is the official language, however nine different languages—Akan, Dagaare/Wale, Dagbane, Dangme, Ewe, Ga, Gonja, Kasem, and Nzema—all enjoy the status of being government-sponsored languages. Perhaps the most visible (and most marketable) cultural contribution from modern Ghana is Kente cloth, which is widely recognized and valued for its colors and symbolism. Kente cloth is made by skilled Ghanaian weavers, and the major weaving centers in and around Kumasi (Bonwire is known as the home of Kente, though areas of Volta Region also lay claim to the title) are full of weavers throwing their shuttles back and forth as they make long strips of Kente. These strips can then be sewn together to form the larger wraps that are worn by some Ghanaians (chiefs especially) and are purchased by tourists in Accra and Kumasi. The colors and patterns of the Kente are carefully chosen by the weaver and the wearer. Each symbol woven into the cloth has a special meaning within Ghanaian culture. Kente is one of the symbols of the Ghanaian chieftains, which remains strong throughout the south and central regions of the country, particularly in the areas populated by members of the culturally and politically dominant Ashanti tribe. The Ashanti's paramount chief, known as the Asantehene, is perhaps the most revered individual in the central part of the country. Like other Ghanaian chiefs, he wears brightly colored Kente, gold bracelets, rings, and amulets, and is always accompanied by numerous attendants carrying ornate umbrellas (which are also a symbol of the chieftain). The most sacred symbol of the Ashanti people is the Golden Stool, a small golden throne in which the spirit of the people is said to reside. It is kept in safekeeping in Kumasi, the cultural capital of the Ashanti people and the seat of the Asantehene's palace. Though the chieftaincy across Ghana has been weakened by allegations of corruption and cooperation with colonial oppression, it remains a vital institution in Ghana. 
Because of their location, the northern regions of Ghana exhibit cultural ties with other Sahelian countries such as Burkina Faso, Mali, and northern Nigeria. Although those tribes are not indigenous to the area, there is strong Hausa and Mande influence in the culture of the northern Ghanaian peoples. The dominant tribe in this part of Ghana are the Dagomba. Northern Ghanaians are known for their traditional long flowing robes and musical styles that are distinct from those of the southern and central regions. Tuo Zaafi, made from pounded rice, is a specialty from this region that has become a staple across Ghana. The Larabanga mosque in Larabanga is the oldest mosque in the country and one of the oldest in West Africa, dating from the thirteenth century. It is an excellent example of the Sudanese architecture style; other examples include the Djenné Mosque in Mali and the Grand Mosque in Agadez, Niger. After independence, the Ghanaian music scene flourished, particularly the up-tempo, danceable style known as highlife, which is still played consistently at the local clubs and bars, often called spots. Many Ghanaians are adept drummers, and it is not unusual to hear traditional drum ensembles play at social events or performances. Hiplife, another genre of music in Ghana, is now in stiff competition with the more established highlife for airplay on local radio stations and at nightclubs. A movement that started in the mid 1990s, hiplife is a Ghanaian version of hip-hop rap music, with raps basically in the local dialects. Hiplife in present-day Ghana arguably represents youth culture in general. Slowly but surely, hiplife has surpassed "western music" in terms of airplay.
The literacy rate is 75 percent. Ghana has 12,630 primary schools, 5,450 junior secondary schools, 503 senior secondary schools, 21 training colleges, 18 technical institutions, two diploma-awarding institutions, and five universities. Most Ghanaians have relatively easy access to primary education, but lack of facilities limits the number who can advance. Education has been a top priority of the government. At the time of independence, Ghana had only one university and a handful of secondary and primary schools. Since the mid-1990s, Ghana's spending on education has been between 28 percent and 40 percent of its annual budget. Primary and middle school education is free and will become mandatory when a sufficient number of teachers and facilities are available to accommodate all students. Teaching is mainly in the English language.
All links retrieved June 21, 2017.
- The Parliament of Ghana Official site.
- National Commission on Culture Official site.
- BBC Country Profile - Ghana
- CIA World Factbook - Ghana
- US State Department—Ghana Includes Background Notes, Country Study and major reports.
<urn:uuid:52101509-1ee3-45e7-b668-dba3065b57f7>
CC-MAIN-2021-43
https://www.newworldencyclopedia.org/entry/Ghana
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587655.10/warc/CC-MAIN-20211025061300-20211025091300-00230.warc.gz
en
0.94974
3,070
3.015625
3
Pain Management: Non-Opioid Medications
Given the disadvantages of opioids, including side effects and potential for addiction, non-opioid medications play an important role in pain management. The management of acute and chronic pain often includes opioid therapy. In both the acute and chronic pain settings, however, opioids have several disadvantages including risk of nausea and vomiting, somnolence, constipation, respiratory depression, androgen deficiency, physical dependence, and tolerance. Opioid medications also carry a risk of abuse or addiction by either the patient or non-medical users. For these reasons, consideration of non-opioid strategies for pain management is beneficial. While opioids will certainly continue to have a place in pain management despite their disadvantages, the use of non-opioid medication options may limit the amount of opioid necessary or even result in improved pain control. In fact, given that the majority of both acute and chronic pain is thought to be complex and multifactorial, a multimodal analgesic approach is ideal for management. The purpose of this article is to review selected non-opioid medications used in either acute or chronic pain management.
Acute Pain Management
IV Acetaminophen (Ofirmev)
While oral and rectal acetaminophen have been available for quite some time, in 2010 an intravenous (IV) formulation was approved by the FDA. IV acetaminophen (Ofirmev) is indicated for use in management of mild to moderate pain and moderate to severe pain with adjunctive opioid analgesics.1 When studied as an adjunct to opioids following major surgery, IV acetaminophen demonstrated superiority over placebo in decreasing pain scores.2,3 IV acetaminophen has also been shown to decrease opioid consumption in major surgery by nearly one-third compared with placebo.2 The most common adverse effects seen with IV acetaminophen were constipation, nausea, injection site pain, pruritus, and vomiting.2 For adults and adolescents weighing greater than 50 kg, the recommended dosage of IV acetaminophen is 1000 mg every 6 hours or 650 mg every 4 hours, with a maximum single dose of 1000 mg.1 For adults and adolescents weighing under 50 kg as well as children ≥2 to 12 years old, the recommended dosing is 15 mg/kg every 6 hours or 12.5 mg/kg every 4 hours to a maximum of 75 mg/kg per day.1 As with other acetaminophen formulations, care should be taken to avoid exceeding the recommended maximum dose of 4000 mg per day to prevent potentially fatal hepatic injury. No benefit over oral or rectal acetaminophen has been demonstrated at this time; therefore, use of IV acetaminophen would most likely be reserved for those patients who are unable to tolerate oral medications.
IV Ibuprofen (Caldolor)
With ongoing drug shortage concerns for ketorolac,4 IV ibuprofen (Caldolor) may begin to see increased usage. Approved in 2009, IV ibuprofen is indicated for management of mild to moderate pain and moderate to severe pain as an adjunct to opioid analgesics in adult patients.5 Similar to IV acetaminophen, IV ibuprofen has been shown to decrease pain scores and opioid usage in studies evaluating postoperative pain.6,7 The dosing for IV ibuprofen is 400 mg to 800 mg every 6 hours as necessary with a maximum of 3200 mg per day.5 The product must be diluted prior to administration and then infused over a period of 30 minutes, which is a disadvantage compared with ketorolac, which is available in prefilled syringes and single-dose vials for IV push or IM administration.
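As a purely illustrative aside, the weight-based ceilings quoted above lend themselves to a few lines of arithmetic. The following minimal Python sketch only encodes the per-day maximums stated in this article (4000 mg/day of IV acetaminophen for patients over 50 kg, 75 mg/kg/day for lighter patients, and 3200 mg/day of IV ibuprofen); the function names and the boundary handling at exactly 50 kg are assumptions of the sketch, and it is not clinical software or dosing guidance.

```python
# Illustrative only: encodes the daily ceilings quoted in this article.
# Not clinical guidance; handling of exactly 50 kg is an assumption.

def max_daily_iv_acetaminophen_mg(weight_kg: float) -> float:
    """Article-stated daily ceiling: 4000 mg if over 50 kg, else 75 mg/kg."""
    if weight_kg > 50:
        return 4000.0
    return 75.0 * weight_kg

def max_daily_iv_ibuprofen_mg() -> float:
    """Article-stated daily ceiling for adults: 3200 mg."""
    return 3200.0

if __name__ == "__main__":
    for weight in (40, 70):
        ceiling = max_daily_iv_acetaminophen_mg(weight)
        print(f"{weight} kg patient: acetaminophen ceiling {ceiling:.0f} mg/day")
    print(f"Adult ibuprofen ceiling: {max_daily_iv_ibuprofen_mg():.0f} mg/day")
```

For a 40 kg patient the sketch returns 3000 mg/day (75 mg/kg), and for a 70 kg patient it returns the fixed 4000 mg/day cap, matching the figures in the text.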
As with all other non-steroidal anti-inflammatory drugs (NSAIDs), caution should be used when considering IV ibuprofen in patients with heart failure or kidney impairment and in those with a history of gastrointestinal bleeding, due to the risk of serious cardiovascular and gastrointestinal events. Of note, IV ibuprofen is contraindicated for the treatment of perioperative pain in the setting of coronary artery bypass graft surgery.5 Compared with ketorolac, which is limited to a usage of 5 days, IV ibuprofen does not have a limit on duration of use, although one would expect this formulation to be limited to time periods when patients are unable to tolerate oral medications. An important medication safety consideration is the availability of another ibuprofen formulation for injection, ibuprofen lysine, for use in the closure of patent ductus arteriosus in premature infants. Given the differing indications and dosing between these 2 IV formulations of ibuprofen, inadvertent substitution of these products could result in patient harm. Another important safety concern to note is that in September 2012, Cumberland Pharmaceuticals, the manufacturer of IV ibuprofen, issued a statement recommending that only Baxter Viaflex and Hospira 250 mL bags be used when diluting the product due to reports received indicating possible incompatibility with B Braun PAB, Hospira VisIV, and Baxter AVIVA bags.8
Chronic Pain Management
As noted above, a multimodal approach to pain management is often considered ideal, especially in the setting of chronic pain, where use of long-term opioids can increase the risk of many medication-related problems. Below, several non-opioid medication options for use in both nociceptive and neuropathic pain are reviewed. Gabapentin and pregabalin (Lyrica) have established efficacy and are typically considered first-line medications in various types of neuropathic pain.9 Gabapentin is initially started at a lower dose (300 to 600 mg per day) to limit side effects such as drowsiness and dizziness and titrated as tolerated to an effective dosage typically considered to be between 1800 and 3600 mg per day. In 2011 a once-daily gabapentin formulation (Gralise) was approved.10 This product was intended to overcome the dose-limiting side effects of drowsiness and dizziness often seen with regular-release gabapentin by allowing for plasma levels to peak overnight. Currently the once-daily gabapentin formulation is approved for the management of postherpetic neuralgia and has a recommended dose titration to reach a daily dose of 1800 mg within 2 weeks.10 There is no evidence that the once-daily gabapentin formulation confers better tolerability compared with regular-release gabapentin; however, it may be a reasonable option in patients who are unable to reach effective doses of regular-release gabapentin due to side effects. Other anticonvulsants such as lamotrigine, lacosamide, topiramate, carbamazepine, oxcarbazepine, and valproic acid have been studied in the setting of neuropathic pain but, due to limited evidence, are typically considered only when patients have failed multiple other agents.9
Serotonin and norepinephrine reuptake inhibitors
Duloxetine (Cymbalta) is FDA approved for management of diabetic peripheral neuropathy, fibromyalgia, and chronic musculoskeletal pain.11 A desirable effect of duloxetine in the setting of chronic pain is thought to be improvement in depression.
Duloxetine is typically dosed for painful conditions as 30 mg once daily and then titrated to 60 mg once daily after 1 week if tolerated.9 The most common adverse effect seen with duloxetine is nausea.11 Due to reported cases of hepatic failure with use of duloxetine, its use in patients with hepatic impairment or alcohol abuse is not recommended.11 Venlafaxine has also demonstrated efficacy in the setting of diabetic peripheral neuropathy9 and is available generically, whereas duloxetine is still available only as a brand-name medication. Tricyclic antidepressants (TCAs) such as amitriptyline, desipramine, and nortriptyline have shown benefit in the setting of postherpetic neuralgia, diabetic peripheral neuropathy, post-stroke pain, and polyneuropathy.12 These agents are often preferred due to low cost; however, their use may be limited by their anticholinergic side effects (xerostomia, constipation, urinary retention) and potential for cardiac toxicity. Because of these potential side effects, caution is advised for use in elderly patients. A desirable effect of these agents is improvement in depression and sleep disruption, common problems among chronic pain patients. Amitriptyline, desipramine, and nortriptyline are all initially dosed as 25 mg at bedtime and increased by 25 mg every 3 to 7 days as tolerated to a maximum of 150 mg at bedtime.9 TCAs used in the setting of chronic pain are typically increased until pain is adequately controlled or side effects occur. As with many agents used in chronic pain, an adequate trial with TCAs is considered to be several weeks. Various formulations of topical diclofenac are available, including Voltaren gel, Pennsaid solution, and Flector patch, and are used in the setting of osteoarthritis or musculoskeletal pain. In clinical practice, these agents are often considered when there is a contraindication to oral NSAID therapy, such as cardiovascular disease, kidney impairment, or history of gastrointestinal bleed, as the systemic absorption of diclofenac with these formulations is low. For example, the amount of diclofenac that is systemically absorbed from Voltaren gel is on average 6% of the systemic exposure from an oral form of diclofenac.13 Voltaren gel is approved for the relief of the pain of osteoarthritis of joints such as the knees and those of the hands but was not evaluated for use on joints of the spine, hip, or shoulder.13 Recommended dosing for Voltaren gel is 4 grams to the affected area 4 times daily on joints of the lower extremities and 2 grams to the affected area 4 times daily to joints of the upper extremities.13 Pennsaid is indicated for management of osteoarthritis of the knees only and its recommended dose is 40 drops on each painful knee 4 times a day.14 Flector patch is dosed as 1 patch to the painful area twice daily and is indicated for acute pain due to minor strains, sprains, and contusions.15 No direct comparison between the various topical diclofenac formulations has been performed, and in clinical practice the choice of an agent is often left to patient preference for a particular dosage formulation: gel, solution, or patch.
Dr. McKnight is a clinical pharmacist at the University of North Carolina Hospitals Pain Management Center in Chapel Hill, North Carolina.
- Ofirmev [package insert]. San Diego, CA: Cadence Pharmaceuticals, Inc; 2010. - Sinatra RS, Jahr JS, Reynolds LW, Viscusi ER, Groudine SB, Payen-Champenois C.
Efficacy and safety of single and repeated administration of 1 gram intravenous acetaminophen injection (paracetamol) for pain management after major orthopedic surgery. Anesthesiology. 2005;102(4):822-831. - Memis D, Inal M, Kavalci G, Sezer A, Sut N. Intravenous paracetamol reduced the use of opioids, extubation time, and opioid-related adverse effects after major surgery in intensive care unit. J Crit Care. 2010;25(3):458-462. - FDA website. Accessed December 14, 2012. - Caldolor [package insert]. Nashville, TN: Cumberland Pharmaceuticals; 2009. - Singla N, Rock A, Pavliv L. A multi-center, randomized, double-blind placebo-controlled trial of intravenous-ibuprofen (IV-ibuprofen) for treatment of pain in post-operative orthopedic adult patients. Pain Med. 2010;11:1284-1293. - Kroll PB, Meadows L, Rock A, Pavliv L. A multicenter, randomized, double-blind, placebo-controlled trial of intravenous ibuprofen (IV-ibuprofen) in the management of postoperative pain following abdominal hysterectomy. Pain Pract. 2011;11(1):23-32. - Caldolor. Cumberland Pharmaceuticals, Inc, website. Accessed December 14, 2012. - Dworkin R, O’Connor AB, Backonja M, et al. Pharmacologic management of neuropathic pain: evidence-based recommendations. Pain. 2007;132(3):237-251. - Gralise [package insert]. Menlo Park, CA: Depomed Inc; 2012. - Cymbalta [package insert]. Indianapolis, IN: Eli Lilly and Company; 2012. - Sindrup SH, Otto M, Finnerup NB, Jensen TS. Antidepressants in the treatment of neuropathic pain. Basic Clin Pharmacol Toxicol. 2005;96:399-409. - Voltaren gel [package insert]. Chadds Ford, PA: Endo Pharmaceuticals, Inc; 2009. - Pennsaid [package insert]. Mansfield, MA: Covidien; 2010. - Flector patch [package insert]. Mission, KS: Pfizer Inc; 2011.
<urn:uuid:9a8aeeea-b6e4-4aaa-a4b7-56bd85a9b259>
CC-MAIN-2021-43
https://www.pharmacytimes.com/view/pain-management-non-opioid-medications
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00150.warc.gz
en
0.906107
2,792
2.5625
3
to say that, three out of five men who went with them were too badly frost-bitten to continue the journey.* In spite of all, however, they reached Will's Creek, on the 6th of January, well and sound. During the absence of the young messenger, steps had been taken to fortify and settle the point formed by the junction of the Monongahela and Alleghany; and, while upon his return, he met "seventeen horses, loaded with materials and stores for a fort at the Fork of the Ohio," and, soon after, "some families going out to settle." These steps were taken by the Ohio Company; but, as soon as Washington returned with the letter of St. Pierre, the commander on French Creek, and it was perfectly clear that neither he nor his superiors meant to yield the West without a struggle, Governor Dinwiddie wrote to the Board of Trade, stating that the French were building another fort at Venango, and that in March twelve or fifteen hundred men would be ready to descend the river with their Indian allies, for which purpose three hundred canoes had been collected; and that Logstown was then to be made head-quarters, while forts were built in various other positions, and the whole country occupied. He also sent expresses to the Governors of Pennsylvania and New York, calling upon them for assistance; and, with the advice of his council, proceeded to enlist two companies, one of which was to be raised by Washington, the other by Trent, who was a frontier man. This last was to be raised upon the frontiers, and to proceed at once to the Fork of the Ohio, there to complete in the best manner, and as soon as possible, the fort begun by the Ohio Company; and in case of attack, or any attempt to resist the settlements, or obstruct the works, those resisting were to be taken, or if need were, killed.‡ While Virginia was taking these strong measures, which were fully authorized by the letter of the Earl of Holdernesse, Secretary of State,|| written in the previous August, and which directed the Governors of the various provinces, after representing to those who were invading his Majesty's dominions the injustice of the act, to call out the armed force of the province, and repel force by force; while Virginia was thus acting, Pennsylvania was discussing the question, whether the French were really invading his Majesty's dominions,—the Governor being on one side, and the Assembly on the other,*—and New York was preparing to hold a conference with the Six Nations, in obedience to orders from the Board of Trade, written in September, 1753. These orders had been sent out in consequence of the report in England, that the natives would side with the French, because dissatisfied with the occupancy of their lands by the English; and simultaneous orders were sent to the other provinces, directing the Governors to recommend their Assemblies to send Commissioners to Albany to attend this grand treaty, which was to heal all wounds. New York, however, was more generous when called on by Virginia, than her neighbor on the south, and voted, for the assistance of the resisting colony, five thousand pounds currency. It was now April, 1754.
* Sparks' Washington, ii. 55. † Gist's Journal of this Expedition may be found in the Massachusetts Historical Collections, third series, vol. v. (1836), 101 to 108. ‡ Sparks' Washington, vol. ii. pp. 1, 431, 446.—Sparks' Franklin, vol. iii. p. 254. || Sparks' Franklin, vol. iii. p. 251, where the letter is given.
The fort at Venango was finished, and all along the line of French Creek troops were gathering; and the wilderness echoed the strange sounds of a European camp, the watchword, the command, the clang of muskets, the uproar of soldiers, the cry of the sutler; and with these were mingled the shrieks of drunken Indians, won over from their old friendship by rum and soft words. Scouts were abroad, and little groups formed about the tents or huts of the officers, to learn the movements of the British. Canoes were gathering, and cannon were painfully hauled here and there. All was movement and activity among the old forests, and on hill-sides, covered already with young wild flowers, from Lake Erie to the Alleghany. In Philadelphia, meanwhile, Governor Hamilton, in no amiable mood, had summoned the Assembly, and asked them if they meant to help the King in the defence of his dominions; and had desired them, above all things, to do whatever they meant to do, quickly. The Assembly debated, and resolved to aid the King with a little money, and then debated again and voted not to aid him with any money at all, for some would not give less than ten thousand pounds, and others would not give more than five thousand pounds; and so, nothing being practicable, they adjourned upon the 10th of April until the 13th of May.|| In New York, a little, and only a little better spirit, was at work; nor was this strange, as her direct interest was much less than that of Pennsylvania. Five thousand pounds indeed was, as we have said, voted to Virginia; but the Assembly questioned the invasion of his Majesty's dominions by the French, and it was not till June that the money voted was sent forward.* The Old Dominion, however, was all alive. As, under the provincial law, the militia could not be called forth to march more than five miles beyond the bounds of the colony, and as it was doubtful if the French were within Virginia, it was determined to rely upon volunteers. Ten thousand pounds had been voted by the Assembly; so the two companies were now increased to six, and Washington was raised to the rank of lieutenant colonel, and made second in command under Joshua Fry. Ten cannon, lately from England, were forwarded from Alexandria; wagons were got ready to carry westward provisions and stores through the heavy spring roads; and everywhere along the Potomac men were enlisting under the Governor's proclamation, which promised to those that should serve in that war, two hundred thousand acres of land on the Ohio,—or, already enlisted, were gathering into grave knots, or marching forward to the field of action, or helping on the thirty cannon and eighty barrels of gunpowder, which the King had sent out for the western forts. Along the Potomac they were gathering, as far as to Will's creek; and far beyond Will's creek, whither Trent had come for assistance, his little band of forty-one men was working away, in hunger and want, to fortify that point at the Fork of the Ohio, to which both parties were looking with deep interest.
|| Sparks' Franklin, vol. iii. pp. 254, 263. * Massachusetts Historical Collections, first series, vol. vii. pp. 72, 73, and note.
The first birds of spring filled the forests with their song; the redbud and dogwood were here and there putting forth their flowers on the steep Alleghany hill-sides, and the swift river below swept by, swollen by the melting snows and April showers; a few Indian scouts were seen, but no enemy seemed near at hand; and all was so quiet, that Frazier, an old Indian trader, who had been left by Trent in command of the new fort, ventured to his home at the mouth of Turtle creek, ten miles up the Monongahela. But, though all was so quiet in that wilderness, keen eyes had seen the low entrenchment that was rising at the Fork, and swift feet had borne the news of it up the valley; and, upon the 17th of April, Ensign Ward, who then had charge of it, saw upon the Alleghany a sight that made his heart sink,—sixty batteaux and three hundred canoes, filled with men, and laden deep with cannon and stores. The fort was called on to surrender; by the advice of the Half-king, Ward tried to evade the act, but it would not do; Contrecœur, with a thousand men about him, said "Evacuate," and the ensign dared not refuse. That evening he supped with his captor, and the next day was bowed off by the Frenchman, and, with his men and tools, marched up the Monongahela. From that day began the war.
Sparks' Washington, vol. i. The number of French troops was probably overstated, but to the captives there seemed a round thousand. Burk, in his history of Virginia, speaks of the taking of Logstown by the French; but Logstown was never a post of the Ohio Company as he represents it, as is plain from all contemporary letters and accounts. Burk's ignorance of Western matters is clear in this, that he says the French dropped down from Fort Du Quesne to Presqu'ile and Venango; they, or part of them, did drop down the Ohio, but surely not to posts, one of which was on Lake Erie, and the other far up the Alleghany! In a letter from Captain Stobo, written in July, 1754, at fort Du Quesne, where he was then confined as hostage under the capitulation of Great Meadows, he says there were but two hundred men in and about the fort at that time. (American Pioneer, i. 236.—For plan of Forts Du Quesne and Pitt, see article in Pioneer; also, Day's Historical Collections of Pennsylvania, 77.)
WAR OF 1754 TO 1763.
Washington was at Will's Creek, (Cumberland,) when the news of the surrender of the Fork reached him. He was on his way across the mountains, preparing roads for the King's cannon, and aiming for the mouth of Red Stone Creek, (Brownsville,) where a store-house had been already built by the Ohio Company; by the 9th of May, he had reached Little Meadows, on the head waters of a branch of the Youghiogany, toiling slowly, painfully forward, four, three, sometimes only two miles a day! All the while from traders and others he heard of forces coming up the Ohio to reinforce the French at the Fork, and of spies out examining the valley of the Monongahela, flattering and bribing the Indians. On the 27th of May he was at Great Meadows, west of the Youghiogany, near the foot of Laurel Hill, close by the spot now known as Braddock's Grave.
He had heard of a body of French somewhere in the neighborhood, and on the 27th, his former guide, Gist, came from his residence beyond Laurel Hill, near the head of Red Stone Creek, and gave information of a body of French who had been at his plantation the day before. That evening from his old friend the Half-king, he heard again of enemies in the vicinity. Fearing a surprise, Washington at once started, and early the next morning attacked the party referred to by the Chief of the Iroquois. In the contest ten of the French were killed, including M. de Jumonville, their Commander; of the Americans but one was lost. This skirmish France saw fit to regard as the commencement of the war, and in consequence of a report made by M. de Contrecœur to the Marquis Du Quesne, founded upon the tales told by certain of Jumonville's men who had run away at the first onset, it has been usual with French writers to represent the attack by Washington as unauthorized, and the party assailed by him as a party sent with peaceable intentions; and this impression was confirmed by the term "assassination of M. de Jumonville," used in the capitulation of Great Meadows in the following July; this having been accepted by
<urn:uuid:0138f556-8abd-4fa8-b68f-3f3a162527c9>
CC-MAIN-2021-43
https://books.google.co.nz/books?pg=PA63&vq=Major&dq=related:UOM39015066464473&lr=&id=bTVAAAAAYAAJ&output=html_text
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585348.66/warc/CC-MAIN-20211020183354-20211020213354-00710.warc.gz
en
0.986869
2,563
2.640625
3
Chapter 2: Natural Law & the Divine Right of Kings The oldest laws would have to have been laws of nature. These laws were transmitted to early society through the understanding of the storms, the rain, the seasons, and by observing nature. The advent of law would only have occurred after a shift from a tribal matrilineal family to the later patriarchal “civilization.” This is when slavery would have started and therefore fines and punishments were fixed to offenses against the ruling king. The king himself was a newer implement, which did not begin until around the time of the rise of the first Sumerian civilization, followed by Sargon of Akkad. The kings of Sumer were the first father-based lineage, and this was when writing started, during the Ur-III period, c. 3200 BC. The kings took their right to rule from the idea that they personally descended from the gods, therefore the king’s law was god’s law. This was called the “divine right of kings.” Thus, religion and law were inseparable since the beginning of the written law. This is explored in depth in my book, Ancient Psychedelia: Alien Gods and Mushroom Goddesses. Most laws were originally handed down by people who worked within the priesthood to control the masses using their higher education. First, these people were like shamans within the tribe, but as the tribes grew, they took on larger roles, and eventually became a priesthood, having all knowledge of drug use and the spiritual laws of nature. Several quotes in this chapter are taken from a book titled, The Rape of Justice: America’s Tribunals Exposed by Eustace Mullins. Mullins was descended from William Mullins whose name is on the Mayflower Compact. He is no stranger to the legal system. Mullins has authored many groundbreaking books and has been brought up on numerous charges and harassed continuously by the legal system. He has represented himself in court for most of his life and has a lot to teach the student of Freedom. However, his religious beliefs prevented him from seeing the greater picture sometimes, and like many Christians, he believed mythological propaganda. See Curse of Canaan. In the second chapter of The Rape of Justice, titled “The Origin of Law,” Mullins writes: “In previous civilizations, the law was not only regarded as a fixed power; it was deemed to originate in the heavens, and in godly rule. We find in the Cairo Museum, a nineteenth century B.C. papyrus, the “Hymn to Amen Ra”: “Hail to thee, Ra, Lord of Law; father of the gods, maker of men.” Civilized nations have generally acknowledged that the ultimate source of law and its authority is the will of God, and it was codified in scripture. In Isaiah 2;3, “The Law shall come forth from Zion.” In Micah, 4;2, “The Law shall go forth from Zion.” Isaiah 51 declares, “Thus saith the Lord; Harken unto Me, ye that know righteousness, the people in whose heart is My Law; fear ye not the reproach of men.” (1) Zion translates to “sin.” Mount Zion was the mountain of the “moon-god” Sin. Mount Sin-ai or Mount Zion. In truth, however, this mountain was not a moon-god mountain at all. It was really a bull-god mountain where the mushroom which sprung from the bull’s dung was worshipped. The reason being, for hundreds of years now, this bull god was mistaken for a moon god due to the crescent above his head. This crescent has been mistaken for the moon, when in reality, it represents the bull’s horns, based on the direction facing up and not from the side. 
All of law evolved from the use of the mushroom and men deciding to take it upon themselves to interpret nature’s laws themselves and then impose those laws upon their slaves. Once again, all of this is covered extensively in my book Ancient Psychedelia: Alien Gods and Mushroom Goddesses. Then in the next paragraph, Mullins continues: “Sir William Blackstone, in his Commentaries, a primary source in the English common law, states a profound belief in the origin of law: ‘When the Supreme Being formed the universe, and created matter out of nothing, he impressed certain principles upon that matter, from which it can never depart, and without which it would cease to be’.” (2) (1) Rape of Justice: America’s Tribunals Exposed, Eustace Mullins, 1989, p. 16-17 (2) ibid, p. 17 The first written laws that have been uncovered are from 2100-2050 BC, called the Code of Ur-Nammu. They imposed fines of monetary compensation for bodily damage as opposed to the later lex talionis (‘eye for an eye’) principle of Babylonian law. The crimes of murder, robbery, adultery and rape were capital offenses. There was already differentiation between the freeman and the slave and matrimony rules were already in place. The prologue of the Code of Ur-Nammu, invokes the deities for Ur-Nammu’s kingship, Nanna and Utu, and decrees “equity in the land.” The next oldest known law would be the Code of Hammurabi, only three centuries later, in 1754 BC. It consists of 282 laws and differentiates between classes of freemen and slaves and men and women. Nearly half of the code deals with matters of contract, the terms of the transaction and liability for damages. One third deals with issues of family relations such as inheritance, divorce, paternity and reproduction. There is a rule for judges relating to altering decisions after being written down and issues of military service. A covenant is a contract and the Ten Commandments are the Hebrew people’s contract with God, just as the Law of Hammurabi was the law of the ancient Babylonians. I will continue to quote from the second chapter of The Rape of Justice: “The law was codified by the jurists of England, principally by Coke and Blackstone, as the English common law. It was later transformed, after having been brought across the Atlantic Ocean by English colonists, as The Constitution of the United States. ….. The history of civilization has always been marked by the clearly defined milestones of codified law. In 2250 B.C. (actually 1754 BC, ed. note), the code of Hammurabi was promulgated ‘to establish law and justice in the land.’ “We have also been greatly influenced by Roman jurisprudence, which were administered as the ruling code of the world for some thirteen hundred years. Kent’s Commentaries, the principle legal textbook for American lawyers throughout the nineteenth century, notes, Vol. I, page 556: “The great body of Roman or civil law was collected and digested by order of the Roman Emperor Justinian, in the former part of the sixth century… It exerts a very considerable influence upon our own municipal law.” Mullins continues: “The Roman jurists developed the principles of “jus naturale,” that is, a code of laws which reflected the laws of nature and the natural order. In his Commentaries, Blackstone expands upon this “law of nature.” – “Law of nature — the Will of his maker is called the Law of Nature, being coeval with mankind, and directed by God Himself as a course superior in obligation to any other. 
It is binding all over the globe in all countries and at all times; no human laws are of any validity, in contrary to this.” Blackstone also writes that: “Revealed Law is only scripture. Upon these two foundations, one, the law of nature, and two, the Law of Revelation, depend all human laws; that is to say, no human law should be suffered to contradict them.” (3) Roman Civil Law Mullins expands on the origin of the Romans and Roman Law: “Founded by Romulus in 753 B.C., Rome became a Republic in the year 509, after the expulsion of the Etruscan kings. In 450 B.C., the Laws of the Twelve Tablets were formulated. The earliest Roman law was the Jus Quiritium, developed by the Quirites, who were the first families of the Republic. As patricians, the Quiritium Law was developed primarily to protect their families and their property. These families were known as gentes, or the clans. Their descendants have since been known to history as “gentlemen,” as contrasted to the less distinguished masses, or plebs, as the freedmen or non-gentiles were known. … The privileges arrogated by the First Families, the gentlemen, became a source of constant criticism and contention from the plebs. In fact, ancient Rome soon developed into two groups which have remained fairly constant for three thousand years, the older families, which held the majority of property, and the masses…. The essential difference between the two classes was that the patricians knew who their parents were, and the plebs, who paid little attention to such niceties, did not. Because of their family records, the patricians were able to hand down their property to their heirs, while the plebs, even if they prospered, had no family records to protect their holdings (3) ibid. p. 20 Con’t: “The fundamental distinction led to the demands of the plebs that the government intervene to support them, demands which, twenty-five centuries later, led to the Communist Manifesto, and Karl Marx’s demand that all inheritance be abolished. In the United States this precept of Communism was enshrined in punitive inheritance taxation and income taxes.” “Emboldened by their increasing numbers, the plebs began to demand more and more “rights” for themselves. The issuance of the Twelve Tablets marked a watering down of the original Jus Quiritium. The process was greatly enhanced with the Jus Civile, at the establishment of the Republic. Our “civil law” derives its name from the century long struggles between the patricians and the plebs, when the plebs insisted upon a law which granted them more privileges as “civil” laws. In 471 B.C., the plebs celebrated their final triumph, with the establishment of the “tribuns,” as the expression of their newfound political power. Thus, the patrician age in Rome lasted a scant three hundred years, a short period in the long history of Rome. Nevertheless, much of the power and organization of Rome continued to be based on the stern precepts of its founding patricians, just as much of the protection afforded to its citizens in the United States by the Constitution has been laid down by the stern precepts of our own Founding Fathers. Even today, our law giving bodies are frequently referred to as “tribunals,” as recognition of the triumph of the plebs in Rome in 471 B.C. “In 445 B.C., Caious Canuleius led the final assault of the plebs against the entrenched privileges of the patrician families. He wrested from them the source of their continuing power, the protection of their blood lines. 
By very stringent and exclusive marriage bans, they had managed to preserve their blood lines by prohibiting marriage with a pleb. Canuleius now succeeded in overcoming this ancient prohibition. From that time on, plebs were allowed to marry into the patrician families. Rome was now "democratized." …"With the new democracy came increasing power and growing complexity of the Roman legal system. Cicero was led to publicly denounce the well-known practice of bribing of jurors. By the end of the fourth century B.C., Ammianus Marcellinus protested that, "We see the most violent and rapacious classes of men besieging the houses of the rich, cunningly creating lawsuits. Doors are now daily more and more opened to plunder by the depravity of judges and advocates who are all alike." (4)
<urn:uuid:b4726d14-da25-4153-bd77-131d5422c636>
CC-MAIN-2021-43
https://thepoliticsofpot.com/chapter-2/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588102.27/warc/CC-MAIN-20211027053727-20211027083727-00429.warc.gz
en
0.977177
2,651
3.375
3
A trademark is either a word, phrase, symbol or design, or combination thereof which identifies and distinguishes the source of one party’s goods or services from those of others (people do not have to know exactly where the goods are made, just that there is a specific source). A service mark is the same except that it identifies and distinguishes the source of a service rather than a product. Normally, a mark for goods appears on the product or on its packaging, while a service mark appears in advertising for the services. A trademark is different from a copyright which protects an original artistic or literary work, or a patent which protects an invention. Trademark rights arise from either: - actual use; or - filing a proper application with the Patent and Trademark Office (“PTO”) stating the applicant has a bona fide intention to use the mark in commerce regulated by the U.S. Congress; or - more limited rights by filing a state trademark registration in an individual state(s). The use in commerce must be a bona fide use in the ordinary course of trade, and not made merely to reserve a right in a mark. Use of a mark in promotion or advertising before the product or service is actually provided under the mark on a normal commercial scale does not qualify as use in commerce. Use of a mark in purely local commerce within a state does not qualify as “use in commerce.” Federal registration is not required to establish rights in a mark, nor is it required to begin use of a mark, but can secure benefits beyond the rights acquired by merely using a mark, e.g., the owner of a federal registration is presumed to be the owner of the mark for the goods and services specified in the registration, and to be entitled to use the mark nationwide. Period of protection Unlike copyrights or patents, trademark rights can last indefinitely if the owner continues to use the mark to identify its goods or services. The term of a federal trademark registration is 10 years, with 10-year renewal terms. However, between the fifth and sixth year after the date of initial registration, the registrant must file an affidavit setting forth certain information to keep the registration alive. If no affidavit is filed, the registration is canceled. A United States registration provides protection only in the United States and its territories. To protect a mark in other countries the owner must seek protection in each country separately under the relevant laws. TM (trademark) or SM (service mark) and ® ( registration symbol) Anyone claiming rights in a mark may use the TM (trademark) or SM (service mark) designation with the mark to alert the public to their claim. It is not necessary to have a registration, or even a pending application, to use these designations. The claim may or may not be valid. The registration symbol, ®, may only be used when the mark is registered in the PTO. It is improper to use this symbol at any point before the registration issues. Symbols should be omitted from the mark in the drawing submitted with your application as they are not considered part of the mark. An applicant may apply for federal registration in 3 principal ways: - based on existing use of a mark in commerce (“use” application); - based on a bona fide intention to use the mark in commerce (“intent-to-use” application); and - under certain international agreements, an applicant from outside the United States may file in the United States based on an application or registration in another country. 
The application must be filed in the name of the owner of the mark, who controls the nature and quality of the goods or services identified by the mark. The owner may submit and prosecute its own application for registration, or may be represented by an attorney. Class of goods or services The applicant must be careful in identifying the goods and services because the filing of an application establishes certain presumptions of rights as of the filing date, and the application may not be amended later to add any products or services not within the scope of the identification. E.g., the identification of “clothing” could be amended to “shirts and jackets,” which narrows the scope, but could not be amended to “retail clothing store services,” which would change the scope. Similarly, “physical therapy services” could not be changed to “medical services” because this would broaden the scope of the identification. Also, if the identification includes a trade channel limitation, deleting that limitation would broaden the scope of the identification. If the applicant has already used the mark in commerce and files based on this use in commerce, then the applicant must submit 3 specimens per class showing use of the mark in commerce with the application. If, instead, the application is based on intention to use the mark in commerce, the applicant must submit three specimens per class at the time the applicant files either an Amendment to Allege Use or a Statement of Use. The specimens must be actual samples of how the mark is being used in commerce, and may be identical or examples of three different uses showing the same mark. If it is impractical to send an actual specimen because of its size, photographs or other acceptable reproductions that show the mark on the goods or packaging for the goods must be furnished. Invoices, announcements, order forms, bills of lading, leaflets, brochures, catalogs, publicity releases, letterhead, and business cards generally are not acceptable specimens for goods. If the mark is used for services, examples of acceptable specimens are signs, brochures about the services, advertisements for the services, business cards or stationery showing the mark in connection with the services, or photographs which show the mark either as it is used in the rendering or advertising of the services. In the case of a service mark, the specimens must either show the mark and include some clear reference to the type of services rendered under the mark in some form of advertising, or show the mark as it is used in the rendering of the service, for example on a store front or the side of a delivery or service truck. Specimens may not be larger than 8.5 inches by 11 inches (21.59 cm by 27.94 cm) and must be flat. Smaller specimens, such as labels, may be stapled to a sheet of paper and labeled “Specimens.” A separate sheet can be used for each class. Intent to use A registration may be filed based on intent to use. See notice of approval, below. Search for conflicting marks An applicant is not required to conduct a search for conflicting marks prior to applying with the PTO, however, it can be useful to determine if a conflicting mark is in use. 
There are a variety of ways to get this type of information: - PTO public search library located on the second floor of the South Tower Building, 2900 Crystal Drive, Arlington, Virginia 22202; - by visiting a patent and trademark depository library; - a private trademark search company or an attorney who deals with trademark law; or - the internet or other electronic information services (The PTO does not conduct searches for the public). The application fee, which covers processing and search costs, will not be refunded even if a conflict is found and the mark cannot be registered. To determine whether there is a conflict between two marks, the PTO determines whether there would be likelihood of confusion (whether relevant consumers would be likely to associate the goods or services of one party with those of the other party as a result of the use of the marks at issue by both parties). The principal factors are the similarity of the marks and the commercial relationship between the goods and services identified by the marks. The marks need not be identical and the goods and services do not have to be the same. The PTO reviews an application for minimum requirements before giving it a filing date assigning a serial number and sending the applicant a receipt, typically about two months after filing. If the minimum requirements are not met, the entire mailing is returned to the applicant, including the filing fee. About four months after filing an examining attorney at the PTO reviews the application and determines whether the mark may be registered. If the examining attorney determines that the mark cannot be registered they issue a letter listing any grounds for refusal and any corrections required in the application, or they may contact the applicant by telephone if only minor corrections are required. The applicant must respond to any objections within six months of the mailing date of the letter or the application will be abandoned. If the applicant’s response does not overcome all objections the examining attorney will issue a final refusal, and the applicant may appeal to the Trademark Trial and Appeal Board, an administrative tribunal within the PTO. Common grounds for refusal are: - likelihood of confusion between the applicant’s mark and a registered mark; - marks which are “merely descriptive” in relation to the applicant’s goods or services or a feature of the goods or services; or - marks consisting of geographic terms or surnames. Marks may be refused for other reasons as well. If there are no objections or if the applicant overcomes all objections, the examining attorney will approve the mark for publication in the Official Gazette, a weekly publication of the PTO. The PTO will send a Notice of Publication to the applicant indicating the date of publication. In the case of two or more applications for similar marks, the PTO will publish the application with the earliest effective filing date first. Any party who believes it may be damaged by the registration of the mark has 30 days from the date of publication to file an opposition to registration. An opposition is similar to a formal proceeding in the federal courts but is held before the Trademark Trial and Appeal Board. If no opposition is filed the application enters the next stage of the registration process. 
If the application was based upon the actual use of the mark in commerce prior to approval for publication, the PTO will register the mark and issue a registration certificate about 12 weeks after the date the mark was published if no opposition was filed. Notice of allowance If there is no opposition filed after publication, a notice of allowance will be issued. If the mark was published based on intention to use the mark in commerce, the PTO will issue a Notice of Allowance about 12 weeks after the date the mark was published, again provided no opposition was filed. The applicant then has six months from the date of the Notice of Allowance to either: (1) use the mark in commerce and submit a Statement of Use, or (2) request a six-month Extension of Time to File a Statement of Use. The applicant may request additional extensions of time only as noted in the instructions on the back of the extension form. If the Statement of Use is filed and approved, the PTO will then issue the registration certificate. Solicitations Offering to Maintain Your Trademark Registration 11/22/11 Trademark applications are stored on publicly accessible databases, so private entities can identify trademark applicants, and send them these solicitations offering to maintain their trademark. Some of these solicitations are formatted to resemble official documents, but are not. Some solicitations may be for watching services, and others request fees for listing on directories of no apparent merit. If you used trademark counsel to apply for your trademark registration, they have “watching services” for infringing uses, and will follow-up regarding the affidavit to be filed between the fifth and sixth year following registration, and again for the ten year renewal, and so these services are not required. Following is a list prepared from such solicitations and from “warning” notices (there are undoubtedly other similar businesses not listed). I cannot comment on the legitimacy of any of the entities on this list, but you typically do not need these companies’ services: - American Trademark Agency - Company for Economic Publications Ltd. – Vienna, Austria - CPI (Company for Publications and Information Anstalt, and not be confused with Computer Packages Inc. of the U.S. and the Netherlands) - Globus Edition S.L. – Spain - INFOCOM – Switzerland - Publication et Information SARL – Liechenstein - Societe pour Global Edition KFT - TM-Collection Kft – Hungary - TMI Trademark Info Corporation – Texas - Trademark Renewal Service – Washington D.C. - U.S. Trademark Maintenance Service – Houston, TX - U.S. Trademark Protection Agency – Washington DC - United States Trademark Protection Agency in Seattle, Washington - United States Trademark Maintenance Service Border Protection Service located in Houston, Texas - ZDR – Datenregister GmbH in Germany - Globus Edition S.L. in Spain - Company of Economic Publications Ltd. in Austria - The Marks KFT in Hungary - Commercial Centre for Industry and Trade located in Switzerland — CPI (Company for Publications and Information Anstalt) in Liechtenstein (a phantom company which says it works with The Publication of Brand Names of the International Economy – another phantom company) - IDM International Data Medium AnsbH in Liechtenstein - S.A.R.L. 
– Societe pour Publications et Information in Austria - TMI Trademark Info Corporation located in Pearland, Texas - IT&T AG in Switzerland - Federated Institute for Patent & Trademark Registry in Coconut Creek, Florida - Central Data Register of International Patents in Germany - CPTD – Central Patent & Trademark Database in Austria - European Institute for Economy and Commerce in Belgium - INFOCom in Switzerland — International Organization for Patent & Trademark Service in USA - Register of International Patent Bulletin in Germany - TM Collection KPT in Hungary
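As a purely illustrative footnote to the maintenance schedule discussed above (an affidavit due between the fifth and sixth year after registration, followed by 10-year renewal terms), the dates involved are simple calendar arithmetic, which is one reason the solicitations listed here add little value. The short Python sketch below computes those windows from a registration date; the function names are invented for this example, and it is a rough illustration under the article's stated timeline, not legal advice.

```python
# Illustrative sketch of the maintenance timeline described above:
# an affidavit window between the 5th and 6th anniversaries of registration,
# and renewals at each 10-year anniversary. Not legal advice.
from datetime import date

def add_years(d: date, years: int) -> date:
    """Shift a date by whole years; Feb 29 falls back to Feb 28."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, day=28)

def maintenance_schedule(registration: date, renewal_count: int = 3) -> dict:
    """Return the affidavit window and the first few renewal deadlines."""
    return {
        "affidavit_window": (add_years(registration, 5), add_years(registration, 6)),
        "renewal_deadlines": [add_years(registration, 10 * i)
                              for i in range(1, renewal_count + 1)],
    }

if __name__ == "__main__":
    schedule = maintenance_schedule(date(2015, 3, 12))
    print("Affidavit window:", schedule["affidavit_window"])
    print("Renewal deadlines:", schedule["renewal_deadlines"])
```

Whether you track these dates yourself or rely on counsel, the point stands that the deadlines are knowable in advance and do not require a paid third-party "maintenance" service.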
<urn:uuid:546075e7-0218-4ca6-84e7-4d690d831879>
CC-MAIN-2021-43
https://www.carnahanlaw.com/trademark/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00589.warc.gz
en
0.915218
2,864
2.96875
3
You Should Be Getting Your Biographies in Children's Picture Book Form 8 (and a half) nonfiction books to rip from kids' grubby little hands November is Picture Book Month, so these illustrated little gems are deservedly in the spotlight. In a recent blog post for Books Are Magic, novelist and bookstore owner Emma Straub curated a list of picture books. Among Straub's picks for the best picture books of 2019 is a wonderful biography of Margaret Wise Brown—but the post also included a bold claim about this lesser-known sub-genre of Kid Lit: "Most picture book biographies are deadly boring. There, I said it!" Well, I'm here to respectfully contest this! Emma Straub, a novelist I deeply admire, is like, totally wrong—and okay, also kind of right. It's true that there are many sucky picture book biographies (let's call them PBBs), just like there are many sucky books of every genre. The good news about bad books is that they only amplify the gloriousness of the excellent books by comparison. And there are many excellent picture book biographies out there. Reading PBBs is an amazing hack for readers who want to know the general beats of notable lives. In a very short time, you can learn about the most influential artists, intellectuals, politicians, and changemakers in history. But beyond acquiring facts and increasing your Jeopardy! score, what I relish most about PBBs is how they infuse history with much-needed empathy and emotion. There's also one more hidden benefit: reading them will make you a better writer. A biography in a picture book format is a master class in distillation. All writing involves making choices, sometimes excruciating choices, of what to leave in and what to leave out—but the art of a biographer takes this excision to the next level. And the scissory task of a picture book biographer is even more arduous: how to fit an entire life into a 32-page container. It's no coincidence that some of the best PBBs have the fewest words. Ultimately, I wonder if too many writers (and non-writers) stumble back into picture books only when they start procreating. So I'm here to say: there's no need to wait to be a parent to (re)discover picture books. Go ahead and plunk yourself down on one of those miniaturized chairs in the children's section of your local library with a fat stack of PBBs. Sure, you'll be hella uncomfortable, and you might get some serious side eye from a sticky-fingered toddler suspicious of you infiltrating her turf, but trust me—it's totally worth it. And hey, there might even be a tub of crayons waiting for you on those tiny tables. Here's a list of 8.5 of my favorite PBBs to get you started. The Right Word: Roget and His Thesaurus by Jen Bryant and Melissa Sweet This book is a word nerd's and fellow list-maker's dream. It's the story of a shy, skinny Latin- and Linnaeus-loving boy who begins to compile lists of words to cope with the death of his father. As he grows up, Peter Roget continues to gather his epic collection of synonymous language, and becomes the creator of the almighty thesaurus, which I learned from this book (be still my geeky heart) means "treasure house" in Greek. Fun factoid: Peter Roget was, in fact, a doctor, and was only 19 years old when he graduated from medical school in Edinburgh, Scotland in 1798.
Bonus book: Fans of the fantastic Bryant-Sweet collaboration will also devour their book A River of Words: The Story of William Carlos Williams. Me…Jane by Patrick McDonnell This is one of the sweetest books on this list (and maybe ever), but don't let the tenderness fool you—the sparseness and economy of this storytelling, in its ability to pack both a biographical and emotional punch, is pretty astounding. The story begins with Jane Goodall, as a little girl, and her loyal companion, Jubilee, a stuffed toy chimpanzee. Together they comprise a dynamic duo on the hunt for joy and wonder, as they spy on the miracle of life in Grandma Nutt's chicken coop. Readers are transported into the inner life of a little girl who dreams about helping animals in Africa, and then realizes these dreams. Jane, of course, grows up to be one of the world's foremost experts on chimpanzees. But still, beware of the last page: it pulls off a sudden and remarkable narrative and visual turn. Your heart might leap out of the book and right onto the page. Fun factoid: Jane Goodall quite literally read her way into her future, reading and re-reading Tarzan of the Apes, about another girl named Jane. Looking at Lincoln by Maira Kalman Famed writer-illustrator Kalman is one of my all-time favorite artists, and she doesn't disappoint with her bright take on the legacy of Abraham Lincoln. With her signature mix of wit, whimsy, and that unmistakable handwriting, readers are in for a treat. Kalman (or "the speaker") inserts herself into the story at the very beginning of the book with a walk in the park. There, the narrator sees a man who looks familiar, and later, while paying her breakfast bill using a five-dollar bill, realizes the stranger looks like Lincoln. This spurs a creative deep-dive into one of the most beloved American presidents (Kalman reveals over 16,000 books were written about him) and the result is this magnificent book. Beyond a very well-researched Lincoln mini-biography, the narrator continues to insert herself throughout the book to include pretty hilarious and delightful observations and riffs. I love how Kalman models what it means to be an engaged and curious human being and artist—how such a tiny moment or observation can grow. How perceptiveness combined with wonder and a good dose of library research can be transformed into incredible art. Fun factoids: Lincoln's signature tall hat was apparently used as a portable receptacle for the many notes he wrote and placed inside it. Also, if you take the second page of this book at face value—Kalman loves pancakes. Bonus books: What began as a column in The New York Times turned into Kalman's And The Pursuit of Happiness, a year-long artistic inquiry into American democracy. For more presidential artistry, here is an illustrated piece on George Washington. Kalman also wrote and illustrated the picture book, Fireboat: The Heroic Adventures of John J. Harvey, which tells the true story of a restored fireboat that was used during September 11. Cloth Lullaby: The Woven Life of Louise Bourgeois by Amy Novesky and Isabelle Arsenault Possibly one of the most ambitious and lush picture books I have ever encountered, this book engages all five senses in a reading synesthesia that ignites the whole body, firing the right and left sides of the brain and every chamber of the heart. Telling the story of artist and sculptor Louise Bourgeois, the book is particularly powerful in language and narrative arc.
Bourgeois’s upbringing is fascinating, as she learns tapestry restoration from her mother, who is also her best friend. This is a searing tribute to the mother/daughter bond, particularly as Bourgeois reels from the death of her beloved mother, and uses art and weaving as a way to try to make herself whole again and honor her childhood memories. Spiders delicately crawl through the pages as the inspiration behind the giant steel spider sculptures that Bourgeois is most known for as an adult artist. Novesky reminds us that these spiders are not scary, but sweet weavers—just like Bourgeois’ mother. They are the heartbreaking and healing art of a motherless child. Fun factoids: At university, Bourgeois originally studied mathematics and enjoyed subjects like geometry and cosmology, before focusing on art. PSA: artists (and girls) can also rock at math! Bonus book: Novesky has a new PBB coming out in Fall 2020 called Girl on a Motorcycle, illustrated by Julie Morstad. It’s the story of Anne-France Dautheville, the first woman to ride solo around the world on her motorcycle in 1973. I Dissent: Ruth Bader Ginsburg Makes Her Mark by Debbie Levy and Elizabeth Baddeley From the streets of her 1940s Brooklyn childhood home to the halls of law school and then the Supreme Court in her trademark collars, the throughline of this book is Ginsburg’s glorious history of dissenting, disagreeing, objecting, and resisting—her determination to fight injustice and change the world. Ginsburg was one of only nine women in her law school class, and she tied for first place in her class. Her marriage to Marty Ginsburg, who was also a lawyer but managed to cook family dinners and master French cooking, is legit Couples Goals of epic proportions. Fun factoids: Ginsburg got a D on her penmanship test because she was a lefty and her teacher forced her to write with her right hand. Her extracurricular life was pretty colorful too, and included baton twirling—but her voice was so bad, her teacher asked her not to sing aloud in chorus. Bonus book: You can never get too much RBG: there’s another PBB that I also loved called Ruth Bader Ginsburg: The Case of R.B.G. vs. Inequality by Jonah Winter and Stacy Innerst. Radiant Child: The Story of Young Artist Jean-Michel Basquiat by Javaka Steptoe Radiant Child focuses on the childhood of self-taught artist Basquiat and his formative years in Brooklyn, with the encouragement of his mother, Matilde, who fed him poetry, jazz, and arroz con pollo. Tragically, Matilde is removed from the home due to her mental health issues, and this deep loss serves to fuel Basquiat’s dream to be a famous artist. We travel with him as a teenager to the Lower East Side, where the streets become his canvases. Basquiat’s graffitied art blazes across the Big Apple, and eventually makes its way to the gallery walls of some of the world’s most famous museums. Fun factoid: The medical textbook Gray’s Anatomy was an important influence in Basquiat’s work and was given to him by his mother as a child. Swan: The Life and Dance of Anna Pavlova by Laurel Snyder and Julie Morstad Swan is a deeply poetic and touching story about Anna Pavlova, a Russian ballerina born in 1881 who grew up the daughter of a laundress. Written in intensely spare language, the words dance across the page in staggered lines and stanzas. Despite the fact that Pavlova did not fit the ideal ballerina body, with her “all wrong” feet, she still persisted and went on to become what some believe to be one of the greatest ballerinas of all time. 
Pavlova is famous for her role in The Dying Swan, and the metaphor of Anna as a bird is quite frankly breathtaking. The way the author uses this as a delicate device to allude to Pavlova’s tragic death closes this book with immense power and a bittersweet compassion. Fun factoid: In her quest to bring art to everyone, Anna Pavlova traveled the world and performed in unconventional places like bullfighting rings. Bonus book: Brave Ballerina: The Story of Janet Collins, by Michelle Meadows and Ebony Glenn, the story of the first African American prima ballerina to dance at the Metropolitan Opera House, in 1951. Barack Obama: Son of Promise, Child of Hope by Nikki Grimes and Bryan Collier What makes this biography stand out is the unique narrative structure. This book is framed as a story within a story, as a mother and her son David watch Obama on television. As his mother narrates Barack’s story, the little boy interrupts the unfolding biography to ask his mother questions and to make astutely touching comments, in colorful text boxes on the corners of all the pages. Hope is not just part of the title; it is literally personified throughout the book, and even kicks off the first line: “One day Hope stopped by for a visit.” Hope is a woven thread through the lives of David and Obama—bringing the little boy and the president together as well. The book has a strong focus on Obama’s childhood in Honolulu, but takes the reader along for the ride to Indonesia, Hollywood, Harlem, Chicago, Kenya, then ultimately the White House. Particularly moving is the depiction of the father/son relationship, and the enduring effects of the absence, reconciliation, then loss of Obama’s father. Fun factoid: When Obama moved to Djakarta as a child, he attended a school taught in Indonesian and can still speak the language today. A Velocity of Being: Letters to a Young Reader edited by Maria Popova and Claudia Bedrick Okay, so this book might not technically qualify as a PBB, but it still deserves half credit. Maria Popova, of Brainpickings, my most cherished weekly email, collaborated with Claudia Bedrick of Enchanted Lion, an independent publisher of children’s books, to compile over 120 letters written to children about the experience of reading. Contributors include: Dani Shapiro, Regina Spektor, Neil Gaiman, Lena Dunham, Alain de Botton, a 100-year-old Holocaust survivor, Janna Levin, Jacqueline Woodson, Shonda Rhimes, Elizabeth Gilbert, Rebecca Solnit, Daniel Handler, Judy Blume, Aracelis Girmay, Ann Patchett, Tavi Gevinson, and, surprisingly, my personal TV hero, Law & Order: SVU’s Mariska Hargitay. Each letter is juxtaposed with an illustrated work of art. So, A Velocity of Being definitely contains pictures, and it certainly reveals biography—just in a more nuanced way. Allow me to make the argument that the letters we write reveal who we are, and therefore belong in the realm of (auto)biography. And maybe what and how we read are actually the most accurate indicators of who we really are. There are too many fun factoids to mention, so I won’t even try to capture them. What I will say is that A Velocity of Being is one of the most exquisite text/art objects I have ever encountered, and something every writer/reader (or person inhaling oxygen) should be required to own. There is something spooky-beautiful about this book. Like you are in a time travel portal, reading to the childhood version of you—it’s as if you are mothering yourself. 
<urn:uuid:3b95b559-1afb-4ad6-84a8-529bd9111d31>
CC-MAIN-2021-43
https://electricliterature.com/you-should-be-getting-your-biographies-in-childrens-picture-book-form/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583408.93/warc/CC-MAIN-20211016013436-20211016043436-00630.warc.gz
en
0.947926
3,346
2.609375
3
propositions,- in short, to not only state what the law is, but ascertain, if possible, what it should be. Progress is a procession of facts followed by theories; in the long run the two harmonize, and what the law should be it will be, but, it must be confessed, at the present time the lack of harmony is only too apparent. Combination as an economic factor in the industrial and commercial world is a fact with which courts and legislatures may struggle, and struggle in vain, until they frankly recognize that, like all other conditions, it is a result of evolution to be conserved, regulated and made use of, but not suppressed. Since the large combinations of recent years differ only in degree from the smaller combinations familiar to the common law, the principles of the common law broadly and intelligently applied are quite sufficient to meet the exigencies of the present situation. The common law itself is a noble development, and as such can more successfully deal with economic conditions, which are also the results of evolution, than laws which are the arbitrary and frequently the thoughtless edicts of man. It is much easier to enact a new law than to apply an old, but the latter will be found virile and effective where the former is either impotent or mischievous. In a preface it is customary for the author to confess the faults and errors his book contains; but why discount the labor of the critic who discovers all things? Errors are recognized while virtues are yet a long way off; it would seem more reasonable to hasten the introduction of the latter, leaving the former to shift for themselves. That the errors herein are not more numerous is due in no small degree to Mr. Charles S. Williston of the Chicago bar, who has verified the citations and prepared the table of cases, and to Mr. Fred W. Arthur of the Madison (Wis.) bar, who with Mr. Williston has carefully read the proofs and made many valuable suggestions. A. J. E.
CONTRACTS IN RESTRAINT OF TRADE. (A) The earlier American cases. APPENDIX, pp. 1337–1450: organization of corporations for profit, together with forms.
TABLE OF CASES. References are to pages.
<urn:uuid:90bf7c12-1fea-4d67-bf1c-ebf53ae38236>
CC-MAIN-2021-43
https://books.google.co.kr/books?id=HZ89AAAAIAAJ&pg=PR7&focus=viewport&vq=%22A+conspiracy+is+a+combination+of+two+or+more+persons,+by+concerted+action,+to+accomplish+a+criminal+or+unlawful+purpose,%22&dq=editions:HARVARD32044109597054&lr=&hl=ko&output=html_text
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00150.warc.gz
en
0.812967
2,151
2.546875
3
Improvement in Action Te Ahu Whakamua We are a multicultural school existing in a bicultural nation. Tangata whenua will always have a special place regardless of how many cultures we have. In response to student voice, this school sought external expertise to provide opportunities for the children to learn more about their identity, language and culture. For those involved, the opportunity to develop new knowledge and understandings is just the beginning of the journey. - “School’s about learning. It wouldn’t be that cool if you learnt all these things but you couldn’t learn about your own culture” - Leadership needed to find expertise outside the school to provide a pathway that supported Māori to enjoy and achieve education success as Māori. - An important focus was creating the opportunity for children to ‘bring their culture to the school’ - Students’ growing confidence in their unique identity influenced their approach to learning in the classroom and contribution to the life of the school. Things to think about: - What is the range of community expertise that you draw on to create opportunities for students to develop their language, culture and identity? - How might you build on what you are doing? The evaluation indicators this video illustrates: - Domain 3: Educationally powerful connections and relationships - Evaluation indicator: Community collaborations enrich opportunities for students to become confident, connected, actively involved, lifelong learners This video is part of the series Improvement in Action Te Ahu Whakamua. We created this series to inspire schools with examples of success in action. These examples highlight the benefits of fulfilling the evaluation indicators we use to review schools. (There is chatter as a group of children wearing schoolbags cross the street. A man’s voice begins as a voiceover.) We're a multicultural school, existing in a bicultural nation. (The video changes to show the narrator, a man sitting in a school office. Text on the bottom of the screen identifies him as Laurie Thew, Principal, Manurewa Central School.) So tangata whenua will always have a special place, regardless of how many cultures we have. (Cut to Richard Hempo, a parent at Manurewa Central School. He stands outside the school, looking into the camera.) It's easy to lose your identity with the things that the city has to offer and things Māori are being lost. (The video returns to the children from the opening shot, Richard Hempo continues speaking. Two young boys run up the path to the school. A girl hugs another girl outside a playground.) To learn it first-hand introduces you to a whole lot more, like spiritually and, of course, physically. (The video is back to Laurie Thew in the office. As he speaks there is footage of children carrying musical instruments to class, a wall showing the school motto “Effort Brings Reward” in a variety of languages and photos of children. There are students playing outdoors and parents walking with their children outside the school.) We thought, what can we do that will help our students develop that sense of belonging and bringing their culture to their school, if you like. Rather than us doing it, they can do it. (The video cuts to Sandy Griffin, Deputy Principal, Manurewa Central School.) A group of students from another school performed a haka. (Her voiceover continues as the video shows a group of young boys outdoors. 
They each hold a large wooden staff and are being instructed in Mau Rākau by a teacher.) And the kids were hugely motivated by how earnest these participants were and how committed they were and how serious and disciplined they were. (Cut back to Sandy Griffin) And so we looked into it. (Cut back to Laurie Thew. As he speaks the video changes to show a group of boys sitting on the grass listening to a teacher.) It's hard to find the right person, because you want someone who will exemplify the kind of things that you are trying to get across to them-- Māori boys. (The teacher begins to speak, addressing the students) So we've got the right attitude. We've got the discipline. We got the skills. We just need to tune up those little bits. (The camera remains with the boys, but the audio returns to Laurie Thew’s voiceover) So the decision to do it's easy. It's getting it done properly is the tricky bit. (The teacher addresses the class again as the camera pans across where the boys are sitting) You can't be the best by being lazy. You can't be the best by being half-pie. You can't be the best by thinking you got lucky. (Sandy Griffin’s voiceover plays while the teacher continues instructing the students) We invited Matua Philip in. And we made available this programme. (The teacher is now standing alone in front of a mural as he speaks directly to the camera. Text on the screen identifies him as Philip Repia, Kapa Haka teacher, Manurewa Central School.) Yeah, I try to just relate it back to how much you value yourself. I tell them a lot, I'm not the boss of you. You're the boss of yourself. (The video returns to Phillip addressing the students) If you act like a fool and a clown, people are going to treat you like a fool and a clown. If you act sensible, people will respect you, eh? (At the end of his speech the boys run across the playground to where someone is handing out the wooden staffs from earlier in the video. They line up and each take one. Phillip continues to speak as a voiceover.) It gives them power over their own self. They be where they want to be. (Several of the boys are now indoors, sitting together on a couch. One of them speaks about the programme as the video returns to footage of the boys learning Mau Rākau.) He's teaching us discipline and teaching us about our culture. (Another boy begins to speak) It's really good, because we're learning more about our Māori culture. It's pretty important, because I didn't know those hakas and to do the Mau Rākau. (Phillip instructs the students) Keep the rākau straight. Make your body strong. (The video cuts back to the boys inside and a third boy speaks) School is about learning. It wouldn't be that cool if you learned all these things but you wouldn't be able to learn about your own culture. (We are back outside with the Mau Rākau class as a fourth boy speaks in voice over) You're the boss of yourself. And no one's the boss of you. You control your actions and your mind. And if you act silly, people will treat you silly. (The camera briefly returns to the parent from earlier in the video, Richard Hempo, then cuts back to the Kapa Haka class as he speaks) It's good to open them up and let them know a bit about who they are, where you come from, and the things your ancestors done. So I was proud of my son, because he had always said that he would never give it a go and he didn't really like it. And for him to charge head-on into it, I was really proud of him. 
(We return to the boys on the couch and one of them speaks into the camera) If Māori kids don't know their culture, it might be hard for them in the future. (Another of the boys on the couch speaks) Yeah, it makes me ready for school work, like, make sure I do it properly, like how I did this properly. (The camera changes to a woman sitting in an office. Text on the screen identifies her as Michelle Dibben, Deputy Principal, Manurewa Central School.) We did another evaluation of the programme at the end of 2015 to ask them: OK, so what difference has it made to your learning? (As Michelle continues speaking the camera cuts back to the boys outside, and then to inside a classroom. The boys enter the classroom, return to their desks, take out stationery and begin schoolwork.) They felt they'd learned a lot more about self-respect, which had an impact back in the classroom and in the playground on the choices they were making. They felt that they were more disciplined and more able to self-regulate, which is something that the teachers have been acknowledging as well. (A woman now appears on the screen and begins to speak. The text reads Liane Mcleod, Year 5 & 6 Teacher, Manurewa Central School.) They're applying what they are doing in their tamatoa group to the classroom. (As she speaks the camera returns to the boys in their classroom, focusing on their schoolwork) Just watching them is awesome. Their heads are high. Their shoulders are back. (The video returns to Liane Mcleod. The camera is now zoomed out enough to see a desk in the background, on which sits an ornate kete and a frame displaying a pounamu pikorua) And they're just buzzing with pride now, instead of with their heads down. (The video once again shows Sandy Griffin, Deputy Principal) When we surveyed the children, the girls felt quite left out. (As Sandy continues speaking the camera shows several young girls learning in class, then cuts back to Philip Repia’s Kapa Haka class.) And that just started the ball rolling again of finding out what there was available. (We now see a group of girls doing a crafting activity outside with a teacher, polishing pieces of pāua shell.) And so we came across Te Aho Tapu. And like the boys, we've seen an immediate group of children sign up for it. And they love it. (A young girl’s voice speaks in voice over as the girls continue to polish their shells.) She's teaching us how to do different things, like make awesome necklaces and self-management and things like that. (The camera now shows the girl, who is sitting on a couch indoors. Another girl sitting next to her begins to speak as the camera returns to the girls outside.) She teaches us things that I never knew. And-- It's good for people to learn their culture. (A woman begins speaking) It's vital for their identity. They know who they are if they can have somewhere to grow from. (The video returns to Philip Repia standing in front of the mural. A woman now stands next to him. Text on the screen identifies her as Ebony Repia, Te Aho Tapu Teacher, Manurewa Central School.) They're more responsive in the learning environment. (Ebony’s voiceover continues as the camera shows a young girl holding up a shell pendant she has made. The girls continue their polishing as the boys continue their Mau Rākau class in the background.) It's setting them up with tools for life, with a mind-set to help them navigate the world. So being aware of what's around them, making sure they maintain a respect for themselves that they can then give to others. 
You treat others kindly, you've been kind to yourself. (The video returns to Principal Laurie Thew in the school office) John Hattie talks about students who are physically present but psychologically absent. And that won't work. (The girls' activity continues as he speaks. They turn to look at the boys as they yell out as part of their routine.) So we've got to have them psychologically present and engaged. (Philip Repia now speaks, as the camera shows him instructing the Mau Rākau class.) So if we raise the boy, they be good men, because they want to be good men, they're going to be the best men. (We cut back to Phillip standing with Ebony) And that's what I see when I look at them-- potential.
<urn:uuid:14d6a132-e23e-4d22-bc1b-7d9a6bd141e2>
CC-MAIN-2021-43
https://ero.govt.nz/our-research/culture-language-and-identity
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585322.63/warc/CC-MAIN-20211020152307-20211020182307-00430.warc.gz
en
0.962494
2,594
2.765625
3
OVERVIEW: What every practitioner needs to know Are you sure your patient has cystic fibrosis–related diabetes? What are the typical findings for this disease? Diabetes is the most common comorbidity in people with cystic fibrosis (CF). Cystic fibrosis–related diabetes (CFRD) is found in about 15% of adolescents with cystic fibrosis, 40% of individuals in their 20s and 30s, and more than half of those older than 50 years of age. It shares features of both type 1 and type 2 diabetes, but is a distinct clinical entity. Patients have modest insulin resistance, which waxes and wanes depending on the acute state of infection and inflammation. The primary defect, however, is insulin insufficiency, caused by fibrosis of the pancreas combined with probable genetic defects in beta cell function. Diabetes is associated with a worse prognosis in patients with CF. This is felt to be related to both the catabolic effects of insulin insufficiency and the proinflammatory effects of hyperglycemia. With extreme hyperglycemia, polyuria and polydipsia may be present. Usually, there are no obvious symptoms, so routine screening is critical to make the diagnosis of CFRD. Screening will also identify high-risk patients with impaired glucose tolerance. What other disease/condition shares some of these symptoms? Type 1 and type 2 diabetes may present with polyuria and polydipsia. Unlike patients with type 1 diabetes, patients with CFRD seldom experience diabetic ketoacidosis (DKA). Patients with CF and DKA should be screened for concomitant type 1 diabetes with diabetes autoantibodies. Unlike patients with type 2 diabetes, patients with CFRD are seldom obese. Cholesterol levels are low (although triglyceride levels may be elevated), and they do not have a risk of atherosclerotic cardiovascular disease. What caused this disease to develop at this time? The primary defect leading to CFRD is insulin insufficiency caused by fibrotic destruction of the islets. During baseline periods of stable health, patients with CF compensate for insulin insufficiency by being insulin sensitive. During acute illness and/or steroid treatment, they become very insulin resistant and can no longer compensate. Acute illness does not so much cause diabetes as unmask the underlying insulin insufficiency. It is not uncommon for hyperglycemia to wax and wane in CF in response to changes in inflammation and infectious status. What laboratory studies should you request to help confirm the diagnosis? How should you interpret the results? All hospitalized patients with CF should have capillary blood glucose levels measured before meals and 2 hours postprandially for the first 48 hours of hospitalization. These are the recommendations of the 2009 CFRD Consensus Conference. After 48 hours, glucose monitoring can be discontinued if criteria for the diagnosis of CFRD are not met. CFRD is diagnosed if fasting glucose levels greater than or equal to 126 mg/dL or postprandial levels greater than or equal to 200 mg/dL persist beyond 48 hours. This 48-hour “waiting” period was based on the clinical observation that blood glucose levels immediately normalize in some patients when treatment for infection begins, but in those in whom hyperglycemia persists beyond 48 hours, it tends to last for weeks. Would imaging studies be helpful? If so, which ones? There are no useful imaging studies. 
The pancreas is grossly abnormal by computed tomography or magnetic resonance imaging in all patients with CF, but the images do not distinguish patients with and those without diabetes. Confirming the diagnosis The diagnosis of diabetes in CF is associated with increased risk of death from pulmonary disease. This is true even for patients who are otherwise completely asymptomatic. Fortunately, recent studies show that aggressive screening and insulin treatment improve prognosis in this population. CFRD is diagnosed by standard American Diabetes Association (ADA) criteria for all forms of diabetes. If you are able to confirm that the patient has cystic fibrosis–related diabetes, what treatment should be initiated? Insulin is the only recommended treatment for CFRD. Many different treatment regimens are possible, but most patients are placed on basal bolus insulin therapy, similar to that provided for patients with type 1 diabetes. Once the acute illness resolves, insulin needs decrease substantially, and some patients only require insulin during periods of acute illness. Insulin Therapy for CFRD Patients are generally treated with standard basal bolus insulin therapy by multiple subcutaneous injections or by insulin pump according to the following principles. They should be taught to adjust their insulin dose for special circumstances such as exercise, travel, and acute illness. Those already on insulin therapy usually require 2-4 times as much insulin during illness or steroid therapy. The dose must subsequently be reduced to baseline when the patient recovers. Many patients with CF require a 50:50 basal:bolus insulin ratio. Some require lower amounts of basal insulin, likely because of residual endogenous insulin secretion. Subcutaneous basal insulin is often given in the morning or at midday rather than bedtime to reduce the risk of nocturnal hypoglycemia. Fasting glucose levels help determine if the basal insulin dose is appropriate. CFRD without fasting hyperglycemia does not require basal insulin therapy to normalize fasting glucose levels. Whether basal insulin is beneficial for anabolic purposes is a research question. Usual doses of rapid-acting insulin for meal coverage range from 0.5 units to about 2.0 units/15 g of carbohydrate, with the lower doses being more common when patients are in their stable baseline state of health. If meal coverage doses greater than approximately 2.0 units/15 g of carbohydrate are needed, the basal insulin dose is probably not high enough. If the meal coverage dose is appropriate (the insulin is matched to the carbohydrate intake), glucose levels preprandially and 2-3 hours postprandially should be about the same. Correction Dose (“Sensitivity Factor”) A typical starting correction dose is 1 unit of rapid-acting insulin to lower the glucose by about 50 mg/dL (2.8 mmol/L). During a period when the patient is not eating or exercising, the correction dose can be tested and readjusted as necessary by determining how much it lowers the glucose level over a 2-3–hour period. Overnight Continuous Drip Gastrostomy Feedings These are “long” meals that require about 8-10 hours of insulin coverage. A single injection of regular and NPH insulin before the feeding (with or without rapid-acting insulin as correction for the prefeeding glucose level) covers the feeding. The regular insulin covers the first half and the NPH covers the last half. The usual starting dose is 0.5-1.0 units/15 g carbohydrate in the total feeding, divided as half regular and half NPH insulin. Glucose levels 3-4 hours into the feeding are used to adjust the regular insulin dose and at the end of the feeding to adjust the NPH insulin. 
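To make the meal-coverage and correction-dose arithmetic above concrete, the short Python sketch below works through one hypothetical meal. It is illustrative arithmetic only, not a dosing tool: the carbohydrate ratio and sensitivity factor are example values taken from the ranges quoted above, and the target glucose of 120 mg/dL is an assumption made for the example, since the text does not specify one; real doses are individualized by the CF care team.

# Illustrative arithmetic only -- not a clinical dosing tool.
# The carb ratio, sensitivity factor, and target glucose below are example
# values; actual parameters are set individually by the care team.

def mealtime_bolus(carbs_g, glucose_mg_dl,
                   units_per_15g=1.0,       # meal coverage: units per 15 g carbohydrate
                   sensitivity_mg_dl=50.0,  # 1 unit lowers glucose by about 50 mg/dL
                   target_mg_dl=120.0):     # assumed target, not stated in the text
    """Return (carb_units, correction_units, total_units) for one meal."""
    carb_units = carbs_g / 15.0 * units_per_15g
    correction_units = max(glucose_mg_dl - target_mg_dl, 0.0) / sensitivity_mg_dl
    return carb_units, correction_units, carb_units + correction_units

if __name__ == "__main__":
    # A 60 g carbohydrate meal with a premeal glucose of 250 mg/dL:
    carb, corr, total = mealtime_bolus(60, 250)
    print(f"carb coverage: {carb:.1f} u, correction: {corr:.1f} u, total: {total:.1f} u")
    # prints: carb coverage: 4.0 u, correction: 2.6 u, total: 6.6 u

The same arithmetic, with the illness multiplier mentioned above (2-4 times the usual dose during infection or steroid therapy), explains why doses must be stepped back down as the patient recovers.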
An appendix on insulin therapy for cystic fibrosis-related diabetes is provided in Table I. (Adapted from Moran A, Brunzell C, Cohen RC, et al; CFRD Guidelines Committee. Clinical care guidelines for cystic fibrosis–related diabetes: a position statement of the American Diabetes Association and a clinical practice guideline of the Cystic Fibrosis Foundation, endorsed by the Pediatric Endocrine Society. Diabetes Care 2010;33:2697-708.) What are the adverse effects associated with each treatment option? The only adverse effect of insulin is hypoglycemia. Patients on insulin therapy should monitor blood glucose levels at least four times a day. Extra monitoring is required during situations that place the patient at higher risk for hypoglycemia, such as increased activity. Also, patients require more insulin when they are sick than when they are well because of increased insulin resistance. A patient who is recovering from an acute illness will likely need progressive reduction in the insulin dose in the month after the illness to avoid hypoglycemia. All patients and their families should be taught to recognize and treat hypoglycemia. Usually oral treatment with 15-30 g of carbohydrate is sufficient. However, family members should be provided with glucagon and taught how to use it for patients on basal insulin therapy. What are the possible outcomes of cystic fibrosis–related diabetes? The additional diagnosis of diabetes has a negative impact on survival in people with CF. However, recent studies have shown that aggressive treatment with insulin is able to reverse this phenomenon and improve nutritional status and survival from the lung disease of CF. Patients with CFRD are also at risk for diabetes microvascular complications (retinopathy, nephropathy), but the risk appears to be lower than in other forms of diabetes. What causes this disease and how frequent is it? Twenty percent of adolescents and 40%-50% of adults with CF have diabetes. The primary cause is fibrotic destruction of pancreatic islets. Genes related to the development of type 2 diabetes may be more common in patients with CF and diabetes. How do these pathogens/genes/exposures cause the disease? Like all secretions in CF, pancreatic exocrine secretions are thick and sticky. This plugs the pancreatic ductules, leading to scarring, fibrosis, and adiposis. Genetic defects, related to the defects that cause type 2 diabetes, may further impair beta cell function in these patients with reduced islet mass. What complications might you expect from the disease or treatment of the disease? Over time, diabetes causes a decline in both nutritional status and pulmonary status in CF. This is believed to be primarily due to the catabolic effects of insulin insufficiency. In addition, hyperglycemia per se may help promote a proinflammatory, proinfectious environment. Microvascular complications occur in patients with long-standing CFRD, although they tend to be less frequent and less severe than in other forms of diabetes. Are additional laboratory studies available; even some that are not widely available? Additional studies are not useful. Of note, hemoglobin A1c levels are spuriously low in patients with CF. If they are high, they indicate the patient has been hyperglycemic, but low levels do not exclude a diagnosis of diabetes. How can cystic fibrosis–related diabetes be prevented? 
There is no known way to prevent the development of diabetes in CF. What is the evidence? The evidence is summarized in a recent consensus conference document sponsored by the ADA, the Cystic Fibrosis Foundation, and the Pediatric Endocrine Society (PUBMED:21115772) as well as in an accompanying technical review (PUBMED:21115770). Patients with CF frequently first have hyperglycemia during stressors such as acute illness. Blood glucose levels may normalize when the stress is not present. In the past, this was called “intermittent CFRD.” In the general population, hyperglycemia that develops during acute illness may be called “stress hyperglycemia” and the patient might not be given a diagnosis of diabetes. In CF, however, the 2009 CFRD Consensus Committee recommended that the patient be given a diagnosis of diabetes under these circumstances because of the following: The presence of hyperglycemia during illness reveals those CF patients who have the greatest degree of insulin insufficiency. Bouts of acute illness are frequent and patients tend to remain hyperglycemic for weeks at a time during these episodes. Longitudinal outcome data have shown that CF morbidity and mortality are associated with CFRD first diagnosed in the setting of acute illness. Aggressive treatment of hyperglycemia has been associated with improvements in prognosis. Ongoing controversies regarding etiology, diagnosis, treatment There is little controversy regarding patients with CF and diabetes. The greatest question at present is whether patients with CF and milder degrees of abnormal glucose tolerance should also receive insulin replacement therapy.
<urn:uuid:a889f618-f7fd-4281-b5d9-5e3926f03b78>
CC-MAIN-2021-43
https://www.psychiatryadvisor.com/home/decision-support-in-medicine/pediatrics/cystic-fibrosis-related-diabetes/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585997.77/warc/CC-MAIN-20211024111905-20211024141905-00550.warc.gz
en
0.92413
2,810
3.15625
3
Hidden from history for over a century is the fact that Scotland Yard took seriously the possibility that an American man was the notorious Jack the Ripper. The suspect’s name was Dr. Francis Tumblety. He was a quack doctor and fake Indian herbalist who was in London at the time of the murders. Police arrested him, and after posting bail, he sneaked out of the country and sailed back to New York City. Meanwhile, Jack the Ripper’s murders stopped. Since they had nothing on Tumblety for the murders upon his arrest, Scotland Yard soon charged him with a convictable misdemeanor offense to hold him. This was not an extraditable charge in the United States. Thus, Tumblety had no legal requirement to go back to England. Six months later, in July 1889, someone killed another prostitute. At the time, Scotland Yard believed she was also a victim of Jack the Ripper. Since Tumblety was in New York, police took him off the suspect list. Experts agreed only later that Jack had not committed the 1889 murder. An Interesting Letter In 1993, retired Suffolk Constabulary police officer and crime historian Stewart P. Evans uncovered a private letter dated September 23, 1913. Chief Inspector John Littlechild, head of the Metropolitan Police Special Branch, had written it and addressed it to famous journalist George R. Sims. In it, Littlechild revealed that Dr. Francis Tumblety became an important suspect after the Mary Kelly murder. He stated to Sims that “amongst the suspects,” Dr. Francis Tumblety was “a likely one.” Stewart Evans then discovered that Tumblety’s arrest on suspicion had been in the newspapers at the time, especially in the U.S. dailies. Newspaper reports claimed that Tumblety was initially arrested on suspicion of the Whitechapel crimes. However, when the police had insufficient evidence to hold him, they re-arrested him on a misdemeanor charge of gross indecency and indecent assault. Apparently, Dr. Francis Tumblety engaged in sexual relationships with young men. In nineteenth-century England, this was an illegal act. During his arrest, Tumblety had correspondence in his possession, which allowed Scotland Yard to pursue the gross indecency and indecent assault case against him. The Controversial Dr. Francis Tumblety Dr. Francis Tumblety was a relatively well-known and somewhat notorious figure in mid-nineteenth-century North America. He was born in Ireland around 1833, and in 1847, he immigrated with his family to Rochester, New York, on the coffin ship Ashburton during the Irish potato famine. As a teenager, he was employed as a steward by a Rochester doctor who proclaimed himself an expert on “French cures for sexual diseases.” Tumblety peddled the doctor’s sexually-explicit literature on the Erie Canal boats. Soon after, a man named Rudolf Lyons set up a temporary office in town. He called himself “The well-known and celebrated Indian herb doctor,” which immediately attracted the attention of young Tumblety. When Lyons left, Tumblety followed, and he quickly learned the trade. By 1855, Francis Tumblety had started out on his own in Detroit, Michigan, as a full-fledged Indian herb doctor. However, he also continued the practice of selling French cures literature for sexual diseases. Although it was untrue, he also claimed to have received a diploma from a medical school. The charlatan began signing his name with M.D. Thus began his highly successful traveling quackery. Subsequently, he left Detroit and traveled through Canada from Toronto to St. 
Johns and eventually made his way to New York City by 1860. Tumblety’s Notoriety Grows Tumblety’s chosen profession ensured that his name was always in the daily newspapers everywhere he traveled. In any public setting where many eyes were upon him, Tumblety displayed his flamboyant Liberace-type character. Although nearly all accounts describe him as peculiar, the fact that people regularly called upon him suggests that his schemes were successful. Indeed, he was earning hundreds of thousands of dollars. For example, as he came to a new city in the United States and Canada, he would enter as though he was entering a circus ring, wearing a flashy outfit and riding a beautiful horse. A valet and two huge dogs followed closely behind him. However, legal problems also followed him closely because he practiced without a license and used the M.D. designation in his signature. In 1860, while he was in St. John, a patient died in his care. Authorities charged him with manslaughter. Instead of facing the music, Tumblety left Canada under cover of darkness and eventually settled in New York City. After the defeat of the Union forces at the first major battle of the American Civil War just outside of Washington, D.C., on July 21, 1861, President Lincoln appointed Major General George B. McClellan as Commander of the Army of the Potomac. McClellan was responsible for the defense of the capital. At this time, Francis Tumblety began his “two-year sojourn” in Washington, D.C. He stated in his autobiography that he partially made up his mind to tender his “services as a surgeon in one of the regiments.” Herb Doc or Surgeon? Contemporary newspaper reporters repeated his boast of being on McClellan’s surgical staff. Once he arrived, papers reported he was promenading up and down Pennsylvania Avenue. Interestingly, before his arrival he did not flood the papers with his usual advertising campaign as a famous Indian herb doctor. Instead, he waited for six months and began his campaign in 1862. This change in business practice makes sense, since he was attempting to convince the General that he was a surgeon, not an Indian herb doctor. The problem for Tumblety was that he did not have a medical diploma and was not a real surgeon. Unsurprisingly, he did the next best thing. He invited the General’s officers to an illustrated medical lecture. This was a practice that prominent surgeons performed in the nineteenth century to demonstrate their credibility. The Collection of Uteruses At the lecture, he revealed his anatomical collection, specifically, his prized collection of uterus specimens. Perhaps these included the same uteruses that were taken by Jack the Ripper from two of his victims. The man who saw Tumblety’s uterus collection was New York City lawyer and Civil War reptile journalist/spy Charles A. Dunham. Dunham stated to a New York World reporter on December 1, 1888, that he was a colonel when he met Tumblety in the capital. His position as one of the General’s officers would have been why Tumblety invited him to the lecture. The General’s officers were his eyes and ears. Once the General rejected him by the end of 1861, Tumblety left for two months. However, he returned and decided to practice his money-making scheme as an Indian herb doctor. In February 1862, he began advertising himself. Did Dunham lie to the reporter about Tumblety having an anatomical collection and giving the medical lecture? 
Interestingly, just before Tumblety arrived in D.C., people saw him in New York City with pictures of anatomical specimens posted outside his Broadway Street office, “which look as if they might once have formed part of the collection of a lunatic” (Vanity Fair, August 31, 1861). Further, Tumblety made his way to Buffalo, New York, after his two-year sojourn at the capital. The Buffalo Courier reported that Tumblety gave medical lectures, “with Thespian emphasis.” Move to London In the 1880s, the world traveler was spending half the year in England. In May of 1888 – the year of the Ripper murders – Tumblety sailed across the Atlantic and took up residence in London's West End. During the murders, police arrested him on suspicion of the Whitechapel crimes. This occurred sometime before police took him into custody on November 7, 1888, for gross indecency and indecent assault. They immediately brought him up in front of Marlborough Police Court Magistrate James L. Hannay for his remand hearing. This would determine if he should remain in custody at Holloway Prison until his committal hearing one week later. Hannay had the discretionary powers to give Tumblety bail. On November 9, 1888, just one or two days later, Mary Kelly suffered a brutal death. Tumblety had his committal hearing on November 14, 1888. Magistrate Hannay listened to the evidence and agreed the case should be brought up to the judge at Central Criminal Court on November 20, 1888, following a grand jury review. Hannay set bail at £300, and on November 16, 1888, Tumblety posted bail. Holloway Prison released him, and he was free. Did Hannay set bail at the earlier remand hearing, allowing Tumblety to be free at the time of the Kelly murder? If the magistrate set bail at the later committal hearing for the same offense, he likely did the same at the earlier remand hearing, especially since the case against Tumblety was in preparation for the committal hearing. The fact that three Scotland Yard officials considered Tumblety a suspect after the Kelly murder supports this. After posting bail, Tumblety slipped into the English shadows. On November 20, Tumblety instructed his lawyer to request a postponement. The courts approved the request and scheduled it for December 10, 1888. Interestingly, Tumblety’s New York bank transferred approximately £260 on November 20. He sneaked out of England to Boulogne, France, on November 23, 1888. He embarked on the transatlantic steamship La Bretagne at noon in Havre, France, on November 24, and arrived in New York City on December 2, 1888. Because the charge was a misdemeanor offense, Tumblety would not face extradition back to England. Six months later, on July 17, 1889, someone murdered Alice Mackenzie. Scotland Yard believed she was the Whitechapel fiend’s victim. Since Tumblety was in New York City, this convinced police that Tumblety was not Jack the Ripper. The world soon forgot about him. Only later did people realize that Jack did not murder Mackenzie. Tumblety had two personas. Publicly, he was a ubiquitous, eccentric, aristocratic medical professional. However, privately, he was a narcissistic loner, frequenting the slums of every city seeking encounters with young men. On countless occasions, Tumblety found himself in legal trouble, defending against various charges including assault. 
The Main Reason Tumblety Became a Suspect The primary reason journalists reported Tumblety as a suspect was the idea that the Whitechapel murderer hated women. This is precisely what Littlechild stated in his private letter, “…his feelings toward women were remarkable and bitter in the extreme, a fact on record.” Littlechild’s recollections were surprisingly detailed and accurate about Tumblety being a suspect after the Kelly murder. He also alluded to Tumblety’s bitter hatred of women. Additionally, his arrests and escape to France made him very suspicious. Scotland Yard’s identification of Tumblety in France could only have come from officials in Littlechild’s Special Branch division, which explains why he knew of Tumblety’s escape. However, Littlechild then makes a blatant error: “He [Tumblety] shortly left Boulogne and was never heard of afterward. It was believed he committed suicide…” Tumblety made it safely back to New York and died in 1903. The incredible accuracy of Littlechild’s earlier comments suggests this error did not stem from a lapse of memory. More likely, Littlechild did not participate in the case after authorities identified Tumblety in France. Another reason officials suspected Tumblety was because he preferred young males for sexual companionship. Theories flew that he had an unusual hatred of women, or misogyny, which had begun in his teenage years in Rochester, New York. Supposedly Tumblety stated that women were a curse to the land, and he even blamed them for all the world’s trouble. He considered them imposters who lured male youths away from their intended lovers: older men. The journalist who broke the story of Tumblety’s arrest on suspicion of the Whitechapel crimes was the New York World’s London Special correspondent, E. Tracy Greaves. The story surfaced in his Saturday, November 17, 1888, news dispatch. The report was a weekly update on the Whitechapel murders investigation one week after the murder of the last victim, Mary Kelly, on November 9. American reporters also used the police as their source for the Whitechapel investigation. Greaves even admitted to having a Scotland Yard informant. In an interview with a New York World reporter in January 1889, Tumblety admitted to his arrest in England. He said: I happened to be there when these Whitechapel murders attracted the attention of the whole world, and, in the company with thousands of other people, I went down to the Whitechapel district. I was not dressed in a way to attract attention, I thought, though it afterward turned out that I did. I was interested by the excitement and the crowds and the queer scenes and sights, and did not know that all the time I was being followed by English detectives. Two Scotland Yard officials mentioned Tumblety as a suspect after they completed the case for indecency and indecent assault. Assistant Commissioner Robert Anderson sent private cable dispatches to at least two U.S. chiefs of police. He asked San Francisco’s Patrick Crowley and Brooklyn’s Patrick Campbell for all the information they had on Tumblety. A misconception is that Anderson requested handwriting samples, but this did not occur. He merely asked for all information. Nonetheless, Crowley did offer handwriting samples, and Anderson accepted them. He sent these cable dispatches on November 22, before they realized Tumblety had sneaked out of the country. 
Also, when Inspector First Class CID Walter Andrews was in Toronto, Canada, on December 11, 1888, a reporter asked if he knew Tumblety in reference to the murder case. Andrews stated: Do I know Dr. Tumblety, of course, I do. But he is not the Whitechapel murderer. All the same, we would like to interview him, for the last time we had him he jumped his bail. He is a bad lot. If Andrews stated Tumblety was not the murderer, why did he still want to interview Tumblety? Getting an interview for the gross indecency case would have been fruitless since he could not be deported. Perhaps it’s because he believed Tumblety might face murder charges if more evidence turned up. On December 4, 1888, journalists reported on an English detective staking out Tumblety’s residence. Purportedly, the detective told a bartender that he was investigating the chap who committed the Whitechapel murders. Could Francis Tumblety Be Jack the Ripper? If the Whitechapel murders were sex crimes, then Francis Tumblety was not Jack the Ripper. Most gay male sado-sexual serial killers prey upon men, as with Jeffrey Dahmer. However, in Tumblety’s case, the evidence shows that his motive would have been a hatred of women. Some modern experts do not see the Whitechapel murders as sexually-motivated or even sadistic. Forensic pathologist William Eckert, M.D., investigated the Whitechapel case in 1989. He concluded that the motive was anger-retaliation exhibiting non-sadistic behavior. Forensic scientist and criminal profiler Dr. Brent Turvey also studied the victims of the Whitechapel murders. He did not see a sexual motive, but anger-retaliation: specifically, misogyny. Interestingly, in January 1888, the year of the murders, Dr. Francis Tumblety told a Toronto Mail reporter that he was in constant dread of sudden death from kidney and heart disease. How coincidental that the three organs removed from the Whitechapel victims included the uterus, kidney, and heart.
<urn:uuid:553484cf-957d-4c03-9597-d59e39533315>
CC-MAIN-2021-43
https://www.historicmysteries.com/francis-tumblety/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588102.27/warc/CC-MAIN-20211027053727-20211027083727-00429.warc.gz
en
0.976816
3,491
2.75
3
by Dr Zoltan P Rona MD, MSc The leaky gut syndrome is a name given to a very common health disorder in which the basic organic defect (lesion) is an intestinal lining which is more permeable (porous) than normal. The abnormally large spaces present between the cells of the gut wall allow the entry of toxic material into the blood stream that would, in healthier circumstances, be repelled and eliminated. The gut becomes leaky in the sense that bacteria, fungi, parasites and their toxins, undigested protein, fat and waste normally not absorbed into the bloodstream in the healthy state, pass through a damaged, hyperpermeable, porous or "leaky" gut. This can be verified by special gut permeability urine tests, microscopic examination of the lining of the intestinal wall as well as the bloodstream with phase contrast or darkfield microscopy of living whole blood. Why is leaky gut syndrome important? The leaky gut syndrome is almost always associated with autoimmune disease and reversing autoimmune disease depends on healing the lining of the gastrointestinal tract. Any other treatment is just symptom suppression. An autoimmune disease is defined as one in which the immune system makes antibodies against its own tissues. Diseases in this category include lupus, alopecia, rheumatoid arthritis, polymyalgia rheumatica, multiple sclerosis, fibromyalgia, chronic fatigue syndrome, Sjogren's Syndrome, vitiligo, thyroiditis, vasculitis, Crohn's Disease, ulcerative colitis, urticaria (hives), diabetes and Raynaud's disease. Physicians are increasingly recognizing the importance of the gastrointestinal tract in the development of allergic or autoimmune disease. Understanding the leaky gut phenomenon helps us choose safe and effective therapies to bring the body back into balance. Due to larger than normal spaces between the cells of the gut wall, larger than usual protein molecules are absorbed before they have a chance to be completely broken down as occurs when the intestinal lining is intact. The immune system starts making antibodies against these larger molecules because it recognizes them as foreign, invading substances. The immune system starts treating them as if they had to be destroyed. Antibodies are made against these proteins derived from previously harmless foods. Human tissues have antigenic sites very similar to those on foods, bacteria, parasites, candida or fungi. The antibodies created by the leaky gut phenomenon against these antigens can get into various tissues and trigger an inflammatory reaction when the corresponding food is consumed or the microbe is encountered. Autoantibodies are thus created and inflammation becomes chronic. If this inflammation occurs at a joint, autoimmune arthritis (rheumatoid arthritis) develops. If it occurs in the brain, myalgic encephalomyelitis (a.k.a. chronic fatigue syndrome) may be the result. If it occurs in the blood vessels, vasculitis (inflammation of the blood vessels) is the resulting autoimmune problem. If the antibodies start attacking the lining of the gut itself, the result may be colitis or Crohn's disease. If it occurs in the lungs, asthma is triggered on a delayed basis every time the individual consumes the food which triggered the production of the antibodies in the first place. It is easy to see that practically any organ or tissue of the body can become affected by food allergies created by the leaky gut. Symptoms, especially those seen in conditions such as chronic fatigue syndrome, can be multiple and severely debilitating. 
The inflammation that causes the leaky gut syndrome also damages the protective coating of IgA antibodies normally present in a healthy gut. Since IgA helps us ward off infections, with leaky gut problems we become less resistant to viruses, bacteria, parasites and candida. These microbes are then able to invade the bloodstream and colonize almost any body tissue or organ. When this occurs in the gums, periodontal disease results. If it happens in the jaw, tooth extraction or root canals might be necessary to cure the infection.

In addition to the creation of food allergies by the leaky gut, the bloodstream is invaded by bacteria, fungi and parasites that, in the healthy state, would not penetrate the protective barrier of the gut. These microbes and their toxins, if present in large enough amounts, can overwhelm the liver's ability to detoxify. This results in symptoms such as confusion, memory loss, brain fog, or facial swelling when the individual is exposed to a perfume or to cigarette smoke to which he or she had no adverse reactions prior to the development of leaky gut syndrome.

Leaky gut syndrome also creates a long list of mineral deficiencies because the various carrier proteins present in the gastrointestinal tract that are needed to transport minerals to the blood are damaged by the inflammation process. For example, magnesium deficiency (low red blood cell magnesium) is quite a common finding in conditions like fibromyalgia despite a high magnesium intake through the diet and supplementation. If the carrier protein for magnesium is damaged, magnesium deficiency develops as the result of malabsorption. Muscle pain and spasms can occur as a result. Similarly, zinc deficiency due to malabsorption can result in hair loss or baldness, as occurs in alopecia areata. Copper deficiency can occur in an identical way, leading to high blood cholesterol levels and osteoarthritis. Further, bone problems develop as a result of the malabsorption of calcium, boron, silicon and manganese.

The leaky gut syndrome is basically caused by inflammation of the gut lining. This inflammation is usually brought about by the following:
- Antibiotics, because they lead to the overgrowth of abnormal flora in the gastrointestinal tract (bacteria, parasites, candida, fungi);
- Alcohol and caffeine (strong gut irritants);
- Foods and beverages contaminated by parasites like giardia lamblia, cryptosporidium and blastocystis hominis;
- Foods and beverages contaminated by bacteria like helicobacter pylori, klebsiella, citrobacter and pseudomonas;
- Chemicals in fermented and processed food (dyes, preservatives, peroxidized fats);
- Enzyme deficiencies (e.g. celiac disease, lactase deficiency causing lactose intolerance);
- NSAIDs (non-steroidal anti-inflammatory drugs) like ASA, ibuprofen, indomethacin, etc.;
- Prescription corticosteroids (e.g. prednisone);
- A diet high in refined carbohydrates (e.g. candy bars, cookies, cake, soft drinks, white bread);
- Prescription hormones like birth control pills;
- Mold and fungal mycotoxins in stored grains, fruit and refined carbohydrates. More commonly than is recognized, many people are suffering from mycotoxicosis (toxic mold poisoning).

The leaky gut syndrome can cause the malabsorption of many important micronutrients. The inflammatory process causes swelling (edema) and the presence of many noxious chemicals, all of which can block the absorption of vitamins and essential amino acids. A leaky gut does not absorb nutrients properly.
Bloating, gas and cramps occur, as do a long list of vitamin and mineral deficiencies. Eventually, systemic complaints like fatigue, headaches, memory loss, poor concentration or irritability develop.

Prescription broad-spectrum antibiotics, especially when taken for extended periods of time, wipe out all the gut-friendly bacteria that provide protection against fungi and amoebic (parasitic) infections, help the body break down complex foods and synthesize vitamins like B12 and biotin. Since the friendly bowel flora is killed off, the body now has no local defense against parasites or fungi that are normally held in check. These then quickly develop and may trigger the signs and symptoms of arthritis, eczema, migraines, asthma or other forms of immune dysfunction. Other common symptoms of this bowel flora imbalance and leaky gut syndrome are bloating and gas after meals and alternating constipation and diarrhea. This set of symptoms is usually labeled as IBS (irritable bowel syndrome) or spastic bowel disease and treated symptomatically by general practitioners and gastroenterologists with antispasmodic drugs, tranquilizers or different types of soluble (psyllium) and insoluble (bran) fiber.

The Leaky Gut and IBS

The mainstream thinking on IBS is that it is caused by stress. Irritable bowel syndrome is the number one reason for general practitioner referrals to specialists. In well over 80% of cases, tests like the intestinal permeability test (a special urine test involving the determination of the absorption rates of two sugars called lactulose and mannitol), CDSA or live-cell darkfield microscopy reveal the presence of an overgrowth of fungi, parasites or pathogenic bacteria. The one-celled parasite blastocystis hominis and different species of candida are the most common microbes seen in IBS. The only stress associated with IBS is that which is generated by leaky gut syndrome. If allowed to persist without correct treatment, IBS can progress into more serious disorders like the candidiasis syndrome, multiple chemical sensitivities, chronic fatigue syndrome, many autoimmune diseases and even cancer. If treated medically, IBS is rarely cured. To treat it correctly, natural treatments work best and must include the removal of the cause, improvement of gastrointestinal function and healing of the lining of the gut.

How to reverse leaky gut syndrome

Band-aid treatment with corticosteroids, prescription antibiotics and immunosuppressive drugs may be temporarily life-saving for acute episodes of pain, bleeding or severe inflammation, as occurs in lupus or colitis. In the long run, however, none of these treatments do anything to heal the leaky gut problem. To reverse the leaky gut syndrome, the diet must be completely changed to one which is as hypoallergenic as possible. Sugar, white flour products, all gluten-containing grains (especially wheat, barley, oats and rye), milk and dairy products, high-fat foods, caffeine products, alcohol and hidden food allergens determined by testing must all be eliminated for long periods of time (several years in the more severe cases). Treatment might also include the use of natural antibiotics (echinacea, colloidal silver, garlic), antiparasitics (cloves, wormwood, black walnut) and antifungals (taheebo, caprylic acid, grapefruit seed extract), depending on the type of infection which shows up on objective tests. It is rare that victims require prescription drugs for these infections, and their use should be discouraged.
The drugs are usually expensive, have unpleasant side effects and are best reserved for life-threatening conditions. Leaky gut syndrome patients can help themselves by chewing their food more thoroughly, following the basic rules of food combining, eating frequent small meals rather than three large ones and taking more time with their meals. Gastrointestinal function can be improved with a juice fast or a hypoallergenic diet and supplements like lactobacillus acidophilus and bifidus, as well as natural FOS (fructooligosaccharides) derived from Jerusalem artichoke, chicory, the dahlia plant or burdock root (not FOS derived from the aspergillus fermentation process, although most is, so use caution).

Beneficial supplements for leaky gut syndrome
- Natural digestive enzymes - from plants (e.g. bromelain, papain) or pancreatic animal tissues (porcine, bovine, lamb), and aloe vera juice with high MPS concentration (good brands are International Aloe, Earthnet and Royal);
- Stomach-enhancing supplements - betaine and pepsin, glutamic acid, stomach bitters, apple cider vinegar;
- Amino acids - L-glutamine, N-acetyl-glucosamine (NAG);
- Essential fatty acids - milled flax, flax seed oil, evening primrose oil, borage oil, olive oil, fish oil, black currant seed oil;
- Soluble fiber - psyllium seed husks and powder, apple and citrus pectin, the rice-derived gamma oryzanol;
- Antioxidants - carotenoids, B complex, vitamins C and E, zinc, selenium, germanium, coenzyme Q10, bioflavonoids, especially quercetin, catechin, hesperidin, rutin and proanthocyanidins (pycnogenols, grape seed extract, pine bark extract, bilberry);
- Herbs and plant extracts - kudzu, various high-chlorophyll green drinks like spirulina, chlorella and blue-green algae, burdock, slippery elm, Turkish rhubarb, sheep sorrel, licorice root, ginger root, goldenseal, bismuth and bentonite.

If you suspect you may be suffering from leaky gut syndrome, the most important thing to do is get yourself tested by a natural health care practitioner. A personalized natural program of diet and supplements can then be instituted to help you reverse this debilitating condition.
<urn:uuid:5fae606f-2ed9-43f0-8a53-80e9d87652bd>
CC-MAIN-2021-43
https://www.ei-resource.org/articles/leaky-gut-syndrome-articles/altered-immunity-a-the-leaky-gut-syndrome-dr-zoltan-p-rona-md-msc/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00150.warc.gz
en
0.900389
2,877
3.359375
3
Prisons are institutions designed to securely house people who have been convicted of crimes. These people are known as prisoners or inmates and are kept in custody for a certain amount of time. The type of crime determines the length of the sentence. For some crimes (e.g. murder), individuals may be sentenced to lifetime imprisonment. In order for an individual to be incarcerated, they have to be accused of violating criminal law and then tried and convicted by a jury of their peers. The offender will then be given a sentence specifying the punishment. The nature of the crime and whether or not it is a first offense determine whether the punishment will be probation or incarceration in a prison or jail.

According to historians, temples were used as sanctuaries before the concept of prisons evolved. They were places the accused could flee to, but if they were unable to make it to one, they were to be punished by the accuser, which sometimes ended in death (Kosof, 1995, pp.19). According to Encarta online Encyclopedia, the existence of prisons originated in ancient Rome and Greece. The first place of confinement, Mamertine Prison, was constructed in the 7th century B.C. in Rome. It consisted mainly of many tunnels of dungeons under the sewers. Small, miserable chambers held criminals for short periods of time.

Instead of incarcerating serious offenders, England began transporting criminals. England's first deportation law was passed in 1597, allowing it to send the worst criminals to the Americas (Kosof, 1995, pp.20). After the American Revolution, transporting of criminals was no longer allowed, so Britain began using 'convict ships'. Conditions on these were even worse, and many felons died at sea. These ships were equipped with chains, torture devices, and barbaric equipment to put people to death in gruesome ways (Kosof, 1995, pp.20-22). But it was British social reformer John Howard's work that helped pass the Penitentiary Act of 1779. He criticized prison conditions and visited several facilities in different countries; then he would report his findings to politicians in England. In turn the British Parliament passed penal reform legislation, hence the Penitentiary Act of 1779. Under this, new prisons were constructed, allowing prisoners to have clean, individual cells and adequate food and clothing.

In 1816 New York established a prison at Auburn. The original design of the prison included 61 double cells, but William Britten, the first warden, converted each double cell into solitary cells, thinking this would help in the rehabilitation of inmates (Kosof, 1995, pp. 22). Prisoners wore different uniforms to set them apart from one another. Since keeping up a prison was very costly, officials made deals with surrounding businesses and made the prisoners work as part of their sentences. The American Civil War was what changed the structure and purposes of prisons, at least indirectly, in the South. Prisons there were beginning to be frowned upon, and officials thought they were exploiting the inmates. So the system changed to resemble that of the North. Today, prisoners are allowed to work for wages though (Paragraph 14).

The number of state and federal prisoners in the United States quadrupled during the 1980s and 1990s: from 319,000 in 1980 to 773,000 in 1990 to 1,302,000 in 1999. Those convicted of drug offenses make up the largest group: sixty percent of federal prisoners and twenty-one percent of state prisoners.
Nearly 94 percent of all prisoners are male. Most male prisoners in the United States are poor and members of minority groups. African Americans make up nearly half of all male prisoners in U.S. prisons. Hispanics make up about 18 percent of the male inmate population. According to studies, most male inmates were unemployed at the time of their arrests, and their average level of education was the 11th grade. One-third of all male prisoners in state and federal penitentiaries are in the 35-54 age group, which has dramatically increased, by 70 percent, since 1990; another one-third is made up of prisoners in the 25-34 age group, and one-fifth are between the ages of 18 and 24. Approximately one-fourth of male inmates in prisons in the United States have been convicted of property offenses, while nearly half were sentenced for violent crimes. Drug offenders make up slightly less than one-fourth of male prisoners.

Among the female inmates, nearly half of the prisoners in United States prisons are between the ages of 25 and 34, and a similar proportion have never been married. Similar to the male prisoners, half of the female inmates are African Americans. Hispanics make up 14 percent and Caucasian females make up 36 percent of the female population. As is true for the males, most of the females have not completed high school, and half were without jobs at the time of their arrests. More than 75 percent of convicted female inmates have children, though. Studies from 1997 show that about 6 percent were pregnant or gave birth in prison. In the United States, drug offenses and violent crimes are the most frequent charges for incarceration for women. Together these two categories make up two-thirds of the female population. Females convicted of property offenses (e.g. fraud) make up roughly one-third to just under one-half of the inmates.

There are several different types of prisons that house criminals who have committed a range of crimes. Inmates are assigned to them according to custody level; the higher the custody level, the more security and supervision.

Minimum-security prisons are designed to contain low-risk, first-time offenders convicted of nonviolent crimes. They are also used for prisoners from maximum-security and medium-security prisons who will soon be paroled. In 1998, these facilities made up one-fifth of all United States prisons. The freedoms at these facilities are much like those on a college campus: the housing is like dorms, and the grounds and buildings are set up like a school. Inmates that are assigned to these are trusted to an extent to comply with the rules and regulations. Most of the inmates here are just trying to get out as quickly as possible and with as few restrictions as possible.

Medium-security institutions make up one-fourth of all state and federal prisons in the United States. Medium-security prisons are known as 'catchall' facilities, meaning they harbor inmates with a whole range of convictions, so extremely violent and nonviolent offenders are placed in common living quarters. Inmates often occupy cells that accommodate more than one prisoner. At medium-security facilities, freedoms are greatly restricted. Access to educational programs, freedom of movement, and any sort of privileges are monitored to a T. Visitation rights are limited, but when granted, visitors and inmates face one another through a glass window and talk to one another on a telephone.
Sometimes work release and other types of transitional programs are offered, but only a small percentage of prisoners are allowed to participate. Maximum-security facilities make up about 15 percent of all U.S. prisons. Inmates incarcerated in these types of institutions are usually the most dangerous, high-risk offenders. Maximum-security prisons have many harsh rules and regulations. Inmates are mostly isolated from one another in solitary cells for long periods. Video cameras are used by correctional officers to keep a constant watch on prisoners in their cells or work areas. Many maximum-security prisons confine inmates for 23 hours a day, allowing them out only for a short period of time to shower and exercise. Examples of maximum-security facilities are more widely heard of by the public. The U.S. penitentiaries in Leavenworth, Kansas and Terre Haute, Indiana are examples. Other facilities are Sing Sing Correctional Facility in Ossining, New York and Attica Correctional Facility in Attica, New York.

In the United States, the highest security-level facilities are super-max or maxi-maxi prisons, which make up less than five percent of all U.S. penitentiaries. Also called 'control units', these prisons or areas within prisons have severe restrictions. Human contact is minimal. Inmates are kept in solitary confinement in small (usually six feet by eight feet) cells for long periods of each day. They eat alone in their cells, and no opportunities for work or socialization exist. Outdoor recreation is only permitted once a week. Restraints, such as leg shackles, are used whenever inmates leave their cells. The federal penitentiary located in Marion, Illinois, which was constructed in 1963, was the first designated super-maximum facility. Those sentenced there were convicted of the most violent crimes and considered the most dangerous prisoners and the most likely to escape. Many prisoners have been transported there after committing murder in other prisons.

The vast majority of female prisoners in the United States are held in women-only facilities. About one-fifth of all female inmates are housed in co-ed facilities. Interaction between male and female inmates at co-ed prisons is minimal, and men and women only share certain resources and recreational facilities. Female inmates are housed in units that are entirely separate from units that house male inmates during evening hours. The first U.S. prison exclusively for women, known as Mount Pleasant Female Prison, was established in Ossining, New York in 1837. Because there were few female criminals and it was less costly, the government decided to house them with the male inmates in prison, instead of respecting women's needs and constructing a separate female prison. From 1873 to the 1990s, more than 112 female prisons were built in the United States.

In 1999 Amnesty International, a private human rights organization, issued a report expressing concerns about the treatment of female inmates in the U.S. prison system. Governments have provided few facilities and minimal services for female inmates. Women have not had the access to rehabilitation programs that has been available to male offenders. The organization reported widespread complaints of sexual abuse and rape as well. It criticized the practice of allowing male correctional officers to supervise female inmates.
According to an article entitled "Violated" by Stacie Stukin in the January 2004 issue of Vibe magazine, T'Nasa Harris, 32, was an inmate at Robert Scott Correctional Facility, a multilevel-security state prison just outside of Detroit. She was raped by a correctional officer while she was serving her 90-day sentence for shoplifting. T'Nasa Harris' 90-day sentence turned out to be something she would always remember, and not only for the time served: she now has a son who bears the look of the man who raped her. Eighteen women are part of a state class action lawsuit against the Michigan Department of Corrections (MDOC) for failing to prevent or remedy allegedly rampant sexual abuse and harassment in its prisons; the case was filed in 1996 but is still pending.

Every prison facility runs into problems with inmates, with rules and regulations being broken, or with overcrowding. The prison system is not perfect, but the government is trying to correct the imperfections as well as possible, and it has taken the first steps at doing so. Whether it is investigating a possible rape charge or conducting a 'lockdown' similar to the one at Marion, Illinois, inmate violence is expected, and the government is now trying to find ways to keep it from happening. Maybe harsher punishments need to be handed down or more prisons need to be built to accommodate all the criminals. Even as the number of criminals increases, prison environments get a little bit better thanks to the organizations that protest against mistreatment.
<urn:uuid:17acc04a-4d49-4537-86f0-02eeb21ea6fb>
CC-MAIN-2021-43
https://gerardcambon.net/prisons-an-institution-designed-to-securely-house-people-who-have-been-convicted-of-crimes/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00390.warc.gz
en
0.971206
2,412
3.96875
4
No. 211 Squadron RAF |No. 211 Squadron RAF| |Branch||Royal Air Force| |Role||Light bomber / fighter-bomber squadron| |Motto(s)||Toujours à propos| ("Always at the right moment") |Squadron badge||An azure lion disjointed, ducally crowned.| No. 211 Squadron RAF was a squadron in the Royal Air Force active from 1917 to 1919 and from 1937 to 1946. In World War I it operated as a bomber and later a reconnaissance unit on the Western Front. In World War II it operated as a medium bomber unit in the Middle East and Far East and later as a strike fighter unit in the Far East, equipped with, successively, the Bristol Blenheim, the Bristol Beaufighter and de Havilland Mosquito. World War I No. 11 (Naval) Squadron was formed in March 1917 as a squadron of the Royal Naval Air Service. It was primarily an operational training squadron, flying single-seat fighter aircraft, mainly Sopwith Pups and Triplanes, and a few Camels. It also flew standing patrols over the British naval ships stationed in the North Sea off the coast of the Netherlands. It was disbanded in August 1917. On 10 March 1918 it was reformed as an RNAS bomber squadron at Petite-Synthe, Dunkirk, operating the DH.4 and DH.9 day bomber. Its operations were mainly directed against the ports of Bruges, Zeebrugge and Ostende, in an attempt to interdict the German U-boat campaign. On 1 April 1918, with the merging of the RNAS and the Army's Royal Flying Corps, it was renamed No. 211 Squadron RAF. It later flew operations in support of the Belgian Army in Flanders. From October 1918 it operated as a photographic reconnaissance unit. The squadron was disbanded at RAF Wyton on 24 June 1919. During its period of service it lost 22 aircrew killed in action, 10 taken prisoner and 15 interned in the Netherlands. A further 18 men were wounded, while two men died during the post-war flu pandemic. They had accounted for 35 enemy aircraft, dropped 150 tons of bombs, and flown 205 reconnaissance sorties. Squadron members were awarded three Distinguished Service Orders and one Bar, seven Distinguished Flying Crosses, one Distinguished Flying Medal, three mentions in despatches, two Silver Medals for Gallantry in Saving Life at Sea, and two Distinguished Service Crosses from the United States. World War II The squadron was re-formed at RAF Mildenhall on 24 June 1937, with 10 officers and about 50 airmen, and was initially equipped with 12 Hawker Audax light bombers organised into two flights of six. By the end of the year, there were 15 officer pilots and three sergeant pilots. In August 1937 the squadron was re-equipped with the Hawker Hind, and moved to RAF Grantham the following month. In May 1938 the squadron was one of several deployed to RAF Middle East. Based at RAF Helwan in Egypt with 18 Hind aircraft, the squadron was organised into three flights of six, with 14 officers and about 180 other ranks. This included 18 pilots, split equally between officers and NCOs. In January 1939 it moved to RAF Ismailia where in April it re-equipped with the Bristol Blenheim Mk.I twin-engined light bomber. With nine or twelve Blenheims, the squadron establishment was set at 360 officers and men. From June 1940, following the Italian declaration of war, 211 Squadron was involved in operations against the Italians in Libya and the Western Desert, including the attack on Tobruk on 12 June, during which the cruiser San Giorgio was damaged, and a few days later in the capture of Fort Capuzzo. 
Following the attack by Italy, in November 1940 it moved to Greece, initially based at Tatoi, the pre-war civil airport and Hellenic Air Force base at Menidi on the northern outskirts of Athens, before moving forward to Paramythia near the north-western border with Albania. On 13 April 1941, the squadron suffered a severe blow when, following an attack on German forces at Florina in the Monastir Gap by six aircraft, they were attacked by Bf 109Es of JG 27 on the return flight, and all six aircraft were shot down. The German advance forced 211 Squadron back, first to Agrinion and then to Tatoi from where it was evacuated in April 1941 through Crete to Egypt. The squadron then moved to Palestine. Based at RAF Aqir by May 1941 and partly re-equipped with the Blenheim Mk. IV, the squadron flew operations against Vichy French forces in the Syria–Lebanon Campaign. Withdrawn to Egypt in June 1941, it was based at RAF Heliopolis to regroup for the pending move to Wadi Gazouza in Sudan. There it was to act as a reserve training Squadron from July to October 1941, before providing the nucleus for the formation of No. 72 OTU, into which the squadron and personnel were formally absorbed in November 1941. The squadron was re-established in December 1941 at RAF Helwan, equipped with 24 Blenheim IVs with around 90 aircrew and over 400 ground staff. In January 1942, it was sent to the Far East to operate from Sumatra and Java in a short-lived campaign against the Japanese. The squadron suffered heavy casualties, losing ten aircraft and 19 aircrew killed or missing during operations from 6 February to 21 February 1942. By the first week of March, Allied forces were withdrawing from Java but only 87 of 211 Squadron's personnel were evacuated before the surrender on 8 March 1942. At least 340 personnel of the squadron were taken prisoner by the Japanese, of whom 179 died in captivity. The squadron re-formed at Phaphamau in India on 14 August 1943 and in October was equipped with the Bristol Beaufighter Mk. X. Operating 16 or 18 aircraft the squadron comprised 40 to 50 aircrew with around 350 groundcrew. After moving to Ranchi in November, then to Silchar in December, in January 1944 it moved to Bhatpara, from where it was engaged in operations against the Japanese in Burma. By July 1944 it was based at Chiringa in Bengal Province, India (now Bangladesh) where it was to operate until stood down for conversion to the de Havilland Mosquito from June 1945. From March 1945, the squadron's maintenance personnel were re-established as No. 7211 Servicing Echelon, undertaking all the squadron's aircraft maintenance work thereafter. In May 1945 the squadron was stood down from operations and moved to Yelahanka, near Bangalore, where in June it was re-equipped with de Havilland Mosquito FB Mk. VI. In July it moved to St. Thomas Mount, Madras, and in November, following the Japanese surrender, to Akyab, Burma, then to Don Muang, Bangkok, Thailand. There, on 15 March 1946, it was finally disbanded. Between 1937 and 1946 the members of 211 Squadron were awarded three Distinguished Service Orders, 27 Distinguished Flying Crosses and one Bar, eight Distinguished Flying Medals, five mentions in dispatches, and four awards from other countries. |Major H.G. Travers||March–May 1918| |Major R. Loraine||May–July 1918| |Major G.R.M. Reid||July 1918 – March 1919||Retired as Air Vice-Marshal, 1946| |Captain H.N. Lett||March–June 1919| |Squadron Leader R.J.A. 
Ford||July 1937 – March 1938||Retired as Group Captain, 1954| |Squadron Leader S.H. Ware||March 1938 – February 1939||Retired as Air Commodore, 1948| |Squadron Leader J.W.B. Judge||February 1939 – July 1940||Retired as Group Captain, 1952| |Squadron Leader A.R.G. Bax||July–September 1940||Retired as Wing Commander, 1955| |Squadron Leader J.R. Gordon–Finlayson||September 1940 – March 1941||Retired as Air Vice-Marshal, 1967| |Squadron Leader R.J.C. Nedwill||March 1941||Killed in air accident, 26 March 1941| |Squadron Leader A.T. Irvine||March–April 1941||KIA, 13 April 1941| |Squadron Leader K.C.V.D. Dundas||April–May 1941||KIA, 10 February 1942| |Squadron Leader A.S.B. Blomfield||May–July 1941||KIA, 7 October 1943| |Wing Commander D.C.R. Macdonald||July–November 1941| |Wing Commander R.N. Bateson||January–March 1942||Retired as Air Vice-Marshal, 1967| |Acting Squadron Leader J.E.S. Hill||October 1943| |Wing Commander P.E. Meagher||October 1943 – August 1944| |Squadron Leader J.S.R. Muller–Rowland||August–October 1944||Killed in DH 108 accident, 15 February 1950| |Squadron Leader H.E. Martineau||October–December 1944| |Squadron Leader R.N. Dagnall||December 1944 – January 1945||KIA, 13 January 1945| |Wing Commander R.C.O. Lovelock||January–August 1945| |Wing Commander D.L. Harvey||August 1945 – March 1946||Retired as Wing Commander, 1966| - Pine, LG (1983). A Dictionary of mottoes. London: Routledge & K. Paul. p. 234. ISBN 0-7100-9339-X. - Clark, D. (24 December 2010). "211 Squadron Markings". 211squadron.org. Retrieved 14 December 2014. - Clark, D. (2014). "World War I". 211squadron.org. Retrieved 14 December 2014. - Constable, Miles (2008). "Arthur Roy Brown, World War I Fighter Ace: A Short History". Canadian Air Aces of WWI, WWII and Korea. Archived from the original on 3 March 2016. Retrieved 15 December 2014. - Clark, D. (2014). "211 Squadron Movements". 211squadron.org. Retrieved 14 December 2014. - Clark, D. (2014). "211 Squadron personnel rolls". 211squadron.org. Retrieved 14 December 2014. - Clark, D. (2014). "No. 211 Squadron RAF History". 211squadron.org. Retrieved 14 December 2014. - Playfair, I.S.O. (2009), pp.110, 112–113 - Clark, D. (2014). "C.F.R. Clark". 211squadron.org. Retrieved 14 December 2014. - Clark, D. (2014). "The Far East". 211squadron.org. Retrieved 14 December 2014. - Clark, D. (2014). "211 Squadron Gallantry awards". 211squadron.org. Retrieved 14 December 2014. - Clark, C.F.R. (1998). 211 Squadron Greece 1940–1941: An Observers Notes and Recollections. Canberra: D.R. Clark. - Dunnet, J. (2001). Blenheim Over the Balkans. Durham: Pentland Press. ISBN 9781858218823. - Playfair, Major-General I.S.O.; Molony, Brigadier C.J.C.; with Flynn, Captain F.C. (R.N.) & Gleave, Group Captain T.P. (2009) [1st. pub. HMSO:1954]. Butler, Sir James (ed.). The Mediterranean and Middle East, Volume I: The Early Successes Against Italy, to May 1941. History of the Second World War, United Kingdom Military Series. Uckfield, UK: Naval & Military Press. ISBN 1-84574-065-3. - Squire, S/Ldr H.F. (1997). "RAFMO". Middle East Scrapbook. Durham: Pentland Press. - Spencer, D.A. (2009). Looking Backwards Over Burma — Wartime Recollections of a RAF Beaufighter Navigator. Bognor Regis: Woodfield Publishing. ISBN 9781846830730. - Wisdom, T.H. (1942). Wings Over Olympus. London: George Allen & Unwin. - Wright, P.A. (2011). The Elephant On My Wing — The Wartime Exploits of Flight Lieutenant Bobby Campbell, a Blenheim Pilot with 211 Squadron RAF 1939–1943. Bognor Regis: Woodfield Publishing. 
ISBN 9781846831195.
- Clark, D. (2014). "No. 211 Squadron RAF". Retrieved 22 December 2014.
- "211 Squadron". Royal Air Force. 2014. Retrieved 14 December 2014.
- Rickard, J. (2013). "No. 211 Squadron (RAF) during the Second World War". History of War. Archived from the original on 5 February 2018. Retrieved 14 December 2014.
- "Squadron Histories 211–215". Air of Authority – A History of RAF Organisation. 2014. Retrieved 14 December 2014.
- "211 Squadron". RAF & Airfield History in Lincolnshire. 2014. Retrieved 14 December 2014.
<urn:uuid:1a8ab78b-37c1-4199-8202-245ba650484d>
CC-MAIN-2021-43
https://en.wikipedia.org/wiki/No._211_Squadron_RAF
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00150.warc.gz
en
0.941967
2,886
2.515625
3
This article was co-authored by Karin Lindquist, a trusted member of wikiHow's volunteer community. Karin Lindquist earned a BSc in Agriculture as an Animal Science major from the University of Alberta, Canada. She has over 20 years of experience working with cattle and crops. She's worked for a mixed-practice veterinarian, as a sales representative in a farm supply store, and as a research assistant doing rangeland, soil, and crop research. She currently works as a forage and beef agriculture extension specialist, advising farmers on a variety of issues relating to their cattle and the forages they grow and harvest.

Knowing when a heifer or cow is ready to be bred is important to a breeding operation. There are specific guidelines one must follow to ensure that a heifer or cow is ready to be bred. Heifers are female bovines that have not had a calf. A heifer is no longer a heifer after she has had her second calf, at which point she becomes a cow: a mature female bovine that has had two calves. Heifers remain heifers from the day they are born until they have had their second calf.

Method 1 of 2: Breeding Heifers

1. Depending on the breed of the heifer, most heifers will start to show the first signs of heat when they are between 9 and 22 months of age.
- The rate of sexual maturity, or puberty, is determined by genes and breeding. The rate of maturity in the sense of carcass maturity is not directly related to, nor does it determine, when a heifer is ready to breed. Carcass maturity is when bone and muscle growth plateaus off and fat begins to be laid down.

2. Usually it's best to wait until they are at least 15 months of age before breeding. Even though the early-maturing breeds do reach puberty by the time they are around 7 to 9 months of age, it is best to wait until they are around 13 to 15 months of age before you breed them. This is because it allows them to grow more, increase their pelvic area and gain enough condition to sustain themselves throughout gestation. Heifers that are bred too early tend to have too small a pelvic area to calve out, so some "whoopsie" heifers need to have a C-section done on them, or have the calf pulled. This can be quite costly, as the new calf will often have to be bottle-fed to get enough milk for him.
- Occasionally, though, some heifers that are bred too early are able to pass and raise a calf without human interference, either at calving or during the period the calf is being raised.

3. The heifer must also be at least 60% to 65% of the average mature weight of the cowherd before she can be bred. This is so that she is big enough to hold and grow a calf in her while she also continues to grow.

4. In order to successfully breed a heifer, there are two ways to go about it:
- Choose a bull with good (as in low) calving-ease numbers to breed her (and other heifers like her), or
- Time her estrus periods so that you can artificially inseminate (AI) her (or get an AI tech to do it for you).
- A heifer can only be successfully bred during her heat periods. Timing everything right is critical to making AI successful for her. She must be AI'd 12 hours after you see her first signs of estrus. And remember that AI only has a 60% to 70% success rate.
- With natural insemination, the bull will know when she will stand to be bred and when she isn't receptive. It is best to leave the bull in with the heifers for 60 to 80 days to make sure he services all of them. Use a yearling bull (one that is around 12 months of age) on them to reduce injury. (Note, though, that using a yearling bull may or may not decrease the size of the calves born. Most, if not all, veteran producers can tell if a young bull will sire small calves simply by looking at his conformation and his EPD numbers, based off the genetics and EPDs of his sire and dam.)

Method 2 of 2: The Cow

1. A cow should be bred back after she has had a calf. The optimum time to breed her is 45 to 60 days after she has had a calf. In order to get her to calve on the same date as in previous years, allow for 80 to 90 days of rest before getting her bred again. Typically it will take longer for her to come back into normal heat if she's in poor condition or has reduced fertility due to age, inadequate diet, or environment.
- The poorer her condition (the thinner or fatter she is), the later she will be able to breed back. See How to Judge Body Condition Scores in Cattle for more. Age and poor or undesirable conformation will also affect how soon a cow will breed back in time.
- The reason there is a waiting period between when the cow has given birth and when it is best to get her bred again is that it takes time for the uterus to involute, or shrink back to its normal size. It also takes time for the cow's ovaries and hormonal system to get back to normal. Though a cow will show signs of heat 14 to 18 days after giving birth, her heat periods are quite unpredictable and short. This is because it takes some time for the corpus luteum to return to its normal state and for the production of new ova to begin normally again.

2. As mentioned above in the steps for heifers, a cow can be bred either by AI or by natural service.
- AI follows the same principles and rules mentioned for breeding heifers in order to have a higher success rate in getting her settled.
- You usually do not need a calving-ease bull with cows, certainly not like you do with heifers. However, please be cautious in what bull you select for your cows. If you are using a Continental bull on British-type cows, for instance a Charolais bull on Angus cows, that bull does need to have good or low calving-ease EPDs in order to reduce the risk of dystocia or calving problems in your cows. Charolais are typically notorious for causing calving problems in British-type cows, or rather for throwing calves that are often larger than what a British bull on British cows usually throws. If you do not pay attention to the numbers (the EPDs or Expected Progeny Differences) of that herd sire, you will land yourself in a lot of trouble, and be very busy next calving season pulling calves.
- On the other hand, if you are using a British bull on British cows (and it doesn't necessarily have to be the same breed), you can still get a bit careless with calving ease and still end up with problems. All the same, watch out for those bulls that have extremely high calving-ease EPDs, regardless of what breed they are or you choose.
- Also remember to select the bull that complements and improves your herd, not the other way around.
Question: If I have heifers and cows running together, should I separate the heifers to run with bulls first?
Community Answer: It would be a good idea to do so. That way you will be able to spend quality time watching the heifers calve out; once you have most of the heifers calved out, the cows will start calving. Arrange it so that your heifers are bred one to two weeks ahead of your cows so that the heifers are calving out a week or two before the cows.

Question: Why would my heifer still breed monthly?
Community Answer: If your heifer continues to come into estrus (stands to be mounted by the bull), then she is not actually becoming pregnant. There are rare instances of heifers continuing to stand even though they are pregnant, but it is uncommon. It would be best to call your vet to determine whether the bull is infertile or the cow has something going on causing her not to catch a calf.

Question: How long is a cow pregnant?
Community Answer: The average gestation length is 285 days.

- The better condition score a cow is in after calving, the sooner she'll be ready to breed.
- Heifers should be bred when they have had at least 3 heat periods after the initial start of puberty, no matter what the breed.
- Check the hindquarter conformation of the heifer first before you decide to get her bred. A deep, long, and wide rump is a very good sign that the heifer is a keeper.
- You will always know when a heifer or cow is ready to be bred when she goes into heat.
- Normal estrus periods last 24 hours and occur every 17 to 24 days.
- Heifers should be in as good a condition score as cows are 30 days prior to breeding. Females should be at a Cdn BCS between 2.5 and 3.5 (USA BCS of 3 to 5) before breeding season.
- A lone cow or heifer that you have on your farm that does not have access to other herd-mates is a danger for you, particularly when she goes into heat. You could be in for a surprise when she tries to mount you in her "vigorous" state.
- Be wary around bulls during breeding season. They can get quite protective of their harem if they don't know that you are not really competition for them.
- AI only has a 60 to 70% success rate if you choose to use it on your cows or heifers. However, the better the AI tech, the higher the success rate you'll have.
- ↑ https://beef.unl.edu/faq-2009breedingage
- ↑ https://hereford.org/wp-content/uploads/2017/02/issue-archive/0214_HeiferDevelopment.pdf
- ↑ https://www.livecorp.com.au/LC/files/e4/e4a91e28-4b11-45c2-bff4-db9d83faa555.pdf
- ↑ https://hereford.org/wp-content/uploads/2017/01/CalvingEaseEPDs.pdf

About This Article
You'll know a heifer is ready to be bred when she's started to show signs of heat. This usually occurs between 9 and 22 months of age, but even if she's in heat earlier, you should only try to breed her after 15 months. This will ensure she's big enough and her hips are wide enough to survive the birth process. You'll also want her to be at least 60 percent of the average mature weight of the cowherd to ensure that she's big enough to grow a calf. After a cow has had a calf, wait 45 to 60 days to breed her again, or when she starts to show signs of heat. If you want her to calve at the same time of year as before, wait 80 to 90 days before breeding her.
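As a rough illustration of the figures mentioned above, the short sketch below turns the 60-65% weight rule and the breed-back windows into a simple calculation. This is only a worked example: the percentages, rest periods and the roughly 285-day gestation come from the article, while the herd average weight and calving date used here are hypothetical inputs.

```python
# Illustrative only: the guideline numbers are taken from the article above;
# the herd average weight and calving date are made-up example inputs.
from datetime import date, timedelta

def minimum_breeding_weight(avg_mature_cow_weight_lb: float) -> tuple:
    """Weight range (60%-65% of the herd's average mature weight) a heifer
    should reach before being bred."""
    return (0.60 * avg_mature_cow_weight_lb, 0.65 * avg_mature_cow_weight_lb)

def breed_back_window(calving_date: date, keep_same_calving_date: bool = False) -> tuple:
    """Earliest and latest recommended dates to rebreed a cow after calving.
    45-60 days is the optimum window; 80-90 days of rest keeps her calving at
    about the same time next year (80-90 days rest + ~285 days gestation is
    roughly one year)."""
    if keep_same_calving_date:
        return (calving_date + timedelta(days=80), calving_date + timedelta(days=90))
    return (calving_date + timedelta(days=45), calving_date + timedelta(days=60))

if __name__ == "__main__":
    low, high = minimum_breeding_weight(1300)  # hypothetical 1,300 lb herd average
    print(f"Target heifer weight before breeding: {low:.0f}-{high:.0f} lb")
    start, end = breed_back_window(date(2024, 3, 1), keep_same_calving_date=True)
    print(f"Rebreed between {start} and {end} to calve at about the same time next year")
```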
<urn:uuid:c44185e6-cafe-43d8-909b-ec2c425ca8af>
CC-MAIN-2021-43
https://www.wikihow.com/Know-when-a-Heifer-or-Cow-Is-Ready-to-Be-Bred
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587659.72/warc/CC-MAIN-20211025092203-20211025122203-00510.warc.gz
en
0.963469
2,544
3.28125
3
Most everyone will have suffered from diarrhea at some point during their lives. It can be extremely debilitating and is often very painful. There are a number of potential causes, from viral and bacterial infections to alcohol abuse, but in the majority of cases the condition can be resolved at home. Natural Remedies for Diarrhea can be very effective at relieving the problem. Essential oils are one of a number of potential Home Remedies for Diarrhea that can help relieve the condition and its symptoms. Symptoms can range from nausea to bloating and pain. However, there is very little research into the effects of essential oils on diarrhea, and diarrhea may be a sign of a more serious illness. You should consult your doctor and only use essential oils as complementary therapy.

WHAT IS DIARRHEA?

When a person frequently passes loose, soft, and watery stools, they are defined as having diarrhea. This is often accompanied by abdominal bloating, cramps, nausea, and flatulence. Treatment depends on the severity of the condition as well as the causes and the patient's overall health. You should seek medical help if you are suffering from severe pain, dehydration, fever, vomiting, or rectal bleeding. However, in the majority of cases, you can treat diarrhea at home, where it will typically resolve in a couple of days. To ease your symptoms, you should drink lots of fluids and eat simple foods like rice, toast, and bananas. Over-the-counter medications like Imodium can help relieve your symptoms. You should, however, speak to your doctor before taking an OTC remedy, since some people should avoid them. Children under the age of five should not take OTC medications, and it is important to keep your child well-hydrated.

CAUSES OF DIARRHEA

Diarrhea has a number of potential causes, but viral infections are the most common. Viral diarrhea typically lasts between three and seven days. The following viruses can cause diarrhea:
- Norovirus: this is a common cause of diarrhea epidemics in schools, workplaces, and nursing homes.
- Rotavirus: commonly causes infant diarrhea.
- Adenovirus: this can affect people of any age.

Bacterial infections usually cause more severe diarrhea than viral infections. They are often caused by food poisoning. Bacterial infections can cause more severe symptoms including fever, abdominal pain, and vomiting. Bacterial diarrhea can be caused by the following:
- Salmonellae, Campylobacter, and Shigella: these organisms are the most common bacteria linked to bacterial diarrhea.
- E. coli, Listeria, and Yersinia: these strains of bacteria can also cause diarrhea.

Several parasites can infect the digestive system and cause diarrhea. They often enter the system through contaminated water. Parasites that cause diarrhea include Cryptosporidium and Giardia lamblia.

Foreign Travel: Many people pick up diarrhea when they travel to foreign countries and come into contact with unfamiliar parasites and viruses. This is known as traveler's diarrhea.

OTHER CAUSES OF DIARRHEA
- Intestinal Diseases: such as IBS, Crohn's disease, and ulcerative colitis.
- Food Intolerance: including lactose intolerance and artificial sweeteners.
- Certain Medications: including antibiotics, gout medication, blood pressure drugs, weight loss medications, and antacids.
- Alcohol Abuse: including chronic alcoholism and binge drinking.
- Running: runners may be familiar with the so-called 'runner's trots', which can happen after long runs.
SYMPTOMS OF DIARRHEA Diarrhea can come with a range of uncomfortable symptoms including the following: Loose and Watery Stools: The stools can be any color but red stools may be a sign of intestinal bleeding and serious infection. Abdominal Cramp: Diarrhea is sometimes accompanied by abdominal pain and cramping, the severity varies but is usually mild to moderate. More severe pain could be a sign of a more serious disease. Fever: this is an uncommon symptom. The presence of a fever may be a sign of a more serious illness. Bloating and Flatulence: gas and bloating are common symptoms The Frequent Urge to Move your Bowels HOW ESSENTIAL OILS NATURAL REMEDIES FOR DIARRHEA CAN HELP It is important to note that Natural Essential Oils should only be used as a complementary treatment along with your medical advice. There are very few studies into the effects of essential oils on diarrhea. However, there is some evidence that essential oils can help to ease some of the symptoms associated with diarrhea like bloating, cramps, and nausea. Many essential oils also have antibacterial and antiviral properties that may help fight against the pathogens responsible for the condition. Unfortunately, essential oils have not been extensively studied for their efficacy against diarrhea itself. Nevertheless, many essential oils contain antispasmodic and anti-inflammatory properties that may help ease some of the symptoms. Some oils can also help relieve nausea and stimulate digestion. BEST ESSENTIAL OILS FOR DIARRHEA PEPPERMINT ESSENTIAL OIL Peppermint essential oil contains menthol and is known for its carminative and anti-inflammatory effects. This essential oil can help combat certain symptoms of diarrhea including cramps, bloating, and flatulence. As well as its carminative actions, peppermint essential oil also has powerful antispasmodic actions that can help ease the cramps and pain that often accompany a bout of diarrhea. (1) It also makes people feel refreshed and generally healthy and may help ease feelings of nausea. Peppermint essential oil may also improve your digestive health by stimulating bile production. To use peppermint essential oil for Diarrhea Natural Treatment, you can diffuse it around the home. You can also apply it topically to your abdomen but make sure that you dilute it well first in a carrier oil like coconut oil or sweet almond oil. CINNAMON ESSENTIAL OIL Cinnamon essential oil is a warming oil with a wonderful fragrance but it has a wide range of potential medicinal effects thanks to its active compounds like cinnamaldehyde. Studies have found that cinnamon oil has powerful antimicrobial effects against a range of pathogens that may cause diarrhea including salmonella and Staphylococcus aureus. (2) The cinnamon essential oil also has antispasmodic actions that can help ease the intestinal cramps that often accompany bouts of diarrhea. Cinnamon essential oil is one of the best Herbal Products for Diarrhea that work properly without any side effects. Diffuse the oil or dilute and apply it topically to your abdomen. OREGANO ESSENTIAL OIL Oregano essential oil is powerful medicinal oil with a number of uses. These medicinal applications are largely down to the presence of two compounds – carvacrol and thymol. Studies have found that oregano essential oil has a number of therapeutic effects that may help ease diarrhea and its symptoms. Oregano essential oil has antibacterial actions against various bacterial strains. 
(3) It also has excellent anti-inflammatory actions and has demonstrated an ability to fight small intestinal bacterial overgrowth (SIBO). (4) Although the oil has not been tested for its anti-diarrhea effects on humans, one animal study found that treatment with oregano oil significantly reduced the symptoms of diarrhea in newborn calves. (5) Oregano oil is a very powerful essential oil and should never be taken internally. It can be diffused or applied topically as long as it is well diluted with a carrier oil. Use oregano essential oil as a Herbal Treatment for Diarrhea to help get rid of the problem.

GINGER ESSENTIAL OIL

Ginger essential oil may help ease some of the symptoms associated with diarrhea. Several studies have found that it has powerful anti-inflammatory effects, while it may also act as a digestive aid. (6) Diarrhea is sometimes accompanied by a feeling of nausea, and ginger essential oil is one of the best oil remedies for sickness and nausea. One study found that it helped people suffering from nausea following surgery, while it is also used to deal with motion sickness. (7) Ginger essential oil can be diffused and is considered safe to apply topically in its diluted form. Try diluting the oil with a suitable carrier oil and rubbing it into your belly and temples to help relieve nausea caused by diarrhea.

TEA TREE ESSENTIAL OIL

Tea tree essential oil is one of the most diverse and popular of all essential oils that can be used in Natural Remedies for Diarrhea. It has a very wide range of medicinal benefits, including antibacterial and anti-inflammatory properties that may help fight diarrhea and ease the symptoms. Studies have found that tea tree oil is an effective remedy against several bacterial strains that can cause diarrhea, including E. coli. (8) Its anti-inflammatory effects can also help soothe the inflammation and discomfort of the digestive tract caused by bouts of diarrhea. Diffuse this oil through the house or dilute and apply it topically.

RAVINTSARA ESSENTIAL OIL

Ravintsara essential oil is not as popular as many on this list, but it comes with a wide variety of medicinal benefits. It is extremely high in eucalyptol, which provides this oil with excellent antiviral properties. It can be used in Herbal Remedies for Diarrhea and other conditions.

ROMAN CHAMOMILE ESSENTIAL OIL

Roman chamomile essential oil can help in the Natural Treatment for Diarrhea and its symptoms, especially pain and discomfort. While it does not have the range of medicinal properties found in more powerful oils, it can help ease the discomfort, cramping, and pain caused by an outbreak of diarrhea. Roman chamomile essential oil has a wonderful, floral fragrance and is perfect for diffusing. It can also be diluted and applied topically to help ease your digestive symptoms.

LAVENDER ESSENTIAL OIL

Lavender essential oil finds its way onto most medicinal lists, and there is good reason for its popularity. This oil is gentle-acting and has a very wide range of potential uses for both the body and the mind. Among its many medicinal properties are anti-inflammatory and analgesic actions as well as antispasmodic properties that can help relieve many of the symptoms of diarrhea. It can also help ease the anxiety you may be feeling during a bout of illness. While lavender oil is one of the few essential oils that can be applied topically without dilution, it is still better to err on the side of caution and dilute it with a carrier oil first.
EUCALYPTUS ESSENTIAL OIL Eucalyptus essential oil gets the majority of its medicinal effects from the presence of eucalyptol or 1, 8-cineol. Studies have found that this compound has a very wide range of medicinal effects including anti-inflammatory and analgesic effects. (9) This makes eucalyptus oil a good option for dealing with many of the symptoms of diarrhea including inflammation, cramps, and pain. HOW TO USE ESSENTIAL OILS FOR DIARRHEA Even though some people use essential oils internally, we do not recommend ingesting them. There is however a number of readymade capsules on the market that contain essential oils like peppermint or oregano. Make sure that you speak with your doctor before taking any of those internal remedies. These are the safest and hopefully most effective ways of treating diarrhea and its symptoms with essential oils. Inhalation: If you are feeling nauseous as a result of your condition, try inhaling ginger or peppermint essential oil directly from the bottle. Diffuser: Most essential oils can be inhaled via a diffuser machine. You can use between 5 and 20 drops of essential oil in your machine either one essential oil or several oils combined. Some people advise not diffusing for more than a few hours at a time because it may cause nausea or headaches. Topical Application: When it comes to treating the symptoms of diarrhea, you can apply the oils in this article directly to your stomach. Make sure that you dilute your chosen oil in a carrier oil before massaging the mixture into your lower abdomen. This is a good way to treat the cramps, bloating, and nausea that accompany diarrhea. Aromatic Bath: Add ten or so drops of your chosen essential oil to your bathwater. Sit back, relax and let the oils do their work. SIDE EFFECTS AND PRECAUTIONS Essential oils are highly concentrated plant extracts. While they can be incredibly useful, they must be treated with respect. Make sure that you buy your oils from a reputed supplier and be sure to follow these guidelines: Do not take your essential oils internally unless under expert medical supervision. Most essential oils are safe for topical use as long as they are diluted first. There is any number of carrier oils to choose from and some of the most popular include olive oil, coconut oil, and jojoba oil. Even after you have diluted your essential oils, you should perform a patch test to ensure against sensitivity. Apply a small amount of the diluted oil to your skin, cover, and wait for 24 hours. If there is no reaction, you may go ahead and apply the larger dose. Pregnant women and nursing moms should consult their doctor before using essential oils. Diarrhea is a common condition with a number of potential causes. Symptoms include loose and watery stools that may be accompanied by pain, gas, bloating, and nausea. Most cases of diarrhea clear up pretty fast without the need for medical treatment. There are a number of potential home remedies including essential oils, While essential oils may not ‘cure’ your condition, they can help ease some of the symptoms like bloating, pain, and gas. Some of the best essential oils to treat diarrhea and its symptoms include peppermint, ginger, oregano, and lavender essential oils.
We the people of Great Britain need to understand some pretty important facts about a nation which has stood for over a thousand years, or, to be precise if we include the Molmutine Laws, over 2,480 years give or take a few. Since 1215 and the signing of the Magna Carta by King John, forced upon him by the established barony of the day, we can conceive of a realm held together by charters and statutes bound into a common law. In 1689 the Coronation Oath Act was created, and in 1701 the Act of Settlement made it a statutory requirement for the monarch to take the Coronation Oath; the oath was taken by both William and Mary in 1689. The office of monarch has the main duty of ensuring that everything beneath that office operates according to the law of the land. The system as a whole is bulletproof save for treason at the very top. Today we are witnessing the effects of just such action, which comes in the fact that the coronation oath was modified without statutory authority, first in 1936 with Elizabeth's father and then continued with Elizabeth. Both swore an oath slightly different from the 1689 version. It still included a promise to maintain the established Protestant religion in the United Kingdom, but this is an oath broken by the covenant made with the construct that is Methodism. The text of the oath taken by Elizabeth II in 1953 is also appended to this note.
9 Coronation Oath Act 1688 (1 Will & Mar chap 6)
This translates to the fact that Great Britain does not have a constitutional monarch with the aim of upholding the realm; what we have is a private organisation having its own partner holding the office of British monarch, with all the powers thereof, yet with no obligation to uphold the realm. In fact, the agenda is to demolish the realm and replace a nation with a corporation: The United Kingdom Corporation Sole, itself an agent of the Crown and, by definition of that fact, also an entity which Elizabeth, unlawfully, is duty bound to uphold. If it were not so, then signatures to the European Union, NATO, the United Nations etc could not have been given royal assent, a role that is the sole duty of the sitting monarch. The British Realm has suffered this position outwardly since 1936. It then follows that the offspring of this House have no right of succession. Yet another open move to break the Oath of Coronation comes with Charles, through his marriage to Camilla Parker Bowles, which was not carried out by the Church of England; the marriage was a civil ceremony. This could be, as many have suspected, to position Charles in power behind a personage more acceptable to the British people, his son William. The following letter is dated 2003 and, unless there have been new statutes since this date, then as we can see the Magna Carta and the Coronation Oath are still in force:
In a Home Office memo in July 1964 it was stated: Marriages of members of the Royal Family are still not in the same position as marriages of other persons. Such marriages have always been expressly excluded from statutes about marriage in England and Wales and marriages abroad, and are therefore governed by the common law. This means that in England and Wales such a marriage can be validly celebrated only by a clergyman of the Church of England. A civil marriage before the registrar, and marriage according to the rites of any church other than the Church of England, are not possible.
That aside, given Elizabeth did not take the correct oath to become the monarch of Great Britain, William is also barred from becoming King; there is no succession through Elizabeth. I have written for some years now that the way to solve these issues, in a manner that would prevent such fraudulent behaviour by the wealthy elites, would be to form a new constitution in line with our ancient and current laws of right, with the full and complete consent of the British Christian population, and then to place the constitution itself on the throne of this nation, which would allow our existing and ancient system to remain in place. And then there are these reasons for this monarch's illegitimacy:
The Coronation Oath
The basis for the coronation oath, which forms part of the coronation ceremony, is enshrined in statute in the Coronation Oath Act 1689. 9 This Act required King William and Queen Mary, as joint monarchs, to swear an oath during the coronation ceremony. The Act of Settlement 1701 and the Accession Declaration Act 1910 make a statutory requirement on the monarch to take the coronation oath. 10 The text of the oath as set down in the 1689 Act is appended to this note. The text includes the promise that they would, to the utmost of their power, maintaine the Laws of God the true profession of the Gospell and the Protestant reformed religion established by law […] and […] preserve unto the bishops and clergy of this realm and to the churches committed to their charge all such rights and privileges as by law do or shall appertain unto them or any of them. The legal obligations surrounding the oath are set out in Halsbury's Laws:
28. The Crown's duty towards the subject. The essential duties of the Crown towards the subject are now to be found expressed in the terms of the oaths which every monarch is required to take before or at the coronation. The duties imposed by the coronation oath are: (1) to govern the peoples of the United Kingdom of Great Britain and Northern Ireland, and the dominions etc belonging or pertaining to them, according to their respective laws and customs; (2) to cause law and justice in mercy to be executed in all judgments, to the monarch's power; (3) to maintain the laws of God, the true profession of the Gospel, and the protestant reformed religion established by law, to the utmost of the Sovereign's power; (4) to maintain and preserve inviolable the settlement of the Church of England, and its doctrine, worship, discipline and government as by law established in England; and (5) to preserve unto the bishops and clergy of England, and to the Churches there committed to their charge, all such rights and privileges as by law do or shall appertain to them or any of them. The monarch is also bound by oath to preserve the Presbyterian Church in Scotland. 1 See para 26 ante. 2 The coronation oath must be taken at the coronation under the Act. As to the statutory form of the oath and the alteration in the oath as at present administered see para 39 note 3 post. As to the citation of the Act of Settlement see para 35 note 3 post. 3 By the Act of Settlement it is declared that, whereas the laws of England are the birthright of the people thereof, all the kings and queens who shall ascend the throne of this realm ought to administer the government of the same according to the said laws, and all their officers and ministers ought to serve them respectively according to the same; the same are ratified and confirmed accordingly.
As to the Crown's duty to exercise the prerogative in conformity to law see para 368 post. 4 The duties as set out above are based on the oath in the Form and Order of Service in the Coronation of Queen Elizabeth II, 1953. These duties incorporate the duties set out in the coronation oath enacted in the Coronation Oath Act 1688 s 3. 5 Union with Scotland Act 1706 art XXV (embodying art XXV of the Treaty of Union) ss 2-5; and see paras 51-66 post. This oath is taken before the coronation; see para 39 note 4 post. As to the accession declaration see para 39 post. And: 10 Act of Settlement 1700, and see Halsbury's Laws Vol 8(2), para 39 for statutory conditions of descent of the Crown. 11 Vol 8(2) paras 28 and 39. 39. Statutory conditions of tenure: The descent of the Crown in the present line of succession is subject to certain statutory conditions as follows: (1) a person who is a Roman Catholic, or marries a Roman Catholic, is excluded from inheriting, possessing or enjoying the Crown, and in such case the people are absolved of their allegiance, and the Crown is to descend to such person or persons, being protestants, as would have inherited it in case the person so reconciled etc were dead; (2) every person inheriting the Crown must take the coronation oath in the form provided by statute; (3) every king or queen must make, subscribe and repeat, sitting on the throne in the House of Lords, either on the first day of the meeting of the first Parliament after the accession, or at the coronation, whichever shall first happen, a declaration that he or she is a faithful protestant, and will, according to the true intent of the enactments which secure the protestant succession to the throne, uphold and maintain those enactments to the best of his or her powers according to law; (4) any person coming into possession of the Crown must join in communion with the Church of England; and (5) it is also provided as a fundamental term of the union of England with Scotland that every person who succeeds to the Crown must take and subscribe the oaths for the preservation of the Established Church in England and the Presbyterian Church in Scotland. 1 The terms of the Act of Settlement are any person who shall be reconciled to, or hold communion with, the see or Church of Rome, or profess the popish religion, or marry a papist. As to the citation of the Act of Settlement see para 35 note 3 ante. 2 This is the joint effect of the Act of Settlement s 2, and the Bill of Rights. As to the history and citation of the Bill of Rights see para 35 note 3 ante. 3 Act of Settlement s. The form of the oath is provided by the Coronation Oath Act 1688 s 3, and it must be administered by the Archbishop of Canterbury or York, or any other bishop of the realm appointed by the monarch for that purpose, in the presence of all persons attending, assisting or otherwise present at the coronation: s 4. The form of the oath as at present administered differs from that provided by the Act owing to the disestablishment of the Irish Church (by the Irish Church Act 1869), and to the provisions of the Union with Scotland Act 1706 art XXV. As to the oath for the preservation of the Established Church of England see the text and note 6 infra. For the form of oath as administered to Her present Majesty see para 28 ante.
4 Bill of Rights; Act of Settlement; Accession Declaration Act 1910. The declaration was made by King George V at the opening of Parliament, and therefore the necessity for making it at the coronation did not arise: 7 HL Official Report (5th series) col 4. The same was true in the case of Elizabeth II. King George VI made the declaration during the coronation service: see Supplement to the London Gazette, 10 November 1937, p 7054. For the purposes of any enactment requiring an oath or declaration to be taken, made or subscribed by the monarch on or after the accession, the date on which the monarch attains the age of 18 years is deemed to be the date of the accession: Regency Act 1937 s 1(2). However, it should be noted that the monarch has no minority, and his exercise of the prerogative is valid even if he has not attained 18 (see 5 Co Litt 43a, b; 2 Co Inst, proem, 3; 1 Bl Com (14th Edn) 248), although the Regency Acts (see para 40 post) mean that the prerogative is exercised in the monarch's name while the monarch is under 18. By 28 Hen 8 c 27 (Succession to the Crown) (1536), power was given to future monarchs to revoke all enactments made by Parliament whilst they should be under the age of 24. This enactment was repealed temporarily by Edw 6 c 11 (Repeal of 28 Hen 8 c 17) (1547), and both these statutes were determined and annulled by 24 Geo 2 c 24 (Minority of Successor to Crown) (1750), s 23 (repealed). 5 Act of Settlement s 3. 6 See paras 51, 53 post. The oath for the preservation of the Established Church of England is now administered as part of the coronation oath: see text and note 4 supra. The oath for the preservation of the Presbyterian Church was taken by Queen Elizabeth II at a meeting of the Privy Council held immediately after her accession, the instrument being subscribed in duplicate, and one part sent to the Court of Session to be recorded in the Books of Sederunt, and afterwards to be lodged in the Public Register of Scotland, the other part remaining among the records of the Council to be entered in the Council book: see the London Gazette Extraordinary, 8 February 1952, p 839; London Gazette, 12 February 1952, p 861.
Further reading
Monarchy web site http://www.royal.gov.uk/
Robert Blackburn, King and Country, 2006
Vernon Bogdanor, The Monarchy and the Constitution, 1995
R Allison and S. Riddell (ed), Royal Encyclopaedia, 1991
Roy Strong, Coronation, 2005
Janos M. Bak, Coronations: Medieval and Early Modern Monarchic Ritual, 1990
Nicholas Kent, A Modern Monarchy, TRG, 1995
Edward Ratcliff, The Coronation Service of Her Majesty Queen Elizabeth II, SPCK, 1953
Appendix A: Text of the Oath as set down in the Coronation Oath Act 1688. 12
3. Form of oath and administration thereof. Will you solemnely promise and sweare to governe the people of this kingdome of England and the dominions thereto belonging according to the statutes in Parlyament agreed on and the laws and customs of the same? The King and Queene shall say, I solemnly promise soe to doe. Arch bishop or bishop, Will you to your power cause law and justice in mercy to be executed in all your judgements. King and Queene, I will. Arch bishop or bishop, Will you to the utmost of your power maintaine the laws of God the true profession of the Gospell and the Protestant reformed religion established by law? And will you preserve unto the bishops and clergy of this realme and to the churches committed to their charge all such rights and priviledges as by law doe or shall appertaine unto them or any of them.
King and Queene. All this I promise to doe. After this the King and Queene laying his and her hand upon the Holy Gospells, shall say, King and Queene. The things which I have here before promised I will performe and keepe. Soe help me God. Then the King and Queene shall kisse the booke.
12 Coronation Oath Act 1688 (1 Will & Mar chap 6), s 3
Appendix B: Text of the oath taken by Elizabeth II in 1953. 13
The Queen having returned to her Chair (her Majesty having already on Tuesday, the 4th day of November, 1952, in the presence of the two Houses of Parliament, made and signed the Declaration prescribed by Act of Parliament), the Archbishop standing before her shall administer the Coronation Oath, first asking the Queen, Madam, is your Majesty willing to take the Oath? And the Queen answering, I am willing. The Archbishop shall minister these questions; and the Queen, having a book in her hands, shall answer each question severally as follows:
Archbishop. Will you solemnly promise and swear to govern the Peoples of the United Kingdom of Great Britain and Northern Ireland, Canada, Australia, New Zealand, the Union of South Africa, Pakistan, and Ceylon, and of your Possessions and the other Territories to any of them belonging or pertaining, according to their respective laws and customs?
Queen. I solemnly promise so to do.
Archbishop. Will you to your power cause Law and Justice, in Mercy, to be executed in all your judgements?
Queen. I will.
Archbishop. Will you to the utmost of your power maintain the Laws of God and the true profession of the Gospel? Will you to the utmost of your power maintain in the United Kingdom the Protestant Reformed Religion established by law? Will you maintain and preserve inviolably the settlement of the Church of England, and the doctrine, worship, discipline, and government thereof, as by law established in England? And will you preserve unto the Bishops and Clergy of England, and to the Churches there committed to their charge, all such rights and privileges, as by law do or shall appertain to them or any of them?
Queen. All this I promise to do.
Then the Queen arising out of her Chair, supported as before, the Sword of State being carried before her, shall go to the Altar, and make her solemn Oath in the sight of all the people to observe the premises: (The Bible to be brought.) laying her right hand upon the Holy Gospel in the great Bible (which was before carried in the procession and is now brought from the Altar by the Archbishop, and tendered to her as she kneels upon the steps), and saying these words: The things which I have here before promised, I will perform and keep. So help me God. Then the Queen shall kiss the Book and sign the Oath. (And a Silver Standish.) The Queen having thus taken her Oath shall return again to her Chair, and the Bible shall be delivered to the Dean of Westminster.
Failure to uphold the Gospel and the Church of England
The Queen of England is the head of the Anglican Church, yet this has been conjoined to the Zionist Methodist doctrines in the Anglo-Methodist Covenant: On 1 November 2003, the Archbishops of Canterbury and York and the General Secretary of the General Synod, together with the President, Vice President and Secretary of the Methodist Conference, signed the Covenant at Methodist Central Hall, Westminster, in the presence of the Queen. The ceremony continued at Westminster Abbey with a short service of thanksgiving and dedication.
And for an older model: the Provisions of Oxford and Westminster, the English Bill of Rights, and the Act of Union.
- Class: Aves (Birds)
- Order: Falconiformes
- Family: Cathartidae
- Genus: Vultur
- Species: gryphus
Old bird from the New World. The Andean condor is the largest raptor in the world and the largest flying bird in South America. It flies majestically over the mountains and valleys of the Andes. This bird of prey and its close cousin, the California condor, are part of the New World vultures, a group of birds more closely related to storks than to the vultures of Africa. Andean condors are the only New World vultures to show sexual dimorphism. Males are usually larger than females and have a distinctive comb on top of their head, as well as a large neck wattle and yellow eyes. The females lack the comb and have red eyes. The males keep the comb all their life, which makes it easy to tell the sex of an Andean condor chick as soon as it hatches. As adults, both sexes have black plumage with white secondary feathers and white neck ruffles. Juveniles have brown plumage and skin and don't develop their adult coloring until they are about six years old. Andean condors do not have a syrinx (similar to our larynx), so they cannot vocalize. Instead, they hiss, click, and grunt to communicate.
HABITAT AND DIET
Flying high. These impressive birds live in the highest peaks of the Andes. They nest in rocky crags and soar over open grasslands and lowland desert regions. When not scanning the landscape for a meal, the birds may roost in small groups, often stretching their enormous wings to catch some sun or preening. Andean condors used to range in large numbers from the highlands of northern South America to the tip of Tierra del Fuego in the south. They tend to stay away from human disturbance, which has caused their range to decrease dramatically. Today, Andean condors are most often seen in Peru, Chile, and Argentina, although a reintroduction program is taking place in Colombia. The sky's the limit. To find their food, Andean condors use their excellent eyesight and can spot a meal from high up in the air. They also look for clues to their next meal, such as other raptors gathering in one area on the ground or circling in the sky. Condors can glide over large areas while using little energy. These huge birds are too heavy to fly without help. They use warm air currents (thermals) to help them gain altitude and soar through the sky. By gliding from thermal to thermal, a condor may need to flap its wings only once every hour. When a condor stretches out its wings, the wing feathers look like outstretched fingertips. These "fingertips" let the condor make fine adjustments in flight, like wing flaps on an airplane. A meal to die for. Like all vultures, Andean condors are scavengers and find most of their food after it is already dead. This lifestyle isn't for everyone, but it does have certain advantages: the food can't fight back! Like most other vultures, condors have a featherless head. This keeps the head from getting too messy while buried in a carcass. Condors have a high resistance to harmful bacteria, and their curved beak is good for tearing rotting flesh. But as strong and impressive as an Andean condor's beak looks, it is not as strong as the beaks of other birds of prey. After a condor eats, it rubs its head and neck back and forth across the ground to get all the "crumbs" off. These birds can consume more than 15 pounds (6.8 kilograms) of meat at one time, and may not be able to fly after such a large meal.
At the San Diego Zoo and San Diego Zoo Safari Park, the Andean condors eat rats, rabbits, beef spleen, trout, and ground meat, depending on the day. Although they are able to eat rancid meat, they prefer fresh food. Watch out. Healthy adult condors have no natural predators and are vigilant when protecting their egg or chick. Humans have become non-natural predators. Ranchers poison livestock carcasses to ward off mountain lions and foxes; the poisoned carcasses kill the condors, too. Look at me! The male Andean condor uses quite a display to attract his mate. He spreads his wings, clicks his tongue and hisses, and his neck turns yellow. If the female is impressed, the two find an appropriate nesting spot, usually in a shallow cave on a cliff ledge. The female lays a single egg, which the parents take turns incubating. Baby makes three. Once the chick hatches, both parents are responsible for its care for over a year, well after the chick has fledged at six months. The young condor spends this time learning how to be a condor from its parents, everything from how to catch a thermal to what to eat and how to find it. Not until the condor is about six years old does it molt the brown feathers of its youth and grow the black-and-white plumage of an adult. AT THE ZOO Our first condors. Having an Andean condor at the San Diego Zoo in 1929 seems rather remarkable for the times. He was named Bum, and he was quite a character. Bum had been hand raised as a youngster in a zoo in Germany, so he was used to people and even liked to play with them. But since he had a large wingspan and a sharp beak, the humans had to watch their step. Bum’s favorite person was wildlife care specialist Karl Ring. When Karl came by, Bum hopped over to say hello. The two also had a favorite game in which Karl would lie down flat on his back so Bum could hop up to stand on his chest, wings spread. Bum was soon paired with a young female from South America named Cleo. She was not nearly as friendly as Bum and hissed and charged at the wildlife care specialists, but Cleo and Bum got along famously. In July 1942, a male chick named Guaya was hatched and raised in his zoo habitat—believed to be the first Andean condor hatched in managed care in the United States! The world’s first incubator-hatched Andean condor hatched in May 1950. There were more exciting moments as the Zoo continued to hatch and raise Andean condors, leading to a successful reintroduction program for the birds. Helping California condors. Our knowledge in working with Andean condors helped us prepare to save the critically endangered California condor. Andean condors were temporarily released in California to help test reintroduction techniques for their northern cousins as part of the California Condor Recovery Program. (The Andean condors were later brought back under human care and reintroduced in Colombia.) Perilous prestige. Andean condors play a key role in a healthy, well-balanced environment because of their important role as nature’s recyclers. Consuming wildlife carcasses helps reduce the spread of diseases such as anthrax and botulism. Yet Andean condors are threatened over most of their range, both revered and feared by people. The condor is seen as a symbol of power, health, and liberty, and its bones and organs are used in traditional medicines. It is believed that the bird's stomach cures breast cancer, roasted condor eyes improve eyesight, and a condor feather under the bed wards off nightmares. 
Condors also appear in many South American myths. The Incas thought that the condor brought the sun into the sky every morning and was a messenger to the gods. There are also misconceptions about the condor's role in the food chain, so condors are shot or poisoned to "protect" livestock. Condors also face threats from loss of habitat and reduced food sources. High hopes. The good news is that there are continuing, successful efforts to restore the Andean condor population in their native habitat. Michael Mace, curator of birds at the San Diego Zoo Safari Park, is the coordinator for the Andean Condor Species Survival Plan. Since 1989, 68 Andean condors, raised in American and Colombian zoos, have been reintroduced in Colombia, Venezuela, and Peru in an attempt to re-establish the birds in their range countries. By using satellites and radiotelemetry, Colombian biologists have been able to track and monitor the reintroduced birds and have found that they have survived, matured, and are now beginning to breed in their native habitat, a significant milestone of success for any reintroduction program. In 1995, we received a significant achievement award for our Andean condor reintroduction program. This program will continue until Andean condors have recovered. We are also exploring other opportunities in South America where another reintroduction program could be developed. Teamwork! One of the reasons for the survivorship of the reintroduced birds is public education and outreach. Condor "guards" from local communities teach condor natural history and conservation, and local school students learn to "look to the skies" through workshops in techniques such as biotelemetry (radio tracking), field notations, and the use of binoculars and spotting scopes. This provides a deeper understanding of why condors are important to the environment and to Andean ecology, and of their intrinsic value to the human community. The Andean condor program is a shining example of what can be accomplished when there is cooperation between the public and private sectors, zoos and government agencies, field biologists, and aviculturists. The excitement and enthusiasm for Andean condor recovery is no less now than it was in 1989, when the first male condors flew out of the reintroduction aviary in Chingaza, Colombia. The future for the Andean condor is much brighter now than it was when we began. These are birds that we can enjoy watching over the Andean peaks for many more years to come. You can help us protect Andean condors by supporting San Diego Zoo Wildlife Alliance. Together we can save wildlife worldwide.
My series of articles has been focused, in large part, on the use of terminology as a foundation for comprehension and elucidation. When considering issues such as Angels, demons, Nephilim, and giants, we have seen how definitions also assist in systematizing, in terms of putting together a theological jigsaw puzzle without which we end up with various "-ologies" that may not cohere. In this article, the focus is on the terms heaven and hell, both of which are generically employed in certain English Bible versions as catch-all terms that covereth a multitude of theology, or rather, superumology (the study of heaven) and infernology (the study of hell). [1]
Basic-level theological terms regarding eternal destinations are either heaven or hell; and yet, there is a bit more than that along the way. The following is a bottom-line conclusion as to the various terms we encounter in Hebrew and Greek (as well as English renderings), namely: Sheol, gehenna, Hades, Abyss, bottomless pit, Tartarus, lake of fire, second death, heaven (intermediate heaven and eternal heaven proper), Abraham's bosom, paradise, Kingdom of Heaven/God, New Heavens, and New Earth. Rendering all of these as either heaven or hell causes confusion and is vague.
The Old Testament makes general references to all of the dead being in the grave, to all being in (Hebrew term) Sheol aka (Greek term) Hades (examples include Genesis 37:35 and 1 Samuel 2:6). It may be that Enoch and Elijah were assumed directly into the intermediate heaven; yet, all other dead went to Sheol/Hades. Sheol/Hades is the dual-chambered locale which consists of one unnamed chamber in which torments are experienced and one chamber referred to as Abraham's Bosom aka Paradise wherein comfort is experienced (see Luke 16:19-31). Heaven may be thought to refer to a specific location; and yet, it is more precisely thought to refer to the locale in God's presence where one is in a temporarily disembodied form, the body sleep to which I referred in my article Demons Ex Machina: What Are Demons? The Cherub Satan fell from heaven; and yet, he is still allowed before God until he is cast out therefrom, since "there was a day when the sons of God came to present themselves before the LORD, and Satan also came among them" (Job 1:6, also see 2:1, New American Standard Bible: NASB) and eventually, "there was war in heaven…And the great dragon was thrown down, the serpent of old who is called the devil and Satan…he was thrown down to the earth, and his angels were thrown down with him" so that "there was no longer a place found for them in heaven" (Revelation 12:7-9). The Angels who sinned were incarcerated in the Abyss aka Bottomless Pit aka Tartarus (Jude 1:6 has "everlasting chains under darkness," 2 Peter 2:4 has "hell and committed them to pits of darkness" with hell being Tartarus, and Revelation 9:1 has "bottomless pit," abyssos phrear). The Kingdom of Heaven/God manifests in the heaven-bound and gradually comes upon the Earth (examples include Matthew 3:2 and Mark 1:14-15). During the three days of His death, Jesus descended to Sheol/Hades and led those in the Abraham's Bosom/Paradise chamber into intermediate heaven (see 1 Peter 3:19 and Ephesians 4:7-9).
By intermediate heaven I am referring to God's presence, into which even Satan appeared post-fall, as opposed to heaven proper, which refers to where these, as well as the heaven-bound who died after them, will live eternally: in the New Heavens, on the New Earth, in the New Jerusalem (see 2 Peter 3:13 and Revelation 21:1-2) wherein "He will wipe away every tear from their eyes; and there will no longer be any death; there will no longer be any mourning, or crying, or pain" (Revelation 21:4). The hell-bound will be placed into the Lake of Fire when Gehenna aka Hell/Sheol/Hades is placed into the Lake, which is the Second Death (see Revelation 20:10, 14). By Gehenna aka Hell/Sheol/Hades I am technically referring to the torment chamber of Sheol/Hades, but all of Sheol/Hades will be thrown into the Lake of Fire since the only remaining inhabitants will be in the torment chamber. Luke 16:23 states that the rich man was in Hell (specifically in Hades) when "he lifted up his eyes, being in torment, and saw Abraham far away and Lazarus in his bosom," etc. Note that Abraham's bosom need not be thought of as the name of a location; rather, it is a description of where the poor man was taken, which was literally to Abraham's bosom: to his presence, to his care, to his embrace. Also, Psalm 16:8-10 states, "I have set the LORD continually before me; because He is at my right hand, I will not be shaken. Therefore, my heart is glad and my glory rejoices; my flesh also will dwell securely. For You will not abandon my soul to Sheol; nor will You allow Your Holy One to undergo decay," with Sheol here often rendered as hell (such as in the King James Version: KJV). This Psalm is reiterated in Acts 2:25-31 wherein Sheol is rendered as HADES by the NASB and hell by the KJV. Thus, God "WILL NOT ABANDON MY SOUL TO" Sheol/Hades "NOR ALLOW YOUR HOLY ONE TO UNDERGO DECAY" because the context is that the heaven-bound will ascend from therein (FYI: the NASB New Testament employs all caps when quoting the Old Testament). Before Jesus' sacrifice, all the dead went to the grave/Sheol/Hades: either the Abraham's Bosom/Paradise chamber or the Gehenna chamber, with the possible exception of Enoch and Elijah who apparently went directly into God's presence/the intermediate heaven. After Jesus' sacrifice, all those within the Abraham's Bosom/Paradise chamber were taken into the intermediate heaven in a disembodied state. When all is said and done, those in the intermediate heaven will be taken into the New Heavens wherein they will be embodied (I am bypassing "end-times" technicalities). Those in the Gehenna chamber will suffer the Second Death, being thrown into the Lake of Fire. Fallen Angels were incarcerated in the Abyss/Bottomless Pit/Tartarus and, along with Satan, the beast, and the false prophet, also eventually will be thrown into the Lake of Fire. This succinct review includes each term I review in detail within my book What Does the Bible Say About Heaven and Hell? A Styled Superumology and Infernology, wherein I attempt to lay out the biblical elucidation of the afterlife. [2]
Ken Ammi is a long-time researcher and lecturer on issues pertaining to Christian apologetics. He has a background in Eastern Mysticism and the New Age. He is Jewish and has accepted Jesus as Messiah. You can find him online at True Free Thinker.
© 2019, Midwest Christian Outreach, Inc. All rights reserved. Excerpts and links may be used if full and clear credit is given with specific direction to the original content.
Notes:
[1] Playing off of a Latin term for heaven, superum, and a Latin term for hell, infernum.
[2] What Does the Bible say about Angels? A Styled Angelology; What Does the Bible say about Demons? A Styled Demonology; What Does the Bible say about Giants and Nephilim? A Styled Giantology and Nephilology; with other books within the series being What Does the Bible say about the Devil Satan? A Styled Satanology; What Does the Bible say about Various Paranormal Entities? A Styled Paranormology; and What Does the Bible say about Heaven and Hell? A Styled Superumology and Infernology. Moreover, see The Paranormal in Early Jewish and Christian Commentaries: Over a Millennia's Worth of Comments on Angels, Cherubim, Seraphim, Satan, the Devil, Demons, the Serpent and the Dragon, all by Ken Ammi and available at True Free Thinker's No End Books: http://www.truefreethinker.com/articles/"no-end-books"-publications
You will also find options for commentaries on Judges that help pastors, teachers, and readers with application of the Bible, commentaries that approach the scripture verse by verse, classic Christian commentaries, and much more. The book ends with a private army raiding a temple and then burning a city to the ground. It is here that the unique contribution of the Book of Judges can be identified. The Book of Judges depicts the life of Israel in the promised land from the death of Joshua to the rise of the monarchy. The title refers to the leaders Israel had from the time of the elders who outlived Joshua until the time of the monarchy. Introduction: in the last study we looked at the first enemy to totally dominate the nation of Israel in the land, an enemy that was doubly wicked. These are Bible study notes and commentary on the Old Testament Book of Judges. Enough questions are included for teachers to assign as many as they wish. The Book of Judges is an important historical book in the Hebrew Bible, full of colorful characters and memorable stories. One of the major themes of the book is Yahweh's sovereignty and the importance of being loyal to him. Judges 7 commentary: Matthew Henry's Complete Commentary. We will see a cycle that the Israelites pass through repeatedly in the history of the judges. Remembering the past teaches us countless lessons about how to live today. He is on the biblical timeline chart during the time of the judges, which was after the death of Moses. The Book of Judges shows how corrupt God's leaders and people can become. The Book of Judges (Sefer Shoftim) is the seventh book of the Hebrew Bible and the Christian Old Testament. More than perhaps any other book in the Bible, Judges drives home the idea, repeated throughout the Bible, that "I the LORD God am a jealous God" (KJV, Ex.). The Book of Judges is so called because it contains the history of the deliverance and government of Israel by the men who bore the title of the judges. Join Pastor Armstrong for an in-depth study of the Book of Judges. The Book of Judges acts as the sequel to the Book of Joshua, linked by comparable accounts of Joshua's death (Joshua 24). On the one hand, it is an account of frequent apostasy, provoking divine chastening. Book of Judges Bible Study Outline, Judges Commentary Part One: The Cycle of Sin Defined, by I. Gordon. Though few Christians study the book end to end, most recognize the book's stories. The Book of Judges is the history of Israel during the government of the judges, who were occasional deliverers, raised up by God to rescue Israel from their oppressors, to reform the state of religion, and to administer justice to the people. I made this very basic so that you can add as little or as much info as you want. The judges did not oversee merely legal matters, as in our sense of the role. The Old Testament is the first half of the Bible and follows man from creation to the destruction and captivity of God's chosen people. Skim the Book of Judges, reading as much as you can, and state the theme of the book. Greear wades into the Book of Judges to shine a light onto the muddy waters.
Although the book is only 21 chapters long, it recounts a massive period of the history of Israel. The Book of Judges covers the period in the history of Israel from the death of Joshua to the time of the prophet Samuel. Ehud, a judge in Israel (Amazing Bible Timeline with World History). What can we learn from the story of the Levite and his concubine? Judges 1 (New International Version, NIV): Israel fights the remaining Canaanites. The Book of Ruth originally formed part of this book but was later separated from it. The Book of Judges is an Old Testament book that, along with Deuteronomy, forms part of the Deuteronomic history. Introduction: the Book of Judges of Israel (Agape Bible Study). Assignments on Judges 1: please read the whole Book of Judges at least once as we study chapter 1. What can we learn from the account of Micah and the idol? The Book of Judges is a sequel to the Book of Joshua. This workbook was designed for Bible class study, family study, or personal study. Watch our overview video on the Book of Judges, which breaks down the literary design of the book and its flow of thought. Both a Bible and a dictionary, this tool for Bible study, the Hebrew-Greek Key Word Study Bible, identifies the key words of the original languages and presents clear, precise explanations of their meaning and usage. Faithfulness implies a deep understanding that God is sovereign and that Israel's task is to honor and obey him. The book is considered part of the Deuteronomic history that begins in the last book of the Torah and ends with the second book of Kings. Upon learning of her destined fate, Jephthah's daughter requested a two-month period to mourn. Israel's lack of spiritual interest is made manifest with each of the twelve cycles of sin revealed in the Book of Judges. However, he played a great part in the Book of Judges as the ad hoc Hebrew judge who delivered the people of Israel from the domination of King Eglon, the ruler of the Moabites. The Micah of Judges 17-18 offers an example of how not to worship God, and his story illustrates the consequences of practicing religion according to what we think is best rather than according to God's teachings. The Book of Judges is the most violent book in the Bible. Intro to Judges (Biblica, the International Bible Society). The Book of Ruth is such a touching love story and such a charming tale of emptiness to abundance that we can easily think there is nothing more to it. The Levite had a concubine who had run away and been unfaithful. This accessible study guide reveals how the unfaithfulness of Israel… Scholars believe some of the judges ruled simultaneously in different regions. The Book of Judges describes a decentralized period of Israel's history. During the time of the judges there was a woman prophet named Deborah. Together, we will be looking for that king who does not do what is right in his own eyes, but who delights to do the will of his Father in heaven (John 6). Book of Judges overview (Insight for Living Ministries). Though many such leaders are mentioned, the Book of Judges focuses on twelve of them. Ehud, or Ehud ben-Gera (Ehud the son of Gera), of the tribe of Benjamin, is not one of the famous Bible characters.
The most shocking feature in the Book of Judges, therefore, is not the horror of the sin of God's people depicted in these narratives, but the glory of salvation from that sin accomplished by the God of patience, mercy, compassion, steadfast love, and faithfulness (Ex.). Free interactive Bible quizzes with answers and high-score tables. The narratives contained in the Book of Judges were written to bear witness… The judges of Israel: the 12 judges of Israel. God's word is given to guide and protect us, as well as to bring him glory. It emphasizes understanding the text with practical applications. The people of Israel did what was evil in the sight of the LORD. Questions in the lessons contain minimal human commentary, but instead guide students to study to understand scripture. A fun way to see how much you know about the Bible whilst complementing your Bible study. The Book of Judges (Agape Bible Study). Sometimes, a Greek or Hebrew word has a distinct meaning that seriously affects the proper interpretation of scripture. Heroes and heroines arise, who seem to have the potential to save Israel. It would seem that nowhere in the Book of Judges does a judge assume the role of a king whose descendants would become a dynasty. The questions contain minimal human commentary, but instead urge students to study to understand scripture. If you've been studying with us through the Book of Joshua, you learned about an obedient people of Israel who conquered the land of promise because they… In the story of the last three judges, their corruption is seen with increased clarity as Gideon forgets God, Jephthah doesn't know God's character, and Samson lives completely contrary to God's law. But in the end, each proves to be a broken savior that cannot deliver. The Book of Judges is the account of the generations between the conquest of Canaan and the time of the monarchy. Although the book is only 21 chapters long, it recounts a massive period of the history of Israel, approximately 350 years, or 25% of the historical period described in the Old Testament. Tragedy and Hope in the Book of Judges (BibleProject). What is the significance of the Book of Ruth in the Old Testament? One of the stories that demonstrate the chaos and lawlessness of the time is the account of the Levite and his concubine, which begins in Judges 19. Book of Judges Bible Study Commentary: The Cycle of Sin. This workbook on the Book of Judges was designed for use in Bible classes, family study, or personal study. Now the men of Israel had sworn an oath at Mizpah, saying, "None of us shall give his daughter to Benjamin as a wife." On the other hand, it tells of urgent appeals to God in times of crisis, moving the Lord to raise up leaders (judges) through whom he throws…
July 14, 2015: Hebrews shows us that the Bible is not a collection of unrelated stories, but is rather one unified story. We think about the judges as both a period of time and a book of the Bible. The class book material is suitable for teens and adults. The judges in the Book of Judges, like the kings after them, cause us to look forward to the coming of the King of kings. The state of God's people does not appear in this book so prosperous, nor their character so religious. These books tell of the Israelites' reign over the land of Canaan and have a heavy focus on divine reward and punishment. The concluding chapters of Judges highlight the fact that everyone did what was right in his own eyes (Judges 17). The hidden symbolism you never saw in the Book of Ruth. In the Israelites' repeated forgetfulness of God's mighty acts on their behalf, and their ingratitude in that forgetfulness, the Book of Judges tells us each man did what he felt was fit (Judges 17). It should seem he borrowed the word from the Midianite's dream (Judges 7). New generations served other deities and had forgotten God. Chapter Three: Ehud, the fat man and the power of praise. In another place he wrote of the Book of James, "I think highly of the epistle of James, and regard it as valuable; it does not expound human doctrines, but lays much emphasis on God's law." The Book of Judges cracks a window into the depths of the human soul. Watch a nation struggle with its identity and relationship to its God. Book of Judges Bible Study: Ehud and the Power of Praise. The Former Prophets also included 1 and 2 Samuel and 1 and 2 Kings. Events within the Book of Judges span the geographical breadth of the nation, happening in a variety of cities, towns, and battlefields. Other studies in this series include Othniel and Gideon. Today we live in a time that is in many ways similar to the times of the judges of Israel.
The hammam on Mateos Gago street, in the southern Spanish city of Seville, is located just a few meters away from the city’s Roman Catholic cathedral, and for a century it has been the most crowded of the city’s Arab baths. The thing is, customers were not going there to immerse themselves in water, but rather to pour liquid down their throats: the baths were concealed under a popular bar named Cervecería Giralda. In the early 1900s, the architect Vicente Traver converted the building into a hotel, thus concealing (and preserving) a bathhouse dating back to the 12th century, during the days of the Almohad Caliphate that ruled Al-Andalus. The ancient structure emerged again last summer when the bar underwent some renovation work. The work exposed high-quality murals that are unique to Spain and Portugal. The find came as a big surprise as everyone had previously thought the structure was nothing more than “a Neo-Mudejar pastiche,” in the words of Fran Díaz, the architect in charge of the refurbishment. “The most important thing is that we realized the bath was completely painted, from top to bottom, with high-quality geometric decoration,” says Álvaro Jiménez, an archeologist who has supervised the work. “The drawings were made in red ochre on white, and large fragments were preserved on the walls and vaulted ceilings. This is the only surviving Arab bath with an integral decoration; until now, the only known examples had paint just on the baseboards.” “It’s been a complete surprise. This is an important discovery that gives us an idea of what other baths might have looked like during the Almohad period, especially in Seville, which was one of the two capitals of the empire together with Marrakech,” adds the archeologist Fernando Amores, who collaborated on the project. “The hammam is very near the site of the main mosque, which was also built in the 12th century, and which also explains its much richer decorative elements.” The first probes under the false ceilings at Giralda – one of the most popular venues in Seville’s historic center – soon unearthed several different kinds of skylights known as luceras. This discovery triggered a completely different approach to the reform work, which began focusing on the complete recovery of the Arab baths. “Given the relevance of the finds, architecture took a step back and made way for archeology. The solution we found to preserve the baths while allowing the space to keep functioning as a bar was to use a metal cornice to crown the traditional wall tiles put there by Vicente Traver and which are now a part of the establishment’s personality; the original wooden bar counter has also been preserved,” notes Fran Díaz. The 202-square-meter tapas bar, which opened in 1923, will continue in operation when the work ends next month. The venue’s main space, where the bar counter is located, was once the warm room of the hammam, a space covering 6.70 square meters with an eight-sided vaulted ceiling resting on four columns. One side opens into a rectangular room with a barrel vault that is 4.10 meters wide and 13 meters long, once serving as the bath’s cold room. The kitchen area is where the hot room must have been, although the only remaining vestige is a portion of an arch. The baths were accessed from Don Remondo street, where the dry area used to be, notes Álvaro Jiménez, who wrote his PhD dissertation on the remains of the Almohad mosque, now the site of Seville’s Roman Catholic cathedral. 
The restoration work unveiled 88 skylights in different shapes and sizes, such as stars, lobulated designs and octagons, that together are much more elaborate than decorations found in other Arab baths from the same period. Amores also highlights the paintings in the arches of the warm room, made in a zigzagging style meant to represent water. “Nearly all the representations in the Islamic world allude to paradise,” he notes. The uniqueness of this bath does not rest solely on its latticed paintings, but also on the five rows of skylights in the cold room – other baths have three, and sometimes just one. The cold room, which for the last century has served as the bar’s eating area, lost two meters in 1928, when Mateos Gago street was widened. In order to understand the structure of the baths, which were typically built by the state and handed over to third parties for management, an expert named Margarita de Alba used photogrammetry techniques to recreate what these spaces must have looked like in the 12th century when Seville was known as Isbilia. “There is documentary evidence in Christian texts from 1281 about the so-called baths of García Jofre, described as adjoining a property given by King Alfonso X to the Church of Seville. The next testimony is from the 17th-century historian Rodrigo Caro, who said that the vault you see when you enter from Borceguinería [the earlier name for Mateos Gago street] is not a bath, writing: ‘I’d sooner believe these are relics from some circus or amphitheater.’ Even the art historian José Gestoso said the vault is ‘of Mauritanian tradition, a construction that is frequently seen in Seville monuments from the 15th and 16th centuries,” says Jiménez, illustrating how popular belief held that the García Jofre bath had disappeared due to the passage of time. But it was there the whole time. In the 17th century, there was a major reform that took down the vault in the warm room and rebuilt a much lower one to make room for an extra floor above it. “The building was ‘Italianized’ and the original columns, probably made from reused Roman columns, were replaced with others made with Genoese marble. All the skylights were shut. Our theory is that it became the premises for a merchant who built his home over the shop,” adds Jiménez. The 20th-century architect Vicente Traver could have torn down the remains of the bathhouse, but he chose to protect and preserve them. And now, customers of Cervecería Giralda know that they are having their beers inside an Almohad hammam. A teenager who killed a dog by kicking it so hard it went above the head of its owner has been jailed for six months. Josh Henney (19) twice kicked the dog in its underbelly while its owner was speaking with his mother. Dublin Circuit Criminal Court heard that the dog, who was a cross between a Jack Russell Terrier and a Yorkshire Terrier, was named Sam and was approximately 10 months old at the time. Henney of North William Street, Dublin City centre, pleaded guilty to killing a protected animal at his address on March 23rd, 2020. He has 36 previous convictions and is currently serving a sentence of two years with the final six months suspended for an offence of violent disorder. Garda Adam McGrane told Dara Hayes BL, prosecuting, that on the date in question, the injured party was on North William Street with her dog and was speaking with Henney’s mother. Gda McGrane said Henney was having an argument with his mother and was shouting from a window. 
Henney then came out of the flat and told the injured party to “f**k off out of here and mind your own business”. The garda said Henney told the woman that he would “f**king kill your dog”. Henney then took a run-up of around two metres and kicked the dog in its underbelly. The dog was kicked so hard it went above the head of its owner. Henney walked away, then took a second run at the dog and kicked it again in its underbelly.

The dog’s breathing was laboured following the second kick and saliva with blood was coming from its mouth. The dog, which could not walk or drink, was carried by its owner to a veterinary practice and was still alive upon arrival. The dog was put under anaesthetic, but died while undergoing treatment.

Multiple fractures and fissures

The court heard that Dr Alan Wolfe, who performed the autopsy on the dog, found multiple fractures and fissures to the dog’s liver. Dr Wolfe found all of the injuries were consistent with the dog dying of blood loss due to acute trauma. Mr Hayes told the court that the injured party in the case has no children and told gardaí that the dog was like family to her and went with her wherever she went.

Gda McGrane agreed with Cathal McGreal BL, defending, that his client told gardaí he had lost his temper and did not really remember what happened. He agreed the accused told gardaí he had not been able to sleep remembering the dog screaming and wished to apologise for what he did.

Mr McGreal said his client very much regrets what he did. He said his client claims he never told the victim that he would kill the dog. Counsel said his client’s father was shot in Malaga in front of Henney when he was aged 14. He said that his client told a psychologist that the offence was a “horrible thing to do” and that he wants to get help so he does not do anything like that again. Mr McGreal said his client’s mother smoked heroin and his client caught her doing so as a child. He said the presence of the injured party was a “triggering factor” and that there was “a heroin taking relationship going on”. Counsel said there is no gainsaying what his client did, but he is sorry for it and it haunts him.

On Tuesday, Judge Melanie Greally imposed a one-year prison sentence with the final six months suspended on strict conditions, including that Henney engage with the Probation Service for 12 months upon his release from prison. This sentence is to be consecutive to the term he is currently serving for violent disorder. She said the anger and aggression was carried out on the dog, when it was the dog’s owner that was “the subject of his anger”.

Judge Greally accepted that Henney was “extremely ashamed and remorseful for his actions” and has now expressed himself as a young man who wants to live a normal life. “He has a stable relationship and is applying himself well in prison,” she noted. She acknowledged that the report prepared by the Probation Service concluded that Henney was a vulnerable young man who would benefit from probation supervision upon his release from prison.

Alec Baldwin once borrowed the words of one of the acting colleagues he admires the most – “the incredibly intelligent and wise Warren Beatty” – to explain his ongoing image problems. “Your problem is a very basic one, and it’s very common to actors. And that’s when we step in front of a camera, we feel the need to make it into a moment. This instinct, even unconsciously, is to make the exchange in front of the camera a dramatic one,” Beatty said.
Last Thursday, on the set of the movie Rust, of which Baldwin is the star and a producer, that moment could not have been more dramatic. It was Baldwin who pulled the trigger on a prop firearm that killed the Ukrainian director of photography, 43-year-old Halyna Hutchins, and wounded the movie’s director, 48-year-old Joel Souza. The tragic incident left Baldwin speechless for several hours until he expressed his “shock and sadness,” offering his help and support to Hutchins’ family and stating that he was “fully cooperating” with the police investigation into the accident. A social media post from a few days earlier in which he was kitted out in his cowboy gear and covered in blood in character for Rust was removed from his accounts.

Scandal seems to follow Alec Baldwin around, whether or not he is looking for that drama to which Beatty alluded. Baldwin is the eldest of six siblings in a middle-class Catholic family of Irish descent; the four Baldwin brothers are all involved in show business, although they could hardly be more different from one another. Daniel has had problems with drugs. Stephen is currently involved with an Evangelical church and his political views are inclined toward conservatism. The second-youngest, William, described his brother as someone who always has something “to fucking whine about,” according to The New Yorker. Alec is the eldest and the most disciplined, but also the one who protected the other brothers from bullies as he was the most combative.

He went to school with the notion of becoming the president of the United States, but on recognizing he had little chance of achieving that goal he enrolled at the Lee Strasberg Theatre & Film Institute in New York, graduating many years later. His career could have panned out like Al Pacino’s or Jack Nicholson’s, actors whom he looked up to, but Baldwin’s generation was not the same. Perhaps neither was his talent, and certainly, the world of movies had changed.

In 1992, Baldwin ensured that he would be associated with his idols when he starred with Jessica Lange in a Broadway revival of A Streetcar Named Desire, which three years later would be turned into a television movie with Baldwin and Lange reprising their roles for the small screen. Not only did Baldwin receive a Tony nomination for his Broadway performance, he also drew favorable comparisons to legendary actor Marlon Brando, who starred in the original stage production and the 1951 movie version. Around this time Baldwin was also landing meaty screen roles, including that of Jack Ryan opposite Sean Connery in The Hunt for Red October.

But as time progressed, Baldwin’s name was more frequently heard in connection to his social life and scandals than for his stage or screen performances. His marriage to actor Kim Basinger, whom he met in 1991 while filming The Marrying Man, ended acrimoniously, and Baldwin’s relationship with the couple’s daughter, Ireland, has often been fractious. In 2007, a voicemail message the actor left for Ireland, who was 11 at the time, caused a sensation due to Baldwin’s use of not very fatherly language, during an ongoing spat with Basinger following their 2002 divorce.

Then there is the other Alec Baldwin, described by the actor himself as “bitter, defensive, and more misanthropic than I care to admit,” in an open letter to Vulture magazine in 2014 titled Good-bye, Public Life. At that time Baldwin had forged a reputation as a violent, homophobic egocentric following several incidents aired in the media. And, of course, from his own mouth.
Even so, he managed to resurrect his career in the most surprising way imaginable: by making fun of himself. Baldwin’s portrayal of the absurd and conceited television executive Jack Donaghy across seven seasons of 30 Rock (2006-13), a character inspired by Baldwin himself, earned back his public popularity and landed the actor back-to-back Primetime Emmy Awards in 2007 and 2008 and three Golden Globes. In 2011, he started a new chapter in his personal life with his current wife, Hilaria Baldwin, with whom he has six children. But as one of his closest friends, Lorne Michaels, producer of Saturday Night Live where Baldwin has received plaudits for his impersonations of former US president Donald Trump, once said: “Everything would be better if you were able to enjoy what you have.”

Baldwin’s altercations – mostly verbal, occasionally physical – with the paparazzi or anyone who in the actor’s opinion has violated his privacy have been frequent, including on productions on which he has worked. In 2013, the actor Shia LaBeouf was fired from the Broadway theatre production of Orphans when Baldwin said: “Either he goes or I do.” Years earlier, an actress quit another play Baldwin was working on, leaving a written note stating that she feared for her “physical, mental and artistic” safety.

Every one of Baldwin’s reinventions seems inexorably to be followed by another fall from grace. On the one hand, there is the Baldwin who has stated on several occasions that he intends to withdraw from public life, and on the other the Baldwin who is obsessed with social media, writing a tweet for every occasion. Many of these posts have come at a cost to the actor, such as in 2017 when he commented on a video of a suspect being fatally shot by police: “I wonder how it must feel to wrongfully kill someone…”

There are still unanswered questions surrounding the death of Halyna Hutchins. The investigation has not disclosed whether the firearm was discharged accidentally or if Baldwin was aiming it at the time, although the transcript of a call to the emergency services appears to indicate it happened during a rehearsal. As yet, no charges have been filed against Baldwin, but it is unknown whether this may occur at a later date. A statement taken from the assistant director indicates that Baldwin was told by crew members that the gun was not loaded. Many observers are wondering whether Rust will be completed or whether the project will be abandoned. And many more are asking the same about Baldwin: will he be able to find a way back from this latest dramatic moment?
Excerpts on limestone particle size, by source title:

- The Effects of Limestone Powder Particle Size on the ...: Particle size of the limestone powder still had little effect on shrinkage, even at a larger volume of limestone powder used to replace cement. Using less cement in the specimens is the driving factor reducing the shrinkage and agrees with the compressive strength results that the particle size of limestone has little to no effect on the ...
- PARTICLE SIZE CHART: [flattened chart fragment listing typical particle sizes for ground limestone, pollens, spray-dried milk, cement dust, pulverized fuel fly ash, smelter dust and fumes, sulfuric acid mist and fumes, coal smoke, atmospheric dust and foundry dust, together with filtration figures for the 0.30-1.0 µm range; the table structure is not recoverable]
- Development of particle size distribution during limestone ...: Before each test, 100 g of limestone particles, whose properties are summarized in Table 1, were sieved and weighed to determine the original particle size distribution and mean particle size. The particles were then loaded into an electrically heated feed hopper. For the high-temperature tests, the air was preheated to the desired temperature by the preheaters and superheater.
- Limestone Particle Size and Scheduling Influence ...: This study aims to evaluate the effects of particle size (2-4, 0.5-2, 0.25-0.5, <0.25 mm) of magnesium limestone as well as the application schedule (in a single application or split in 3 yearly applications) on the properties of an acid soil in Galicia and on the yield and quality of pasture growing on the soil during the 2 years after ...
- FEED GRANULOMETRY AND THE IMPORTANCE OF FEED ...: Feed particle size is an often-overlooked aspect of poultry production. Producers should not assume that feed is of a uniform size and homogeneously mixed, or that the feed mill is providing the ideal mix of particles in a ... particles of limestone (2–4 mm diameter). Large particle limestone is needed to maintain good eggshell quality ...
- Effects of Particle Size Distribution on the Burn Ability ...: The effect of particle size reduction on the burnability of limestone was investigated using limestone obtained from the Obajana Cement Mines. Limestone samples were ground and classified into the following particle size distributions: 90 µm, 200 µm, ...
- EFFECT OF THE PARTICLE SIZE ON FLOTATION ...: Key words: limestone, particle size, sodium oleate, sodium silicate, Sokem 565C. In designing a suitable flow sheet for a flotation process, the particle size of the sample is of primordial importance. This is determined on the basis of either the mineralogy and/or a careful design of laboratory flotation tests. The effect of ...
- Particle size | Hans H. Stein: Particle size is an important consideration for some feed ingredients in pig diets. Reducing the particle size of cereal grains and soybean meal in diets fed to pigs improves digestibility of energy, amino acids, and other nutrients, because feed ground to smaller particle sizes has more surface area on which digestive enzymes can work.
- Container Substrate-pH Response to Differing Limestone ...: A particle size efficiency (PSE) factor can be assigned to each particle size fraction of an agricultural limestone (Meyer and Volk, 1952; Motto and Melsted, 1960; Murphy and Follett, 1978). A single limestone source includes a range in particle sizes, and the percent by weight of each particle size fraction (PF) describes the distribution.
- LIMESTONE PARTICLE SIZE IN LAYER DIETS: Table 3.5, the effect of limestone particle size on egg output and feed conversion ratio during the experimental period (54, 58, 62 and 70 weeks) (mean ± s.e.); Table 3.6, variables used in different studies regarding limestone particle size; Table 3.7, the effect of limestone particle size ...
- (PDF) Influence of limestone particle size on iron ore ...: At coarser limestone mean particle size, i.e. >1.52 mm, bed permeability was good but assimilation of coarser limestone particles with hematite was poor (Figure 2a-b) due to limited physical contact of coarser solid particles and limited reaction time during sintering. At finer limestone particle size, the thermal efficiency of the sinter bed ...
- INFLUENCE OF LIMESTONE PARTICLE SIZE ON EGG ...: Since limestone particle size had no effect (P >0.05) on any of the tested parameters, differences in results between the former mentioned and the present study could probably be ascribed to differences in genetic strain, environmental and housing conditions, as well as the interaction between limestone source, particle size and dietary ingredients.
- Effect of Calcium Sources and Particle Size on Performance ...: ... regarding the ideal limestone particle size for laying hens were under continued investigation and ranged generally between 1.40 and 5.60 mm, depending on the production status and age of the hens (De Witt et al., 2009). The objective of this study was to determine the effects of different calcium sources and particle size on ...
- Limestone particle attrition and size distribution in a ...: The effect of attrition time on limestone particle size distribution was first investigated at 25 °C with a gas velocity in the riser of 5.6 m/s. Fig. 2 shows the cumulative limestone particle size distributions for different attrition times. The curves shift to the left as the attrition time increases due to continuing particle attrition.
- Influence of source and particle size of agricultural ...: ... particle size impact on the efficiency at increasing soil pH is considered when assessing a material's value. Agricultural limestone (aglime) is the most commonly used material to neutralize soil acidity in production agriculture. Both CaCO3 and magnesium carbonate (MgCO3) in different proportions are the main constituents of ...
- The effects of limestone particle size on bone health and ...: Limestone particle size, housing system and strain effects on tibia bone mineral density, bone mineral content, and area at 13 and 18 wk of age; odds and odds ratios of strain and housing system interaction and housing system and limestone particle size interaction for keel bone ...
- Particle size distribution of limestone fillers ...: The article reviews different techniques for analysing the size and shape of micrometric particles such as limestone fillers. Particle size measurement has been studied by means of laser light scattering, wet sieving (45, 63 and 125 µm) and static image analysis. This last technique also enables characterization of the shape of particles.
- Particle Size Analysis: Simple, Effective and Precise ...: Figure 1 shows the particle size distribution densities q3(x) of limestone powder Betosöhl 100 for multiple measurements of a mixed sample without sample division. It can be seen that the confidence intervals in the fine range are very narrow and that larger confidence intervals only occur with larger particle ...
- Effects of the Limestone Particle Size on the Sulfation ...: Limestone particle size has a crucial influence on SO2 capture efficiency; however, there are few studies on the sulfation reactivity covering a broad range of particle sizes at low SO2 concentrations. In this paper, a large-capacity thermogravimetric analyzer (LC-TGA) was developed to obtain the sulfur removal reaction rate under a wide range of particle sizes (3 μm–600 ...
- Influence of limestone particle size on iron ore sinter ...: In the present work, laboratory sintering experiments have been carried out with different levels of limestone mean particle size (from 0.14 to 1.83 mm) to understand the influence of limestone particle size on the mineralogy, productivity, and physical and metallurgical properties of the sinter.
- Effects of limestone particle size and dietary Ca ...: Figure 1 shows the particle size distribution of particulate limestone (analyzed Ca concentration: 36.5%). A large batch of commercially used limestone (defined as particulate, PAR) was purchased from Irving Materials, Inc. (#20, IMI Cal Pro, IN). A subsample was taken from each 25 kg bag, well mixed, and ...
- A hazardous particulate size is less than 5 microns. Particle sizes of 2.5 micron (PM 2.5) are often used in the USA. The total allowable particle concentration - building materials, combustion products, mineral fibers and synthetic fibers (particles less than 10 μm) - is specified by ...
- Building Materials: Particle Size & Particle Shape Analysis: The particle size and particle shape of raw materials are important for many reasons. The particle size distribution has various effects on the processing of building materials, for example: powder flow (a wide distribution or too many fines reduce flowability) and segregation (a wide distribution will lead to size ...).
- Limestone particle size fed to pullets influences ...: Limestone particle size fed to pullets influences subsequent bone integrity of hens. Eusebio-Balcazar PE, Purdum S, Hanford K and Beck MM; Case Foods, Troutman, NC, and the Animal Science and Statistics Departments, University of Nebraska-Lincoln.
- EFFECT OF LIMESTONE PARTICLE SIZE ON BONE QUALITY ...: An increase in limestone particle size resulted in an increased tibia breaking strength (P = 0.01) and stress (P = 0.04) of layers at 70 weeks of age. However, different limestone particle sizes had no (P >0.05) effect on tibia bone ash or on humerus breaking strength and stress.
- Effects of Limestone Particle Size and Dietary Ca ...: The present study evaluated the effects of limestone particle size and Ca concentration on the apparent ileal digestibility (AID) of P and Ca in the presence or absence of a 6-phytase derived from Buttiauxella sp., expressed in Trichoderma. Treatment diets were corn-soybean meal based with no added ...
- Particle size (grain size): Particle size, also called grain size, means the diameter of individual grains of sediment, or the lithified particles in clastic rocks. The term may also be used for other granular materials. [A flattened Wentworth grain-size table follows in the original; only its first row is recoverable: φ < −8 corresponds to grain sizes > 256 mm.]
- Limestone particle size, calcium and phosphorus levels, and phytase ...: Influences of limestone particle size distributions and ...
- Eggshell Quality IV: Oystershell versus limestone and the ...: The optimal particle size to use to obtain maximum shell quality can vary (Rabon and Roland, 1985). Unfortunately, there are no detailed research data to show the best ratio of particle sizes with fine granular limestone. Most research workers used approximately half of the CaCO3 source as large particles and half as finely ground. However, in the ...
- Limestone particle size, calcium and phosphorus levels ...: Limestone particle size (PS) affects its solubility and thus can influence broiler performance by altering the rate of calcium (Ca) ... levels (positive control [PC]; negative control [NC]) on live performance, bone ash, and apparent ileal nutrient digestibility (AID) ...
- The particle size of limestone: Interested in the influence of the particle size of limestone on the rate of decarbonization.
- Effect of limestone particle size and calcium to non ...: The true Ca digestibility coefficients of limestone with Ca:non-phytate P ratios of 1.5, 2.0 and 2.5 were 0.65, 0.57 and 0.49, respectively. Particle size of limestone had a marked effect on the Ca digestibility, with the digestibility being higher in coarse particles (0.71 vs. 0.43).
- Influences of limestone particle size distributions and ...: The three different limestone sizes used had mean particle sizes of 53 μm, 25 μm, and 3 μm, each of which is signified by CCX, such that X stands for the particle size, e.g., CC3 is the limestone powder with mean particle size 3 μm.
- Influence of Source and Particle Size on Agricultural ...: This study evaluated how calcitic and dolomitic agricultural limestone (aglime), pelleted calcitic lime, and different particle sizes of both aglime sources increased soil pH. Both aglime sources were fractionated to pass different Tyler mesh sizes (4–8, 8–20, 20–60, 60–100, or 100+).
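For readers unfamiliar with the φ values quoted in the grain-size excerpt above, the Wentworth/Krumbein φ scale is simply a negative base-2 logarithm of the grain diameter in millimetres. The following is a small illustrative check added here for clarity; it is not part of any of the excerpted sources.

```python
import math

def phi(diameter_mm: float) -> float:
    """Krumbein phi scale: phi = -log2(d / d0), with the reference d0 = 1 mm."""
    return -math.log2(diameter_mm / 1.0)

# A 256 mm grain, the boulder boundary in the Wentworth classification,
# corresponds to phi = -8, matching the "< -8 / > 256 mm" row quoted above.
print(phi(256.0))  # -8.0
```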
Water is essential for human activity, circulating around the planet while changing to and from solid, liquid and gaseous states. 70% of the Earth’s surface is covered with water, and of that amount, 97.5% is salt water. The remaining 2.5% is fresh water which supports human activity, including business activities. However, about three-quarters of that remaining fresh water exists in a frozen state such as within glaciers. Therefore, the amount of freshwater that is actually available for human use accounts for not even 1% of the Earth’s total amount of water.

Under the United Nations’ Sustainable Development Goals (SDGs), the world is striving to secure access to safe drinking water for all people, and through business activities and other economic initiatives, each country is also working to achieve more prosperous livelihoods, including access to water. Since economic scale and water consumption are closely correlated, companies must use water, one of the Earth’s precious natural resources, in an efficient and appropriate manner while also aiming to solve challenges related to water issues through their businesses.

MC has clarified its intention to promote the “sustainable use of natural resources including water” in its Environmental Charter, which was first established in 1996 and later revised in 2017. The MC Group, which engages in a wide range of businesses worldwide, recognizes water as an essential element for its business activities and places critical importance on the sustainable use of water in all of its operations. In particular, MC identifies relevant risks and opportunities in a timely manner and, with the goal of achieving the sustainable use of water, establishes appropriate water consumption, recycling and reuse rates throughout its operations and makes efforts to improve use efficiency and reduce consumption. Furthermore, MC will contribute to the resolution of global water issues by establishing water infrastructure businesses and developing comprehensive water operations that contribute to solving water issues.

Aiming to reduce the consumption of limited water resources, for the fiscal year ended March 2022, MC has set a target to reduce water consumption at its Head Office compared to the consumption in the previous year. MC also conducts a sustainability survey which aims to track the withdrawals, discharges and recycling of water for the total operations of its portfolio investment companies. In addition to achieving a 100% response rate for this survey, MC is conducting an analysis of individual increases and decreases of the various surveyed items.

- Officer in Charge: Akira Murakoshi (Member of the Board, Executive Vice President, Corporate Functional Officer, CDO, CAO, Corporate Communications, Corporate Sustainability & CSR)
- Sustainability & CSR Committee (a subcommittee under the Executive Committee, a management decision-making body): Important matters related to water resources deliberated by the Sustainability & CSR Committee are formally approved by the Executive Committee and put forward or reported to the Board of Directors based on prescribed standards.
- Department in Charge: Corporate Sustainability & CSR Dept.

When reviewing and making decisions on loan and investment proposals, MC conducts a comprehensive screening process which considers not only economic aspects, but ESG factors as well.
From a water resources perspective, MC has set up a screening process for decision-making that first confirms compliance with environmental regulations related to such factors as water discharge and withdrawals (confirmation of regulatory risks), as well as the impact of water withdrawals on surrounding communities and local society, and the impact of climate change on the freshwater environment (confirmation of physical risks). For this screening process, particularly for businesses in areas considered to have high levels of water stress, MC utilizes the World Resources Institute (WRI)’s Aqueduct tool in order to incorporate external perspectives. Besides screening new investment and exit proposals, MC also strives to make improvements to existing business investments by monitoring their management practices.

MC is involved in the copper mining business in countries such as Chile and Peru. Since copper mining requires large amounts of water, MC encourages the introduction of technologies to maximize the water efficiency of each mine’s operational processes and takes measures to reduce new water withdrawals. The Los Bronces copper mine project (located in Chile’s capital province), in which MC invests together with Anglo American, achieved a water recycling rate of 78% in 2019 through measures such as extracting and recycling water from tailings.

One of MC’s major investments is the Escondida Copper Mine. This mine is located in the desert region of northern Chile and boasts the largest production volume in the world. Water consumption in ore processing and related operations is reduced through water-saving and reuse, among other measures. Moreover, the construction of a desalination plant with one of the largest processing and pumping capacities in the world, at a cost of approximately US$4 billion to date, has helped to eliminate reliance on subterranean aquifers as of the end of 2019. Going forward, we will continue to promote environmental protection and coexistence with local communities.

Toyo Reizo Co., Ltd., one of MC’s consolidated subsidiaries, has declared in its environmental policy that it will reduce the amount of water resources used in its production processes and will take preventative measures against the discharge of pollutants. It has also set targets to reduce water consumption on both a single-fiscal-year and mid-term basis, and engages in reduction activities. In particular, the company aims to reduce its environmental impact by saving water. To achieve this, it calculates consumption and discharge amounts in plants and related facilities with high water consumption for monthly assessment and review. In addition to these efforts to reduce water consumption, the company has also set targets for the reduction of CO2 and waste discharge and for the implementation rate of food waste recycling, while seeking continual improvements through a PDCA cycle. Reference: Environmental report (targets, data and initiatives).

MC affiliate Olam International Limited committed in 2013 to reduce water consumption across its agricultural and manufacturing activities. As a specific example, a program in the US that produces onions with higher solid content and lower water content, combined with a focus on optimizing irrigation in collaboration with the growers, has achieved quantified savings of 27 billion liters of water consumption over the past decade.
Recognizing the depletion of groundwater due to excessive pumping of water for agriculture and irrigation as a problem, in 2019 Olam’s nut business in the US formed a partnership with the California Water Districts to replenish 1.2 billion liters of groundwater through three projects, and it is also pursuing initiatives such as maximizing the amount of water replenished during snowmelt.

MC is delivering seawater desalination projects in drought regions of the world, such as the Atacama Desert in Chile and the State of Qatar in the Middle East, which contribute to the alleviation of water stress in those regions. Northern Chile is facing serious depletion of groundwater, and alternative water sources are required in consideration of local communities and the agricultural industry. MC provides a stable supply of desalinated water to mines and farmlands in the region on the basis of a BOO (Build-Own-Operate) contract. In Qatar, MC is delivering an Independent Water and Power Project that supplies 2,520,000 kW (2,520 MW) of electricity and 620,000 tons per day of water (which comprises 25% of Qatar’s desalination capacity) to Qatar General Electricity & Water Corporation over 25 years. MC is delivering the Project in cooperation with the Qatari government to fulfill growing demand for water associated with economic development and population growth and to contribute to the long-term development of the country.

Al Yosr seawater desalination facility, Egypt

As living standards improve, global water consumption has increased dramatically, outpacing the rate of population growth and exacerbating water shortages in some regions. Water utilities, which provide a stable supply of sanitary water, have become indispensable for the survival of humanity and the viability of cities. MC is contributing to addressing various challenges pertaining to water resources throughout the world by developing water-related infrastructure and addressing local water issues.

MC affiliate South Staffordshire Plc is a water company that supplies water to approximately 1.7 million people in the UK, as well as providing technical and retail services to other water, electricity and gas companies in a wide range of business areas. As part of MC’s efforts to conserve water resources as well as to support regional measures to address flooding, South Staffordshire Plc is collaborating with Cambridge University to install the UK’s largest rainwater recycling system in a newly developed area in the Cambridge region. It is responsible for the design, construction and operation of the system.

MC affiliate Metito Holdings Limited is an integrated water engineering company that is engaged in a wide range of activities, from the construction of water and wastewater treatment plants and seawater desalination plants to business investment and operation, mainly in the Middle East, Africa and Asia. MC provides water-related solutions optimized for each different region to address water shortages and the underdevelopment of water infrastructure, with the aim of improving people’s living conditions and protecting the local environment. Specifically, Metito has been managing long-term desalination projects in Egypt since 1999 to supply water to drought regions along the Red Sea coast. In addition, it has built a large-scale desalination facility for the Egyptian government, contributing to the improvement of the region’s water infrastructure.
Metito has also been constructing desalination plants in water-scarce Qatar, contributing to the stable supply of drinking water to the region through the long-term management of those facilities.

MC affiliate Swing Corporation is engaged in the domestic waterworks business, including the design and construction of water and sewage facilities and the provision of operation and maintenance services. Specifically, MC is engaged in phosphorus recovery from sewage sludge for fertilizer use at the Higashinada Sewage Treatment Plant in Kobe, Hyogo Prefecture. It promotes resource recovery through local production for local consumption and recycling (in the fiscal year ended March 2021, the project won the innovation category of the wastewater recycling award sponsored by the Japanese Ministry of Land, Infrastructure, Transport and Tourism). Swing Corporation also developed a Private Finance Initiative (PFI) project in the city of Kurobe, Toyama Prefecture to establish a facility for the beneficial reuse of sewage biomass, and has assumed responsibility for everything from financing to design, construction, maintenance and operation. The sewage sludge is mixed with coffee residue to extract biogas, which is used for power generation and sludge drying, and the dried sludge can be used effectively as an alternative to coal and as a raw material for fertilizer (in the fiscal year ended March 2012, the project won the sustainability category of the wastewater recycling award sponsored by the Japanese Ministry of Land, Infrastructure, Transport and Tourism).

MC has built up experience in the water business in the UK, Japan, Australia, the Philippines, Chile and other countries in Asia, the Middle East and Africa. By drawing on private-sector funds and technology, we will continue to improve efficiency and offer higher-quality water services. We will also provide water-related solutions optimized for each different region to address water shortages and underdevelopment of water infrastructure with the aim of improving people’s living conditions and protecting the local environment.

MC actively disseminates information about its ESG-related initiatives to its various stakeholders around the world. MC engages with CDP, an NGO holding the world’s largest database of corporate disclosures on climate change initiatives, and since the year ended March 2012, MC has responded to the CDP Water questionnaire, which evaluates corporate water management. MC participates in the Water Project*, a public-private initiative which promotes initiatives aimed at preserving or restoring healthy water cycles. MC shares information with other companies on water risks and water-related initiatives, and considers how to pursue such initiatives internally.

* The project was launched based on the Basic Act on Water Cycles in 2014. The Water Project was founded to build a public-private collaboration platform and to promote initiatives and self-motivated approaches from private sector companies aimed at achieving sound water cycles and water environment preservation.

MC is promoting activities, both through business and corporate philanthropy initiatives, to maintain and restore sound water cycles. MC also disseminates information about its initiatives and the importance of water through internal and external communication.
MC supports a wide range of initiatives focusing on environmental and sustainable development in Europe and Africa through the Mitsubishi Corporation Fund for Europe and Africa (MCFEA) (since 1992, more than GBP5.1 million have been funded). The MCFEA delivers support through various partner organizations including the Earthwatch Institute, Rainforest Alliance, Acumen Academy and Springboard. One of these partners is the NGO WaterAid, which provides safe water and sanitation to people in dire need around the world in order to help greatly improve their health and quality of life.

Water withdrawal (2019.3 / 2020.3 / 2021.3 results):
- Non-consolidated: 42 / 38 / 25⋆ (Mitsubishi Shoji Building, Marunouchi Park Building and certain other offices in Tokyo)
- Consolidated (components are as follows): 97,060 / 95,268 / 93,058 (non-consolidated and main domestic subsidiaries)
- Industrial water, water supply: 24,841 / 24,814 / 25,402

Water withdrawal, recycling volume and recycling rate (2019.3 / 2020.3 / 2021.3 results):
- Non-consolidated: withdrawal 42 / 38 / 25⋆; recycling volume and rate not reported (Mitsubishi Shoji Building, Marunouchi Park Building and certain other offices in Tokyo)
- Consolidated: withdrawal 97,060 / 95,268 / 93,058; recycling volume 9,568 / 1,375 / 790; recycling rate 10% / 1% / 1% (non-consolidated and domestic subsidiaries)

* Recycling rate is the ratio of recycling volume to the total of water withdrawal and recycling volume. ESG Data marked with a star (⋆) for the year ended March 2021 has received independent practitioner’s assurance from Deloitte Tohmatsu Sustainability Co., Ltd.
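As a rough illustration of how the recycling rate defined in the footnote can be recomputed from the disclosed consolidated figures, the sketch below applies that definition directly. It is added here for clarity and is not MC’s own published calculation; small differences from the reported whole-number rates may reflect rounding or scope definitions in the original disclosure.

```python
def recycling_rate(withdrawal: float, recycled: float) -> float:
    """Recycling rate per the footnote: recycling volume divided by the total
    of water withdrawal plus recycling volume, expressed as a percentage."""
    return 100.0 * recycled / (withdrawal + recycled)

# Consolidated figures from the table above (units as disclosed by MC).
print(round(recycling_rate(97_060, 9_568), 1))  # year ended March 2019 -> ~9.0
print(round(recycling_rate(95_268, 1_375), 1))  # year ended March 2020 -> ~1.4
print(round(recycling_rate(93_058, 790), 1))    # year ended March 2021 -> ~0.8
```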
Organizational behaviour is the study of group and individual performance and activities. It helps in understanding human behaviour at the workplace and examines its impact on communication, motivation, organizational structure, job performance, etc. In other words, it is the study of the different ways through which people interact in a group. This study is normally undertaken in order to make business organizations more effective. This report examines British Gas and EDF Energy as per the given case study, assessing the leadership styles they should adopt in order to encourage their employees to work effectively. Further, it covers the organizational theories which underpin the practices of management. Lastly, it also includes the different approaches to management used by these organizations.

1.1 Effectiveness of different leadership styles which can be used by British Gas and EDF Energy

A leadership style is the approach a leader adopts to provide direction, guidance, support and motivation to employees so that the decided plans are implemented and the targets attained. There are many leadership styles which a firm can adopt, and each style has its own advantages and disadvantages. A firm should select a leadership style according to its goals and culture. Organizations can also adopt more than one leadership style depending upon the requirements of tasks and the needs of departments. Following are the different leadership styles which British Gas and EDF Energy can adopt:

Opportunistic leadership style

In this style, the leader is seen as a person who is egoistic, mistrusted or manipulative. Such a leader focuses mainly on achieving personal objectives and goals. This type of leader comes under the autocratic leadership style, where all the decisions are taken by the head alone. In this leadership style, employees are not involved in decision making. Leaders provide their employees with duties, tasks and orders as well as proper guidance so that they can work effectively. This leadership style is less effective today, as employees increasingly want autonomy rather than being controlled.

Diplomats are leaders who sense the opportunities which prevail in the environment. This type of leader avoids conflict and draws on extensive experience to learn from mistakes. They are much better than leaders who adopt the opportunistic leadership style, as they are not manipulative, egoistic or mistrusted. Diplomats are goal oriented and are effective problem solvers.

This type of leader comes under the transformational leadership style. Leaders provide their employees with certain tasks, and rewards or punishments are given according to their performance. They set goals and objectives for employees, who have to follow them according to the direction provided by their leaders. These leaders challenge and support their employees to develop a positive atmosphere for working. They have the capability to lead a team and to implement new strategies within three years. They implement suggestions and feedback from employees if these are really beneficial for the organization, and they encourage new ideas from employees and thinking outside the box. These types of leaders come under the democratic leadership style, where all employees are involved in taking major decisions. Changes adopted by the organization are accepted by the employees, as these changes take place with the employees’ active participation.
This is the best style which a firm can adopt. It motivates the employees, and they contribute with their highest level of efficiency. Employees are involved in decision making, but the final decision is taken by the head leader. This style also helps in boosting the morale of employees.

These leaders mainly focus on organizational constraints and perceptions. They believe in adopting different actions and in developing a clear vision which helps to encourage personal and organizational transformations. They are very comfortable in handling people effectively and in dealing with conflicts. Strategists try to come up with new ideas for solving problems and are risk takers; leaders who take risks can usually solve problems effectively.

Magician leadership style

These leaders are very thoughtful and reflective. They can solve any problem as they are very knowledgeable and experienced in their respective areas. They are visionary, powerful and successful, and inspire others with their many skills. These leaders are enthusiastic and full of energy, and they mainly aim at fulfilling tasks efficiently and effectively.

Among these styles, British Gas can adopt the achiever leadership style, as these leaders provide a positive atmosphere by supporting their employees. In addition to this, they implement feedback and suggestions from employees if these are really beneficial for the organization. On the other hand, EDF Energy can adopt the diplomat leadership style, as these leaders focus mainly on completing tasks. They try to solve all the issues or problems faced by their employees. Leaders provide their employees with certain tasks, and punishments as well as rewards are given according to their performance.

2.2 Organizational theories which underpin the practices of management

Organizational theories are the types of guidelines which bring managers and employees together for achieving organizational goals and objectives. They involve adopting strategies which bring out or enhance individual skills so that tasks are fulfilled effectively. They also mean delivering goods to customers in such a way that the firm gets maximum profit out of it. There are many theories which help the organization in adopting change, taking decisions, handling promotion, dividing power and so on. A few organizational theories are the contingency, classical, neoclassical and systems theories.

According to the given case, British Gas follows the neoclassical theory. This theory focuses on maintaining a good relationship between managers and employees; human relations theory is another name for the neoclassical theory. The organization focuses on fulfilling the wants and requirements of employees. The main aim of the firm through this theory is to motivate employees and to develop a strong relationship between employees and managers. With this, the organization gets higher productivity, as employees work with a high level of efficiency. This theory is also helpful in underpinning management practices, as it improves communication and interaction between employees and the manager. It also develops a relationship of honesty, openness and trust among team members. Feedback and suggestions given by employees are taken into consideration and, as a result, they develop the confidence to share ideas freely with their seniors. In addition to this, it also encourages employees to come up with innovations.
Organizational theory helps in underpinning the management functions, as it improves planning among employees through effective communication between staff members. As a result, new ideas and innovations are created. Organizational theory also helps British Gas in effective controlling, as the employees are involved in decision making and are well aware of the changes which are going to take place in the organization. It helps in creating a systematic plan for achieving the objectives of the firm, and this has effective control over the other functions of the organization. It also involves monitoring the performance of employees against the set standards. Moreover, the management authorities and management roles are also defined through organizational theory.

The theory and leadership adopted by the firm are very effective, as they allow employees to take decisions regarding the issues or problems faced by the firm, which creates a feeling of importance among employees. The neoclassical theory, in turn, helps in developing a strong relationship between managers and employees; this makes employees confident and creates trust in the manager, as a result of which workers share their thoughts and ideas freely with their superiors.

2.3 Different approaches to management used by British Gas

Management is a process which helps in getting things done through employees. Managers of the firm should be capable enough to understand human behaviour. The main aim of these approaches is to increase productivity by developing good relations and motivating employees. Leadership, communication, participative management and motivation are at the core of these approaches. In addition, they help employees realize the importance of understanding their roles and responsibilities so that they can work accordingly and achieve the organizational goals. Mainly, these approaches help in understanding the behaviour of employees and in adopting different strategies which will help employees work at their full efficiency. There are many approaches which a firm can adopt. Following are a few approaches which can be followed by British Gas and EDF Energy.

Scientific management approach

This approach emphasizes the scientific study of work methods to improve the productivity of individual workers. It recommends methods which analyse the work and determine the best way to complete the task. In other words, it is the art of knowing the actual action which has to be taken and the way it has to be done. In this approach, firms are very careful in selecting any candidate, and the organization applies scientific techniques for selection, training and recruitment. This technique works according to two principles:
- While performing a particular task, the best method should be discovered.
- The best method selected for fulfilling a task is a process which helps in increasing the employees’ efforts.
According to this approach, employees have only physical and economic needs; other needs such as job satisfaction and social needs are not considered important. It mainly aims at increasing the efficiency of the organization and its employees based on management practices.

Administrative management approach

According to this approach, Fayol developed 14 principles which are very helpful in managing the firm effectively.
Following are the 14 principles of management:
- Division of work
- Authority and responsibility
- Discipline
- Unity of command
- Unity of direction
- Subordination of individual interest to the general interest
- Remuneration
- Centralization
- Scalar chain
- Order
- Equity
- Stability of tenure of personnel
- Initiative
- Esprit de corps

Bureaucratic approach

This approach emphasizes the requirement for organizations to operate rationally. According to Max Weber, there are five principles:
- Proper division of labour: all employees should be given balanced responsibilities and power.
- Chain of command: all firms should have a proper hierarchy so that information can be passed on effectively.
- Separation of personal and official property: the assets of the organization and of the owners are different and cannot be treated as the same.
- Application of complete and consistent rules: for running the firm, there should be proper rules and regulations.
- Promotion and selection based on qualification: selection and promotion of employees should be based on experience, knowledge, skills and age; personal relations should not influence these decisions.

Contingency approach

This approach focuses on the conflicts which take place among employees or superiors. Managers are free to take decisions and can adopt any type of strategy according to the situation.

Systems approach

According to this approach, all the components in the organization are interrelated with each other. Any change in one component will affect the other components. It mainly focuses on the overall effectiveness of the system rather than the effectiveness of sub-systems.

Among these approaches, British Gas should adopt the scientific approach, as it selects candidates very carefully during the recruitment process. EDF Energy should adopt the bureaucratic approach, as it has a systematic way of distributing power and responsibilities and also focuses on having a proper way of interacting within the organization.

From this report, it can be articulated that among all the leadership styles, achiever leadership is the best, as it allows suggestions and feedback from employees, which motivates them to improve their efficiency in the jobs they perform. EDF Energy should adopt this leadership style, as it will be helpful in developing the confidence among employees to share ideas and innovations. Further, the scientific approach is the best approach which a firm can follow, as organizations which follow it are very careful in selecting candidates during recruitment, and it recommends methods which analyse the work and determine the best way to complete the task effectively.
Is it true that certain (super)foods have anti-ageing properties, have the power to improve our health and energy, and can even prevent chronic disease? Superfoods have recently become a buzzword, and more and more studies are starting to emerge which evaluate the powerful effects and benefits some of these foods may have on health. These nutrient-dense foods could be described as “foods that are both high in nutrition value due to a high concentration of nutrients and, on the other hand, [have] great biological value due to satisfactory bioavailability and bioactivity within the body due to a variety of bioactive ingredients they contain” (Proestos, 2018). This means that these foods not only supply the body with a megadose of nutrients, but the nutrients are also very easily absorbable. Let’s look at a few of the most common superfoods and what the studies show.

It has been shown (Anderson, 2016) that cinnamon can be used to lower blood sugar and insulin levels as well as cholesterol. Participants took 500mg of cinnamon each day for two months and the results showed that it reduced fasting insulin, glucose, total cholesterol, and LDL cholesterol. In another randomized double-blind clinical trial (Jaafarpour et al., 2015), it was shown that cinnamon reduced menstrual bleeding, pain, vomiting, nausea and systemic symptoms of primary dysmenorrhea. The study concluded that cinnamon can, therefore, be viewed as an effective and safe treatment for dysmenorrhea, without any side effects. The anti-inflammatory activities of cinnamon have also been extensively studied, and some researchers have shown that therapeutic concentrations of cinnamon could be useful in the treatment of inflammatory and age-related conditions (Gunawardena, 2015).

Goji berries are one of the most nutrient-dense superfoods available today and contain almost 12 times the antioxidants contained in blueberries. A study was done on mice in 2010 where they were shown to be protected from UV radiation-induced skin damage after drinking goji berry juice. This is because the “antioxidant pathways alter the photodamage induced in the skin of mice by acute solar simulated UV (SSUV) irradiation” (Reeve, 2010). The results, therefore, suggest that the consumption of goji berry juice could provide photoprotection. Another evidence-based study (Cheng et al., 2015) showed that goji berries possess a wide array of pharmacological activities, which includes the improvement of immune system functions and general wellbeing. They have also been shown to have anti-ageing and antioxidative properties and, in some cases, even to inhibit different types of cancer. They also seem to be beneficial to the male reproductive system, as they increase the quantity and quality of sperm.

Spirulina has been shown (Ichimura, 2013) to prevent hypertension in rats, and it also acts as a cancer-fighting food, as it decreased “the proliferation of experimental pancreatic cancer.” This data shows that spirulina has a chemo-preventive role (Koníčková, 2014). Health benefits of spirulina also include potentially preventing plaque build-up in the arteries and reducing blood cholesterol, LDL and triglycerides, while increasing HDL cholesterol. Larger studies are, however, needed to come to definitive conclusions on whether spirulina can indeed be used for managing cholesterol (Kumari, 2011).

According to Poulose (2012), the acai berry protects brain cells and has implications for improved motor and cognitive function.
These dark blue fruits thrive in the Amazon in Brazil and have been shown to improve blood antioxidant status and the serum lipid profile (a screening tool for abnormalities in lipids, such as triglycerides and cholesterol) (Sadowska-Krępa, 2015).

Thanks to its active ingredient, curcumin, turmeric offers plenty of health benefits. One study shows that curcumin is one of the most potent anti-inflammatories, even when compared to aspirin, ibuprofen and more (Takada, 2004). Evidence also suggests that it may play a role in lowering blood glucose levels and, therefore, play a beneficial role in managing diabetes (Kim, 2009). It is very important to note, however, that turmeric should be paired with black pepper to increase bioavailability and absorption. The bioavailability increases by up to 2000% when combined with piperine (black pepper) (Shoba, 1998).

Bone broth is extremely nutrient-dense and very easy to digest. This is because it includes every part of an animal, including ligaments, tendons, feet, skin, marrow and bones, which have been boiled and simmered for a few days. Easier assimilation and digestibility of nutrients are allowed because of the longer cooking process. Bone broth offers excellent benefits for our joints because collagen (contained in bone broth) supports healthy cartilage. It also contains gelatine, which supports healthy cartilage as well as offering building blocks to maintain and form strong bones. It is also excellent when it comes to supporting gut health, fighting food sensitivities and supporting the growth of good bacteria (probiotics), and it is especially soothing to the digestive system. Because collagen and the amino acids support healthy tissue, the whole digestive function, including the colon and the entire GI tract, is supported. It is also excellent at repairing a damaged gut lining and ‘leaky gut’, which will support not only a healthy functioning digestive system but also your immune system. On top of that, bone broth is also a very powerful detoxifying agent, as it increases the liver’s potential to get rid of toxins and improves the use of antioxidants (Axe, 2020).

Chestnuts are a type of nut that have been shown to have many benefits when it comes to improving gut and heart health. This is because they contain plenty of fibre and antioxidants and are a wonderful source of a variety of nutrients including vitamin C, B vitamins and manganese. Research done by Blaiotta (2013) has shown that chestnut extract plays an essential role in the gastric tolerance of beneficial bacteria called lactobacilli. It is, therefore, great for improving our overall gut health and microbiome. It proves challenging for probiotics not to be affected by acidic gastric secretions as they move through the stomach, so the survival of some of these bacterial strains is dependent on the food used for their delivery.

A wide range of health benefits of Ginkgo biloba extract has been reported in traditional Chinese medicine. A study done in 2012 (Cheng, 2013) showed promising results when evaluating the effects of Ginkgo biloba on induced diabetes in rats. After the rats were given Ginkgo extract for 30 days, the symptoms of diabetes were reversed significantly, and the extract further seems to possess antihyperglycemic, antihyperlipidemic and antioxidant activities, which show promise as a possible treatment for diabetics.
According to the research, it seems like superfoods can indeed offer us a wide variety of antioxidant and antimicrobial substances as well as vitamins, fatty acids and fibre in quantities that exceed many other foods we typically consume each day. Superfoods, therefore, might play an important role in reducing our risk of degenerative diseases. Always remember that superfoods should not be used exclusively but should be enjoyed as part of a balanced diet.

References

Anderson, R.A. et al. 2016. Cinnamon extract lowers glucose, insulin and cholesterol in people with elevated serum glucose. J Tradit Complement Med. 2016 Oct;6(4):332–336.

Dr Axe. 2020. Six Amazing Benefits, One Superfood. https://draxe.com/six-amazing-benefits-one-superfood/

Blaiotta, G. et al. 2013. Effect of chestnut extract and chestnut fiber on viability of potential probiotic Lactobacillus strains under gastrointestinal tract conditions. Food Microbiol. 2013 Dec;36(2):161-9.

Cheng, D. et al. 2013. Antihyperglycemic Effect of Ginkgo biloba Extract in Streptozotocin-Induced Diabetes in Rats. BioMed Research International, vol. 2013, Article ID 162724, 7 pages.

Cheng, J. et al. 2015. An evidence-based update on the pharmacological activities and possible molecular targets of Lycium barbarum polysaccharides. Drug Des Devel Ther. 2015;9:33–78.

Gunawardena, D. et al. 2015. Anti-inflammatory activity of cinnamon (C. zeylanicum and C. cassia) extracts – identification of E-cinnamaldehyde and o-methoxy cinnamaldehyde as the most potent bioactive compounds. Food Funct. 2015 Mar;6(3):910-9.

Ichimura, M. et al. 2013. Phycocyanin prevents hypertension and low serum adiponectin level in a rat model of metabolic syndrome. Nutr Res. 2013 May;33(5):397-405.

Jaafarpour et al. 2015. The effect of cinnamon on menstrual bleeding and systemic symptoms with primary dysmenorrhea. Iran Red Crescent Med J. 2015 Apr 22;17(4).

Kim, T. et al. 2009. Curcumin activates AMPK and suppresses gluconeogenic gene expression in hepatoma cells. Biochem Biophys Res Commun. 2009 Oct 16;388(2):377-82.

Koníčková, R. et al. 2014. Anti-cancer effects of blue-green alga Spirulina platensis, a natural source of bilirubin-like tetrapyrrolic compounds. Ann Hepatol. 2014 Mar-Apr;13(2):273-83.

Kumari, D.J. 2011. Potential health benefits of Spirulina platensis. Pharmanest, Vol. 2(2–3), September–October 2011.

Poulose, S.M. 2012. Anthocyanin-rich açai (Euterpe oleracea Mart.) fruit pulp fractions attenuate inflammatory stress signaling in mouse brain BV-2 microglial cells. J Agric Food Chem. 2012 Feb 1;60(4):1084-93.

Proestos, C. 2018. Superfoods: Recent Data on their Role in the Prevention of Diseases. Curr Res Nutr Food Sci. 2018;6(3).

Reeve, V.E. et al. 2010. Mice drinking goji berry juice (Lycium barbarum) are protected from UV radiation-induced skin damage via antioxidant pathways. Photochem Photobiol Sci. 2010 Apr;9(4):601-7.

Sadowska-Krępa, E. et al. 2015. Effects of supplementation with acai (Euterpe oleracea Mart.) berry-based juice blend on the blood antioxidant defence capacity and lipid profile in junior hurdlers. A pilot study. Biol Sport. 2015 Jun;32(2):161-8.

Shoba, G. et al. 1998. Influence of piperine on the pharmacokinetics of curcumin in animals and human volunteers. Planta Med. 1998 May;64(4):353-6.

Takada, Y. et al. 2004. Nonsteroidal anti-inflammatory agents differ in their ability to suppress NF-kappaB activation, inhibition of expression of cyclooxygenase-2 and cyclin D1, and abrogation of tumor cell proliferation. Oncogene. 2004 Dec 9;23(57):9247-58.
Albert Ellis was a psychologist who developed Rational Emotive Behaviour Therapy (REBT). He investigated the beliefs (realistic and unrealistic) that we hold about ourselves, and identified 3 core beliefs that we all hold. What are the 3 core beliefs, and why is it important to be aware of them?

The 3 core beliefs are:
- first, the belief that I must be perfect;
- second, that everyone must like me; and
- finally, that life must treat me well.

Now, I'm sure that as you read these 3 core beliefs, your rational mind will be telling you that they are irrational and unrealistic. However, if you stop and really think about it, it is possible that you hold one or more of these beliefs to some extent, and that they impact on, or sabotage, your business success. They also worm their way into your positive mindset and resilience. The important thing is to recognise which of the core beliefs you hold, and how they might be undermining your ability to successfully build and market your business or sell your services. We are also going to look at how they can impact your mindset.

A closer look at the 3 core beliefs

But first, let's take a closer look at each of these core beliefs. If we go to Wikipedia, we can see the extent and depth of these beliefs in their full forms as defined by Ellis:

I must be perfect: "I absolutely MUST, under practically all conditions and at all times, perform well (or outstandingly well) and win the approval (or complete love) of significant others. If I fail in these important—and sacred—respects, that is awful and I am a bad, incompetent, unworthy person, who will probably always fail and deserves to suffer."

Everyone must like me: "Other people with whom I relate or associate, absolutely MUST, under practically all conditions and at all times, treat me nicely, considerately and fairly. Otherwise, it is terrible and they are rotten, bad, unworthy people who will always treat me badly and do not deserve a good life and should be severely punished for acting so abominably to me."

Life must treat me well: "The conditions under which I live absolutely MUST, at practically all times, be favorable, safe, hassle-free, and quickly and easily enjoyable, and if they are not that way it's awful and horrible and I can't bear it. I can't ever enjoy myself at all. My life is impossible and hardly worth living."

Now I'm quite sure that as you read those beliefs intellectually you're going: "Yeah, right" and thinking how completely unrealistic they are. And you're right. But the fact of the matter is that we all hold these core beliefs, or a mixture of them, to some extent or another.

How do these core beliefs impact our businesses?

We might believe that we need to show up at all times with absolutely no flaws and in the most perfect way possible. That we need to have the answers to everything at our fingertips. This puts a LOT of unnecessary pressure on us. I'd like to hope that we believe we need to do our jobs in the most perfect way possible (which is not a terribly bad thing, as I think striving for quality has merit – don't you?). What about having everyone like us and treat us well all the time, and if not, it makes THEM rotten, bad and unworthy, to mention only a few things?

A corollary of one of the core beliefs to help you test your assumptions

Do YOU like everyone that you come across?
If not, you do realise that the corollary of this particular belief means that, if you don't happen to get on with someone, it makes YOU rotten, worthy of punishment and undeserving of a good life. When you start pulling them apart, they actually ARE ridiculous beliefs and expectations, aren't they? But we still hold them as core beliefs, and they surface when we really are honest with ourselves.

For example, I get frustrated and down when I agonise over creating valuable, educational videos and social media content, and people don't RUSH to follow me on social media. What does it say about me? Is it a negative judgement on the quality of my work? When it comes to my audience and potential clients, what does it tell me about the impact of my work on them? N-O-T-H-I-N-G! Absolutely nothing. It means that people are busy with their lives and may not have had time to read this article or to watch my video. It certainly doesn't mean that my work is of a low standard or that I'm unlikeable.

Even coaches I admire were not instantaneous successes

In fact, I recently learned that two online course creators I admire both had only two people sign up to their programmes the first time they launched. But they didn't let it get them down. They came back fighting and, a few years later, both have flourishing and extremely prosperous businesses.

How many of us hesitate to "put ourselves out there" or expose ourselves because we don't believe we're good enough, or because we're having a bout of imposter syndrome? I was chatting to a colleague earlier today and admitted to her that, even after almost 20 years as a coach, I still have moments of imposter syndrome…and then proceeded to coach her into defining a fabulous niche for her business that she hadn't even considered but that has SUCH amazing potential – watch this space, because I definitely think there's room for us to collaborate.

Take action – even if it's imperfect

My first lead magnet was (and still is) an ebook with 5 simple but effective tips that ANYONE can implement immediately to get more visibility for their coaching practice. I fiddled and twiddled with it when I first created it. It still sits in the back of my mind on my unwritten to-do list because it isn't PERFECTLY brand-aligned – I was still creating my professional brand at that stage. I'd like to re-do it aesthetically…and I will get there.

Fortunately, at the time that I created it, I was part of a great mastermind and was able to ask their opinions about it. The ONLY change they suggested was to simplify the title by changing 1 word! But it took me forever to actually feel that it was at a stage where it was okay – not perfect enough for my high standards – but okay enough to put out into the public domain. Well, actually, there's nothing really wrong with it. Sure, a graphic designer could make it look incredible, but it's good. It's filled with really valuable and solid information, and I've had great feedback from coaches who have downloaded it and implemented the actions in it.

Our core beliefs shift when we take action and as we become more experienced

The irony now is that I have 5 lead magnets ready to pull out when they are appropriate. They are all brand-aligned and work that I'm proud of. Our core beliefs SHIFT with our experience. Creating a lead magnet (what the hell IS that?) was a deep-dive into the world of online marketing for me this time last year. I had only just created my first successful one and started using it to build my connections.
A year later, it’s easy. I’ve got templates and I can just plug and play and respond to what my clients need at any one time to support them. Our core beliefs about ourselves shift, if we allow ourselves to take action. We hold ourselves back and procrastinate, which is actually our brain’s way of protecting us from something that it perceives as a threat of some sort. But we are simply our own worst enemy and there is that wonderful question about how you will feel a year from now if you start to follow your dream TODAY! There is a popular quote: “Done is better than perfect.” I looked it up and it’s attributed to Sheryl Sandberg from her book, Lean In (apologies to Sheryl as I attributed it to Jenna Kutcher in the video below this article). It is so true: there is very little that we cannot go back and fix or improve – especially in the online world. One of the reasons why “done is better than perfect” is that you can put something out into the public domain and test it – get feedback – and refine and improve it based on the feedback you get. We all know that business is tough at the moment. So many coaches tell me that they are putting out proposals to clients and are being rejected out-right or being told that the client doesn’t have the budget or capacity right now. That doesn’t say ANYTHING about the coach, the quality of their work or what they can do for their potential clients. It simply says that the time is not right. Times are tough for everyone. It doesn’t mean that the client doesn’t like the coach. Build the “know, like, trust” factor What we need to keep in mind is that this is a phase where we need to stick to our guns and to persevere. This is the season for building the “know, like, trust” factor with our clients – to support them in any way that we can, to keep the lines of communication open and to be patient. Remember that it normally takes somewhere between 5 – 12 contacts for a successful transaction to take place. Actually, I heard someone say that this number has increased to about 30 points of contact because of how things have changed in the world. Building resilience and taking action We all need to be resilient and to build coping strategies to survive and thrive as our world changes, because who knows what kind of a world we will end up with? Life is NOT fair or easy right now. It’s tough for most of us, and it’s going to continue to be tough for a while. If is YOUR job to believe in yourself and what you do – that it is good enough. It is also YOUR job to learn to like yourself and not to depend on other people or outside influences for validation. Finally, it is YOUR job to keep calm and carry on, and to realise that life doesn’t feel fair right now but it will change – nothing is permanent. Journaling assignment for personal insights and growth I’d like to encourage you to journal on these core beliefs – because they’re powerful things and have major impact on what we believe we are capable of – and to explore how they might be sabotaging you in terms of achieving your dreams. I also think that these core beliefs cycle. I think that different ones dominate at different times and that sometimes we struggle with feeling the need to be perfect, at other times we struggle with feeling that life has dealt us an unfair deal and at other times, we feel rejection when it wasn’t actually intended. Think about it and be realistic and proactive rather than allowing untrue beliefs or assumptions to paralyse you. 
You’ve heard of therapy and counselling, but how does counselling promote positive mental health? Spoiler Alert! You don’t need to have a mental illness or mental health concerns to benefit from counselling. If you’ve been struggling with your mental health and are looking for a way to change your life, you could try different types of therapy – counselling as one of them. For many people, the biggest issue around counselling is the word itself. In fact, the National Alliance on Mental Health found that it takes an average of 11 years for someone with mental health condition symptoms to receive treatment Finding the best solution for your mental health is a journey. It’s important to understand that therapy is simply a process of working with your therapist to help you identify and address the issues that are negatively affecting your life. Let’s dive into how counselling could be a great way for you to stimulate positive mental health in your life. Why Is Counselling Important To Mental Health? Counselling is important to mental health because it helps people develop coping mechanisms and build resilience. With your busy daily life, it can be hard to find time for yourself. That’s why it’s important to take a step back and ask yourself how to treat your mind and body better. If you are struggling with anxiety, depression, or other mental health conditions , it’s important to get an outside perspective on things. A licensed therapist can help you figure out how to clear up the issues that are holding you back. How does that happen? Well, the process of counselling can help you to gain a better understanding of your emotions, and of the events in your life that lead to you having those emotions. By having a safe place to be vulnerable and develop solutions with a trained mental health professional, you could figure out the best way to cope with and work with your mental health situation. Does Counseling Help With Mental Health? Yes! Counselling is one of the best ways to promote positive mental health because it can help you face your challenges head on and overcome them. Many people have reservations when it comes to counselling, especially in seeking help for the first time, as they don’t feel comfortable with the idea of speaking to a stranger about their problems. That’s why it can help knowing that counselling is a process of exploring our thoughts and feelings, which can be a great tool in overcoming any mental health issues that negatively impact our lives. What Is A Mental Health Counselor? A mental health counselor is a counselor with special training in mental and emotional health. Some mental health counselors are trained to provide mental health treatments and different types of therapies such as behavioral therapy, couples therapy, and even mindfulness-based cognitive therapy. Licensed mental health counselors could help you with mental health conditions such as depression, obsessive-compulsive disorder, and post-traumatic stress disorder. They could also help you through difficult life events that are not tied to a mental health condition. The main downside of mental health counseling is how it can be pretty expensive. There are low-cost options for mental health treatment! I suggest this helpful guide to low-cost mental health treatment by the Anxiety & Depression Association of America. How Important Is Counselling In Today’s Life? Counselling is important to the mental health of hundreds of millions of people around the world. 
Even if your biggest struggle is just trying to overcome the stress of adopting a puppy, counselling could be VERY beneficial to your mental resilience at this stage in your life. That's right – you don't necessarily need to be struggling with a mental illness or a mental health concern to benefit from mental health services like counselling.

Let me explain: counselling essentially allows you to confide in a counselor and to discuss any issues you may be facing – mental health related or not. Counselling also allows you to reflect on your lifestyle and to work towards any changes that you desire. So, whether you're looking to maintain your lifestyle, improve it, or just talk to someone about what's going on in your life, you could very much benefit from counselling!

Why Is Mental Health So Important?

Mental health is important because it is a large factor in your overall health. If you start to develop poor mental health habits and symptoms, you might find that it's harder to deal with physical ailments and harder to recover from physical illness and disease. On top of its effect on your physical health, your mental health is important because the way you feel and act can affect your quality of life and the lives of those around you. We all have good days and bad days, but poor mental health can make these days even more difficult to handle. Although you might not realise it, mental health affects us all in some way, whether it's experiencing a low or struggling with panic attacks.

How Can Counselling Promote Positive Mental Health?

Counselling can help promote positive mental health by helping you identify issues that are affecting your mental health and working towards a solution. It can be a constructive way to talk about your life experiences, struggles, and goals. Do you struggle with panic disorders? Current or past relationship issues? Do you think you might need family therapy? Well, there's a positive way to work through these concerns – counselling! In the end, the health of your mind affects that of your body, and vice versa.

Going into counselling, it can be encouraging to know that your mental health matters and that your mental illness can be treated. It's not just a case of 'pulling yourself together' or 'getting over it'; with the right support, you can work toward positive mental health. Counselling is a confidential, non-judgemental three-way conversation between you, your mental health counselor and your inner self. It offers a safe, non-threatening environment where you can explore your feelings, thoughts, worries and fears in a supportive, trusting space.

What Are Common Mental Health Problems In Counselling?

Some common mental health issues discussed in counselling are stress, anxiety, depression, eating disorders, and substance abuse. It's important to remember that mental health issues can, in most cases, be traced back to early adolescence, so it's great when you take the step to find solutions to these underlying problems with a trained counselor. If you are struggling with mental health issues (whether you know "why" or not), you may think that you have tried everything. That's why many people with mental health problems never seek professional help. But it's at this point where counselling could promote positive mental health the most in your life.

It can be very difficult to know where to begin when it comes to finding a mental health therapist. And it's also easy to feel overwhelmed by all the different types of therapy that are available.
This is because there are many different kinds of mental health services, as well as different approaches and professionals within each type of service. The important thing is to find a mental health counselor, therapist, or social worker who you feel comfortable with and validated by. This is so that you feel safe being vulnerable around them in a way that lets both of you address and figure out how to cope with your mental health issues.

What Are 3 Factors That Contribute To Positive Mental Health?

There are many factors that contribute to positive mental health – definitely more than 3! Positive mental health can be shaped by:
- building and maintaining fulfilling relationships
- enjoying healthy physical and mental habits
- having a reliable and safe social support system
- practicing mindfulness in various situations
- maintaining regular sleep patterns
- establishing healthy boundaries
- making time to do (and discover new) things that you enjoy
- and many more things!

The first step to developing positive mental health is to be honest about how the symptoms you are experiencing are negatively and positively impacting your life. (Make a quick list!) Anxiety, depression, trauma, and stress are all a part of this equation and should be considered when you are thinking about your mental health.

The second step to developing positive mental health is to identify your stressors and the methods that work best to manage them. While one person may be good at talking through their problems, another person may need to physically burn off their stress through running, or talk through their problems with a dog, a friend, or a counseling session. Finding the factors that are most impactful on your life is a process that you need to figure out for yourself. Is it hard? Maybe. It can definitely be a lot of work and quite time-consuming. But is it worth it? Yes. As someone who has gone through so many types of therapies and 9 therapists, I can say that, even though the process was SO exhausting, it was definitely worth trying to figure out which factors contribute to my positive mental health the most. It really made things easier moving forward. So, if you are still asking yourself, "How does counselling promote positive mental health for me?", going through this process could make it even clearer for you.

So What? How Does Counselling Promote Positive Mental Health?

In 2017, only 42.6% of people who knew they had symptoms of mental health issues accessed mental health services. When you're feeling down, it can seem like there's no way out. You might feel like you're drowning in problems, or struggling with overwhelming emotions. But while you might feel like you'll never feel happy again, counselling can help you improve your mental health and get back on track. Counselling helps you deal with feelings of sadness, anger, or other difficult emotions in a safe and confidential space. It promotes positive mental health by giving you the opportunity to learn new ways of coping with problems and managing stressful situations.

Want to start building positive mental health habits? Find some in-depth lists of various activities and strategies here.
Why Most People Are Deficient in Minerals

Deficient food, chronic disease are leaving us malnourished.

It's estimated that 1 in 3 Americans is deficient in at least 10 minerals, including potassium, manganese, magnesium, and zinc, putting them at risk of chronic diseases such as heart disease and diabetes. "The Mineral Fix," written by James DiNicolantonio and Siim Land, author of "Metabolic Autophagy," provides a comprehensive guide about the role of essential minerals and why you need them to optimize physiological function and survival.

There are 17 essential minerals, broken down into seven macrominerals and 10 trace minerals. There are another five minerals that are possibly essential. The primary role of minerals is to act as cofactors for enzymes, but that's just the bare minimum. "They literally are the shields for oxidative stress," DiNicolantonio said, "because they make up our antioxidant enzymes. They help us produce and activate adenosine triphosphate (ATP), help us produce DNA, protein, so literally every function in the body is dependent, in some way, on minerals."

Minerals' role in the creation of ATP alone is a clue to their importance. As the energy currency in your body, ATP is essential for cellular functions throughout your body, including in your heart, which is dependent on sufficient amounts of ATP to function properly. DiNicolantonio believes that not getting enough minerals in your diet can be just as damaging as eating an unhealthy diet focused on sugar and seed oils.

3 Reasons Why You Might Be Deficient in Minerals

About a third of the U.S. population is likely deficient in the 10 minerals below (estimated percent not hitting RDA/AI or estimated percent deficient):

1. Boron (> 75 percent)
2. Manganese (~ 75 percent)
3. Magnesium (52.2–68 percent)
4. Chromium (56 percent)
5. Calcium (44.1–73 percent)
6. Zinc (42–47 percent)
7. Iron (25–34 percent)
8. Copper (25–31 percent)
9. Selenium (15–40 percent)
10. Molybdenum (15 percent)

For instance, gastrointestinal damage can decrease the amount of minerals you absorb; living in a state of significant inflammation taxes your system and will increase the burn rate of minerals. Kidney damage increases the excretion of minerals, while high insulin levels will cause minerals to be excreted in your urine. "Those three key factors are why so many of us are depleted in so many minerals," DiNicolantonio said.

Minerals for Antioxidant Defense and Immunity

You may associate antioxidants with vitamins, such as vitamins C and E, but minerals were the first antioxidants in living organisms. DiNicolantonio uses the example of blue-green algae that lived billions of years ago, producing oxygen and creating an abundance of oxidative stress. They utilized selenium and iodine as antioxidants. In humans, we use these similarly.

Iodine is an essential mineral that helps prevent polyunsaturated fats from oxidizing, provides your thyroid with the necessary nutrients to produce thyroid hormones, and is a natural antibacterial agent. Thyroid hormones are essential for normal growth and development in children, neurological development in babies before birth and in the first year of life, and in regulating your metabolism. However, DiNicolantonio states that thyroid hormones also act as antioxidants, with effects 100 times stronger than vitamin C, vitamin E, and glutathione.
You need minerals, including iodine and selenium, to form your thyroid hormones, and your levels of powerful antioxidants such as glutathione are directly dependent on your selenium and magnesium status.

There's also superoxide anion, a free radical that causes many types of cell damage. It's the product of a one-electron reduction of oxygen, and it is the precursor of most reactive oxygen species and a mediator in oxidative chain reactions. These oxygen free radicals attack the lipids in your cell membranes, protein receptors, enzymes, and DNA, and can prematurely kill your mitochondria. Superoxide dismutase neutralizes superoxide anion, rendering it harmless. But superoxide dismutase depends on copper and zinc. DiNicolantonio explained:

"If you're low in copper and zinc, you can't neutralize the superoxide. It combines with nitric oxide, reducing your nitric oxide levels, increasing blood pressure, leading to atherosclerosis and heart disease, and then you form the toxic peroxynitrite. So it goes to show you how just having a low mineral status can lead to high inflammation."

RDAs Are Inadequate

"The recommended dietary allowance (RDA) for many minerals may be inadequate to protect your health and won't help you reach the levels needed to optimize your antioxidant defenses," DiNicolantonio said, calling this issue the crux of the book. RDAs are based on studies to make sure you're not deficient, but this level isn't the same as the one that will give you optimal health. In the case of enzymes dependent on vitamin C, for example, you need to consume 120 milligrams (mg) to 150 mg of vitamin C to make sure those enzymes are highly optimized, which is far more than the 6 mg to 8 mg of vitamin C needed to prevent scurvy. "You can have up to a 1,000-fold difference between preventing deficiency and optimal intake," according to DiNicolantonio.

Magnesium is another example, but with smaller differences between deficiency and optimal levels. You only need about 150 mg to 180 mg a day to prevent deficiency, but optimal levels are closer to 600 mg/day. For comparison, the RDA for magnesium is around 310 mg to 420 mg per day depending on your age and sex. But like DiNicolantonio, many experts believe you may need around 600 mg to 900 mg per day. As noted in a study DiNicolantonio worked on, published in 2018 in Open Heart:

"Investigations of the macro- and micro-nutrient supply in Paleolithic nutrition of the former hunter/gatherer societies showed a magnesium uptake with the usual diet of about 600 mg magnesium/day. … This means our metabolism is best adapted to a high magnesium intake. … In developed countries, the average intake of magnesium is slightly over 4 mg/kg/day. … The average intake of magnesium in the U.S. is around 228 mg/day in women and 266 mg/day in men."

Another important point that DiNicolantonio makes is that simply increasing your mineral intake by taking supplements may not be enough, because you need to be insulin-sensitive in order to utilize the minerals properly. If you're insulin-resistant, you can't drive the minerals into your cells to work well, and you'll be eliminating the minerals in urine as well. "So really the first step," he says, "is to eliminate the harmful substances that are causing you to be insulin-resistant in the first place. That's automatically going to boost your mineral status, because you're going to be able to utilize those minerals better."
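To make those magnesium numbers concrete, here is a small, hypothetical Python sketch (not from the book) that compares a body-weight-based estimate of roughly 4 mg/kg/day with the ~600 mg/day level discussed above; the function name and example values are purely illustrative.

```python
def magnesium_gap(body_weight_kg: float, daily_intake_mg: float) -> dict:
    """Compare a reported magnesium intake against two reference points
    mentioned in the article: ~4 mg/kg/day (typical developed-country
    supply) and ~600 mg/day (the 'optimal' level discussed above)."""
    typical_supply_mg = 4 * body_weight_kg   # rough population-level figure
    optimal_target_mg = 600                  # per the article's discussion
    return {
        "typical_supply_mg": typical_supply_mg,
        "optimal_target_mg": optimal_target_mg,
        "shortfall_vs_optimal_mg": max(0, optimal_target_mg - daily_intake_mg),
    }

# Example: the article cites an average US intake of ~228 mg/day for women.
print(magnesium_gap(body_weight_kg=70, daily_intake_mg=228))
# -> shortfall of roughly 372 mg/day versus the ~600 mg/day target
```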
Top Food Sources of Minerals

The best way to increase your mineral intake is via healthy foods. For copper and iron, for instance, DiNicolantonio recommends pairing muscle meat with liver, or eating oysters, which are also high in zinc. A lot of oysters may be contaminated with cadmium, however, so they should be eaten in moderation depending on where they're sourced. One of the foods with the highest overall mineral content is mussels. They're high in manganese, chromium, and copper, which are minerals many people are deficient in.

Liver is another nutrient-dense food that's rich in minerals, but it's possible to overdo it. According to DiNicolantonio, in terms of mineral consumption, one-half to one ounce of liver per day is the ideal amount, which will give you vitamin A, folate, and copper. He recommends pairing this with about 10 to 12 ounces of pastured red meat per day for the vitamin B12, protein, zinc, and iron it provides. If you don't like the taste of liver, try a blend of meat made with pastured liver, heart, and muscle meat. You can add in more pastured ground beef to make it more palatable and still reap the rich mineral benefits.

Women need more than twice the amount of iron as men, so animal-based sources of iron, which are 10 times more bioavailable than plant sources, are important. For those who don't eat meat, combining vitamin C with beans, spinach and other iron-rich greens may help make the iron more bioavailable.

Benefits of Mineral Waters

It's important to balance animal foods with alkaline minerals such as potassium and magnesium from plant foods or mineral waters, which will balance out the acid and help protect your kidneys. Mineral waters that contain bicarbonate can help with this acid-base balance while providing an additional source of minerals such as calcium and magnesium. Drinking mineral water with a meal is also beneficial and can increase mineral absorption while lowering postprandial blood sugar.

"It's also useful to sip mineral water throughout the day," DiNicolantonio said, citing a study that found consuming seven ounces of mineral water seven times a day increases magnesium absorption and retention by 40 percent, versus consuming larger amounts twice a day. "It's that slow infusion, which mimics more of an evolutionary intake—we would have just drank water throughout the day and it would have been natural. It wouldn't be these artificially softened waters, it would be natural waters that contain bicarbonate, that contain magnesium, that contain calcium. So it's something that I do."

Less-Known Minerals That You May Be Missing

Minerals such as boron often get overlooked, yet they're extremely important for wellbeing and health. Boron, consumed at levels of about 3 mg daily, is beneficial for bone health and testosterone, but it's thought that most Americans only consume about 1 mg. The highest concentrations of boron are found in bones and tooth enamel and, according to the Natural Medicine Journal, it "appears to be indispensable for healthy bone function," as it reduces the excretion of calcium, magnesium, and phosphorus. There may also be other, as yet poorly understood, mechanisms by which it benefits bone-building and other aspects of health. The optimal dosage is unknown, but you can get significant amounts of this trace mineral by eating small amounts of raisins, peaches, prunes, dates, black currants, and avocados. A trace mineral supplement can also be helpful in optimizing your levels of "overlooked" minerals such as boron, chromium, and molybdenum.
Chromium, which has been linked to improved blood sugar levels, can be found in mussels, lobster, crab, and shrimp, as well as broccoli in smaller amounts. Further, chromium is lost in sweat, so if you sweat a lot due to living in a hot climate, sauna usage, or exercise, a chromium supplement may be necessary, especially if you don't regularly consume chromium-rich foods. Copper is another mineral lost through sweat, and since most people don't consume much copper, it's possible to lose more copper than you've taken in if you sweat heavily for about an hour a day.

Molybdenum is another often-overlooked mineral; it is an essential catalyst for enzymes that help metabolize fats and carbohydrates and facilitate the breakdown of certain amino acids in your body. The best dietary source of molybdenum, according to DiNicolantonio, is liver.

If you want to learn more, or are concerned that you're not getting enough minerals, "The Mineral Fix" goes into much more depth about the role of the 17 essential minerals your body needs, including optimal intake levels, symptoms of deficiency, how to test your mineral levels, and the best food sources.

Dr. Joseph Mercola is the founder of Mercola.com. An osteopathic physician, best-selling author, and recipient of multiple awards in the field of natural health, his primary vision is to change the modern health paradigm by providing people with a valuable resource to help them take control of their health. This article was originally published on Mercola.com.
Information might be in the form of a physical object or an electronic file. Information can be anything from your personal details to your social media profile, cell phone data, biometrics, and so on. As a result, information security encompasses a wide range of academic topics, including cryptography, mobile computing, cyber forensics, and online social media, among others.

We will cover the following:
- What is Information Security?
- Principles of Information Security
- Types of Information Security
- Technologies for Information Security

What is Information Security?

The practice, policies, and principles used to protect digital data and other types of information are referred to as Information Security, or InfoSec. One of InfoSec's roles is to establish a set of business procedures to safeguard information assets, regardless of how that information is represented or whether it is in transit, being processed, or being stored at rest. Infrastructure and network security, auditing, and testing are all included under the umbrella of InfoSec.

Unauthorized users are prevented from accessing confidential information using methods such as authentication and permissions. These safeguards help you avoid the risks of data theft, modification, or loss. In a nutshell, information security is how you ensure that your employees have access to the data they require while preventing others from doing so. It's also linked to risk management and regulatory requirements.

Principles of Information Security

The overall purpose of information security is to keep the bad actors out while letting the good guys in. Confidentiality, integrity, and availability are the three main tenets that underpin this. These three pillars, or principles, of information security are known as the CIA triad.

Confidentiality means that information is not shared with unauthorized people, organizations, or processes. For example, let's imagine I had a password for my Gmail account that was spotted by someone while I was attempting to log in. In that situation, my password has been stolen and my privacy has been violated.

Integrity entails ensuring data accuracy and completeness. This means that data cannot be altered without permission. If an employee quits an organization, for example, the data for that employee in all departments, such as accounts, should be updated to reflect the individual's status as JOB LEFT so that the data is complete and accurate, and only authorized people should be permitted to alter employee data.

Availability implies that information must be accessible whenever it is needed. For example, if you need to access information on a specific employee to see if they've exceeded their leave limit, you'll need the help of various organizational teams like development operations, incident response, network operations, and policy/change management.

These three principles are not mutually exclusive; they inform and influence one another. As a result, any information security system will require a balance of these factors. Information solely available as a written piece of paper housed in a vault, for example, is confidential but not immediately accessible. Information carved into stone in the lobby has a high level of integrity and availability, but it is not confidential.
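As a small illustration of the integrity principle above, here is a minimal Python sketch (not part of the original glossary) that detects unauthorized modification by comparing SHA-256 digests. The file name and stored digest are hypothetical assumptions; a real system would also protect the reference digest itself.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical digest recorded the last time the record was changed by an
# authorized process.
KNOWN_GOOD_DIGEST = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

if sha256_of_file("employee_record.csv") != KNOWN_GOOD_DIGEST:
    print("Integrity check failed: the record changed outside an authorized process.")
```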
Types of Information Security

When it comes to information security, there are numerous kinds to be aware of. Specific forms of information, technologies for protecting information, and domains where information has to be protected are all covered by these subtypes.

- Application Security
Applications and application programming interfaces (APIs) are protected by application security solutions. These techniques can be used to avoid, detect, and fix bugs and other vulnerabilities in your applications. If your application and API vulnerabilities aren't patched, they can provide a backdoor into your broader systems, putting your data in danger. Specialized tools for application shielding, scanning, and testing make up a large part of application security.

- Cloud Security
Cloud security protects cloud or cloud-connected components and information in the same way as application and infrastructure security does. Cloud security focuses on the risks that arise from Internet-facing services and shared environments, such as public clouds, by providing additional protections and solutions. A focus on centralizing security administration and tooling is also common. Security teams can maintain visibility of information and threats across distributed resources thanks to this centralization.

- Cryptography
Cryptography protects data by disguising its contents through the use of encryption (a minimal code sketch follows this list). When data is encrypted, only users with the relevant encryption key have access to it. The information is unintelligible to users who do not have this key. Security teams can use encryption to protect the confidentiality and integrity of data throughout its life cycle, including during storage and transit. Once a user decrypts the data, however, it becomes vulnerable to theft, disclosure, and manipulation.

- Disaster Recovery
Unexpected events might cause your company to lose money or suffer damage, so disaster recovery plans are essential. Ransomware, natural disasters, and single points of failure are just a few examples. The recovery of information, the restoration of systems, and the resumption of operations are all part of most disaster recovery plans. These tactics are frequently included in a business continuity management (BCM) plan, which is designed to help organizations sustain operations with the least amount of downtime possible.

- Incident Response
A combination of protocols and techniques for identifying, investigating, and responding to threats or destructive occurrences is known as incident response. It prevents or minimizes system damage caused by attacks, natural disasters, system failures, or human mistakes. Any harm to information, such as loss or theft, is included in this damage. An incident response plan (IRP) is a commonly used tool for incident response.

- Infrastructure Security
Networks, servers, client devices, mobile devices, and data centers are among the infrastructure components that are protected by infrastructure security techniques. Without sufficient protection, the increased interconnectedness between these and other infrastructure components puts information at risk.

- Vulnerability Management
Vulnerability management is a technique for lowering an application's or system's inherent risks. The goal of this method is to find and fix vulnerabilities before they are exposed or exploited. Your information and resources will be more secure if a component or system has fewer vulnerabilities. To detect flaws, vulnerability management approaches rely on testing, auditing, and scanning.
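As a concrete illustration of the cryptography subtype described above, here is a minimal sketch using the third-party Python `cryptography` package's Fernet interface (symmetric, authenticated encryption). It is an illustrative example rather than part of the original glossary, and a real deployment would need proper key management and rotation.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate and keep this key secret; losing it makes the data unrecoverable.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt data at rest: without the key, the token is unintelligible.
token = fernet.encrypt(b"salary=85000;ssn=xxx-xx-xxxx")

# Only holders of the key can recover (and thus read or alter) the plaintext.
plaintext = fernet.decrypt(token)
assert plaintext == b"salary=85000;ssn=xxx-xx-xxxx"
```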
Technologies for Information Security

Adopting a mix of techniques and technologies is required to develop an effective information security strategy. The following technologies are used in the majority of strategies.

- Blockchain Cybersecurity
Blockchain is a type of cybersecurity that is based on immutable transactional events. Distributed networks of users check the legitimacy of transactions and ensure that their integrity is preserved in blockchain technologies. While these technologies are still in their infancy, several businesses are beginning to incorporate them into their products.

- Cloud Security Posture Management (CSPM)
CSPM is a set of methods and techniques that you can use to assess the security of your cloud resources. These tools let you scan configurations, compare protections to benchmarks, and make sure security policies are applied consistently. CSPM solutions frequently include remediation advice or guidelines that you can use to improve your security posture.

- Data Loss Prevention (DLP)
Tools and techniques that protect data from loss or modification are included in DLP strategies. This includes categorizing data, backing it up, and keeping track of how it is transferred within and outside the company. You can use DLP systems to check outgoing emails, for example, to see if sensitive information is being shared inappropriately.

- Endpoint Detection and Response (EDR)
Endpoint activity can be monitored, suspicious activity can be identified, and threats can be automatically responded to with EDR cybersecurity solutions. These solutions are designed to improve endpoint device visibility and can be used to keep threats out of your network and your information out of attackers' hands. Continuous endpoint data collection, detection engines, and event logging are all used in EDR solutions.

- Firewalls
Firewalls are an additional layer of security that can be applied to networks or applications. You can use these tools to filter traffic and report data to traffic monitoring and detection systems. Firewalls frequently use pre-defined lists of approved and unapproved traffic, as well as rules that determine the rate and volume of traffic that is permitted.

- Intrusion Detection System (IDS)
IDS solutions are tools for monitoring and detecting threats in incoming traffic. These technologies analyze communications and send out alerts if anything looks suspicious or dangerous.

- Intrusion Prevention System (IPS)
IDS and IPS security solutions are comparable, and the two are frequently used together. These solutions respond to suspicious or malicious traffic by blocking requests or terminating user sessions. IPS solutions can be used to regulate network traffic according to security policies.

- Security Information and Event Management (SIEM)
SIEM solutions allow you to collect and correlate data from a variety of sources. This data aggregation allows teams to more efficiently spot threats, manage alerts, and provide better context for investigations. SIEM solutions are also useful for logging system events and reporting on performance and events. This data can then be used to demonstrate compliance or optimize configurations.

- User Behavioral Analytics (UBA)
UBA solutions collect data on user activities and correlate it to create a baseline. The baseline is then compared against new behaviors to find inconsistencies, and these inconsistencies are flagged as potential threats. UBA systems, for example, can be used to monitor user activities and detect when a person begins exporting huge volumes of data, signaling an insider threat (see the sketch below).
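To illustrate the UBA idea above, here is a small, self-contained Python sketch; it is an illustrative assumption, not a description of any particular UBA product. It builds a per-user baseline of daily data-export volumes and flags a day that deviates sharply from it.

```python
from statistics import mean, stdev

def is_anomalous(baseline_mb: list[float], todays_export_mb: float,
                 threshold_sigmas: float = 3.0) -> bool:
    """Compare today's export volume against a baseline built from past days.

    Returns True when today's volume sits more than `threshold_sigmas`
    standard deviations above the historical mean."""
    mu = mean(baseline_mb)
    sigma = stdev(baseline_mb)
    if sigma == 0:
        return todays_export_mb > mu
    return (todays_export_mb - mu) / sigma > threshold_sigmas

# Hypothetical history: a user who normally exports ~100 MB per day.
history = [95, 102, 98, 110, 97, 101, 99]
print(is_anomalous(history, 104))   # False – within the normal range
print(is_anomalous(history, 5000))  # True – a possible insider-threat signal
```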
Information security is implemented by businesses for a variety of reasons. The confidentiality, integrity, and availability of company data are usually the key aims of InfoSec. Since InfoSec is so broad, it frequently necessitates the implementation of multiple forms of security, such as application security, infrastructure security, encryption, incident response, vulnerability management, and disaster recovery.
Annealing is a heat treating process that softens steel. This can make it easier to form or machine. It's especially useful if you need to cut something that's been welded up, like when you need to repair stripped threads on a shaft.

Metal is made up of a crystalline structure which directly relates to its mechanical properties. If you can modify the structure, you can adjust its hardness, malleability, toughness, tensile strength, and a whole slew of other things. So how do you anneal steel?

To anneal steel, heat it up about 100 degrees F above its critical temperature, soak it at that temp for 1 hour per inch of thickness, and let it cool at a maximum rate of 70 F per hour.

Ok, that's the short answer. Let's go over how to do this in real life, depending on the tools you have access to, along with a few tips and tricks to help you get it (mostly) right the first time.

How to Anneal Steel

In order to anneal steel, you're going to need a way of heating the metal until it's bright red, holding it at that temperature for a while, and then letting it cool very slowly. There are two main approaches to this: using a torch, forging furnace, or other non-regulated source of heat, or using a programmable heat treating oven.

Using a Heat Treating Oven

Pros:
- Most controlled process, most consistent results
- Best way to fully anneal steel, right to the core
- If the oven is programmable, you can set it and walk away
- Really effective for parts with variable thicknesses

Cons:
- Can be unnecessarily time consuming for small parts, or if a full anneal isn't important
- Heat treating ovens aren't readily accessible to a lot of people

To execute this properly, it's best to know the exact grade of steel you're working with. If you bought the steel from a supplier, check with them for the recommended annealing temperature. To be honest, it doesn't really vary all that much – typically you'll be annealing in the range of 1450-1650 F or so, but it's still ideal to get an exact temperature to fully anneal the metal. If you really have no clue what the steel is, I usually just start at 1500 F and try again at 1550 F if it doesn't work as planned (repeat in increments of 50 as needed). Not the most effective method by a long shot, but it usually works. It's ok to go a little too hot as long as you don't melt the steel.

Once the oven is up to temperature, you're going to need to let the metal "soak" – this means just holding it at that temperature. What this does is allow the metal to get hot enough inside, so it'll be fully annealed the whole way through. A rule of thumb for this is to soak the metal for one hour for every inch of thickness. If you're working with a really inconsistently shaped piece of steel that's thicker in some sections than others, just go with the thickest section. So if the part is a shaft that's 4″ diameter on the thick end and 2″ diameter on the small end, let it soak for 4 hours.

The nice thing about using heat treating ovens is that aside from having a really exact temperature, slow cooling is very easy. Just turn off the oven and keep the door closed. The fire bricks will hold the heat long enough to really control the cool down. Alternatively, some ovens will let you program the cooldown rate. In that case, set it to 70 F per hour. You can pull the part out before it's fully cool – it's fine if it's still a couple hundred degrees. I find that usually if I program the oven in the afternoon and start the cycle, the part will be ready to pull out in the morning.
Unless it’s a massive 8″ thick block, that is – it would take 8 hours just to soak it! Once it’s cool enough to touch, test it with your preferred method for checking hardness to make sure that the process worked as planned. Using a Torch - Really quick for smaller parts, like wires or clips - A torch is generally more accessible to most people - Once you have an eye for steel colors at high temperatures, you don’t necessarily need to know the exact grade of steel - Trickier to get a full anneal, achieving maximum malleability - Takes more skill - Time consuming for larger parts - Very challenging for parts with variable thicknesses This, in my opinion, is the runner-up in terms of annealing processes. If you can use an oven, you’ll pretty well always get better results with that instead of a torch. That said, using a torch will work just fine most of the time. Here’s the process, with a few tips to make success more likely: Especially if you’re working on larger pieces (like 1″ or thicker) try using a rosebud tip on an oxyfuel system. You’ll have an easier time heating up the metal consistently, without overheating certain sections. Keep the flame away from any small, thin sections of the part. These will be really easy to get too hot and melt. If there are variable thicknesses, try putting the flame on the thicker part and allow the heat to work its way to the thinner sections. Get the part a nice and orange-red. If there’s one thing to memorize from this, this is it: cherry red is for heat treating, orange-red is for annealing. If you’re not sure about the color of steel at various temperatures, I made this downloadable resource: It also includes the colors at lower temperatures, which are usually used for tempering. Print it out and tape it on to your toolbox. Keep in mind, though, that depending on your printer ink, monitor display, and the grade of steel, it may not match perfectly to the actual temp of the hot metal. It ain’t perfect, but it’s a decent guide to get started. Another tip: Try to avoid annealing in direct sunlight. It’ll make it really hard to judge the color of the steel, so you could easily end up overcooking it. Do it inside a shop or garage if you can. One more way of checking that the steel is hot enough is to check it with a magnet. Steel loses its magnetism once it’s at its “critical temperature”. So go smash open an old TV or microwave forone of those big, chunky magnets in the name of good workmanship! Once it loses its magnetism, let it continue to brighten up just a little bit, since annealing needs to be done about 100 F above the critical temperature. Heat up the metal nice and steadily, and give it enough time to get hot in the center, too. Once it’s that beautiful orange-red, now comes the tricky part: slowing the cool. Slow Cooling Options Air cooling is too fast for annealing, so you’ll need to help the part to retain its heat once the torch is off. Here are a few ways of doing this: Dry Sand or Vermiculite This can be an effective way of keeping the part warmer for longer. Vermiculite is something that’s added to soil to make plants happy, and it’s also a great insulator. Sand is great for retaining heat, too. One thing worth noting is that it needs to be pretty pure stuff, you don’t want any roots or mud in the mix if at all possible. Construction or play sand work well. Do not use sand or vermiculite that is moist. Moisture + red hot glowing metal = undesireable results. 
Basically, it just won’t retain the heat, the part will cool down too quickly, and you’ll have to redo the annealing. There are also stories floating around the internet about things exploding when there’s moisture. I think that this is more of a problem with larger stones/bricks, which can crack and explode when the moisture turns to steam, but it’s best to err on the side of caution and avoid explosions when possible. It’s best to just totally bury the metal to really insulate it. If you’re doing something the size of a knife, then let it sit in a 5 gallon pail of the stuff. It’s cheap and reusable so don’t be stingy. If you’re looking for vermiculite, you can pick it up on Amazon fairly cheap, or you can check around at local home/garden stores. This is convenient since there’s less of a potential for making a mess, and you can roll it up and put it back on the shelf very easily. There are a few different kinds that work perfectly fine. You can get blankets for chimneys and wood stoves that are really effective. Another good option is to pick up a roll of ceramic fiber insulation, which will usually be pretty easy on the budget and will last you a while. Trick for Cooling Small Parts Some parts are small enough to be next to impossible to slow cool unless they’re in an oven. Here’s one way around that: Heat up a larger block of metal or two along with the small part that you’re annealing. When you put it in the insulation, put the larger hot block(s) in contact with the small piece. That will keep it hot long enough to get a nice, slow cool for annealing. It’s a solid way of making steel take many hours to cool. The ideal cooldown rate for annealing steel is about 70 F per hour, down to about 500 F. In other words, a piece of steel that’s cooling from 1500 F to 500 F should ideally take about 14 hours. Actual ideal times will vary by grade of steel, but that’s a decent rule of thumb. Lots of guys like to let it take 24 hours, but personally I find that to be a bit unnecessary unless it’s a special grade of steel. What Steels Can Be Annealed Generally speaking, it’s tool steels that are most commonly annealed. You’ll need to soften the steel to be able to cut or bend it. Alloy steels can also be worthwhile to anneal, but this is where you should get to know your grades. Depending on the alloy, the annealing temperatures might vary a lot more than you’re expecting. Anything that can be hardened can be annealed. You won’t see much of a change in something that’s really low carbon, like 1018 mild steel. In something like a 4140, though, the results can be very noticeable. How to Tell What Material You’re Working With This is the tricky part. Ideally, you bought the metal from a supplier, and they can tell you the exact grade and heat treating temperatures. In real life, though, this isn’t always the case. This is where torch annealing really shines. Just heat it up orange-red, slow cool it, and don’t worry about it. Otherwise, it really helps to know what kinds of steel are common for different applications. Google is your friend, too. Just try searching something like “what grade of steel is ____ made from” and see what comes up. Here are a few guidelines for common mystery metals: |Shafts||For light-duty shafts, usually a mild steel is used, which won’t need annealing. Heavier-duty shafts are often made from 4140 steel. Anneal at 1600 F.| |Springs||Leaf springs and coil springs from vehicles are usually made from a 5160 or equivalent steel. Not always, though. 
For 5160, anneal it at 1450 F.| |Rebar||Your guess is as good as mine. Rebar is made from whatever scrap metal is available, and it’s not very consistent, either. You could have one end of the bar that’s dead soft mild steel, and the other end of the same bar that’s fully hard. Just torch anneal it by eye and hope for the best. If you want to know more about rebar, check out this article about what it’s made from.| |Rail Spike/Track||Again, not always the most consistent in terms of composition. Usually, tracks will tend to be more heat treatable than spikes. Fairly often it will be something similar to an A36, which can be annealed at around 1550-1600 F. Check out this article for spikes and this article for tracks to learn more about common compositions.| |Structural Steel (I Beams, C Channel, Etc)||The most common structural steel is A36, although there are variations. This is more consistently used for the heavy-duty stuff, like industrial construction. For the small stuff, it could still be A36, or it could just as likely be something else. Anneal at 1550-1600 F.| What’s the difference between annealing and tempering? Annealing fully softens the metal, making it malleable, whereas tempering simply reduces the brittleness of the metal. Annealing is done at high temperatures, usually at about 1500 F for steels. Tempering is done at low temperatures, typically up to about 500 F. Typically tempering is done after a hardening process to relieve internal stresses and prevent future catastrophic failure. What’s the difference between annealing and normalizing? Annealing is a very slow, controlled cooling process, whereas normalizing is cooled much faster in open air. Normalizing is primarily done to reduce internal stress and make the grain structure more uniform. Normalized steel is usually partially hard, instead of fully soft like annealed steel. Normalizing is also significantly cheaper, since the parts are cooled in open air instead of sitting inside an expensive furnace, slowing down production. Can I anneal other metals, like copper? Copper can be annealed, although the process is slightly different. The temperature for annealing copper is typically 700 F, or a glowing red color. The main difference is that annealing copper doesn’t require slow cooling; actually, a rapid water quench will likely give the best results. Other metals can be annealed based on grade and type. Brass, silver, and certain grades of aluminum can be softened by this process.
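One more thing – if you like sanity-checking numbers with a quick script, here's a tiny Python sketch of two rules of thumb from this article: the annealing temperatures quoted in the table above, and the roughly 70 F per hour slow cool down to about 500 F. The grades and temperatures are the ones from the article; the names and printout are just my own illustration, so treat it as a rough guide rather than gospel.

```python
# Rough helper for the two rules of thumb discussed above.

ANNEAL_TEMPS_F = {
    "4140 (heavy-duty shafts)": 1600,
    "5160 (leaf/coil springs)": 1450,
    "A36 (structural, rail-type)": 1600,  # the article quotes 1550-1600 F
}

def slow_cool_hours(start_f, end_f=500, rate_f_per_hr=70):
    """Approximate time for the insulated slow cool from the annealing temperature."""
    return max(start_f - end_f, 0) / rate_f_per_hr

for grade, temp_f in ANNEAL_TEMPS_F.items():
    print(f"{grade}: soak at {temp_f} F, slow cool for ~{slow_cool_hours(temp_f):.0f} hours")

# Sanity check against the example in the text: 1500 F down to 500 F at
# 70 F per hour works out to a bit over 14 hours.
print(f"1500 F example: ~{slow_cool_hours(1500):.1f} hours")
```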
Everyone wants to know which foods are low in carbs and high in protein. Almost everything in our body needs protein, including our skin, blood, and bones. It is the key to the repair and regeneration of cellular tissue. And because proteins take longer to digest than carbohydrates, protein-rich foods can keep you full longer. But not all protein sources offer the same advantages (looking at you, meat products filled with sodium). The best protein-rich foods also contain fiber, minerals, and other essential nutrients. For smarter meals, load up on eggs, seafood, sugar-free dairy products (such as yogurt), beans, chickpeas, peas, seeds, nuts, poultry, legumes, and lean cuts of beef and pork. "Add a few servings of fruit per day, and your diet will be balanced and will contain fewer carbohydrates than the typical American diet," he says. Choosing the right types of carbohydrates for your high-protein, low-carbohydrate diet is crucial. "If eating low carbohydrates is important to you, make sure you use your carbohydrates wisely and pack in lots of fruits, vegetables, whole grains, nuts, seeds, beans, and low-fat dairy products," the study says. That way, you will still receive a balanced amount of nutrients. And when it comes to what is high in protein and low in carbohydrates, "there is no limit, but it depends on the goal of the consumer. If you follow a diet, you cannot eat more than 20 to 50 grams of carbohydrates per day." So here is the list of foods that are low in carbs and high in protein.

1. Fish

Fish is the first food that is low in carbs and high in protein. Certain types of seafood stand out as protein powerhouses, such as yellowfin tuna, halibut, and tilapia. Try Bahian halibut, a Caribbean dish seasoned with coconut flavors; it offers 48.6 g of protein, 19.5 g of fat, 1.8 g of fiber, 5.1 g of net carbohydrates, and 400 calories per serving. Take a tuna and celery salad for lunch and enjoy a midday meal full of protein; a serving offers 37.9 g of protein.

2. Spinach

Your parents probably advised you to eat your vegetables, and they were right – especially when it comes to spinach. "Anyway, you have to make sure you eat lots of vegetables. They not only provide important vitamins and minerals but also contain fiber that promotes health," says Gorin. "Three cups of spinach contain about 3 grams of carbohydrates, two of which are dietary fiber."

3. Pumpkin Seeds

Pumpkin seeds are full of potassium, magnesium, zinc, and iron. These minerals are key to maintaining energy levels. A 1.5-ounce snack pack delivers a super satisfying combination of about 6 grams of fiber and 7 grams of protein, making the seeds a satisfying, nutrient-dense addition to soups, stews, or baked goods.

4. Tofu

Tofu is an outstanding vegetarian protein to add to an entrée because it quickly absorbs the flavors it is cooked with. Season baked tofu with a spicy blend such as a chipotle marinade or a Moroccan twist. Try a meatless-Monday tofu pad thai dinner, which contains 22.6 g of protein, 28.4 g of fat, 7.6 g of fiber, 13.7 g of net carbohydrates, and 413 calories. It is one of the best foods with low carbs and high protein.

5. Part-Skim Cheese

Just one piece of part-skim mozzarella can add 8 grams of protein, which is the same as an egg! Because dairy products provide calcium, magnesium, and potassium, they also help reduce swelling, keep blood pressure balanced, and help you stay energized throughout the day.
Use about 1/3 cup of cheese if it is the only protein source in the dish (such as a bowl of homemade vegetables); use 1/4 cup if it is there to add flavor (for example, in an omelet).

6. Eggs

The poster child of protein, the egg offers 6 g of complete protein. This means that it contains all the amino acids that people need in their diet. Start the day full of protein with something like scrambled eggs with cheddar cheese, Swiss chard, and Canadian bacon. This main course contains 32.6 g of protein, 27.6 g of fat, 1.2 g of fiber, 3.6 g of net carbohydrates, and 400 calories.

7. Salmon

Salmon is one of the best foods with low carbs and high protein. Because it is super rich in protein and has no carbohydrates, salmon is an excellent addition to your plate, says Gorin. "Three ounces of cooked wild Atlantic salmon yields about 25 grams of protein. It would be best if you strived to eat at least two 3.5-ounce servings of cooked fatty fish such as salmon every week," she says. When you eat the fish, you also receive a dose of the omega-3s EPA and DHA, which will keep your heart healthy.

8. Protein Bars

Look for those that are made from real, whole-food ingredients. Others on the market can contain as much added sugar as a chocolate bar! The bars we love are based on nuts, seeds, protein, or legume flours. For example, RX bars deliver a combination of protein and fiber from dates and other real ingredients such as peanut butter. The mix is a satisfying snack, low in sugar and high in protein.

9. Nuts

Walnuts offer heart-healthy fats and lots of protein. Peanuts, cashews, and almonds, in particular, are good bets for protein-rich snacks. The Atkins chocolate peanut butter pretzel bar contains 16 g of protein from real roasted peanuts, pretzels, and creamy peanut butter. You can also grab a handful of Atkins Classic Trail Mix, which is both salty and sweet, with 7 g of protein.

10. Avocado

Hey, guys, you know what? Your favorite fruit is a healthy fat that you can enjoy alongside protein and carbohydrates. "You must always ensure that you eat a nutrient-rich diet that contains a balance of food groups, but paying attention to this is especially important when you follow a low-carb diet, when you can end up limiting certain foods and nutrients," Gorin says. "By combining a healthy fat such as avocado with your food, you stay satisfied longer. Put some on chips, an omelet, or a salad."

11. Roasted Chickpeas

The prebiotic fiber from chickpeas helps your body's probiotics survive and thrive and offers long-term immune benefits. Fiber also ensures that your food or snack takes longer to digest, which means that you feel fuller with more stable energy levels. Moreover, legumes such as chickpeas are filled with protein and are an ecologically sustainable crop.

12. Butternut Squash

Butternut squash is packed with vitamins A and C, as well as potassium and carotenoids that help the heart. Moreover, it is super easy to make: "Butternut squash is excellent as a side dish – roast it! – and it also turns into 'noodles.' You can reduce your carbohydrate intake by making 'pasta' with vegetables," says Gorin.

13. Canned Tuna

According to the USDA, less than 10% of Americans eat enough fish (2-3 servings per week). The good news: it's super easy to add more protein-filled seafood to your day with canned tuna, either packed in water or mixed with other flavors, such as these Sea Chicken or Freshé packages.

14. Pistachios

Is there anything better than eating a handful of pistachio nuts?
The nut is packed with plant-based protein, offering up to 6 grams per ounce, as well as 3 grams of fiber. "The combination of protein and fiber helps you feel full longer," says Gorin. "I like to use shelled pistachio nuts as a substitute for croutons in soups and salads to keep my intake of refined carbohydrates low."

15. Cottage Cheese

Half a cup of low-sodium cottage cheese can contain up to 20 grams of protein, making it ideal for morning meals. Try the Good Culture portable cups for excellent taste, texture, and nutrition. All flavors are made with live and active cultures, which may have probiotic properties that improve gut health.

16. Hard-Boiled Eggs

Considered one of the best available protein sources, eggs are an economical, nutritious, and versatile ingredient that can be added to any diet. They also offer choline, an essential nutrient that is involved in memory, mood, and muscle control. Two large eggs contain more than 50% of the choline you need every day, and just one has about 8 grams of protein.

17. Apples

Autumn is officially here, and that is excellent news for your apple obsession and your waistline. Although the fruit contains carbohydrates, Gorin says it is still part of a balanced diet. "Choose whole fruit and eat the skin, which offers a lot of fiber," says Gorin. "Eating a whole fruit such as an apple is a better option for a low-carbohydrate diet than drinking juice." For comparison, Gorin says that a medium-sized Gala apple contains 24 grams of carbohydrates, 18 grams of sugar, and 4 grams of fiber, while a cup of juice contains 28 grams of carbs, 24 grams of sugar, and 0.5 grams of fiber. "Then, you get more fiber and less sugar in the apple."

18. Soy Milk

This protein-rich alternative to milk is packed with antioxidants and plant minerals and can help improve your cholesterol levels. This is because it contains less saturated fat than whole milk or other vegan swaps (ahem, coconut oil). Look for unsweetened versions that contain between 7 and 8 grams of plant protein per serving, have the fewest ingredients, and are enriched with the same vitamins and minerals as cow's milk (vitamins A and D).

19. Peanuts and Peanut Butter

What can't peanut butter do? It contains 8 grams of plant protein per 2 tablespoons, and the nuts are rich in heart-healthy monounsaturated fats. Also, peanuts are one of the best sources of arginine, an amino acid that can help lower blood pressure. Salted nuts and nut butter are usually still fine (the surface salt delivers a lot of taste despite relatively limited amounts of sodium); look for those that contain around 140 mg of sodium per serving or less.

This list contains foods that are low in carbs and high in protein. Foods high in protein and low in carbohydrates can also be delicious. Atkins has compiled a list of the best protein sources among snacks and low-carb foods. Protein is a cornerstone of essential nutrition. The body uses this macronutrient to build and repair tissues, produce enzymes, and strengthen bones, muscles, and cartilage. Nutritionists agree that eating protein-rich foods can increase satiety and help burn fat.
The main idea is to demonstrate that errors in judgment happen all the time and are not a random occurrence. It is also to present the complex character of these mistakes as a combination of bias and noise, eventually recommending tools for managing this issue and maintaining strict decision hygiene.

Introduction: Two Kinds of Error

The introduction presents the book's central theme, handling human errors, and describes two types of such errors: noise and bias. It also shows a graphic representation with A on target, B – noisy, C – biased, and D – a mix of noise and bias.

Part I: Finding Noise

This part explores the difference between noise and bias, showing that both public and private organizations can be noisy. It reviews two areas: sentencing (public sector) and insurance (private sector).

1. Crime and Noisy Punishment

This chapter presents the results of various research projects that convincingly demonstrate that judges' decisions depend on many irrelevant factors such as lunchtime, weather, and whatnot. It discusses Marvin Frankel's organization, The Lawyers' Committee for Human Rights, and its legislative achievement in establishing sentencing guidelines. Here are data from a study of the results: the "expected difference in sentence length between judges was 17%, or 4.9 months, in 1986 and 1987. That number fell to 11%, or 3.9 months, between 1988 and 1993." In 2005 Congress changed the guidelines from mandatory to advisory, and the variance between sentences by different judges nearly doubled.

2. A Noisy System

This chapter discusses noise in the insurance business. First, it describes the result of a noise audit in an insurance company that discovered a 55% variance in underwriters' premium estimates, even though executives expected something around 10%. It then analyses how this could happen and concludes that it resulted from the illusion of agreement. The further discussion covers the psychological processes that lead to this, the costs of high noise levels, and the need for regular noise estimates and measures to decrease it.

3. Singular Decisions

This chapter discusses singular decisions vs. recurrent decisions and concludes that singular decisions are also quite noisy. The main point here is that singular decisions are the same as recurring decisions made only once, so people should apply the same noise-reducing techniques in both cases.

Part II: Your Mind Is a Measuring Instrument

Part II investigates the nature of human judgment and explores how to measure accuracy and error. It discusses how human decisions are susceptible to both bias and noise. This part makes an interesting point: "judgment can therefore be described as measurement in which the instrument is a human mind. Implicit in the notion of measurement is the goal of accuracy—to approach truth and minimize error."

4. Matters of Judgment

This chapter presents a case study about CEO selection as an example of a judgment process overloaded with relevant and irrelevant information. First, it offers the idea of the internal signal: "The essential feature of this internal signal is that the sense of coherence is part of the experience of judgment. It is not contingent on a real outcome. As a result, the internal signal is just as available for nonverifiable judgments as it is for real, verifiable ones." Further, it reviews ways to evaluate judgment even if results are often inconclusive. It also discusses the value of consistency and defines noise as an inconsistency that damages the system's credibility.
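The 55% figure from chapter 2 becomes easier to picture with a concrete calculation. A noise audit compares judgments of the same case and expresses the difference between two judges as a percentage of their average. Below is a minimal Python sketch of that kind of comparison; the premium figures and the choice to average over all pairs are my own illustration, not data or a formula from the book.

```python
from itertools import combinations

# Hypothetical premiums (in dollars) quoted by five underwriters for one case.
premiums = [9_500, 16_700, 12_000, 8_200, 13_300]

def pairwise_noise(values):
    """Average over all pairs of |a - b| divided by the pair's mean."""
    diffs = [abs(a - b) / ((a + b) / 2) for a, b in combinations(values, 2)]
    return sum(diffs) / len(diffs)

print(f"noise index: {pairwise_noise(premiums):.0%}")
# Executives in the audit expected a spread of roughly 10%; the audit itself
# found about 55%, which is what made the result so striking.
```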
5. Measuring Error

This chapter discusses how much bias and noise contribute to error. The main point here is that decision-makers should handle noise as rigorously as bias because it can cause similar levels of damage. This chapter also provides some simple statistical tools relevant to measuring bias and noise.

6. The Analysis of Noise

This chapter demonstrates the use of these tools to analyze noise in sentencing. It breaks system noise down into level noise and pattern noise:
- Level noise is variability in the average level of judgments by different judges.
- Pattern noise is variability in judges' responses to particular cases.

It also gives the formula: (System Noise)² = (Level Noise)² + (Pattern Noise)²

The conclusion: "Level noise is when judges show different levels of severity. Pattern noise is when they disagree with one another on which defendants deserve more severe or more lenient treatment. And part of pattern noise is occasion noise—when judges disagree with themselves."

7. Occasion Noise

This chapter discusses the noise that comes from multiple small, difficult-to-measure factors. Repeated estimates of unknown quantities demonstrate that the best assessment comes from averaging numerous estimates, with the first estimate usually being closer to the truth. It draws a parallel between multiple estimates by one individual and a single estimate by a crowd, finds the analogy sound, and names it "the crowd within." This chapter also discusses sources of occasion noise: psychological factors such as mood, gullibility, weather, and so on. The main point is that individuals are not constantly the same, and their behavior and decisions depend on multiple factors. It refers to interesting research demonstrating a 19% drop in the likelihood of granting asylum when the previous two hearings ended in grants of asylum. The conclusions are: "Judgment is like a free throw: however hard we try to repeat it precisely, it is never exactly identical." and "Although you may not be the same person you were last week, you are less different from the 'you' of last week than you are from someone else today. Occasion noise is not the largest source of system noise."

8. How Groups Amplify Noise

This chapter reviews group decision-making and finds it even noisier than individual decision-making. This occurs due to an increase in the number and influence of irrelevant factors: "Who speaks first, who speaks last, who speaks with confidence, who is wearing black, who is seated next to whom, who smiles or frowns or gestures at the right moment." The chapter reviews group behavior in music downloads, various referenda, and web comments in the UK and the USA. It also discusses informational cascades, in which a slight change in the sequence of presentations creates a path-dependent dynamic of support for one decision. The final part of the chapter discusses group polarization, in which an idea that initially gets slightly more support than the others ends up with increasingly strong support as people rush to join the majority. This generally leads to higher levels of noise and error. The conclusion: "Since many of the most important decisions in business and government are made after some sort of deliberative process, it is especially important to be alert to this risk. Organizations and their leaders should take steps to control noise in the judgments of their individual members."
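Before moving on to predictive judgment, the chapter 6 decomposition can be made concrete with a small numerical sketch. The judges-by-cases table below is invented purely for illustration; only the identity itself comes from the book.

```python
import numpy as np

# Hypothetical sentencing data: rows are judges, columns are cases,
# entries are the sentence (in years) each judge would hand down.
judgments = np.array([
    [5.0, 7.0, 3.0, 6.0],
    [6.0, 9.0, 4.0, 7.0],
    [4.0, 8.0, 2.0, 4.0],
])

# System noise: variability across judges looking at the same case, averaged over cases.
system_sq = judgments.var(axis=0).mean()

# Level noise: variability in each judge's average level of severity.
level_sq = judgments.mean(axis=1).var()

# Pattern noise: the part of system noise left once level noise is removed,
# i.e. judges disagreeing about which cases deserve harsher treatment.
pattern_sq = system_sq - level_sq

print(f"system noise  = {system_sq ** 0.5:.2f} years")
print(f"level noise   = {level_sq ** 0.5:.2f} years")
print(f"pattern noise = {pattern_sq ** 0.5:.2f} years")
# (System Noise)² = (Level Noise)² + (Pattern Noise)² holds by construction here.
```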
Part III: Noise in Predictive Judgments

Part III explores predictive judgment, the use of rules and algorithms, and the superiority of these methods over humans in predictive power.

9. Judgments and Models

This chapter compares the accuracy of predictions made by professionals, by machines, and by simple rules. The conclusion is that the professionals come third in this competition. To reach this conclusion, the chapter compares predictions of new employees' performance based on human judgment with predictions based on formal models and algorithms. The model beats humans not only in this case but also in clinical predictions. Moreover, this is true not only for formal modeling but also for models of individuals: a model of a person predicts future outcomes better than that person's own judgment.

10. Noiseless Rules

This chapter explores why algorithms are better than experts and shows that noise is a significant factor in the inferiority of human judgment. Predictions are accurate to the extent that prediction matches outcome, as measured by the percent concordant (PC). A PC of 50% is a random match, and higher values mean more predictive power. The chapter includes a graph of how accuracy changes as model complexity increases; it analyses this and concludes that, generally, simple rules work better, although AI machine learning produces even better results. The chapter then reviews an example of better bail decisions. In the end, the chapter discusses the reasons people distrust algorithms and rules.

11. Objective Ignorance

This chapter discusses an essential limit on predictive accuracy: most judgments are made in a state of objective ignorance because many of the things the future depends on cannot be known. The chapter reviews the meaning of objective ignorance in depth and provides multiple examples, from pundits to judges and bail panels. One fascinating point here is the defiance of ignorance and human overconfidence, which adds a lot to the noise and lowers decision-making quality.

12. The Valley of the Normal

Finally, this chapter shows that objective ignorance affects not just the ability to predict events but even the capacity to understand them—an essential part of the answer to the puzzle of why noise tends to be invisible. The chapter also describes a large-scale longitudinal project tracing thousands of children and families over decades, analyzing predictions and outcomes. The result: "The main conclusion of the challenge is that a large mass of predictive information does not suffice for the prediction of single events in people's lives—and even the prediction of aggregates is quite limited." In other words, it demonstrated the difference between knowledge based on data and an understanding of the situation that could produce a valid prediction. In the end, the chapter provides the following list of the limits of agreement:
- "Correlations of about .20 (PC = 56%) are quite common in human affairs."
- "Correlation does not imply causation, but causation does imply correlation."
- "Most normal events are neither expected nor surprising, and they require no explanation."
- "In the valley of the normal, events are neither expected nor surprising—they just explain themselves."
- "We think we understand what is going on here, but could we have predicted it?"
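A short technical aside on the PC measure used in chapters 10-12: for jointly normal predictions and outcomes there is a standard closed-form link between the correlation and the percent concordant, and it reproduces the ".20 corresponds to PC = 56%" figure quoted in the list above. The sketch below is my own illustration of that relationship, not a formula printed in the book.

```python
import math

def percent_concordant(r: float) -> float:
    """Chance that the case ranked higher on the prediction is also higher on
    the outcome, assuming a bivariate normal relationship with correlation r."""
    return 0.5 + math.asin(r) / math.pi

for r in (0.0, 0.20, 0.60, 1.0):
    print(f"correlation {r:.2f}  ->  PC = {percent_concordant(r):.0%}")
# 0.00 -> 50% (the random-match baseline), 0.20 -> 56%, 0.60 -> 70%, 1.00 -> 100%.
```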
Part IV: How Noise Happens

Part IV explores the psychological causes of noise, "including personality and cognitive style; idiosyncratic variations in the weighting of different considerations; and the different uses that people make of the very same scales."

13. Heuristics, Biases, and Noise

This chapter presents three important judgment heuristics on which System 1 extensively relies. It shows how these heuristics cause predictable, directional errors (statistical bias) as well as noise. For example, these errors could mean aiming at the same bull's eye but hitting different spots, or aiming at different bull's eyes but hitting the same place. The authors discuss substitution, conclusion, and other psychological biases. They caution against blaming errors on unspecified biases and against distorting evidence to fit a prejudgment based on first impressions. They also suggest that biases shared by a group create systematic bias, but if biases differ, they just make more noise.

14. The Matching Operation

This chapter focuses on matching—a particular operation of System 1—and discusses the errors it can produce. It mainly comes down to differences in measurement scales, where an otherwise exact estimate creates errors because of a scaling mismatch.

15. Scales

This chapter turns to an indispensable accessory in all judgments: the scale on which the judgments are made. It shows that the choice of an appropriate scale is a prerequisite for good judgment and that ill-defined or inadequate scales are an important source of noise. Here the authors provide a formula for measuring noisy scales: Variance of Judgments = Variance of Just Punishments + (Level Noise)² + (Pattern Noise)². They also provide a graphic representation of punishment scales.

16. Patterns

This chapter explores the psychological source of what may be the most intriguing type of noise: the patterns of responses that different people have to different cases. Like individual personalities, these patterns are not random and are mostly stable over time, but their effects are not easily predictable. Here is another formula: (Pattern Noise)² = (Stable Pattern Noise)² + (Occasion Noise)²

17. The Sources of Noise

This chapter summarizes the previous discussion of noise and its components. It also proposes an answer to the puzzle raised earlier: why is noise, despite its ubiquity, rarely considered an important problem? The chapter ties the pieces together in a combined graphical representation of mean squared error (MSE).

Part V: Improving Judgments

Part V explores ways to improve human judgment.

18. Better Judges for Better Judgments

This chapter discusses the characteristics of superior judges. The authors look at characteristics such as intelligence and cognitive style. They also discuss the role of true experts, who produce verifiable predictions, versus respect-experts – people with credentials who make unverifiable statements.

19. Debiasing and Decision Hygiene

This chapter reviews many attempts to counteract psychological biases, with some clear failures and some clear successes. It also briefly reviews debiasing strategies and suggests a promising one: asking a designated decision observer to search for diagnostic signs that could indicate, in real time, that a group's work is being affected by one or several familiar biases. The authors look at ex post and ex ante debiasing and provide some experimental data on this. They also discuss the limitations of debiasing. One of the methods they discuss is a decision observer working with a checklist to ensure proper coverage of biases and decision points. Overall, they suggest strict decision hygiene to decrease both biases and noise.

20. Sequencing Information in Forensic Science

This chapter reviews the case of forensic science, which illustrates the importance of sequencing information. The search for coherence leads people to form early impressions based on the limited evidence available and then to confirm their emerging prejudgment.
This makes it important not to be exposed to irrelevant information early in the judgment process. The authors review the example of fingerprint analysis and how various biases and noise have affected its quality. They also stress the need for a second opinion, which has to be independent to be meaningful.

21. Selection and Aggregation in Forecasting

This chapter reviews the case of forecasting, which illustrates the value of one of the most important noise-reduction strategies: aggregating multiple independent judgments. The "wisdom of crowds" principle is based on the averaging of multiple independent judgments, which is guaranteed to reduce noise. Beyond straight averaging, there are other methods for aggregating judgments, also illustrated by the example of forecasting. The authors here refer to Tetlock's Good Judgment Project and discuss its mixed results.

22. Guidelines in Medicine

This chapter offers a review of noise in medicine and of efforts to reduce it. It points to the importance and general applicability of a noise-reduction strategy previously introduced with the example of criminal sentencing: judgment guidelines. Guidelines can be a powerful noise-reduction mechanism because they directly reduce between-judge variability in final judgments. Here the authors pay special attention to psychiatry, a field with particularly low levels of consistency between specialists' judgments.

23. Defining the Scale in Performance Ratings

This chapter turns to a challenge in business life: performance evaluations. Efforts to reduce noise there demonstrate the critical importance of using a shared scale grounded in an outside view. This is an important decision hygiene strategy for a simple reason: judgment entails the translation of an impression onto a scale, and if different judges use different scales, there will be noise. Here the authors suggest that the use of a relative scale is more appropriate than absolute ones.

24. Structure in Hiring

This chapter explores the related but distinct topic of personnel selection, which has been extensively researched over the past hundred years. It illustrates the value of an essential decision hygiene strategy: structuring complex judgments. By structuring, the authors mean decomposing a judgment into its component parts, managing the process of data collection to ensure the inputs are independent of one another, and delaying the holistic discussion and the final judgment until all these inputs have been collected.

25. The Mediating Assessments Protocol

This chapter proposes a general approach to option evaluation called the mediating assessments protocol, or MAP for short. MAP starts from the premise that "options are like candidates" and describes schematically how structured decision making, along with the other decision hygiene strategies mentioned above, can be introduced into a typical decision process for both recurring and singular decisions.

Part VI: Optimal Noise

Part VI explores the proper level of noise, considering that it is not possible, or even preferable, to eradicate it.

26. The Costs of Noise Reduction

This chapter reviews the first two of seven major objections to efforts to reduce or eliminate noise:
- First, reducing noise can be expensive; it might not be worth the trouble. The steps that are necessary to reduce noise might be highly burdensome. In some cases, they might not even be feasible.
- Second, some strategies introduced to reduce noise might introduce errors of their own. Occasionally, they might produce systematic bias.
If all forecasters in a government office adopted the same unrealistically optimistic assumptions, their forecasts would not be noisy, but they would be wrong. If all doctors at a hospital prescribed aspirin for every illness, they would not be noisy, but they would make plenty of mistakes.

27. Dignity

This chapter reviews five more objections, which are also common and which are likely to be heard in many places in coming years, especially with increasing reliance on rules, algorithms, and machine learning:
- Third, if we want people to feel that they have been treated with respect and dignity, we might have to tolerate some noise. Noise can be a by-product of an imperfect process that people end up embracing because the process gives everyone (employees, customers, applicants, students, those accused of crime) an individualized hearing, an opportunity to influence the exercise of discretion, and a sense that they have had a chance to be seen and heard.
- Fourth, noise might be essential to accommodate new values and hence to allow moral and political evolution. If we eliminate noise, we might reduce our ability to respond when moral and political commitments move in new and unexpected directions. A noise-free system might freeze existing values.
- Fifth, some strategies designed to reduce noise might encourage opportunistic behavior, allowing people to game the system or evade prohibitions. A little noise, or perhaps a lot of it, might be necessary to prevent wrongdoing.
- Sixth, a noisy process might be a good deterrent. If people know that they could be subject to either a small penalty or a large one, they might steer clear of wrongdoing, at least if they are risk-averse. A system might tolerate noise as a way of producing extra deterrence.
- Finally, people do not want to be treated as if they are mere things, or cogs in some kind of machine. Some noise-reduction strategies might squelch people's creativity and prove demoralizing.

28. Rules or Standards?

This chapter presents the authors' general conclusion that even when the objections to various methods, such as rigid guidelines, are given their due, noise reduction remains a worthy and even an urgent goal. It defends this conclusion by exploring a dilemma that people face every day, even if they are not always aware of it.

Review and Conclusion: Taking Noise Seriously

Here the authors once again summarize the main points of the book. They strongly recommend paying attention to noise and making serious efforts to limit it to acceptable levels, while stressing that it is not possible, and not even reasonable, to remove it altogether.

MY TAKE ON IT: I think this is an excellent book on the problem of poor decision-making, which causes myriad issues and costs lots of treasure and, in some cases, lots of blood. The division of the problem into noise and bias is very effective, and the specific suggestions for improvement via checklists, second independent opinions, explicit recognition of various biases, and, overall, strict decision hygiene could be highly valuable. However, I would not hold my breath anticipating improvements. I believe the problem lies more in the absence of solid feedback for decision-makers in government and at the top levels of big corporations, which leaves these people unaccountable and therefore uninterested in improving decision-making processes.
INTRODUCTION TO COMMUNICATION IN HSC

In today's global and diverse workplace, excellent communication skills are the key to success. Communication plays an important role in health and social care and in healthcare management (East end, 2012). There should be effective communication between the patient and the healthcare service provider in order to establish and maintain a positive understanding with the patient throughout the whole period of care. Interpersonal communication in health care takes place between service providers and their clients, or between members of the team, and is essential in maximizing the quality of care. It is a process of educating, motivating and counselling from start to finish while providing good client service. This report shows the importance of communication in the health and social care industry with the help of some critical cases.

1.1 Apply relevant theories of communication to health and social care contexts

Communication is a transactional process; in a health and social care context it is considered an instrumental and purposeful process. Different theories can be applied to health and social care: the Humanistic, Behaviourist, Cognitive and Psychoanalytic approaches. The humanistic theory is designed to support human dignity and self-awareness. It is based on Maslow's theory of human needs and Carl Rogers's person-centred approach (Corcoran, 2007). Behaviourist theory refers to the behaviour of the service provider while passing the message; the environment plays a critical role in this approach. Cognitive theory concerns the intellect of an individual and focuses on active mental processes. The last one that helps health care providers communicate with patients is the psychoanalytic approach, which explains how individuals connect and communicate according to their level of awareness. These theories are applied in health and social care contexts to pass messages to patients at the conscious, preconscious and unconscious levels (Prezi, 2013). In the case of Anna, when she was taken to the doctor in a serious condition she was not examined properly. The doctor retorted to her husband Paul that she was a drunk and that the hospital's valuable time would be wasted examining her. No medical history of any kind was taken before medicine was prescribed. As a result, the couple had to go to a general practitioner, who sent them directly to another hospital, where she was diagnosed with a stroke. In this case the doctor could have applied one of the theories of communication, because effective communication is important to providing quality care.

1.2 Use communication skills in a health and social care context

Effective communication is central to every effort in the health and social care sectors. Service providers in these sectors require good communication and interpersonal skills to perform their roles. In order to work effectively and collaboratively with colleagues and to build supportive relationships with patients and other clients, each professional should have strong communication skills. In the health and social care context, providers meet patients from different cultures, so it is possible that people from different cultural groups interpret a doctor's behaviour in different ways (Schiavo, 2011). This can lead to 'messages' being misunderstood by the person on the receiving end.
When using communication skills around health issues, workers can make the best of the care environment by making sure they can be seen and heard clearly, while at the same time making sure they understand the patient's facial expressions, voice, eye movements, gestures and posture. In addition, the patient can be communicated with through posters, charts, pamphlets, videos, films, radio and taped messages, which can be used to reinforce interpersonal communication with the patient. In the case of Anna, the doctor was supposed to be aware of the services needed by the patient and of the best way to provide them. The objective of dialogue between patients and doctors is to manage diseases, conditions and treatment when patients come to health care institutions with their problems, while communicating with them effectively (Wanzer, Butterfield and Gruber, 2004).

1.3 Review methods of dealing with inappropriate interpersonal communication

In health and social care there are many barriers to communicating with patients. These can include inappropriate language, incongruent messages, misinterpretation, breaches of confidentiality or trust, and the use of power while treating a patient. A lack of knowledge of a particular language can become a barrier to explaining health issues to the service provider. Because particular signs from patients can have different meanings, workers in the healthcare industry should be able to understand the voice and body language of each patient in order to cope with all situations. Proper care should be taken when writing a prescription, as a patient can become confused by a prescription with a large dose of medicine (Sullivan, 2000). The professional should be aware of the points he or she has to take care of; otherwise they will lead to misinterpretation. The professional is supposed to keep all of a patient's information confidential unless consent to disclosure is given. Breaking trust can cause communication failure and can lead to bad relations. The professional should not misuse power and must respect and support individuals and co-workers; in this way they can deal with inappropriate interpersonal communication.

1.4 Analyse the use of strategies to support users of health and social care services

To run functions effectively in a health and social care system, a great deal of information is required in order to formulate beneficial strategies. Poor communication between healthcare staff and people leads to failures in reaching an accurate diagnosis and providing effective treatment. Professionals need to understand the stages of the counselling process. In the context of health and social care services, using proper strategies and adopting effective interpersonal communication skills will help in treating clients with respect, asking clear questions, and helping them feel more comfortable talking about their problems, so that those problems can be solved. The service provider needs technical knowledge of their area of proficiency. Strategy also supports users by motivating the community to use the preventive and curative health services offered by the organization. Simple changes to the physical environment may improve communication between patients and doctors. Health care professionals could also use a needs-assessment strategy to overcome communication barriers.
By assessing a service user's needs, workers will be able to provide satisfactory services to patients (Hall, 2004).

2.1 Explain how the communication process is influenced by values & cultural factors

Cultural diversity in health and social care is defined as differences in languages, foods, dress, values, norms, motivational factors, cultural beliefs and cultural influences on disease and health behaviours. In health and social care the role of communication is important, as people communicate for different reasons, such as to seek care from service providers and to express their feelings and emotions. Providing effective and optimal health care services in this multicultural environment requires that the service professional be aware of different cultures and their values, beliefs, traditions, experiences, customs, rituals and languages. Cultural attitudes affect the communication process between the patient and the service provider. If a doctor does not treat a patient well in front of a second patient, the second person will behave and communicate according to the way in which he or she interprets that message. Professionals cannot assume that all patients will be satisfied in the same way (Kreuter and McClure, 2004). Some patients need extra attention in the counselling process; on the other hand, some patients respond best by communicating without words. In order to provide effective services, a worker in the healthcare industry has to understand the ways people think about health and illness, the behaviour of individuals from different cultures, the habits that influence their health, and how their culture interacts with the environment. They should also know how their actions are going to be perceived in a given culture (Seligman, 2004).

2.2 Explain how legislation, charters and codes of practice impact on the communication process

The relationship between the service user and health and social care staff is based upon trust. From the legal perspective it is mandatory that the service provider understand that private information about patients must not be used or disclosed without their consent. Patients have a legal right to confidentiality, and the staff of a healthcare organization have a duty of confidence. A service user has a right to special care if it is needed. Staff must ensure that the right to privacy of vulnerable people – specifically adults with incapacity and children – is respected and that the duty of confidentiality is fulfilled. Professionals also have a responsibility to keep records, which are essential for making clear to service users what service has been provided (McCormick, 2012). In the health and social care context, a code of practice is designed to support staff in making good decisions about the protection, use and disclosure of service user information. It also provides practical guidance to assist decision-making when handling confidential information about the service user. The industry has revised its codes of practice from time to time in response to different cases. Legal regulations say that the service user must be kept informed about uses and disclosures of their information and must be told about the situations in which they can give consent to the use of their information (Brewster, 2005).
2.3 Analyse the effectiveness of organisational systems and policies

In order to communicate with service users in the health and social care industry, medical service providers have to maintain an effective organisational system. Organisational members have to demonstrate a duty of care towards both the staff and the service users/patients (Glanz, Rimer and Viswanath, 2008). Organisations can assess their systems through improvements in patient experience and by keeping infection and mortality rates low. Staff should receive training in safer patient care so that they can communicate well with service users. If there is a strong link between workforce stress and poor trust performance, then the organisation has to take positive steps (Marram and Servellen, 2009). The industry should emphasise human dignity and worth, enhance patients' well-being, ensure their protection and better treatment, promote their rights and counteract discrimination. It can also improve communication by addressing challenges in the working environment while improving agency policies, procedures and service provision. Proper management of resources helps provide a safe working atmosphere for employees, and health and social care management has to be lawful in order to maintain the effectiveness of organisational systems, for example by giving staff information about relevant legislation and assurances about their health and safety (Kosny, 2006).

2.4 Suggest ways of improving the communication process

Every organisation in a health care system must adopt a proper communication process for a wide range of service users. In order to improve the effectiveness of communication, the organisation has to understand its commitment; it should examine its commitment, capacity and efforts to meet communication needs on a daily basis, taking in elements such as mission, goals, policies and strategies, leadership and motivation styles, as well as the workforce's cultural values (Pritchard and Ryan). The business should collect all relevant information on the demographics and communication needs of potential clients (Wallace, 2009). By ensuring that the structure, capability and training of its workforce meet these needs, the organisation can employ and train a workforce through which effective relationships with service users can be maintained, improving the communication process. Service professionals in health care should support their workforce by engaging individuals from beginning to end in interpersonal communication that successfully elicits health needs, beliefs and expectations; builds trust; and conveys information that is understandable and empowering. A respectful environment can be created that will help service providers understand the social culture of different patients (Tamparo and Lindh, 2007).

3.1 Access and use standard ICT software packages to support work

As the world changes quickly, it becomes essential for the health and social care industry to take responsibility for keeping up with new innovations. Information and communication technologies (ICT) are used in health and social care with the aim of upgrading the technology used in practice so that service users can be satisfied to a greater degree.
Communication skills have changed as healthcare organisations have adopted ICT software. Patients receive services from doctors through telephone health lines. Medical staff have various opportunities to learn practical skills online. The emphasis has shifted towards written communication such as e-mail, with less emphasis on verbal skills. Less in-person reading of a patient's body language takes place, because doctors are using more e-mail and video conferencing to provide services (Thompson, 2003). Services are turning into everyday tasks and devices that are internet based. Information collected from patients is recorded in spreadsheets, which reduces the unnecessary burden on workers of maintaining registers. The use of online software has reduced the time needed to train employees in healthcare. Costs incurred in the process of providing services, such as the cost of travel and of time, have been reduced (Clarke, Sachs and Sumner, 2000).

3.2 Analyse the benefits of using ICT in health and social care for users of services

Information and communication technology affects health and social care in many ways. ICT software in medicine and medical care can bring benefits to medical professionals and practitioners as well as patients. The aim of using ICT in health and social care is to provide services at any time and in any place to the service user who needs them. E-health plays an important role in realising the benefits of ICT. It provides services especially in situations where a physician may not be available. Rising costs are a major problem in health services in both developing and developed countries. E-health helps organisations reduce the cost of healthcare by decentralising care, which allows medical services to be offered at a lower cost while quality care is still provided (Brewster, 2005). Nowadays many conditions are managed by home monitoring and telecare, where patient data are collected over the phone and sent to the medical centre to be evaluated by the physician. The conditions managed through e-health include cardiac failure, hypertension, diabetes and COPD. The results of these practices show that the number of hospitalizations is reduced. E-health is considered especially valuable when there is a lack of available medical staff. ICT has also proved to be a source of education for both patients and medical staff, as there are many sites where an individual can find information about different diseases, their symptoms and the corresponding medicines. A number of virtual medical universities have been established where service providers are trained through traditional as well as online learning (Jones and Groom, 2011).

3.3 Analyse how legal considerations in the use of ICT impact on health and social care

ICT applications contribute to health and social care and to the service user's overall care experience, and they directly or indirectly support care professionals in care planning and the delivery of services. The confidentiality of information and the consent of the service user are the foundation of trust between the service provider and the user. A proper understanding of the ethical and legal accountability surrounding confidentiality and consent regarding personal data is important to maintain the legal relationship between the parties in the healthcare industry.
Although the use of ICT has had a significant impact on the healthcare industry, there are also some legal issues with the medical practices it is used for. Companies have to train their medical workers and give them legal knowledge of the impact of ICT on health and social care. Medical staff who know how to apply computer-based tools in the field, without harming the interests of the various people involved in the communication process, will give better results for the healthcare industry. ICT training has become a core component of all formal training programmes for both students and in-service staff. The legal aspects of health and care emphasise that medical service providers should offer intelligent support for drug and dosage selection to their users. Without it, patients can complain about the irresponsible behaviour of staff and the organization (Kosny, 2006).

Communication plays an important role in the health and social care context. The report has set out the relevant theories of communication in relation to health and social care. It has also explained inappropriate interpersonal communication, in which the barriers to communicating with patients were described. It has been identified that many factors influence the communication process within the health and social care industry, such as the values and culture of different service users; legislation, charters and codes of practice; and organisational systems and policies. The report reaches its aim by setting out the social and legal impact of information and communication technology (ICT) in health and social care.

- Brewster, D.S., 2005. Communication: An HSC Option Topic. Warringal Publications.
- Clarke, L., Sachs, B. and Sumner, S., 2000. Health and Social Care for Advanced. Nelson Thornes.
- Glanz, K., Rimer, B.K. and Viswanath, K., 2008. Health Behavior and Health Education: Theory, Research, and Practice. John Wiley & Sons.
- Hall, P. et al., 2004. Communication skills, cultural challenges and individual support: challenges of international medical graduates in a Canadian healthcare environment. Medical Teacher.
- Jones, S. and Groom, F.M., 2011. Information and Communication Technologies in Healthcare. CRC Press.
by John Foot
Verso Press, 2015, 404 pages

Embracing change is the best way to keep up with John Foot's pace in his book, The Man Who Closed The Asylums: Franco Basaglia and the Revolution in Mental Health Care (2015). Foot's holistic approach will appeal to anthropologists and general readers alike as he gathers insight on those who were recovering from both physical and psychological maltreatment in a post-war world (169). This balanced and fair-minded account of mental healthcare reform in 1960s Italy shows that a hospital's culture reflects how society at large is structured (175). The book explores how the psychiatrist Franco Basaglia persuaded members of the healthcare community to shut down asylums where abusive practices were being used on patients (133). These meetings led to legislation under which the delivery of mental healthcare was incorporated into hospitals serving the general patient population, as more people discontinued the use of psychiatric asylums (374). Foot writes that, "As director in Gorizia, Basaglia quickly became convinced that the entire asylum system was morally bankrupt. He saw no medical benefits in the way that patients were treated inside these institutions. On the contrary, he became convinced that some of the eccentric or disturbing behavior of the patients was created or exacerbated by the institution itself" (22). Basaglia sought to make asylums more humane, but as part of a larger strategy to close down asylums altogether, since reform could not redeem an outdated model of healthcare that had survived a period of fascism and the Second World War (157). Along the way, Basaglia did positively impact his patients' lives. In his research, Foot finds that, "Patients were taking back some control over their lives and over those of their fellow inmates. They were becoming people again, even citizens, with responsibilities and rights" (148). More often than not, "the disappeared people of the asylums, those who had been shut away from real life, without rights and without identities, emerged from the darkness. They showed that they could think for themselves and organize their own lives" (148-149). Basaglia and his team members never fully settled on how to improve the hospital system; however, it was understood by all that "it was the institution itself that was the problem" (149). A highlight of the book occurs where Foot expresses the disagreements, contradictions and discomfort of those wrestling with Basaglia's ideas about psychiatry. The confusion behind "who was really in charge?" actually anchors the point of the book (112). This question mutates and resurfaces in his writing, as Foot describes different leaders taking new positions during the 1968 movement. The expectations of both doctors and patients were constantly in flux with the existence of the asylum as an institution, "[W]here the roles of doctor, patient and nurse had also, to some extent, been put on hold. Everyone inside the institution was well aware of their objective status but most were trying to free themselves of their prejudices and of their past" (174). Foot writes, "Gorizia was not just about psychiatry or anti-psychiatry. It was also about medicine in general and role of authority in society as a whole" (89). In order to convey the feeling of how authority and power transferred constantly, Foot does just that as he privileges different sources of information.
For example, he writes, "Only by shifting the focus away a little from Basaglia himself can we understand the central role he played in the movement" (49). Just before readers are completely immersed in the belief that Basaglia had so much influence, Foot gently yet directly shows that there were limits to this one doctor's plan as well. For example, he notes that "The history of Italy's radical psychiatric movement in the 1960s and 1970s cannot be written without strong and central reference to events and policies in Perugia. For this reason, the labeling of this movement as Basaglian is a misnomer" (234). Many of the lessons about belonging to a hospital's administration were learned by staff from other neighboring places, like Perugia, not just from leaders in Gorizia. Foot delves further into this same point by saying, "The Basaglia-centric story is a linear one. It is easier just to ignore everything else that was going on" (252). Basaglia's dissatisfaction was with the rituals of the older systems because "Nobody is meant to stay for long in these places. There is a conscious attempt here to avoid any sense of reinstitutionalization" (363). Foot does justice to the movement by developing an account that is devoid of heavy repetition and progresses thematically in the telling of a history that is bigger than the individual behind it. Basaglia, and the team he belonged to, saw that "society itself… needed to be transformed" (180). The whole group was very aware that "the meetings were also understood as material to be studied and analyzed, almost as though they were a kind of ongoing research project. The patients were willing participants in an anthropological and political study of institutions undergoing change" (150). Foot's aim to bring context carefully to these events comes from a continual fear among cultural historians and anthropologists of how often people's stories are taken out of context and disregarded; he writes that "patients were seen here, starkly, as victims of institutional violence. The movement everywhere, like that in Gorizia, needed to move forward, or it would simply create new forms of these institutions, or help them to survive" (176). The author's greatest strength is how he represents the life behind the eyes of this movement. He seizes the moments most pertinent to this part of history as he writes, "Power should only be exercised in order to negate that power" (180). He exercises his own power as a historian and negates it by weighing the dissension that evolved in the details of collected memoirs and notes. He also critiques both the strengths and weaknesses of the movement's effort in a way that respects the whole picture in the frame. He writes that "They radicalized people, their writing and activity were extremely far reaching and the movement also gave them strength and power. But they were also victims of the worst excesses of the movement: the over-powering rhetoric, a tendency towards simplification and sloganeering, the excessive verbosity" (188). The true problem with having too much rhetoric was that, in the midst of clinging to their earliest ideas about reform, some of the leaders failed to imagine the harmful consequences that could follow the closing of any asylum. Power is balanced by effectively giving each voice, including those of the patients and their families, a chance to speak in this book. The emotional toll these adjustments took on patients offsets the perfected plan, as Foot writes, "Along the way, great risks were taken.
Some people were brutally murdered, others committed suicide. Families had to deal with sons, daughters, mothers and fathers who had serious problems, and who had been shut away behind closed doors for years… The outside world was a difficult place in so many ways, it was easy for ex-patients to fall through the cracks in society. Once the asylum system had been done away with, the real work began" (393). One could argue, as Foot moderately suggests, that multiple decisions enabled Basaglia's idea to come to fruition. As the psychiatrists pushed for reform, they also networked to build an outpatient setting for their patients. However, Foot does remind his readers that the main healthcare team responsible for these actions did not anticipate that the patient population would need more room and support than they were provided. Those temporary changes were not enough, and Foot does not delve further to offer alternative solutions that might have made a world of difference. His own goal, in recording these consequences, emerges as he writes, "The movement, as this book has tried to show, was polycentric, complicated, multifaceted and always influenced by local factors — historically, politically, culturally and institutionally" (234). By closing the asylums, psychiatrists were motivated to look outward and observe how social institutions such as schools and the family unit impacted the support systems of patients struggling with mental illness (243). Overall, Foot asserts that a transition this important "had multiple sources of power and inspiration" (254). Foot's process as a historian runs parallel to that of the people in his book as he writes, "the idea was — always — to put mental illness into context. They were attempting to understand what they saw as the multiple and complicated causes of mental illness" (296). Foot knows that as a cultural historian he has a lot of control as he investigates different narratives. By checking his own authority as a writer he even undermines his own critical voice, but in a way that makes his authorship more compelling. Foot is motivated by the act of negation, not nihilism. He does not terminate the existence of opposing ideas to serve his own ego-involvement as a documentarian witnessing the effects history has on today. He drives the reader to strengthen their intuition and construct a bigger image that includes the revelations and the irreparable messes from psychiatry's past. Foot takes the time to test his own ideas to foster debate, not destruction. In fact, it is Basaglia's method of making sure "things were pushed to the limit, to expose the contradictions in the system" that inspires Foot, as he writes a multi-vocal book about the discrepancies overlooked by previous historians (353). Foot continues to illustrate that "Nobody had the right answer. Everyone made mistakes. Each area adopted its own road to reform, and success was measured in different ways. None of these roads were right or wrong. They were different, and they were all moving in the right direction" (284). Yet Foot's own disappointment is in how cultural studies have fallen short of making room for the future, and the diverse viewpoints that come with it, arguing, "there is no point at all in simply repeating this standard story. It is already out there, in numerous versions – text, film, journalistic. For a historian, the only possible route is to take a critical approach to both the sources available and to the past itself" (343).
Foot continues to explain the purpose of his methodology of negation by saying, "It is not easy to write about this movement, with its myths, splits, silences and possessive memories" (369). In other words, Foot sees that an honest historian, or clinician, will not treat any single perspective as an absolute. No one person can ever represent the whole puzzle, or the cure for a social pathology, when all they have is a piece of it.

Nirmala Jayaraman received a B.A. in Anthropology from Union College, Schenectady, NY. Her research interests include migration, family and kinship, aging, medical anthropology, cross-cultural psychology, and public health. She has written book reviews for Allegra Lab: Anthropology, Law, Art & World, the British Psychological Society's The Psychologist, Anthropology & Aging, and Anthropology Book Forum. She is applying for graduate study this year.
Psychological Treatment Yields Strong, Lasting Relief for Chronic Pain Sufferers

Summary: A four-week course of pain reprocessing therapy (PRT) provided up to 12 months of relief from pain for chronic pain sufferers. Additionally, the psychological treatment program altered brain networks associated with pain processing.

Source: University of Colorado

Rethinking what causes pain and how great of a threat it is can provide chronic pain patients with lasting relief and alter brain networks associated with pain processing, according to new University of Colorado Boulder-led research. The study, published Sept. 29 in JAMA Psychiatry, found that two-thirds of chronic back pain patients who underwent a four-week psychological treatment called Pain Reprocessing Therapy (PRT) were pain-free or nearly pain-free post-treatment. And most maintained relief for one year. The findings provide some of the strongest evidence yet that a psychological treatment can provide potent and durable relief for chronic pain, which afflicts one in five Americans. "For a long time we have thought that chronic pain is due primarily to problems in the body, and most treatments to date have targeted that," said lead author Yoni Ashar, who conducted the study while earning his PhD in the Department of Psychology and Neuroscience at CU Boulder. "This treatment is based on the premise that the brain can generate pain in the absence of injury or after an injury has healed, and that people can unlearn that pain. Our study shows it works." Approximately 85% of people with chronic back pain have what is known as "primary pain," meaning tests are unable to identify a clear bodily source, such as tissue damage. Misfiring neural pathways are at least partially to blame: different brain regions—including those associated with reward and fear—activate more during episodes of chronic pain than acute pain, studies show. And among chronic pain patients, certain neural networks are sensitized to overreact to even mild stimuli. If pain is a warning signal that something is wrong with the body, primary chronic pain, Ashar said, is "like a false alarm stuck in the 'on' position." PRT seeks to turn off the alarm. "The idea is that by thinking about the pain as safe rather than threatening, patients can alter the brain networks reinforcing the pain, and neutralize it," said Ashar, now a postdoctoral researcher at Weill Cornell Medicine. For the randomized controlled trial, Ashar and senior author Tor Wager, now the Diana L. Taylor Distinguished Professor in Neuroscience at Dartmouth College, recruited 151 men and women who had back pain for at least six months at an intensity of at least four on a scale of zero to 10. Those in the treatment group completed an assessment followed by eight one-hour sessions of PRT, a technique developed by Los Angeles-based pain psychologist Alan Gordon. The goal: to educate the patient about the role of the brain in generating chronic pain; to help them reappraise their pain as they engage in movements they'd been afraid to do; and to help them address emotions that may exacerbate their pain.

Pain is not 'all in your head'

"This isn't suggesting that your pain is not real or that it's 'all in your head'," stressed Wager, noting that changes to neural pathways in the brain can linger long after an injury is gone, reinforced by such associations.
"What it means is that if the causes are in the brain, the solutions may be there, too." Before and after treatment, participants also underwent functional magnetic resonance imaging (fMRI) scans to measure how their brains reacted to a mild pain stimulus. After treatment, 66% of patients in the treatment group were pain-free or nearly pain-free compared to 20% of the placebo group and 10% of the no-treatment group. "The magnitude and durability of pain reductions we saw are very rarely observed in chronic pain treatment trials," Ashar said, noting that opioids have yielded only moderate and short-term relief in many trials. And when people in the PRT group were exposed to pain in the scanner post-treatment, brain regions associated with pain processing – including the anterior insula and anterior midcingulate – had quieted significantly. The authors stress that the treatment is not intended for "secondary pain" – that rooted in acute injury or disease. The study focused specifically on PRT for chronic back pain, so future, larger studies are needed to determine if it would yield similar results for other types of chronic pain. Meanwhile, other similar brain-centered techniques are already emerging among physical therapists and other clinicians who treat pain. "This study suggests a fundamentally new way to think about both the causes of chronic back pain for many people and the tools that are available to treat that pain," said co-author Sona Dimidjian, professor of psychology and neuroscience and director of the Renee Crown Wellness Institute at CU Boulder. "It provides a potentially powerful option for people who want to live free or nearly free of pain."

About this pain and psychology research news

Original Research: Open access. "Effect of Pain Reprocessing Therapy vs Placebo and Usual Care for Patients with Chronic Back Pain" by Yoni Ashar et al., JAMA Psychiatry.

Effect of Pain Reprocessing Therapy vs Placebo and Usual Care for Patients with Chronic Back Pain

Chronic back pain (CBP) is a leading cause of disability, and treatment is often ineffective. Approximately 85% of cases are primary CBP, for which peripheral etiology cannot be identified, and maintenance factors include fear, avoidance, and beliefs that pain indicates injury.

To test whether a psychological treatment (pain reprocessing therapy [PRT]) aiming to shift patients' beliefs about the causes and threat value of pain provides substantial and durable pain relief from primary CBP and to investigate treatment mechanisms.

Design, Setting, and Participants
This randomized clinical trial with longitudinal functional magnetic resonance imaging (fMRI) and 1-year follow-up assessment was conducted in a university research setting from November 2017 to August 2018, with 1-year follow-up completed by November 2019. Clinical and fMRI data were analyzed from January 2019 to August 2020. The study compared PRT with an open-label placebo treatment and with usual care in a community sample. Participants randomized to PRT participated in 1 telehealth session with a physician and 8 psychological treatment sessions over 4 weeks.
Treatment aimed to help patients reconceptualize their pain as due to nondangerous brain activity rather than peripheral tissue injury, using a combination of cognitive, somatic, and exposure-based techniques. Participants randomized to placebo received an open-label subcutaneous saline injection in the back; participants randomized to usual care continued their routine, ongoing care.

Main Outcomes and Measures
One-week mean back pain intensity score (0 to 10) at posttreatment, pain beliefs, and fMRI measures of evoked pain and resting connectivity.

At baseline, 151 adults (54% female; mean [SD] age, 41.1 [15.6] years) reported mean (SD) pain of low to moderate severity (mean [SD] pain intensity, 4.10 [1.26] of 10; mean [SD] disability, 23.34 [10.12] of 100) and mean (SD) pain duration of 10.0 (8.9) years. Large group differences in pain were observed at posttreatment, with a mean (SD) pain score of 1.18 (1.24) in the PRT group, 2.84 (1.64) in the placebo group, and 3.13 (1.45) in the usual care group. Hedges g was −1.14 for PRT vs placebo and −1.74 for PRT vs usual care (P < .001). Of 151 total participants, 33 of 50 participants (66%) randomized to PRT were pain-free or nearly pain-free at posttreatment (reporting a pain intensity score of 0 or 1 of 10), compared with 10 of 51 participants (20%) randomized to placebo and 5 of 50 participants (10%) randomized to usual care. Treatment effects were maintained at 1-year follow-up, with a mean (SD) pain score of 1.51 (1.59) in the PRT group, 2.79 (1.78) in the placebo group, and 3.00 (1.77) in the usual care group. Hedges g was −0.70 for PRT vs placebo (P = .001) and −1.05 for PRT vs usual care (P < .001) at 1-year follow-up. Longitudinal fMRI showed (1) reduced responses to evoked back pain in the anterior midcingulate and the anterior prefrontal cortex for PRT vs placebo; (2) reduced responses in the anterior insula for PRT vs usual care; (3) increased resting connectivity from the anterior prefrontal cortex and the anterior insula to the primary somatosensory cortex for PRT vs both control groups; and (4) increased connectivity from the anterior midcingulate to the precuneus for PRT vs usual care.

Conclusions and Relevance
Psychological treatment centered on changing patients' beliefs about the causes and threat value of pain may provide substantial and durable pain relief for people with CBP. ClinicalTrials.gov Identifier: NCT03294148.
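For readers who want to see how a standardized effect size such as Hedges g relates to the summary statistics reported above, the short Python sketch below computes it from two groups' means, SDs, and sample sizes. This is an illustrative calculation only, assuming the usual pooled-SD formula with a small-sample correction; the trial's published estimates come from its own statistical models, so this naive computation only approximates them (for the posttreatment PRT-vs-placebo contrast it lands close to the reported −1.14).

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Hedges g) from summary statistics.

    Uses the pooled standard deviation and the common small-sample
    correction factor J = 1 - 3 / (4 * (n1 + n2) - 9).
    """
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd          # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction
    return j * d

# Posttreatment pain scores from the abstract: PRT vs placebo (mean, SD, n)
print(round(hedges_g(1.18, 1.24, 50, 2.84, 1.64, 51), 2))  # about -1.13
```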
Mendix Studio has a lot of built-in logic that works out of the box (for example, buttons). But if you want to add custom logic, you need to create microflows. Microflows are a visual way of expressing textual program code. A microflow can perform actions such as creating and changing objects, showing pages, and making choices. You need to use microflows for the following cases:
- To change/extend the standard behavior of buttons
- To add custom logic to your application
- To integrate with other systems, databases, web services, etc.
Examples of using microflows can be the following:
- You check the values that an end-user has entered, and you either show the end-user an error message or another page
- You are creating a to-do list and you want to use custom logic when the status of an item on the list has changed
To view the microflows of your app in Studio, click the Microflows icon in the left menu bar.
2 Concepts and Definitions
A microflow looks like a flow chart. On a new microflow, a start event (the starting point of the microflow, represented by a green dot) and an end event (the endpoint of the microflow, represented by a red dot) are created by default. Start and end events are connected by a sequence flow (a line with an arrow), where you can add new events and activities. If Mendix Assist is on, it will be represented with a blue dot in the middle (for more information on what Mendix Assist is, see Mendix Assist). Before you start configuring microflows, familiarize yourself with the concepts and notions that the microflow editor uses:
|Activities||Activities perform different functions and are displayed as blue boxes. For example, with the help of an activity you can show end-users a home page. For more information on activities, see the Toolbox section.|
|Flows||Flows are displayed as arrows that connect microflow events and activities. For more information on flows, see the Flows section.|
|Events||Events are all other elements in a flow that are not activities (not blue boxes). A decision is an example of an event. For more information on events, see the General section.|
|Variable||A variable is temporary storage for data. Variables are used to store information and refer to it when needed. For this purpose, variables should have a unique name. In a microflow you can add a variable, assign a value to it, and then use it in microflow activities or events. You can then change this value later if necessary. For example, you can create a variable $Discount, assign it a value of 0.5, and use it to calculate a price for a customer. You can use the variable only in the microflow where it was created.|
|Parameter||Parameters contain global variables, which means that you can use one and the same parameter in different microflows.|
3 Performing Basic Functions
You can perform the following basic functions when working on microflows:
- Open a microflow
- Create a microflow
- Duplicate a microflow
- Copy and paste a microflow
- Delete a microflow
- Add elements to a microflow
3.1 Opening a Microflow
To open a microflow in Studio, do the following:
Click the Microflows icon in the left menu bar.
In the displayed list of microflows, select the one you want to open and click it.
The selected microflow is opened.
3.2 Creating a New Microflow
To create a new microflow and to start building custom logic, do the following:
- Click the Microflows icon in the left menu bar.
Select the module you would like to add a new microflow to and click the plus icon next to this module.
For more information on what modules are, see Domain Model.
Fill in the name of the microflow in the pop-up dialog and click Create.
The new microflow is created, and you can now add logic using events and activities.
3.3 Duplicating a Microflow
To duplicate a microflow, do the following:
Click the Microflows icon in the left menu bar.
In the side panel, click the ellipsis icon and select Duplicate in the drop-down menu.
The microflow is duplicated.
3.4 Copying and Pasting a Microflow
To copy and paste a microflow, do the following:
Click the Microflows icon in the left menu bar.
In the side panel, click the ellipsis icon and select Copy to clipboard in the drop-down menu.
Open the Studio app where you want to paste the microflow and press Ctrl+V or Cmd+V.
Your microflow is pasted. For more information on the copy/paste function in Studio, see the Copy/Paste Pages, Microflows, and Enumerations section in General Info.
3.5 Deleting a Microflow
To delete a microflow in Studio, do one of the following:
Open the microflow you want to delete and follow the steps below:
Open the Properties tab.
Click Delete at the bottom of the Properties tab.
Click the Microflows icon in the left menu bar and do the following:
In the side panel, click the ellipsis icon and select Delete in the drop-down menu.
3.6 Adding a New Event or Activity
To add a new activity or event to the microflow, do the following:
- Open the microflow you want to add the event or activity to.
- Open the Toolbox tab.
- Select the event or activity in the General, Object Activities, or Client Activities section.
- Drag and drop the event or activity in the microflow.
4 Toolbox Elements
The Toolbox tab contains elements that you can drag and drop onto a microflow. Below is a categorized overview of all elements. The following sections are used: General, Object Activities, Client Activities, Workflow Activities, and Variable Activities.
4.1 General
The General section contains various elements, such as a parameter and an end event. Elements available in the General section are described in the table below.
|Annotation||An annotation is an element that can be used to put comments in a microflow.|
|Break Event||A break event is used in loops only to stop iterating over a list of objects and continue with the rest of the flow in the microflow. For more information on the break event, see Break Event in the Studio Pro Guide.|
|Continue Event||A continue event is used in loops only to stop the current iteration and start the iteration of the next object. For more information on the continue event, see Continue Event in the Studio Pro Guide.|
|End Event||An end event defines the location where the microflow will stop. There can be more than one end event, for example when a Decision is used in the microflow. So, the number of end events depends on the number of possible outcomes of the microflow. For more information on the end event, see End Event in the Studio Pro Guide.|
|Decision||A decision splits the flow and should be used if you want to add conditions, for example, if you want to show different order forms for customers with different grades. This element is based on a condition and will result in several outgoing flows, one for every possible outcome. The microflow checks the condition and follows one of the flows. For more information on a decision and its properties, see Decision.|
|Loop||A loop is used to iterate over a list of objects and perform actions on each item of the list. For example, you can retrieve a list of orders from your database, then loop over this list and mark orders as processed.
For more information on a loop and its properties, see Loop.|
|Merge||A merge can be used to combine flows into one. If you previously split the flow of the microflow (for example, when adding a decision) and the same action now needs to be executed for these separated flows, you can combine the two (or more) paths using a merge. For more information, see Merge in the Studio Pro Guide.|
|Parameter||A parameter is input data for the microflow and can be used in any activity in the microflow. For more information, see Parameter in the Studio Pro Guide.|
4.2 Object Activities
The Object Activities section contains activities that interact with an object or objects (for more information on what an object is, see Domain Model). The Object Activities are described in the table below.
|Aggregate List||Aggregate List can be used to calculate aggregated values, such as the maximum, minimum, sum, average, and total number of objects, over a list of data objects. For more information, see Aggregate List in the Studio Pro Guide.|
|Change Object||Change Object can be used to change an existing data object or properties of this object. For more information, see Change Object in the Studio Pro Guide.|
|Commit||Commit saves changes you have not saved in the database yet. For more information, see Commit in the Studio Pro Guide.|
|Create Object||Create Object can be used to create a data object. For more information, see Create Object in the Studio Pro Guide.|
|Delete||Delete can be used to delete one data object or a list of objects. For more information, see Delete in the Studio Pro Guide.|
|Retrieve||Retrieve can be used to get one or more objects, either by getting another object through an association, or by retrieving objects from the database. For more information, see Retrieve in the Studio Pro Guide.|
4.3 Client Activities
The Client Activities perform activities in the client, for example, opening a page or showing a message. The Client Activities are described in the table below.
|Close Page||The Close Page activity closes the currently open page. For more information, see Close Page in the Studio Pro Guide.|
|Show Home Page||The Show Home Page action navigates to the home page. It goes to the same page as the end-user goes to after signing in and respects role-based home pages. For more information, see Show Home Page in the Studio Pro Guide. For details on setting the home page, see Navigation Document.|
|Show Message||With the Show Message action you can show a blocking or non-blocking message to an end-user. (A non-blocking message lets users continue their work in the app with the pop-up window open, while a blocking message does not let the user continue work until the pop-up window is closed.) For more information, see Show Message in the Studio Pro Guide.|
|Show Page||With the Show Page action you can show a page to the end-user. For more information, see Show Page in the Studio Pro Guide.|
4.4 Workflow Activities
The Workflow Activities section contains activities that interact with workflows. The Workflow Activities are described in the table below:
|Call Workflow||The Call Workflow activity starts the selected workflow.|
|Complete Task||The Complete Task activity sets an outcome the specified user task should follow. When a user task has several outcomes, you can choose the one the user task will follow.
For example, when end-users select that an employee is working from home, the user task will follow the dedicated path for it.|
|Show User Task Page||The Show User Task Page activity opens a user task page specified in the user task properties.|
|Show Workflow Page||The Show Workflow Page activity opens a workflow overview page.|
4.5 Variable Activities
The Variable Activities section contains activities that manipulate variables. The Variable Activities are described in the table below:
|Change Variable||Change Variable changes the value of an existing variable in the current microflow. For more information, see Change Variable in the Studio Pro Guide.|
|Create Variable||With the Create Variable activity you can create a variable and assign a value to it. The variable can be used to store, change, and reuse a value in activities of the microflow. For more information, see Create Variable in the Studio Pro Guide.|
For example, you can first add a variable named Discount to a microflow, and then change the variable Discount depending on the customer's grade: you can give a discount to customers with Gold and Silver grades. (A plain-code sketch of this logic appears at the end of this page.)
5 Flows
Flows are lines connecting the elements. You can find the description of flows in the table below:
|Sequence Flow||A sequence flow is an arrow that links events, activities, decisions, and merges with each other. Thus, it defines the order of execution. Flows always flow in one direction, and elements are executed one by one. This means that the microflow cannot follow two flows at the same time. Even if you have a Decision that splits a flow into several flows, the microflow will follow only one of them.|
|Annotation Flow||An annotation flow is a connection that can be used to link an annotation to one or more flow elements.|
6 Activity Icons
When configuring the activities of microflows, you will notice icons above or underneath activities. You can find the description of icons in the table below:
|Entity||Indicates that the data source for the activity is an entity.|
|Value||Indicates that the data source for the activity is a simple value, such as a decimal, Boolean, date and time, etc.|
|Commit||Indicates that the object will be committed. Committing means that the changes will be saved in the database. This can be useful, for example, when you want an object NewCustomer to be saved and updated in the database.|
|Commit without events||Indicates that the object will be committed, but without events. This means that the object will be saved in the database, but event handlers will not be triggered. For more information on event handlers, see Event Handlers in the Studio Pro Guide.|
|Refresh in Client||Indicates that the result of the activity will be displayed to an end-user.|
7 Main Documents in This Category
- Mendix Assist – describes an artificial intelligence-powered agent that helps you configure microflows
- Decision – explains what a decision is and describes its properties
- Loop – explains what a loop is and describes its properties
- Microflow Expressions – explains how to use microflow expressions
- Set & Change a Value for Different Activities in the Microflows – explains how to set and/or change a value for microflow activities
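Since the documentation above describes a microflow as a visual expression of textual program logic, the sketch below shows roughly what its running examples (a Discount variable changed by a decision on the customer's grade, and a loop that marks retrieved orders as processed) would look like as plain code. This is illustrative Python pseudocode, not Mendix syntax: the customer and order structures, the attribute names, and the commented-out commit step are hypothetical stand-ins for the corresponding microflow parameters, activities, and events.

```python
# Illustrative only: plain-Python equivalent of the microflow concepts above.
# In Mendix, each of these steps would be modeled visually as an activity or event.

def calculate_discount(customer):        # 'customer' plays the role of a parameter
    discount = 0.0                        # Create Variable activity
    if customer["grade"] == "Gold":       # Decision with several outgoing flows
        discount = 0.5                    # Change Variable activity
    elif customer["grade"] == "Silver":
        discount = 0.25
    return discount                       # End event returning a value

def process_orders(orders):
    for order in orders:                  # Loop over a retrieved list of objects
        order["processed"] = True         # Change Object activity
        # A Commit activity would follow here to save the change to the database.
    return orders

customer = {"grade": "Gold"}
orders = [{"id": 1, "processed": False}, {"id": 2, "processed": False}]
print(calculate_discount(customer))       # 0.5
print(process_orders(orders))
```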
The set time to favour the district came during the ministry of Robert Findlater, whose name is still remembered with reverence on Lochtayside after the lapse of nearly a century. This movement took place within the Established Church. Like many Highland parishes, those of Breadalbane are of great extent. Fortingall includes the long stretch of twenty miles which forms the secluded valley of Glenlyon, as well as large tracts beyond its mountainous walls. Kenmore runs westward from the village of that name and almost encloses Loch Tay. It is beyond the power of the most energetic minister to do justice to territories of such extent, and special efforts were accordingly made in many cases to accomplish their spiritual purposes by planting extra stations. The Royal Bounty Fund and the Society for Propagating Christian Knowledge stepped in to help with resources of men and money. The pious Lady Glenorchy placed a chapel in Strathfillan and gave financial assistance in other cases. Since the beginning of the eighteenth century, the part of Kenmore parish near the western extremity of Loch Tay had been provided for in this special way. Both sides of the loch were put under the charge of a mission minister whose stipend was drawn from the funds available for the purpose. Each side had its own place of meeting: the church on the north side being at the Milton of Lawers, and that on the south side at Ardeonaig. The manse stood near the latter building. In 1810 Robert Findlater came to take charge of the double station. He was a native of Kiltearn, in Ross-shire, and had been licensed by the Presbytery of Dingwall in October 1807 when he was only 21 years of age. He was not a man of careful scholarship, but he was especially adapted for the work that lay before him. He was evangelical, devoted, prayerful, and diligent, and accordingly well fitted to carry on the tradition of Farquharson's work. The field, he discovered, sadly needed cultivation. The roll of communicants was large and out of proportion to the number of the population; yet Findlater had to say: "I have cause to fear I cannot make up so many as would form a society in this place for prayer and Christian converse." An earnest of his ministry, however, was soon given. "It is said that the very first sermon he preached at Ardeonaig resulted in the awakening of a young woman." Findlater began and carried on his work in the most systematic manner. Soon after he entered on his duties, he started a regular house-to-house visitation of his people for the purpose of catechizing them. He used the Shorter Catechism as the basis of his instruction. "My plan," he said, "is to cause them to say over the Question first, which I generally illustrate two at a meeting . . . I can, in public catechizing, talk from my own experience and observation and I have found that without knowing the individual, I have hit the peculiar character whom I was addressing. I find as yet the people are willing to follow my plans, and many are busy at present learning the Questions. It is a new thing to them, and I am told there are some who have not been catechized for about fourteen years." Within a twelve-month period, he had personally examined and taught 1,600 persons. Public worship was conducted on each side of the loch on alternate Sabbaths. Although Loch Tay is regarded as a dangerously stormy place, Findlater was prevented from crossing on only one Sabbath during his eleven-year ministry. He also tried other methods of creating an interest in religious things.
In the summer of 1812, he began a Sabbath school at Ardeonaig. A year afterwards, he testified to its success, saying that he found "more pleasure in it than with the old people." A prayer meeting was also started, but it cannot have been a hopeful undertaking at the beginning, for, while he tells of its existence, he had to add, "We are very destitute of spiritual life." Indeed, during the first half-dozen years of his ministry, his letters are full of his sorrow over the hardness of his people's hearts. "I desire to be thankful," he writes on Christmas Day 1812, "that matters are on the whole not worse, some say there is an alteration to the better, but I fear the whole is from an open unconcern to formality, and though knowledge is acquiring, it would grieve a feeling mind to observe the vanity and want of concern of a rising generation." In spite of these drawbacks, Findlater's ministry was not without its results. Interesting stories could be told of how persons, even at a distance, came under his influence. Perhaps the most important event was the appearance of the people of Glenlyon at his services. About 1813 a young man from the glen got into the habit of crossing the eastern shoulder of Ben Lawers to attend his church. Next year he succeeded in inducing others to accompany him. "In spring 1816 the group increased to the number of perhaps twelve or fourteen, and during the whole of that summer a goodly number went regularly every Sabbath." There can be no doubt that the evident earnestness underlying that weary trudge over the dreary moorland did much to prepare the way for the revival. It is said that, as the summer of 1816 advanced, a more than ordinary interest was observed, especially among the men and women from Glenlyon. A largely attended sacrament at Killin helped to deepen the impression. The same ordinance was to be observed at Ardeonaig in the month of September. Findlater, as if anticipating the event, secured the best preachers then to be had. The celebrated John Macdonald of Ferintosh had assisted him several years before, but the Apostle's fame had increased since that time. News had also come of wonderful awakenings under his ministry in the north. Now Findlater had secured his services again, and information about his coming was spread far and wide. The whole preliminary services of that memorial sacramental season were impressive. On Friday evening, a special time of worship was held at Lawers. Dr Macdonald preached until the light failed. "Owing to the darkness of the night," says Campbell of Kiltearn, himself a native of the glen and living in it at the time, "the poor people of Glenlyon could not return home, and some of them were quite unfit for the journey, a sense of sin pressing so heavily upon their hearts. Those who were able to go home next morning brought with them the tidings of Mr Macdonald's arrival and of the effects of his preaching—news which excited an ardent desire to hear the extraordinary preacher and to witness scenes before unheard of in Breadalbane; while some desired to experience such influences themselves as were felt by others. The result was that the most of the Glenlyon people were at Ardeonaig on Sabbath." That Sunday the size of the concourse that met at Ardeonaig Church was unusual. Findlater estimates the number at between 4,000 and 5,000, a number all the more remarkable in that there was no large centre of population nearer than Perth.
The multitude was accommodated on the green braes of the hillside just above the present manse. Macdonald preached the action sermon. The discourse took nearly two hours and a half to deliver. The text was Isaiah 54:5, "For thy Maker is thine husband." The sermon was not only one which Macdonald frequently preached, but it was also one of his most famous efforts. Its effects on this occasion were notable. The whole multitude was moved. "The most hardened in the congregation," says Findlater, "seemed to bend as one man; and I believe if ever the Holy Ghost was present in a solemn assembly it was there. Mr Macdonald himself seemed to be in raptures. There were several people who cried aloud, but the general impression seemed to be a universal melting under the Word. The people of God themselves were as deeply affected as others, and many have confessed they never witnessed such a scene." A number dated their entrance on a new life from that afternoon. A Gaelic teacher who was accounted a godly man by all who knew him, and who took a leading part in every good work in the district where he lived and taught, declared that "he knew fifty persons who were awakened by that sermon at Ardeonaig, and that he was one of them himself." Next day, room was made for Macdonald to preach again. His text was Luke 16:2. Findlater states that the sermon "was in no way inferior to the last, though there were not so many who cried out. Several were pierced to the heart, and some came to speak to him after the sermon. I have seen and conversed with some of them myself, and have every reason to believe that they are under the gracious operations of the Holy Ghost." This was the beginning of a work which continued for the next three years with more or less intensity and fruitfulness. The sacrament took its place as part of the history of the district, and today is still remembered as "The Great Sacrament." The following Sabbath, Findlater preached at Lawers, and the agitation among several of his hearers showed that the impressions made had not been evanescent. The interest spread far and wide. Parishioners from Kenmore, Killin, and Fortingall flocked to Lochtayside in large numbers, attending either at Ardeonaig or Lawers, according as the services were held on the north or on the south side of the lake. So universal was the movement that Findlater could report, "there were few families without one, and some families two or three, professing deep concern about the salvation of their souls." The men of Glenlyon were particularly assiduous in their attendance, for the revival had its stronghold among them for as long as it lasted. When the fervour had to some extent passed away, it was reckoned that only five or six families in the whole glen had been left untouched. "These families were looked upon as objects of pity." During September and October of 1816, few remained at home who could face the rough road between them and Loch Tay. "One hundred persons might be seen in one company, climbing the hill separating these two districts of country, having to travel a distance of from nine to fifteen miles, and some even farther." About that time, however, the glen secured an evangelist for itself. In 1806 the Rev. James Kennedy had been ordained as Independent minister at Aberfeldy. He had done much to keep alive gospel truth in the whole surrounding district. Hardly knowing the full extent of what had taken place, he came in the course of his work to Glenlyon in October 1816. He found the valley aflame.
So eager were the people that three weeks passed before he returned home, driven away by sheer exhaustion. During that time, he preached sometimes as often as three times a day, and hardly a service was held but "some new case of awakening occurred." As opportunity offered, he returned again and again to the glen and proved an able and anxious coadjutor of the work. Several picturesque descriptions are given of his services. No adequate place of meeting was possible, and the crowded congregations had to seek what accommodation was to be found on the hillsides or in the woods. One wood in particular was used. In later days it was spoken of as "a place which the divine presence had rendered venerable." We read of the people listening eagerly to the gospel message, "sometimes amid bleak winds and drifting snows, with their lamps suspended fairy-like from the fir trees." Writing to Kennedy's son, the Rev. David Campbell of Lawers, a native of Glenlyon and one of the fruits of the revival, said, "I have seen your father stand almost knee-deep in a wreath of snow, while at the same time it was snowing and drifting in his face all the time he was preaching, and the people gathered around him, patiently and eagerly listening to the fervent truths that proceeded from his lips." During the winter of 1816 and the whole of 1817, the general attention to religion continued. The people still resorted in large numbers to Ardeonaig and Lawers. A temporary difficulty sprang up with the minister of Glenlyon, who thought that his brethren, Findlater and M'Gillivray of Strathfillan, a man of like evangelical spirit, were too zealously interested in his parish and too little concerned with what was due to himself as its religious overseer. The difference, however, was of short duration, and soon after Findlater was assisting him at his sacrament. Dr Macdonald preached at Loch Tay in April 1817 and helped at the sacrament in September, each time with manifest seals to his ministry. One discourse which he delivered on the Monday of the sacrament is still remembered, and the Hog's Park near the present pier of Lawers where the service was held is still pointed out because of its fame. "This appeared," says the record, "to be one of the most powerful and effective sermons he ever preached in Breadalbane. The fervent eloquence and the pathetic appeals near its conclusion seemed to move and constrain even the most careless. Many were deeply affected and agitated both in mind and body." "I have heard old people speak of his sermon," says Mr Macgregor of Dundee, a native of the district. "One man who was present told me that the weeping towards the end reminded him of the bleating when lambs are being weaned—loud, general, as if the whole hillside were bleating!" In October, a preacher, who does not give his name, visited Glenlyon and conducted a service at Invervar. His report is interesting. "As we could not," he says, "like Mr Kennedy once before, preach at night by candlelight in the open air, the people applied for a large flour mill which was near, and though busy at work, it was instantly stopped to give place to the bread of immortal life. When the broad two-leaved door was thrown open by the eagerness of the people to gain admittance, the press was so violent that we feared what might be the consequences; a vast number for want of room stood contentedly before the door, beaten by the high wind and pierced by the cold. . . .
I was so wedged in where I stood that some of those behind had their chins placed almost on my shoulders. . . . It was ten o'clock when we dismissed." By this time, about a hundred persons in Glenlyon alone had professed conversion since the preceding harvest. For more, look under the biography for the Breadalbane Revivals.

This church would probably have been built as a result of the revivals. Glenlyon is a remote area, not a town. It runs along the River Lyon, roughly parallel with Loch Tay but with a mountain range in between. This was the centre of the revival, although the people all came down to Loch Tay for the meetings.
Here are the instructions for iMOM’s 15 Fun Backyard Games. First, print out the cards for the 15 Fun Backyard Games and place them in a hat. Let your children draw a card and head to the great outdoors to get the fun started. 1. Amoeba Tag - Two people are it. - They hold hands and chase people. - Any person they catch joins the chain by linking hands. - When another person is caught, they can stay together or split 2 by 2, but they must split even numbers and can link together at will. - This game is played until nobody is left. 2. Band-Aid Tag - One person is “It.” - Whenever someone is tagged by “It,” they must hold a band-aid (their hand) on the spot where they were tagged. - Then the game continues. - When someone runs out of band-aids, (they get tagged three times), they are frozen until two other people come over to free them. - The two other people need to tag the frozen person at the same time and count to 5. - Let the game continue for as long as it remains exciting and fun. - Switch the person who is “It” often. 3. Car Lot - Pick a category for Car Lot (i.e. fruit, cars, candy, etc.). - Once the category has been picked, select one child (or Mom is usually good at this part) to be “It” and send them to the middle of the playing area. Everyone else lines up at one end of the playing field. - Once lined up, the child (or Mom) that is “It” yells out three items within the category. - The children independently choose 1 of the 3 items to be. - When the “It” child calls out 1 of the 3 choices, everyone who picked that choice runs to the other end of the playing field trying to avoid being tagged by one of the “It” people. - If tagged, that child must sit down right when they are tagged. - When sitting down, the child may tag someone. - If someone is tagged by a seated person, the person sitting down may get back up and play the game. Example: “It” calls out: The category is Fruit. “It” calls out the 3 choices: a. Apples b. Oranges c. Strawberries (Wait for children to SILENTLY choose item) “It” calls: Apples (Apples run) 4. Catch the Dragon’s Tail Equipment Needed: One rag or flag per dragon - Divide the children into teams of 6 or 8. - Have the children in each team form a line and then put their hands on the waist of the person in front of them to create a “dragon.” - The last person of the dragon, or tail, is given a rag to hang out of the back of their pants. - The object of the game is for the head to catch the tail and pull the rag from that player. - If he succeeds, he will become the new tail. - The old tail does not become the new head, he stays in his same order. - This game can be given a time limit in case a player is having trouble catching the tail. 5. Clothespin Tag Equipment Needed: Clothespins - Hand any number of clothespins to all the kids. (The more clothespins everyone starts with, the longer the game lasts.) Have them pin them to their shirt sleeves, hems, pockets, etc. - Then have them all scatter on the playing field. - On the signal, everyone runs around snatching clothespins from one another, kneeling down to attach their newly-acquired prizes. - At the end of the game (usually a time-limit), the one with the most clothespins wins. 6. Dirty Diaper Tag - One person is “It.” - Whenever someone gets tagged, they become frozen until someone, who has not been tagged, crawls through their legs. 7. Dizzy Izzy Bat Equipment needed: Two baseball bats - Split the group into two teams and designate a spotter for each team. - Place the two bats on the ground. 
Leave a lot of space between each of the bats (about 15-20 feet). - Have each team line up (relay style: one person behind the other) about forty feet from their team's bat. - When the game begins, the first person in each line must run to the bat, pick it up, place the large end on the ground, bend over so their forehead is on the flat end of the bat, and spin around ten times. - They must then run back toward their group and tag the next person's hand in line. - The entire group must complete the task. - The first team to have all members complete the task wins. 8. Everybody's It - Designate the playing area and the boundaries. - Have the children spread about in a scatter formation. - On the signal to begin, all children are "It" and may tag any other children. - Once a player is tagged, they leave the playing area to perform the designated exercise or skill. Example: When you get tagged, you must leave the area and perform 10 jumping jacks; then you may enter the game again. - If two players tag each other at the same time, they must both leave the playing area. 9. Fire on the Mountain - Have the group lie flat on their backs. - When you say, "Fire on the mountain," the group has to stand up as fast as possible. - The last one up has to then sit out until the end, or do 10 jumping jacks, push-ups, sit-ups, etc. - When the group is on their backs, they are to lie perfectly still. - If you say something other than "Fire on the mountain" (Mickey Mouse, Montana, Mazda, etc.) and they flinch or begin to get up, then they have to sit out or do the 10 jumping jacks. 10. Shark and Octopus Tag - Everyone begins the game as an octopus and stands on one side of the playing field. - One person is chosen to be a shark, and they will stand in the middle of the playing field. - Play begins when the shark calls out, "Octopus, Octopus, swim in my ocean!" - All players must run across the playing field, trying to get to the other side without getting tagged by the shark. - Anyone who is tagged must sit down where they are tagged. - They now become the shark's helpers. - When the shark calls out, "Octopus, Octopus, swim in my ocean!" again, the players will try to run back to the other side. - Anyone who runs within arm's reach of the sitting players and gets tagged must sit down. - The game continues until there is only one person left. 11. Snow White and the Seven Dwarfs Game Setup: This game is suitable for 9–16 children. Before play begins, determine how long you will want to play (20 minutes, 30 minutes, etc.) and then designate the following areas on your playing field. (These areas are for illustrative purposes only; use whatever works for your specific location.) - Starting Line - Snow White's Castle - Dungeon Assign each child to be one of the following Seven Dwarfs (double up if needed): Doc, Grumpy, Happy, Sleepy, Bashful, Sneezy, and Dopey. Select one player to be the Wicked Witch. Select one player to be Snow White. Game Play: Explain that the witch has cast a spell on the dwarfs. The object of the game is to capture as many dwarfs as possible and imprison them in the Dungeon before the time is up without them being rescued by Snow White. - The witch calls out the name of one of the dwarfs, and they must run to the opposite end of the yard and back without being tagged by the Wicked Witch. - If a dwarf gets tagged, he must go to the Dungeon. - Once in the Dungeon, the dwarf can only be released by Snow White, who normally stays in the safety of her castle. - But once Snow White leaves the safety of her castle, the Wicked Witch can tag her.
- If the Wicked Witch tags Snow White, or if she has any dwarfs at the game's end, the witch wins. - If all the dwarfs are safe at the end of the game, as well as Snow White, the dwarfs and Snow White win the game. 12. Sock Tag - Have the children take their shoes off and pull their socks slightly off. - Children crawl around on the ground on their hands and knees and try to steal the other children's socks. - Once both your socks are stolen, you're out. - The last person with a sock on wins! 13. Spider Tag Equipment Needed: 1-3 foam pool noodles - Divide the children into teams of 3. - To build a spider, the three players forming a team will interlock elbows while standing back to back. - Select 1 or 2 teams to be the "Its" and give the "Its" 1 noodle per team. - On your signal, the spiders who are "It" will attempt to tag another spider with the noodle (below the shoulders). - Upon tagging another spider, they will give the noodle to the tagged spider and leave. - The tagged spider must give the noodle to any one of their players, spin around 3 times, and then they may chase after any other spider. Following are the basic rules of the game: - Spiders must be connected when tagging another team. - If a fleeing team comes apart, they are counted as tagged. - Each time a spider is tagged, a different player in the group must have the noodle. - Upon being tagged, spiders must spin 3 times before chasing. - Taggers must tag below the shoulders. 14. Spoon Game Equipment needed: Two cups, two spoons, two sections of long string (6–8 feet each). - Split the group into two teams. - Place the cups on the ground about 10 feet apart from each other, then have the teams line up about 30 feet away from the cups. - This will be a relay race. - Have each team line up their members one behind the other. - The first person in each line for each team ties the string around their waist with the spoon tied to the hanging end (spoon side to your back). - When the game begins, the first person on each team with the string around their waist races down, squats over the cup, and tries to get their spoon in the cup without using their hands. - Once they get it in, they run back to the line and help tie the string around the next person's waist. - The first team to successfully have all members get the spoon into the cup and return to the starting line is declared the winner. 15. Toilet Tag - Select 1-3 children to be "It." - All players scatter around the playing area, except the "Its." - On a signal, the "Its" attempt to tag the players. - When a player is tagged, he must assume a toilet position (one knee on the ground and the other knee up, one arm straight out to the side). - The tagged player must remain frozen in this position until they are rescued by another player. - To be rescued, a player must sit on the knee of the frozen player, grab their straight arm, and make a "Whoosh" sound while pulling the arm down to simulate the flushing of a commode. - Once a player flushes the toilet of a frozen player, the frozen player is freed. - The game ends when a certain time limit has been met or when all players are frozen.
In May, thousands of protesters marched in downtown Chicago, joining cities around the world in decrying the latest Israeli attacks against Palestinians. The attacks began when Israeli security forces stormed Jerusalem’s al-Aqsa mosque on May 7 and fired on worshippers with tear gas and rubber bullets in response to demonstrations against the eviction of Palestinian families in Sheikh Jarrah. Hamas, the militant political party that controls a majority of seats in the Palestinian Legislative Council and is the administrative government of Gaza, demanded the Israeli security leave the mosque by May 10. When the deadline passed, Hamas militants fired rockets into Israeli territory, and Israel began bombing Gaza. Between May 10 and 21, Israeli airstrikes and artillery killed 256 Palestinians in Gaza, including sixty-six children. Rockets fired by Hamas killed thirteen people in Israel, including two children. Hamas first called for a ceasefire on May 13; after international protest, Israel agreed to one on May 21. The evictions in Sheikh Jarrah, and the violence they precipitated, are only the latest expulsions of Palestinians from their ancestral homes to make way for settlements—evictions that began decades ago during the creation of Israel, according to Hatem Abudayyeh, the executive director of the Arab American Action Network (AAAN) and national chair of the U.S. Palestinian Community Network (USPCN) in Chicago. And May’s protests in Chicago are the latest in a long tradition of organizing for Palestinian solidarity here. Abudayyeh said the evictions in Sheikh Jarrah are a continuation of the Nakba, or “the catastrophe, which is what we call the founding of the state of Israel.” During the 1947-49 war that established the state of Israel, more than 750,000 Palestinians—about eighty percent—fled or were forced from their ancestral homes by Israeli forces, who also killed 13,000, according to figures from American Muslims for Palestine. “That’s what has led to us becoming a refugee population,” Abudayyeh said. “It is what led to the colonization of Palestine, and, ultimately, what led to the occupation of the rest of it. It’s not a brand-new conflict.” Since then, Palestinians in Gaza and Jerusalem, as well as refugees in Chicago, have organized resistances to their displacement. In 1987, the First Intifada began as a series of protests against Israel’s then-twenty-year occupation of the West Bank and Gaza. During the Second Intifada, which lasted from 2000 to 2005, Palestinians rose up in response to the Israeli occupation and policies that violated international law and deprived Palestinians of their basic human rights. In 2014, the Israel Defense Forces invaded Gaza, sparking protests in Chicago and around the world, and in 2018 and 2019, solidarity marches were held in support of the Gaza Border Protests. Chicago and the Southwest suburbs have an extensive immigrant and refugee Palestinian community that has been active in organizing in solidarity with Palestine for decades. According to Dr. Louise Cainkar, Professor of Sociology & Social Welfare and Justice at Marquette University and author of a number of books on Arabs in the U.S., Palestinians have been living in Chicago for the last hundred years. The Chicago metropolitan area has the largest concentration of Palestinians in the United States, according to Cainkar. The Census does not include information on how many residents come from the Middle East and North Africa, because the federal government labels them as white. 
Cainkar’s research shows that about 200,000 Palestinian Americans and their descendants live in the Chicago metropolitan area today. Early Palestinian immigration, according to Cainkar, consisted mainly of young men living in the South Loop. “You’re hypervisible when it comes to surveillance and hate crimes and discrimination and bullying, but you’re totally invisible when it comes to getting any kind of statistical information,” says Cainkar. “That’s a problem.”

Abudayyeh’s father, Khairy, immigrated to Chicago at the age of twenty in 1960 from Al Jib—a village near Jerusalem in the West Bank. He became a student organizer at Roosevelt University, having been an activist back home. “He had lived the Nakba,” Abudayyeh said. Khairy was eight years old during the Nakba. For many Palestinians who emigrated at that time, “it was very difficult economically to live in a situation in which you see the colonization happening next to you,” he said.

In the 1980s, Palestinian families moved to Chicago’s Southwest suburbs such as Burbank, Oak Lawn, Hickory Hills, Bridgeview, Alsip, and Palos Hills. Palestinians contributed to the formation of the Mosque Foundation, a large mosque that opened in 1981 in Bridgeview, according to Cainkar.

Abudayyeh’s father was in Chicago during the 1967 Six-Day War. Approximately two years later, Khairy went back to Palestine, married Abudayyeh’s mother, Khairyeh, and they both returned to Chicago to raise a family. Khairy co-founded the Arab Community Center in 1975. The Center, which first opened on the Northwest Side and later moved to 63rd and Kedzie, focused on foreign policy and education about Palestine and the Arab homeland in North Africa and the Middle East, and is the city’s “hub for the Arab progressives and the Arab left,” Abudayyeh said. The Arab Community Center became a springboard for what is now the Arab American Action Network (AAAN), which combines political organizing with helping Arab immigrants and refugees in Chicago access social services.

While raising her five children and holding a part-time job, Abudayyeh’s mother also became an activist. She joined the Arab Community Center and served for some time as the president of the local chapter of the Palestinian Women’s Association, a national organization. “My siblings and I learned about Palestine, about struggle, about the fight for national liberation by osmosis,” Abudayyeh said.

When Abudayyeh was a child, his parents’ Northwest Side living room was often filled with friends and colleagues who would discuss the many issues tied to Palestinian self-determination over the years: from the Lebanon War in 1982 and the Intifada of 1987 to the U.S. War in Iraq in 1991, the Oslo Accords in 1993, and more. “They spoke in an ideological language that I did not learn until much later in life, and they were so impressive,” Abudayyeh said. “I saw my mother and her colleagues as the organizers I wanted to emulate, those who dedicated their entire lives to their communities, those who brought the issue of Palestine to the forefront of U.S. discourse from the mid-seventies to the early nineties.”

By the time Abudayyeh was a teenager, he already had a political education. After attending the University of California, Los Angeles, Abudayyeh returned to his community in Chicago and began working as a youth program director at the Arab Community Center started by his father. Three years later, he was appointed executive director, before his father passed away.
Abudayyeh is also the national chair of the United States Palestinian Community Network (USPCN), formed in 2006. “USPCN is kind of like the legacy of the Arab Community Center. It’s [run by] the children of the leaders.” With its largest membership in Chicago, USPCN was born to revitalize grassroots organizing in the Palestinian community and work on campaigns and projects around Boycott, Divestment, Sanctions (BDS), defense against repression, political prisoners, and more. “Our strategy for organizing is really consciousness-raising,” said Abudayyeh.

And because of this consciousness-raising, according to Abudayyeh, he and many Palestinian activists and supporters have come under attack by federal law enforcement. In 2010, the FBI raided Abudayyeh’s North Side home and thirteen others under the pretense that they were supporting Palestinians back home. At the time, he was at Advocate Lutheran Hospital in Des Plaines visiting his sick mother. His five-year-old daughter and then-wife were home. The FBI agents took possession of his laptop, paper records and anything with the word “Palestine.” As the agents ransacked his house, Abudayyeh’s daughter Maisa Assata—who was just five years old at the time—asked in Arabic, “‘Why are they looking at our Arabic language books? They don’t seem like they would read Arabic,’” he said. “It was the cutest thing.”

Abudayyeh said the raids and subpoenas were harassment, and an attempt by the federal government to repress activists’ rights to free speech and assembly, something often done to other immigrants and Black people in the U.S. The feds eventually subpoenaed a total of twenty-three Palestinian activists, including Abudayyeh, and ultimately did not arrest, charge, or indict anyone after the raids. “They didn’t realize that all twenty-three of us would be so unified and would have such massive support, and they came to the realization that they wouldn’t be able to force any of us to testify.” Abudayyeh said the twenty-three activists immediately spoke out against the raids. “There’s nothing that we’re doing that is illegal. Support for national liberation movements is our right. Solidarity is not a crime.” Abudayyeh said the FBI dropped the case since there is an eight-year statute of limitations.

“They’ve become experts at criminalizing us, whether it’s Black communities, immigrant communities, Palestinian, Arab and Muslim communities,” he said. “And they do it for political purposes. You criminalize Mexicans and Central Americans, so that you make it so that you can make the political argument as to why you want to militarize and shut down borders; you criminalize Palestinians so that you make the political argument as to why you have to support the settler-colonial, apartheid, racist state of Israel.”

In 2013, agents from the Department of Homeland Security arrested Rasmea Odeh, a leader in the Chicago Palestinian and Arab communities and an alleged member of the Popular Front for the Liberation of Palestine. Odeh spent the 1970s in an Israeli prison based on a confession she says she was raped and tortured into falsely giving to Israeli security forces. In a federal court in Chicago in 2013, she was indicted for Unlawful Procurement of Naturalization based on the government’s claim that she did not disclose the imprisonment on an immigration form twenty years prior. Rasmea, her supporters, and her lawyers say that the immigration charge is a justification to attack her for her support of the Palestine liberation movement.
After many years in legal proceedings, Odeh was stripped of her US citizenship in a federal court and was deported to Jordan in 2017.

Recently, the USPCN, the organization chaired by Abudayyeh, started a national campaign to free Ata Khattab, who was arrested by the Israeli military in the occupied West Bank in February. Abudayyeh says Khattab has not been charged and has been in jail ever since. Khattab is a member of a dance troupe performing traditional Palestinian dances. “Because of his cultural work, and being a leader in the cultural work, being an educator around these same things we do here…he was arrested,” said Abudayyeh.

That Palestinians, Black Lives Matter, and immigrants in Chicago demonstrated together last month is not a coincidence, and there is hope, he said. “That was a part of my upbringing. So the idea that I’m anti-racist today or that I’m unequivocally in support of the Movement for Black Lives and last year’s uprising for George Floyd, that’s not a surprise to anyone who would have known my parents, colleagues, comrades, and friends.”

Abudayyeh says that under international law and the Universal Declaration of Human Rights, Palestinian refugees have the right to return to the homes from which they were exiled, but Israel has not allowed them back. “To me, the liberation of Palestine is from the river to the sea,” he says, referring to the combined areas of the West Bank, Jerusalem, the Gaza Strip and the territory now controlled by Israel.

He said he thinks relationships with other oppressed groups, such as those fighting for immigrants’ and workers’ rights, the Black Liberation Movement, and women, are essential, and that they all share the same enemy: the U.S. government, which supports Israel politically, militarily, economically and diplomatically. “Last year’s George Floyd uprisings and this year’s Palestinian resistance and worldwide support is proof” that such intersectional relationships can and will happen, he said.

Abudayyeh’s mother passed away some time ago, and he said he has been thinking of her. “My daughter, Maisa Assata, called me excitedly to say that a butterfly landed on her arm,” he said. “‘I looked up,’ Maisa Assata told me, and she realized today is May 31, exactly ten years since her Sitto [grandmother] passed away. ‘Sitto loved butterflies, and she must have sent this one,’ she said.

“Ya Ummi [Oh, mother], you were always right,” Abudayyeh said. “Palestine will win.”

Correction, June 9, 2021: An earlier version of the story was updated with the correct organization that Abudayyeh chairs. The Weekly regrets this error.

Alma Campos is the Weekly’s immigration editor. She last wrote about COVID-19 vaccination access in Latinx communities.
The Edinburgh Lectures Chapter 15, The Soul by Thomas Troward Having now obtained a glimpse of the adaptation of the physical organism to the action of the mind we must next realize that the mind itself is an organism which is in like manner adapted to the action of a still higher power, only here the adaptation is one of mental faculty. As with other invisible forces all we can know of the mind is by observing what it does, but with this difference, that since we ourselves are this mind, our observation is an interior observation of states of consciousness. In this way we recognize certain faculties of our mind, the working order of which I have considered earlier; but the point to which I would now draw attention is that these faculties always work under the influence of something which stimulates them, and this stimulus may come either from without through the external senses, or from within by the consciousness of something not perceptible on the physical plane. Now the recognition of these interior sources of stimulus to our mental faculties, is an important branch of Mental Science, because the mental action thus set up works just as accurately through the physical correspondences as those which start from the recognition of external facts, and therefore the control and right direction of these inner perceptions is a matter of the first moment. The faculties most immediately concerned are the intuition and the imagination, but it is at first difficult to see how the intuition, which is entirely spontaneous, can be brought under the control of the will. Of course, the spontaneousness of the intuition cannot in any way be interfered with, for if it ceased to act spontaneously it would cease to be the intuition. Its province is, as it were, to capture ideas from the infinite and present them to the mind to be dealt with at its discretion. In our mental constitution the intuition is the point of origination and, therefore, for it to cease to act spontaneously would be for it to cease to act at all. But the experience of a long succession of observers shows that the intuition can be trained so as to acquire increased sensitiveness in some particular direction, and the choice of the general direction is determined by the will of the individual. It will be found that the intuition works most readily in respect to those subjects which most habitually occupy our thought; and according to the physiological correspondences which we have been considering this might be accounted for on the physical plane by the formation of brain-channels specially adapted for the induction in the molecular system of vibrations corresponding to the particular class of ideas in question. But of course we must remember that the ideas themselves are not caused by the molecular changes, but on the contrary are the cause of them: and it is in this translation of thought action into physical action that we are brought face to face with the eternal mystery of the descent of spirit into matter; and that though we may trace matter through successive degrees of refinement till it becomes what, in comparison with those denser modes that are most familiar, we might call a spiritual substance, yet at the end of it it is not the intelligent thinking principle itself. The criterion is in the word "vibrations." 
However delicately etheric the substance its movement commences by the vibration of its particles, and a vibration is a wave having a certain length, amplitude, and periodicity, that is to say, something which can exist only in terms of space and time; and as soon as we are dealing with anything capable of the conception of measurement we may be quite certain that we are not dealing with Spirit but only with one of its vehicles. Therefore although we may push our analysis of matter further and ever further back—and on this line there is a great deal of knowledge to be gained—we shall find that the point at which spiritual power or thought-force is translated into etheric or atomic vibration will always elude us. Therefore we must not attribute the origination of ideas to molecular displacement in the brain, though, by the reaction of the physical upon the mental which I have spoken of above, the formation of thought-channels in the grey matter of the brain may tend to facilitate the reception of certain ideas. Some people are actually conscious of the action of the upper portion of the brain during the influx of an intuition, the sensation being that of a sort of expansion in that brain area, which might be compared to the opening of a valve or door; but all attempts to induce the inflow of intuitive ideas by the physiological expedient of trying to open this valve by the exercise of the will should be discouraged as likely to prove injurious to the brain. I believe some Oriental systems advocate this method, but we may well trust the mind to regulate the action of its physical channels in a manner suitable to its own requirements, instead of trying to manipulate the mind by the unnatural forcing of its mechanical instrument. In all our studies on these lines we must remember that development is always by perfectly natural growth and is not brought about by unduly straining any portion of the system. The fact, however, remains that the intuition works most freely in that direction in which we most habitually concentrate our thought; and in practice it will be found that the best way to cultivate the intuition in any particular direction is to meditate upon the abstract principles of that particular class of subjects rather than only to consider particular cases. Perhaps the reason is that particular cases have to do with specific phenomena, that is with the law working under certain limiting conditions, whereas the principles of the law are not limited by local conditions, and so habitual meditation on them sets our intuition free to range in an infinitude where the conception of antecedent conditions does not limit it. Anyway, whatever may be the theoretical explanation, you will find that the clear grasp of abstract principles in any direction has a wonderfully quickening effect upon the intuition in that particular direction. The importance of recognizing our power of thus giving direction to the intuition cannot be exaggerated, for if the mind is attuned to sympathy with the highest phases of spirit this power opens the door to limitless possibilities of knowledge. 
In its highest workings intuition becomes inspiration, and certain great records of fundamental truths and supreme mysteries which have come down to us from thousands of generations bequeathed by deep thinkers of old can only be accounted for on the supposition that their earnest thought on the Originating Spirit, coupled with a reverent worship of It, opened the door, through their intuitive faculty, to the most sublime inspirations regarding the supreme truths of the universe both with respect to the evolution of the cosmos and to the evolution of the individual. Among such records explanatory of the supreme mysteries three stand out pre-eminent, all bearing witness to the same ONE Truth, and each throwing light upon the other; and these three are the Bible, the Great Pyramid, and the Pack of Cards—a curious combination some will think, but I hope in another volume of this series to be able to justify my present statement. I allude to these three records here because the unity of principle which they exhibit, notwithstanding their wide divergence of method, affords a standing proof that the direction taken by the intuition is largely determined by the will of the individual opening the mind in that particular direction. Very closely allied to the intuition is the faculty of imagination. This does not mean mere fancies, which we dismiss without further consideration, but our power of forming mental images upon which we dwell. These, as I have said in the earlier part of this book, form a nucleus which, on its own plane, calls into action the universal Law of Attraction, thus giving rise to the principle of Growth. The relation of the intuition to the imagination is that the intuition grasps an idea from the Great Universal Mind, in which all things subsist as potentials, and presents it to the imagination in its essence rather than in a definite form, and then our image-building faculty gives it a clear and definite form which it presents before the mental vision, and which we then vivify by letting our thought dwell upon it, thus infusing our own personality into it, and so providing that personal element through which the specific action of the universal law relatively to the particular individual always takes place. Whether our thought shall be allowed thus to dwell upon a particular mental image depends on our own will, and our exercise of our will depends on our belief in our power to use it so as to disperse or consolidate a given mental image; and finally our belief in our power to do this depends on our recognition of our relation to God, Who is the source of all power; for it is an invariable truth that our life will take its whole form, tone, and color from our conception of God, whether that conception be positive or negative, and the sequence by which it does so is that now given. In this way, then, our intuition is related to our imagination, and this relation has its physiological correspondence in the circulus of molecular vibrations I have described above, which, having its commencement in the higher or "ideal" portion of the brain flows through the voluntary nervous system, the physical channel of objective mind, returning through the sympathetic system, the physical channel of subjective mind, thus completing the circuit and being then restored to the frontal brain, where it is consciously modelled into clear-cut forms suited to a specific purpose. 
In all this the power of the will as regulating the action both of the intuition and the imagination must never be lost sight of, for without such a central controlling power we should lose all sense of individuality; and hence the ultimate aim of the evolutionary process is to evolve individual wills actuated by such beneficence and enlightenment as shall make them fitting vehicles for the outflowing of the Supreme Spirit, which has hitherto created cosmically, and can now carry on the creative process to its highest stages only through conscious union with the individual; for this is the only possible solution of the great problem, How can the Universal Mind act in all its fulness upon the plane of the individual and particular?

This is the ultimate of evolution, and the successful evolution of the individual depends on his recognizing this ultimate and working towards it; and therefore this should be the great end of our studies. There is a correspondence in the constitution of the body to the faculties of the soul, and there is a similar correspondence in the faculties of the soul to the power of the All-originating Spirit; and as in all other adaptations of specific vehicles so also here, we can never correctly understand the nature of the vehicle and use it rightly until we realize the nature of the power for the working of which it is specially adapted. Let us, then, in conclusion briefly consider the nature of that power.
The townsite sits serenely at the junction of the Fortymile and the Yukon rivers. The original inhabitants called this place Cheda Dek, and today the area is quiet, with little sign of the busy community of miners and adventurers that once lived there. For the First Nations peoples who long occupied the location as a traditional seasonal camp, the town of Fortymile represents the first real contact between the First Nations peoples and the expanding white world.

During the 1800s, there were no land claim settlements in the territory as Yukoners know them today. There was no Umbrella Final Agreement (UFA), First Nations peoples’ culture was considered “pagan,” and the churches mobilized to convert these “heathens” to the non-native religion. Along with the systemic expropriation of the land, miners exploited the First Nations peoples, taking wives and introducing many of the illnesses and deadly habits that plague aboriginal peoples today. History records that the church and the government mobilized to protect First Nations peoples, but the result was the loss of First Nations rights and the destruction of First Nations governments and their associated cultures.

There were decades of change, and many nomadic First Nations peoples adapted to living in the bush, cutting wood for the riverboats, trapping and mining. In 1957, the road system expanded to Dawson City and the paddlewheelers travelled the river no more. Many bush families were forced to move into the towns, where many lacked the skills to survive within that society. Unable to adapt, many were lost to the bottle.

Elijah Smith started the land claims process for Yukon First Nations, which the government ignored for decades. When Pierre Trudeau repatriated the Canadian Charter of Rights, aboriginal rights were enshrined. This allowed the Yukon’s First Nations peoples to force the Canadian government to come to the land claims table and start negotiations.

According to the UFA, the historic townsite of Fortymile is to be co-owned and co-managed by the Yukon government and the Tr’ondek Hwech’in First Nation. The actual management is handled by the Yukon government’s historic sites unit and the Tr’ondek Hwech’in heritage unit. The setup is very similar to the one used for Fort Selkirk. “It is not settlement land, though it is surrounded by settlement lands,” said Michel Edwards of the Tr’ondek Hwech’in heritage unit. “The land is held in joint title fee simple two owner piece of property.

“It (the townsite) has importance as one of the negative things that has happened… that has to be remembered as part of our history and heritage. It was the site of the first real extended contact between the Han native people and whites.”

The names given by the non-native people to locations along the upper Yukon River were based upon the distance from Fort Reliance, so the Fortymile and Sixtymile towns were named for their distance from Fort Reliance. The real significance of the Fortymile to all First Nations people post-1886 is that “it was the first mission school in the Yukon…the first place that any native people in the Yukon were taken to (was) the St. James Anglican Mission,” said Edwards.

First Nations history far predates contact at Fortymile. “The reason why Han people were there in the first place was that it was an important fishing and hunting location,” said Edwards. “It was the main intercept point for them for the Fortymile caribou herd, which at the time of contact was estimated to number 550,000 animals.
Now there are about 10,000 animals.”

“That point (the Fortymile townsite) used to be a major river crossing point for the caribou…then hunters in canoes could easily get caribou. This herd’s range is slowly extending back into the Yukon; for a while they were just in Alaska. Also it was an important grayling fishery in the spring and a good spot to fish salmon too.”

White society led to year-round native habitation of the site, he added. “There started to be a permanent Tr’ondek Hwech’in settlement there because there was a permanent non-native settlement there, which is exactly what happened in Dawson (City) also. The people didn’t live in one place but they did after contact. Use of the site is now dated back about 2,300 years.”

The migration of the river channel and flood plains in the Yukon River Valley makes finding earlier sites at this location difficult. Many Yukoners think that the history of the territory starts at the gold rush, but the territory was occupied by First Nations people for centuries before the recent arrival of non-native peoples. Artifacts at Fort Selkirk have been dated back 6,000 years, or more, because that site is higher and drier.

Today the restoration of the townsite of Fortymile continues on a seasonal repair schedule. The Yukon government considers the Fortymile to be one of the most important historic sites in the Yukon. “It is the first town, first post office, first church, first mission school, first RCMP (NWMP) post…it is estimated that 600 to 800 people were living there,” said Edwards.

Ottawa sent the Northwest Mounted Police to Fortymile because the majority of the inhabitants were American. A surviving image shows Jack McQuesten “opening a post office there, and the sign on the front of the post office said Michell, Alaska, Jack McQuesten postmaster,” said Edwards. The Fortymile townsite was also the location of the Yukon’s first mission school.

Since the histories of the two peoples are intertwined on this site, it is impossible to separate their histories post-contact. The break in tradition caused by contact destroyed much of the oral history carried by the Tr’ondek Hwech’in people.

Today there is new life in the Fortymile. There are First Nations peoples once again leading tours and sharing the stories of their past. Modern restoration methods proceed slowly, limited by the funding available to the Yukon government. Restoration is guided by the publication The Standards and Guidelines for the Conservation of Historic Places in Canada. The visual character of Dawson City was saved by the intervention of Parks Canada, and the Fortymile townsite is being restored in similar fashion. Each season the remaining buildings are stabilized, and over time buildings are restored to their former glory.

The stabilization and restoration of the Fortymile, though important to the Tr’ondek Hwech’in, is not the primary goal of the Han peoples. The training and employment of interpretive guides has reconnected lost generations of Tr’ondek Hwech’in to their history and past. After decades of having their culture and lifestyle ridiculed, First Nations peoples are discovering value in their suppressed culture. The archeological history in the Fortymile district has validated many of the oral histories carried by the elders of the Han people. The caretakers on the Fortymile share the stories of their childhood growing up on the river. There is a suppressed joy in the storytellers as they recount these events.
The value of this history and culture is validated when non-native people travel long distances off the main path to listen to its stories. The Tr’ondek Hwech’in want to develop the tourism potential of the heritage sites that they are willing to share. “The majority of Tr’ondek Hwech’in heritage sites are not places that we want to share with the public,” said Edwards. “The ones that have an undeniable shared history between non-natives and natives, Cheda Dek (Fortymile), Tombstone and Dawson City, there is economic potential there. One of the key things we see as important is having the caretakers at the site. For many years there were no Han on the site and now, all summer long, there are Han people there.”

For the young people who worked on the archeological dig on this site, there was a connection back to the land that had been lost. “It has positive memories that brings them a closer connection to the land and their ancestors,” said Edwards. “When Tr’ondek Hwech’in youth who are on those digs find a 2,000-year-old side-notch spear point that might be one of their relatives’, that reminds them of their history and brings about a connection to the land for people who grew up in a town.”

An interpretive caretaker at Fortymile is important, said Edwards. “A chance to have people there to talk not only about the gold rush and European history, a visitor there gets to meet a Tr’ondek Hwech’in citizen, an elder, and listen to their stories. You get a real personal history of the area.”

The future of the Fortymile may see the Tr’ondek Hwech’in running day trips from Dawson City to the Fortymile townsite, down by boat and back over the Top of the World Highway. There will not be a Princess Wilderness Hotel located there; the Tr’ondek Hwech’in want to keep the site historically intact.

History is usually written by the victor, and in the case of native and non-native contact, the non-natives won the battle at a huge cost to the indigenous peoples of the Yukon. Yet all was not lost: driven to the negotiation table by Charter rights, the territorial and federal governments have recognized First Nations rights. To carry the analogy forward, the non-natives lost the war.

Today First Nations peoples are realizing their connection to the land and rebuilding their culture and lives in the North. First Nations youth are the largest segment of our future society, and as they realize and grow to believe in themselves and their culture, we see changes in our lives and business culture. First Nations elders are re-connecting to the land and the places of their people. The revitalization of the Fortymile townsite is one example of the changes First Nations peoples are bringing to our society. Healing old wounds and building fresh links to northern culture, the Tr’ondek Hwech’in demonstrate their pride in their culture and their people.

Mark Prins is a Whitehorse-based writer.
Retail Trade Policy

Trade, also called the goods exchange economy, is the transfer of the ownership of goods from one person or entity to another by getting something in exchange from the buyer. Retail trade consists of the sale of goods or merchandise from a fixed location, such as a Brick & Mortar Shop, Departmental Store, Boutique or Kiosk, or by mail, in small or individual lots for direct consumption by the purchaser. Wholesale trade is defined as the sale of goods or merchandise to retailers; to industrial, commercial, institutional, or other professional business users; or to other wholesalers, along with related subordinated services. Trading is a value-added function in the economic process of a product finding its market, where specific risks affecting the assets being traded are borne by the trader and mitigated by performing specific functions.

Domestic trade contributes around 15% of India’s GDP, and there are currently more than 6 crore business enterprises across the country. Within this, self-organized trade accounts for 95% of the total trade. Traditional forms of low-cost retail trade, such as owner-operated local shops and general stores, form the bulk of this sector. Handcart and pavement vendors are yet another section of domestic trade. In the absence of any significant growth in organized sector employment in India in the manufacturing or services sector, millions are forced to seek their livelihood in the informal sector. Domestic trade, which has been a relatively easy business to enter with low capital and infrastructure needs, has acted as a refuge source of income for the unemployed. It is estimated that domestic trade provides a livelihood to about 25 crore people in the country and is registering an annual growth rate of about 15%. The truth is that the people engaged in trade are mostly those sections that are relatively less skilled, have lower capital for investment and are struggling for their livelihood.

Corporate-funded organized domestic trade has witnessed considerable growth in India in the last few years and is currently growing at a very fast pace. The share of the organized sector in overall domestic sales is projected to jump from around 5% currently to around 10% to 15% in the next three years. A number of large domestic business groups have entered the internal trade sector and are expanding their operations aggressively. Several formats of organized retailing, like hypermarkets, supermarkets and discount stores, are being set up by big business groups, besides the ongoing proliferation of shopping malls in the metros and other large cities. This has serious implications for the livelihood of millions of small and unorganized retailers across the country. India has the highest shop density in the world, with 11 shops per 1,000 persons, much higher than in European or other Asian countries. The potential social costs of the growth and consolidation of organized retail, in terms of the displacement of unorganized retailers and loss of livelihoods, are enormous. Another form of retail which is gaining immense popularity is e-retail. This is posing a serious challenge to self-organized retail, which is struggling for survival.

Need of the Hour

Self-organized traders and small retailers need protection and policy support in order to compete with organized retail. The Ministry of Housing and Urban Poverty Alleviation has formulated a National Policy for Urban Street Vendors.
The policy proposes several positive steps to provide security to street vendors, treating it as an initiative towards urban poverty alleviation. However, what is required is a more comprehensive policy, one which addresses the needs of small retailers, especially in terms of access to institutional credit and the know-how to upgrade their businesses. So far, the rulers and policy makers have turned a blind eye to self-organized traders, as they do not have the powerful resources and clout to lobby for their cause like the organized corporate business houses.

On the other hand, a handful of corporate houses that are very successful in manufacturing and services have started eyeing the retail business. With their deep pockets, and with the changing economy and consumer behavior, they are trying to take control of every aspect of the economy, right from agriculture, manufacturing and retail trade to policy making. This can clearly be seen as an attempt by a few corporates to control and capture the Indian economy as per their business interests.

Given the vast potential in retail trade, some of the corporate retail houses have invested heavily in the retail sector. Owing to their deep pockets and management vision, these companies have factored in some 5-10 years to break even and turn a profit. It is during this period that these companies hope to wipe out the self-organised retail sector. Once the self-organised retail sector is out of the market, these companies can have a monopoly over the market, producers, manufacturers and consumers. This is evident from examples around the developed world, where retail giants have wiped out local markets and farmers, manufacturers and logistics providers are eventually left at their mercy. Governments are influenced by the powerful lobby of retail giants. Consumers are forced to consume the commodities promoted by corporate retailers. The entire business is strategically planned and financed by deep pockets that have the capacity to play with the markets on their own terms. On the other hand, the self-organised retailers and traders of India, who have no financial backing or institutional support, are struggling to fight these corporate retail giants.

It needs to be mentioned that self-organised traders not only contribute about 15% to GDP but are also one of the main sources of tax collection for the government. Traders collect and deposit various trade-related taxes for the Central, State and local governments. The irony is that, despite being partners in the collection of tax, traders are always looked at with suspicion, never trusted for their sincerity, and their contribution is never acknowledged. During policy formation on retail or matters related to trade, self-organised traders are totally left out and the government only gets the views of white-collar corporates; as a result, the fate of self-organised retailers is left to God.

The CAIT firmly believes there is a need to incorporate the views of self-organised retailers while formulating policies on retail trade, mainly because:
1. Self-organised retail traders are the engine of the economy, as they are the channels which account for more than 95% of retail trade as of date.
2. This means that all manufactured products and agricultural produce reach their final destination (the consumer) through the retailers.
3. Approximately 25 crore people are directly dependent on traders for their survival.
4. Equal numbers are also indirectly related to traders, such as the labour force, transporters, etc.
5. Traders collect tax on behalf of the government at all levels and deposit it in the treasury at their own expense.
6. MSMEs and millions of lesser-known brands manufactured by self-help groups and local cooperatives are supported by the traders.

It is the existing trading system in which the self-organised trader offers consumers a choice from a wide range of products. The role of government is to regulate the market and provide a level playing field to all players. However, with the current approach and policy regime, the environment is conducive only to foreign retail giants and their Indian partners or counterparts. The existing policy on retail will ensure that, in the near future, self-organised traders are thrown out of their business and livelihood, farmers and primary producers are left at the mercy of corporate managers, and consumers are left with less choice in the name of convenience.

The CAIT firmly believes that:
1. The existing self-organised traders need to compete with the deep-pocketed retail giants and continue their services towards nation building.
2. We also acknowledge that there is a need to improve and radically change the business style of self-organised retail so that our consumers get the best products at competitive prices, farmers get the best price for their produce, and inflation is kept in control.

All of this will require institutional support from the Government, and therefore a change in the mindset of the rulers and policy makers is all the more necessary. The need of the hour is the creation of a Trade Policy, with a separate Ministry of Internal Trade at both the Central and State Government levels, which is able to devise policy for self-organized traders. This Ministry should work as a mentor and partner so that the retail sector can grow and modernize. It should also control the proliferation of organized retail supported and funded by big corporate houses and venture capitalists. Once the Ministry is formed, many of the lacunae in existing self-organized retail will be addressed, and India will experience a new form of modern market which retains the essence of India and its vast sections of society. However, until the Ministry of Internal Trade is formed, the Ministry of Commerce and Industry may be assigned the task of formulating a Trade Policy for Domestic Trade.

Every sector of the economy, such as Industry, MSME, Labour, Transport, Farmers etc., has an independent policy, so why should the self-organized sector, i.e. the traders, be deprived of such a policy? If such a step is taken by the Government, it will help in the structured growth of the sector and will also result in an increase in Government revenue. It will also help in the upgradation and modernization of retail trade, which is the need of the hour to meet the global challenges in trade.

Fundamentals of Trade Policy

There should be a trade policy to address the issues of traders and domestic trade. The policy must address trader-related aspects at both the central and state levels. This policy should address the following:

Promote Fair and Honest Trade

Traders are often maligned for making excessive profits on trading-related activities, thus resulting in lower revenue for producers and higher costs for consumers. This is a myth and can easily be checked by the state machinery by implementing trade policies.
There is no denying that there are a few rotten apples in the system who squeeze the producers and also create a scarcity of commodities in the market, leading to a wide variance between the compensation earned by the producer and the purchase cost of the commodity for the consumer. If one really looks at this issue, one can easily find the nexus within this chain of corruption. The majority of traders are neither part of such a corrupt system nor do they want to be part of such a system, as traders themselves are also victims of such wrong and corrupt trade practices. Most average traders suffer huge losses due to artificial price fluctuations created by a few unethical traders. Honest trading can be promoted through sensitization and the encouragement of honest traders.

In order to encourage traders, we need to build institutional infrastructure such as:

1. Priority sector Banks for traders
a. The current banking system, despite RBI’s instructions to treat trade and small enterprises and businesses as a priority sector, fails to deliver. The recent report of the Dr. Nachiket Mor Committee constituted by the RBI is ample testimony to this. The failure of the mainstream banking sector to cater to the needs of traders is due to the fact that these banks do not understand the business cycle of traders and the cyclic spikes in their business. The mainstream banks and their decision makers are bound by the established banking rules and regulations. They do not want to take the risk of lending to the majority of traders, as traders are not able to produce the required collateral and guarantees.
b. State governments should devise specialized financial institutions which are just for traders and understand their business cycle. If such banks and financial institutions are introduced into our trading system, it will help traders channel investment into their businesses and enterprises. With these avenues of investment, traders will be able not just to scale up their businesses but also to implement technology and diversify their business operations.
c. In this context, Non-Banking Finance Companies and Cooperative Banks may be associated with the State Government and should be asked to provide a viable scheme for advancing loans to small traders at a reasonable interest rate and without many bureaucratic hurdles or much paperwork.

2. Low rate of Interest and Short- to Mid-term lending
a. Traders often need short-term finance with quick disbursal. The existing banking system is unable to cater to both of these needs. As a result, traders are forced to look for unconventional methods, which not only prove to be very costly loans but also help in the accumulation of black money by black marketers and money lenders.

3. Up-gradation of markets
a. Trade policy must address the issue of physical infrastructure in traditional and local neighborhood markets. With the emergence of shopping malls and modern bazaars built by corporate India, there is an urgent need to revamp the traditional and local neighborhood markets so that they too can attract customers just like the new shopping malls and modern bazaars. This means the policy should have a say in the urban development and town planning process.

4. Reward and Recognition of Traders as Tax Collectors
a. Traders are the source of revenue collection for the state and central governments.
Despite playing a major role in revenue collection, instead of rewarding the tax collector for his contribution, the taxation authorities and other law enforcement agencies view traders with suspicion. Trade policy must define the roles and responsibilities of traders and the rewards for the diligent services they render. The policy must also envisage and differentiate between a good tax collector and a tax evader.
b. Under the policy, traders should be given the status of “Tax Collector”, and some incentive should be given to traders for the collection of taxes in order to compensate them for the expenses they incur in collecting tax. Such a step will encourage more and more traders to collect more revenue for the Government.
c. A scheme should also be evolved under the policy to encourage a widening of the tax base. In this context, traders registering an annual growth of 15% in their business should be absolved from any kind of search, survey, raid or seizure unless the concerned Department has specific evidence against such a trader. Such a step will restore the traders’ confidence in the taxation system and will result in a widening of the tax base.

5. Rationalization of Taxation
a. Traders are overburdened with multiple taxes. Being the sole tax collectors from consumers, traders need to keep records of the various kinds of taxes collected for the central government, state government and local bodies. This leads to too much paperwork. Trade policy should devise a mechanism to address the various tax-related aspects and must make tax collection as well as reporting simple.

6. Promotion of E-systems
a. This should be an important part of the Trade Policy and should cater to the regulatory needs of traders. The policy should encourage the development of trade-centric applications which help not just in streamlining regulatory compliance but also in keeping business records in the vernacular language, with the option to generate reports for audits and other government agencies. The policy should also devise a mechanism to create awareness about E-Governance among the trading community and assist them in adopting digital literacy in their respective businesses.

7. Establishment of Trader Tribunals and Lok Adalats
a. Often, due to complex paperwork and harassment by inspectors, traders face the wrath of the law. The trade policy must address this critical aspect so that local Tribunals / Lok Adalats can be established at all major markets.

8. Special Task Force / Expert Committees
a. Formation of a Task Force, Trade Body and Expert Committee to look at the issues of traders along with the government agencies, so that policy-level interventions can be recommended by the task forces and expert committees.
b. The task force should be constituted with a Secretary-level officer as Chairman and have members from the various concerned ministries, representation from trade chambers, etc.

9. A Trade Commissioner System which looks into all aspects of trade and ensures that there is no duplication or overlapping jurisdiction.

10. Central and State level consolidation of trade acts must be envisaged by the trade policy so that multiple acts and pieces of legislation can be unified and provisions made to address violations and grievances.

11. All trade-related Acts to be part of the trade policy.

12. The policy should have a mechanism to review the Laws, Acts, Rules and Regulations governing trade; outdated laws must be scrapped and timely amendments made to the rest of the Acts, Laws, Rules and Regulations.
13. The Trade Policy should also draw up a mechanism to ensure sufficient security for commercial markets; such a mechanism may be based on a PPP model with the respective Trade Associations of the concerned markets.

14. Under the Policy, a Trade Commissioner may be appointed at the District level who will act as a Nodal Officer to monitor the effective implementation of the fundamentals of the Trade Policy. A District Trade Advisory Committee, under the Chairmanship of the Trade Commissioner and having trade representatives, may also be formed to ensure better coordination between the Government and the traders.

15. The policy should also envisage an effort to make India a Free Trade Zone, and all kinds of Road Permits, Entry Forms etc. should be abolished. The issue needs to be taken up with the Central Government and other State Governments.

16. In the trade policy, special focus should be given to encouraging and promoting women entrepreneurs to develop their businesses with the help of the Government. Special schemes and concessions may be allowed to women entrepreneurs.

17. Special schemes may be drawn up for artisans, handicrafts and other specialized items of the State to promote their products in other markets of the country and upcountry.

18. Fairs and exhibitions may be held on a regular basis to depict the progress of the State, while regular seminars, conferences and conventions must be organized on different subjects concerning trade and commerce, and renowned experts may be invited to interact with traders on all such topics.
Living with eczema, a genetic skin condition, is often very uncomfortable. Do you have blotches of red on your skin that feel unbearably itchy and flaky? If this uncomfortable sensation won’t let up and leave you alone, you could have eczema.

In its most common form, eczema is referred to as atopic dermatitis. Technically, however, eczema is the term for a group of skin conditions that cause your skin to be inflamed, red, cracked, and super itchy. The severity of this disease can be mild, allowing you to live normally with few complications. However, extreme eczema cases can seriously interfere with your daily routine and affect your self-esteem and mental health. This is especially true if the lesions appear on your face. Living with eczema on your face can be very distressing, which is why finding coping mechanisms that work for you is crucial.

Though the red patches of eczema typically appear elsewhere on the body, some cases develop on the face and can even develop inside the eyelids. This can be extremely painful and uncomfortable. Worst of all, it can lead to further health complications. Other types of eczema include seborrheic dermatitis, contact dermatitis, and nummular eczema.

Eczema is very common: studies show that 15 to 20% of children and 3 to 4% of the adult population worldwide develop it. Typically, 80% of these children will recover within 10 years of the onset of eczema. Meanwhile, it is usually adults who experience the more severe forms of eczema. Thankfully, if you suffer from this skin condition, you can manage and treat it with the guidance of a dermatologist or skin specialist.

If you or your loved ones are diagnosed with eczema, knowing more about the disease can help you cope with, control, and manage your condition more effectively. Below is a guide to living with eczema, followed by some tips for coping with it:

Living With Eczema: Should I Worry That I’m Contagious?

Contrary to what most people think when they see rough skin patches, eczema is not contagious. You cannot catch it from a patient who suffers from these symptoms. Instead, studies suggest that both your genetics and environmental factors can trigger a skin reaction. If your parents or other relatives have eczema, chances are you are at risk of developing it as well. You’re also at a higher risk if you have a family history of hay fever, asthma, and sensitivity to allergens. By the same token, environmental factors that can trigger its onset include the following:

- Your diet, because it could be laden with allergens
- Exposure to stress
- Tobacco smoke exposure
- Harsh soaps and other cosmetics
- Fabrics that are irritants, such as wool
- Low humidity in the air, causing dry and itchy skin
- Extreme heat, as it makes the itchiness worse

When you have eczema, your immune system overreacts to allergens or irritants, resulting in inflamed skin. Allergens like pollen, nuts, dust mites, or pet hair can trigger an allergic reaction, manifesting on your skin. Therefore, it is important to get the right treatment from your doctor to alleviate discomfort. Stay away from stressors, too, as they can significantly worsen your symptoms.

What Are the Signs of Eczema?

Eczema affects people in various ways, and its signs and symptoms can vary depending on the type of eczema you have. One of the first signs is itchiness, ranging from mild to severe. In some cases, the itchy feeling is excruciatingly intense, coupled with rapid skin inflammation.
Unfortunately, the more you scratch, the itchier it becomes, making it a vicious cycle. Other symptoms are: - Red skin - Inflamed skin - Darker patches - Sensitive skin - Super dry skin - Leathery or rough skin texture - Crusting on skin - Fluid secretions - Skin swelling You don’t have to experience all the symptoms. Your doctor may diagnose you with eczema even if you manifest just one or two signs. In mild cases, you experience the symptoms because of an environmental trigger; then, they disappear after some time. If you feel concerned, speak with your doctor. The clinic may take a skin sample to verify the type of eczema and ascertain it is not a fungal infection or another skin condition. Is it Eczema or Psoriasis? Many people find it difficult to tell the difference between eczema and psoriasis because of their similar symptoms. Both eczema and psoriasis are chronic, incurable skin conditions. You can only manage the symptoms, and the skin will flare up from time to time because of triggers. Your dermatologist is the only one who can make the final call between the two. However, you may also be able to tell them apart by taking note of the following: The two skin issues feel different. Eczema causes an intense itch, forcing you to scratch the skin until it bleeds. Meanwhile, psoriasis can also be itchy, but the itch is compounded by a burning or stinging sensation. Eczema results in red and inflamed skin with a rough and leathery texture. It can also crust, swell, and ooze fluids. Similarly, psoriasis causes red patches. The primary difference is that these patches are super scaly with a silvery tinge. Upon close inspection, you will notice that the skin is raised, because the areas with lesions are thicker and more inflamed than those with eczema. Eczema typically appears in the bends of the body, like the inner elbow or behind the knees. However, you can also have it on the neck, ankles, or wrists. This skin condition also commonly affects babies on the chin, cheeks, scalp, and chest. Psoriasis can show up in all those places, too, but instead of behind the knees or inside the elbows, its patches tend to appear on the outer side of the joints. Lesions also appear on the palms, soles of the feet, mouth, ears, eyelids, groin area, and finger- or toenails. Different Disease Onset Eczema usually shows up in babies and young kids. Thankfully, the skin improves as they grow. Onset is less common in adults, but it can still happen, usually due to a condition like thyroid disease, stress, or other hormonal imbalances. On the other hand, psoriasis often shows up between the ages of 15 and 35. It is very rare for babies to manifest psoriasis. How to Cope While Living With Eczema Though eczema has no cure, there are many ways to treat it so you can live a relatively normal life with few complications. Though you cannot control your genes or predisposition to skin conditions such as eczema, you do have influence over other factors like your diet, skincare products, and stress levels. Take a look at them below: For severe cases, the dermatologist will prescribe medication like triamcinolone steroid cream, steroid pills, or immunotherapy injections. However, this comes with risks like high blood pressure, weight gain, or extreme thinning of the skin, so make sure you follow your doctor’s instructions for proper dosing and application. Some doctors also suggest phototherapy clinic treatments, which use a special lamp that mimics the sun’s UV rays.
This light therapy suppresses the overactive skin immune cells that cause redness and inflammation. Don’t forget to stay vigilant by wearing sunscreen to avoid further complications. Modified Home Habits On top of these professional treatments, you must also change your lifestyle and incorporate new routines. The following are the most common and helpful habits to combat your eczema: - Use a humidifier to moisten the air, since dry air will make your skin even drier. - Keep the ambient temperature at a moderate level. - Pay attention to the food you eat and avoid triggers. - Moisturize your skin with special creams and ointments fortified with ceramides (they bind moisture and strengthen your skin barrier to prevent water loss). - Make it a point to moisturize several times a day. - Never use hot water, as the heat will exacerbate swelling and cause more dryness. - Apply OTC cortisone cream and follow the instructions to minimize skin swelling. - Take OTC antihistamine or anti-allergy pills to combat severe itchiness. - Use mild soaps, shampoos, and other products free from dyes, alcohol, perfumes, and other harmful additives. - Invest in sensitive-skin products that are fragrance-free, non-comedogenic, and hypoallergenic. - Pay attention to laundry soap and fabric softeners as well. Consider seeing a therapist or counsellor if your skin issues cause emotional distress and mental health problems. It is natural to feel depressed when your skin doesn’t look its best or feels very uncomfortable. You may feel your self-confidence wavering, especially when others are afraid to go near you because they mistakenly believe you’re contagious. Speaking to a professional will help you unburden your negative emotions. If you keep this bottled up, it may cause even more stress in your life, which in turn will exacerbate your eczema. There are also many natural remedies to soothe your eczema, such as directly applying the gel from the aloe vera plant, moisturizing with coconut oil, or taking a calming oatmeal bath. Your best recourse is always to seek professional medical advice from a board-certified dermatologist, but it doesn’t hurt to also try these generally safe natural remedies. A CircleDNA skin profile report will reveal your genetic predisposition for certain skin conditions, as well as your genetic risk of various other health conditions. This helps you understand your body better, giving you an advantage when it comes to preparing for and taking care of your health problems.
<urn:uuid:b6bbea5e-b426-4a9a-9969-23637b625ad5>
CC-MAIN-2021-43
https://magazine.circledna.com/how-to-cope-with-living-with-eczema/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585181.6/warc/CC-MAIN-20211017175237-20211017205237-00270.warc.gz
en
0.935501
2,160
2.734375
3
This article is from the blog buildingarevolutionarymovement. This post will look at the long-term cycles of the geographical centre of the capitalist economy (during capitalism’s existence over the last 600 years), capitalism’s economic waves and cycles, and the 10-year capitalist business cycle. There are several theories of historical cycles that relate to societies or civilisations; these are beyond the scope of this post. Understanding capitalism’s cycles and waves is important for understanding capitalism better in order to beat it. Also, there appears to be a relationship between capitalism’s cycles and waves, cycles of worker and social movement expansion, and the gains and concessions these movements win from capitalists. Long-term cycles of the geographic centre of the capitalist economy This builds on the phases of capitalism described in a previous post: Mercantile Capitalism, 14th-18th centuries; Classical/Industrial Capitalism, 19th century; Keynesianism or New Deal Capitalism, 20th century; and Finance Capitalism/Neoliberalism, late 20th century. These ideas were likely first developed by Fernand Braudel, who described the movement of the centres of capitalism, initially cities and then nation-states. Braudel described them starting in Venice from 1250-1510, then Antwerp from 1500-1569, Genoa from 1557-1627, Amsterdam from 1627-1733, and London/England 1733-1896. Immanuel Wallerstein argues, as part of his ‘world-system theory’, that three countries have dominated the world system: the Netherlands in the 17th century, Britain in the 19th century and the US after World War I. Giovanni Arrighi identifies four ‘systemic cycles of accumulation’ in his book The Long Twentieth Century. He describes a ‘structuralist model’ of capitalist world-system development over the last 600 years as four ‘long centuries’, each with a different economic centre. Arrighi’s systemic cycles of accumulation were centred on the Italian city-states in the 16th century, the Netherlands in the 17th century, Britain in the 19th century and the United States after 1945. It looks like the centre is moving eastwards in the twenty-first century. George Modelski identified long cycles that connect war cycles, economic dominance, and the political aspects of world leadership, in his 1987 book Long Cycles in World Politics. He argues that war and other destabilising events are a normal part of long cycles. Modelski describes several long cycles since 1500, each lasting from 87 to 122 years: starting with Portugal in the 16th century, the Netherlands in the 17th century, Britain in the 18th and 19th centuries and the US since 1945. Capitalism’s economic waves and cycles Several waves and cycles have been identified in the capitalist economy that relate to periods of economic growth and decline. Kondratiev waves (also known as Kondratieff waves or K-waves) are 40 to 60-year cycles of capitalism’s economic growth and decline. This is a controversial theory and most academic economists do not recognise it. But then most academic economists think that capitalism is a good idea! Kondratiev identified the first wave as starting with the factory system in Britain in the 1780s and ending about 1849. The second wave starts in 1849, connected to the global development of the telegraph, steamships and railways. The second wave’s downward phase starts about 1873 and ends in the 1890s.
In the 1920s, Kondratiev believed a third wave was taking place, which had already reached its peak and started its downswing between 1914 and 1920. He predicted a small recovery before a depression a few years later. This was an accurate prediction. Paul Mason in Postcapitalism: A Guide to Our Future describes the phases of the K-waves: “The first, up, phase typically begins with a frenetic decade of expansion, accompanied by wars and revolutions, in which new technologies that were invented in the previous downturn are suddenly standardized and rolled out. Next, a slowdown begins, caused by the reduction of capital investment, the rise of savings and the hoarding of capital by banks and industry; it is made worse by the destructive impact of wars and the growth of non-productive military expenditure. “However, this slowdown is still part of the up phase: recessions remain short and shallow, while growth periods are frequent and strong. Finally, a down phase starts, in which commodity prices and interest rates on capital both fall. There is more capital accumulated than can be invested in productive industries, so it tends to get stored inside the finance sector, depressing interest rates because the ample supply of credit depresses the price of borrowing. Recessions get worse and become more frequent. Wages and prices collapse, and finally a depression sets in. In all this, there is no claim as to the exact timing of events, and no claim that the waves are regular.” Mason describes his theory of a fourth wave starting in 1945 and peaking in 1973, when oil-exporting Arab countries introduced an oil embargo on the USA and reduced oil output. The global oil price quadrupled, resulting in several nations going into recession. Mason argues that the fourth wave did not end but was extended and is still ongoing. The downswings of the previous three cycles ended with capitalists innovating their way out of the crisis using technology. This was not the case in the current fourth cycle because the defeat of organised labour (trade unions) by neoliberal governments in the 1980s has resulted in little or no wage growth and the atomisation of the working class. In On New Terrain: How Capital is Reshaping the Battleground of Class War, Kim Moody used data from three sources (Mandel, Kelly, Shaikh) to identify his theory of third (1893-1945), fourth (1945-1982) and fifth (1982-present) long waves. The third wave’s upswing ran from 1893-1914 and its downswing from 1914-1940. The fourth: upswing 1945-1975, downswing 1975-1982. The fifth: upswing 1982-2007, downswing 2007-?. Joseph Schumpeter identified several smaller cycles that combine to form a ‘composite waveform’ sitting under the K-waves. The Kuznets swing is a 15-25 year cycle related to infrastructure investment, construction, land and property values. The Juglar cycle is a 7-11 year cycle related to fluctuations in investment in fixed capital. Fixed capital consists of real, physical assets used in the production of goods, such as buildings or machinery. The Kitchin cycle is a 3-5 year cycle caused by the time it takes the management of businesses to decide to increase or decrease the production of goods based on information from the marketplaces where they sell their goods. The business cycle is the roughly 10-year boom and slump cycle of the global capitalist economy. It is also known as the economic cycle, boom-slump cycle or industrial cycle. Mainstream economics views shocks to the economy as random and therefore not cyclical.
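As a purely illustrative aside on how several shorter cycles can stack into the kind of ‘composite waveform’ Schumpeter described, here is a toy sketch in Python. It is not taken from any of the authors above: the periods are single values picked from the ranges quoted, the waves are treated as simple sine curves of equal weight, and real economies behave like neither.

```python
import math

# Illustrative periods in years, one value from each range quoted above.
PERIODS = {
    "Kondratiev": 50,  # 40-60 year wave
    "Kuznets": 20,     # 15-25 year swing
    "Juglar": 9,       # 7-11 year fixed-capital cycle
    "Kitchin": 4,      # 3-5 year inventory cycle
}

def composite(year):
    """Toy 'composite waveform': an equal-weight sum of one sine wave per cycle."""
    return sum(math.sin(2 * math.pi * year / period) for period in PERIODS.values())

# Crude text plot over a 50-year window (years are relative, not actual dates).
for year in range(0, 51, 2):
    value = composite(year)
    print(f"year {year:2d} {'#' * int((value + 4) * 4)}")
```

The point of the sketch is only that the overlapping of cycles of different lengths produces an irregular-looking aggregate, which is one reason the dating of real business cycles is so contested.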
There are several theories of what causes business cycles and economic crises, which I will look at in a future post. Theories about the business cycle have been developed by Karl Marx, Clément Juglar, Knut Wicksell, Joseph Schumpeter, Michał Kalecki and John Maynard Keynes. Schumpeter identified four stages of the business cycle: expansion, crisis, recession, recovery. So what are the dates of the business cycles? I’ll go through the information on business cycles in the US and UK since 1945; there is no clear agreement on the number. Something to come back to. Howard J. Sherman, in The Business Cycle: Growth and Crisis under Capitalism, argues that the best dates are those provided by the US National Bureau of Economic Research (NBER). He explains that they’re not ideal but are the best available, and they go back a long way. Since 1945, the US has had recessions in the years 1949, 1954, 1958, 1961, 1970, 1975, 1980, 1982, 1991, 2001, 2009. That is ten business cycles, eleven if you include the one that started in the last ten years. The Economic Cycle Research Institute (ECRI) uses these dates as well. Sam Williams at the blog Critique of Crisis Theory is critical of the NBER dates and argues that there have only been five business cycles since 1945. He measures them based on the point at which they peaked rather than by recessions: 1948-1957, 1957-1968, 1982-1990, 1990-2000, 2000-2007. He describes the period from 1968-1982 as one long crisis. A sixth business cycle could be added from 2007-2020. For the UK, I found three different sets of information on when the business cycles have been. Each indicates a different number of business cycles since 1945. The National Institute of Economic and Social Research lists UK business cycles since 1945 as: peak 1951, trough 1952; peak 1955, trough 1958; peak 1961, trough 1963; peak 1964, trough 1967; peak 1968, trough 1971; peak 1973, trough 1975; peak 1979, trough 1982; peak 1984, trough 1984; peak 1988, trough 1992. So that’s nine business cycles from 1945-1992. The Economic Cycle Research Institute (ECRI) identifies UK business cycles since 1945 to be: trough 1952; peak 1974, trough 1975; peak 1979, trough 1981; peak 1990, trough 1992; peak 2008, trough 2010. The ECRI chart does not list anything for the current crisis but I think it’s safe to assume that 2020 was the peak. That is five business cycles from 1945-2020. Wikipedia lists recessions in the UK since 1945 as taking place in: 1956, 1961, 1973, 1975, 1980-1, 1990-1, 2008-9 and 2020-? That is seven business cycles from 1945-2020. - Giovanni Arrighi: Systemic Cycles of Accumulation, Hegemonic Transitions, and the Rise of China, William I. Robinson, 2011, page 6/7, https://www.researchgate.net/publication/254325075_Giovanni_Arrighi_Systemic_Cycles_of_Accumulation_Hegemonic_Transitions_and_the_Rise_of_China/link/54f4dbd80cf2ba6150642647/download - Giovanni Arrighi: Systemic Cycles of Accumulation, Hegemonic Transitions, and the Rise of China, page 10 - Postcapitalism: A Guide to Our Future, Paul Mason, 2015, page 35/6 - Postcapitalism: A Guide to Our Future, page 36 - Postcapitalism: A Guide to Our Future, CH4 - On New Terrain: How Capital is Reshaping the Battleground of Class War, Kim Moody, 2018, page 72
<urn:uuid:a6d901ab-f641-40ad-857d-6c26ed277b34>
CC-MAIN-2021-43
https://dgrnewsservice.org/resistance/indirect/education/capitalisms-cycles-and-waves/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00190.warc.gz
en
0.929747
2,326
2.578125
3
Think methodically with common sense reasoning to solve the liar and truth-teller riddle In the liar and truth-teller riddle, to know the safe path, a traveler can ask only one question to either a habitual liar or a truth-teller. What should he ask? Story of the Riddle Crossing over many a land, a traveler came upon a deep jungle where two paths forked and went into the depths of the jungle. He had heard from a wise man some days earlier that such a forest lay ahead. The wise man had said, "One path will lead to the warmth of a friendly village, but the other will lead you into the den of hungry tigers and sure death." Now he remembered the rest of what the wise old man had said, "When you are at the fork trying to choose the safe path, two men will suddenly appear, ready to help you. Beware, one of them will be a habitual liar—his answer to any question must always be a lie, but the other will be just the opposite—he will always tell you the truth.” The wise man finished, "Don't forget, you can ask only one question to any one of the two to find the correct safe path. You won't have a second chance. They will understand your question, know which path is safe, and know each other's nature of answering, but they will answer only with YES or NO.” Recommended time for you to find the safe path: 15 minutes. This is a classic logic puzzle from old times. Logicians who juggle with pure logic by choice would tell you the right answer in no time. But we are not logicians. We are common folks who use common sense logic and deductive reasoning in our own way. If you are not a logician, have a go. The experience will be interesting. You will get better results if you imagine yourself as the traveler. Systematic Solution to the Liar and Truth-teller Riddle – First stage: Key Pattern Discovery of the Nature of the Answer for Solving the Riddle Without going deep into the riddle, you would do an initial experiment in your mind. You visualize what the answers from the two could be if you ask the single question, Do you think the path to my left is the safe path that goes to the village of friendly people? If indeed the left path IS the safe one, the liar would answer NO, and the truth-teller YES. Without knowing which one of them is telling the truth, you won't be able to decide. Thinking more, you realize that you have two unknown paths and two men eager to help, but you don't know their habitual nature of answering. With such a combination of unknowns, you realize, it would be impossible to form the right question by thinking in the conventional way. You are clear now that you must NOT form the question in a simple way. Without wasting time on what type of question you have to ask, your attention shifts to the ANSWER itself. This is a very natural way to solve problems—to analyze and understand all characteristics of the end result first, comparing it with the given information. Note: To us this is the End State Analysis Approach, an often used natural problem solving technique packed with power. Thinking more in this direction, you ask yourself the most important QUESTION at this point, What must be the NATURE OF THE ANSWER from the two for me to know the safe path? You have already seen that the answer to a simple question from one helper will be NO and from the other YES—just the opposite. You make a firm conclusion, If the answers from the two are OPPOSITE, you won't find the safe path. There is no going away from this. The conclusion is actually a fact and an inviolable truth.
Ah ha, you couldn't have imagined earlier that it would come to this pass. Yes, for you to be sure about the safe path, whatever the question, the answers from both helpers must be exactly the SAME in every situation. This is a revelation to you, and in problem solving terminology this is the discovery of the key pattern. You now have a precise requirement for their answers in relation to each other (a precise requirement specification). Naturally, the answer will be YES or NO. But if it is NO, both will answer NO. The same must also be true for YES for you to know the safe path. This is the first breakthrough. You feel you have won half the battle by knowing all about the answer that can be known. With the nature of the answer fully identified, it is time to shift the focus of attention to the NATURE OF THE QUESTION. Systematic Solution to the Liar and Truth-teller Riddle – Second stage: Process of Knowing the Nature of the Question to Ask At this point you realize that you have to think in a NEW way to make the second breakthrough by understanding precisely the nature of the question, which is the requirement specification for the question. What can a new form of question be? As you think more on this, the condition of “NEW form” strikes you, and aided by this focus you reason, Well, the usual simple form of the question has been a single question for any combination of situations. All combinations would have the same result, ending in failure. WHAT CAN BE CHANGED ABOUT THE QUESTION? Property change analysis technique This technique of exploring new ways to change a key property (in this case, the number of components in the question) of the key entity (in this case, the question itself) often produces great results for a quick and innovative solution to the problem. This is called Property change analysis. Of course, you can very well change the number of component questions in the single combined question, especially as you remember from your experience that, instead of a single question, two questions can easily be combined to form a single compound question. When you were younger, your neighbor uncle once asked, when you opened the door, Is your father home? Is he well? Your father was home and well. So you answered, "Yes". One answer to two questions. You further realize that if your father had not been home, or not well, or both, you might simply have answered, "No". That must be the new way that would help you to know the safe path: instead of a one-component single question, you would ask a single question with two components joined together by "and", the easiest way to join two questions. And you know from elementary knowledge of language that such a compound question is easy to form—your neighbor uncle could have asked such a single question, Is your father home and well? You feel you have practically cracked the puzzle open, and that too by using no difficult technique or concept. You have followed just simple common sense problem solving techniques coupled with concepts drawn out of everyday real life experiences and elementary knowledge of language. What have you achieved till now? You now know the nature of both the answer and the question. The only task left is actually forming such a question with such an answer.
Systematic Solution to the Liar and Truth-teller Riddle – Third stage: Forming the Question by Using the Answer and Question Specifications With a clear idea of the nature of the answer and the form of the question, a possible safe question to ask would be, Are you the truth-teller and do you think the path to my left is the safe path? The possible situations are, Situation 1: The helper asked is the truth-teller and the left path is the safe path: Final answer: YES. Situation 2: The helper asked is the liar and the left path is the safe path. The true answer to the first component would be NO, so the true answer to the combined question would be NO. The habitual liar, being what he is, cannot but reverse this NO to YES. Final answer: YES. So if the answer to your question is YES, you know for sure that the left path is actually the safe path. Alternatively, the other two possibilities are, Situation 3: The helper asked is the truth-teller and the left path is NOT the safe path: Final answer: NO. Situation 4: The helper asked is the liar and the left path is NOT the safe path. The true answers to both parts are NO, which the liar habitually reverses to YES, opposite to the answer of the other helper. Final answer: YES. This violates the requirement specification of the answer. Just on the brink of success you find that joining the two questions simply by "and" won't fully work. There is a challenge yet to overcome at this third stage of using the specifications of the answer and the question to find the right way to combine the two question components. You had assumed an easy way to combine the two questions without thinking much on the question of how to combine them. But this is not all in vain—you will surely get important clues on how to combine them by analyzing the results. Systematic Solution to the Liar and Truth-teller Riddle - Final stage: Identifying the Crucial Requirement of Combining the Two Questions Really, why and where did combining the two questions by "and" fail? As you concentrate on finding the answer to this question by analyzing the result of your last attempt, you realize that, - The answers to the two component questions 1 and 2 are independent of each other, and, - There is no need to think about what the answer of the truth-teller would be—because primarily your goal is to force the liar to reverse the true answer twice so that his final answer matches the truth-teller's. In Situation 2, using "and" for joining, you could indeed force the liar's final answer to match the truth-teller's. But in Situation 4, where the true answers to both component questions were NO, the liar reversed each to YES and the combined answer also to YES. He was happy to think that he had indeed reversed the true combined answer and done justice to his habit! To force the liar to reverse the true answer twice with two questions, then, one question must be DEPENDENT on the other. So, the successful method of combining the two questions must ensure that the liar reverses the true answer to the INDEPENDENT first question and, thinking that he has answered it in line with his habit, then faces the second DEPENDENT question and reverses his answer to the first independent question once more, again thinking that he has answered in line with his nature. What is the other method of joining two questions that would achieve this result? Again your common experience of using the language helps you make the final breakthrough.
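If it helps to see these four situations checked mechanically, here is a small Python sketch (my own verification aid, not part of the original riddle) that simulates both helpers answering the "and"-joined question and shows exactly where it breaks down:

```python
# Compound question: "Are you the truth-teller and do you think
# the path to my left is the safe path?"

def true_answer(is_truth_teller, left_is_safe):
    # The factually correct answer to the compound question.
    return is_truth_teller and left_is_safe

def helper_answer(is_truth_teller, left_is_safe):
    # The truth-teller reports the true answer; the liar reverses it once.
    answer = true_answer(is_truth_teller, left_is_safe)
    return answer if is_truth_teller else not answer

for left_is_safe in (True, False):
    truth_teller_says = helper_answer(True, left_is_safe)
    liar_says = helper_answer(False, left_is_safe)
    print(f"left path safe: {left_is_safe} | "
          f"truth-teller: {truth_teller_says} | liar: {liar_says}")

# Prints:
# left path safe: True | truth-teller: True | liar: True    <- Situations 1 and 2 agree
# left path safe: False | truth-teller: False | liar: True  <- Situations 3 and 4 disagree
```

The run confirms the analysis above: when the left path is not safe, the two helpers give opposite answers, so the "and"-joined question cannot be relied on.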
There must be only one real question, but asked twice, in the commonly used form, What would be your answer if I ask you whether the "QUESTION" is true? The first of the two component questions the liar must answer, following the rules of the language, is 'whether the "QUESTION" is true.' The liar reverses the correct answer to this first component question and forms the INTERMEDIATE RESULT in line with his nature of answering. With this result he then faces the second component question, What will be your answer if the “INTERMEDIATE RESULT” is true? Now he has no option other than to reverse the intermediate result, which according to him is correct, and form the final result. The final result becomes the REVERSE OF THE INTERMEDIATE RESULT. As the INTERMEDIATE RESULT has itself been reversed once from the true result, the final result returns to the value of the true result by this double reversal. This is pure logic no doubt, but with reason, method and two trials, you have learned enough about how the single compound question must be formed and asked to either of the two helpers to find the safe path. With confidence you finally form the single question that will lead you to the safe path, What will be your answer if I ask you whether the path to my left is the safe path? You ask this question to either of the two helpers. If the answer is YES, you take the left path, and if it is NO, you take the right path. There cannot be any other possibility—you have indeed forced the liar to reverse the true answer twice to match the final answer of the truth-teller in both situations. Knowing that the answer to the question from both helpers will be the same, you can ask the question to any one of the two. You might have discovered the most crucial PRIMARY REQUIREMENT OF ACTION—the liar must be forced to reverse the true result twice—without going into the second trial at all. This discovery is not difficult to make if you realize early that solving the riddle vitally depends on finding the right answer to the question, How to make the answer of the liar the same as the truth-teller's in any situation! The problem solving techniques, concepts and common knowledge used - End state analysis: Analyzing the desired result or last action first. The objective is to gain more knowledge about the last action for achieving the desired result. - Refining the requirement specification in steps: Knowing the precise requirements of the answer first and the question second simplified the steps to the solution greatly. - Question, analysis and answer, or the QAA technique: Simplifying the problem stage by stage by asking a series of relevant questions and analyzing each to get its answer. - Property change analysis technique: Exploring how many ways the key property of a key entity can be changed, and assessing the promise of each change, often proves crucial in solving a complex problem. Great innovations can be created with this technique. - Elementary knowledge of language: Basic domain concepts, for combining two component questions into a single compound question in different ways. - Well-formed trials or experiments: to learn more about the problem. - Step by step deductive reasoning: using all of the above and discovering key patterns of information to solve the problem with complete confidence. We have used this puzzle for a different purpose earlier, but here the focus is quite different.
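As a final check, here is a second short Python sketch (again my own verification aid, not part of the original article) confirming that the double reversal really does make both helpers give the same, truthful final answer in every situation:

```python
# Final question: "What will be your answer if I ask you whether
# the path to my left is the safe path?"

def direct_reply(is_truth_teller, proposition):
    # How this helper would answer a direct yes/no question about `proposition`.
    return proposition if is_truth_teller else not proposition

def self_referential_reply(is_truth_teller, left_is_safe):
    # Step 1: the helper works out what he WOULD say to the inner question
    # (for the liar this is the once-reversed INTERMEDIATE RESULT).
    intermediate = direct_reply(is_truth_teller, left_is_safe)
    # Step 2: he answers the outer "what will be your answer...?" question;
    # the liar reverses his own intermediate result a second time.
    return intermediate if is_truth_teller else not intermediate

for left_is_safe in (True, False):
    truth_teller_says = self_referential_reply(True, left_is_safe)
    liar_says = self_referential_reply(False, left_is_safe)
    # Both helpers give the same answer, and that answer is the truth.
    assert truth_teller_says == liar_says == left_is_safe
    print(f"left path safe: {left_is_safe} -> both helpers answer {liar_says}")
```

The assertions pass for both cases, which is exactly the double-reversal argument made above expressed as running code.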
Our focus all through the above solution process has been to think as the traveler in a simple way and find the safe path using systematic reasoning and problem solving techniques drawn out of common knowledge and experience step by step.
<urn:uuid:9fd9adbb-1c05-4879-8d66-7e391e09f5a8>
CC-MAIN-2021-43
https://suresolv.com/brain-teaser/liar-and-truth-teller-riddle-step-step-easy-solution
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588053.38/warc/CC-MAIN-20211027022823-20211027052823-00150.warc.gz
en
0.95475
3,053
3.015625
3
If the content of OED is now more extensive and diverse than that of previous editions, we could say the same of the dictionary’s readership. Usage statistics from the OED Online show that—as a rough average—every second of every day someone somewhere in the world is extracting an OED entry to read. We can record which entries are searched and viewed, and that in turn can help us prioritize our work in revising and updating the text. What impresses us most forcibly when we review the reports of OED Online usage is not so much the regularity with which certain prominent words are searched—and yes, as for any dictionary, the F-word invariably features—but rather the vast array of terms searched relatively infrequently. What distinguishes OED from other dictionaries—the sheer range of vocabulary, the depth of historical coverage—is both understood and exploited by its readers. That in turn enhances our own sense of editorial purpose in undertaking such a comprehensive revision of the text. An old word with new life We now routinely prioritize for immediate work many of the most frequently searched entries, as well as those exhibiting significant linguistic productivity in the twentieth century. In the latter category, we recently revised the entry for information. This is a word whose growth in the last 100 years both reflects and embodies major cultural and technological change, yet it hasn’t always garnered much attention. The cultural theorist Raymond Williams doesn’t list information in his 1976 work, Keywords. R.S. Leghorn, the first recorded user of information age, while confident (and astute) about the wide social impact of information technology, was dismissive of the phrase he used to describe it: 1960 R.S. Leghorn in H.B. Maynard Top Managem. Handbk. xlvii. 1024 Present and anticipated spectacular informational achievements will usher in public recognition of the ‘information age’, probably under a more symbolic title. Why? Well, information does lack the ancient heft of stone, iron or bronze, but what makes it so distinctive as the fabric of mass communication is the very combination of immateriality and massiveness, its overwhelming diffuseness. It’s also a word which provides a point of imaginative sympathy between OED‘s editors and readers. The search for definitive information is the principal aim in our experience of writing the dictionary, as it is yours in reading it. The growing availability and abundance of information through print, broadcast, and then digital media is inevitably mirrored in the increasing use of the word. Its rising profile can be measured by counting and ranking the frequency of its appearances in searchable text corpora amassed over the past few decades. The Project Gutenberg corpus of mostly pre-1900 literature lists it as the 486th most frequent word; the 1967 Brown Corpus of contemporary American English places it 346th; and the 1997 British National Corpus lists it as 219th. A recent survey of online usage reported information as the 22nd most frequently used word. While these statistics need to be treated with some caution—neither the corpora themselves nor the analytical methods applied are strictly comparable—the impression they convey is accurate. This is an old word with a new lease of life. Its prolific growth is reflected in a revised OED entry twice the size of the original. 
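The corpus rankings quoted above come from exactly this kind of count-and-rank exercise. A minimal sketch of the idea (mine, not the OED's; it assumes you have some plain-text corpus file to hand and ignores the tokenization and sampling subtleties of real corpus linguistics) might look like this:

```python
import re
from collections import Counter

def frequency_rank(path, target="information"):
    """Return (rank, count) of `target` among all word tokens in a plain-text file."""
    with open(path, encoding="utf-8") as f:
        tokens = re.findall(r"[a-z]+", f.read().lower())
    counts = Counter(tokens)
    ranking = [word for word, _ in counts.most_common()]
    return ranking.index(target) + 1, counts[target]

# Hypothetical usage -- 'corpus.txt' stands in for whichever corpus you are ranking:
# rank, count = frequency_rank("corpus.txt")
# print(f"'information' is the {rank}th most frequent word ({count} occurrences)")
```

Differences in corpus composition and in tokenization choices like these are part of why the published rankings are not strictly comparable.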
The meaning of information Information began life in English with a specific sense, borrowed from (Anglo-Norman) French: accusatory or incriminatory intelligence against a person. Excepting specific legal contexts, that’s no longer an active sense, though it survives as a dominant meaning of related terms like informant and informer. Ostensibly, information became a more neutral term, but it has always retained the sense of something that might be offered or exchanged to someone’s advantage. Perhaps because information is such a tradable commodity, as a word it also tends to form attachments freely, as shown by the greatly expanded array of compounds in the revised OED entry. The way in which a word combines with others can be highly revealing not just of its semantic reach (how its meanings grow and flourish) but of its wider cultural associations. The earliest compound attested in OED (information office) dates from 1782. It first described a service for British colonists arriving in India, later a similar function to other groups of international emigrants and travellers. In the mid-to-late nineteenth century, mass transit and communications began to take shape. Compounds arising in that period reflect information as a commodity with supply and demand: information-giving (dating from 1829), information-seeking (1869), information gathering (1893). They also tend to suggest relatively small-scale means of collection and distribution: information bureau (1869), information room (1874). In the last decades of the nineteenth century we begin to find terms which evoke some idea of professionalization: information agent (1871), information service (1885), information officer (1889), information work (1890), and information gatherer (1899). The abiding sense is that information can be collected, managed, marshalled, and disseminated. The means are formal but typically interpersonal: one person with the requisite expertise could find you what you need to know. Information is controllable and controlled; entrusted to some, who provide it for others. The emergence of computer technology roughly coincides with the OED Supplement’s second round of work on information. Information technology itself was added in the 1976 Supplement volume with a first date of 1958, but now in OED3 it appears as a separate entry, with quotation evidence dating from 1952 (in a slightly different sense). The Supplement’s editors identified and included many of the earliest compounds evoking the sense of information as data, something to be stored, processed, or distributed electronically: information processing, information retrieval, information storage (all three dated from 1950). In quick succession came terms relating to the academic study of the phenomenon, appearing in a neatly logical sequence: first the idea (information theory, 1950), next its budding adherents (information scientist, 1953), then the established field of study (information science, 1955). While those earlier coinages are generally suggestive of the beneficial or transformational power of electronic data, it is not long before the social consequences of the information age start to emerge. The need for skilled mediation emerges: information broker (1964), information architect (1966), information architecture (1969)—the more evolved hi-tech counterparts of information gatherer. 
There is an increasing sense—harking back to that very first meaning—of information being used to one’s benefit or another’s disadvantage: not merely controlled and managed, but deficient or adequate: information-rich (1959), information-poor (1970). Towards ‘fatigue’ and ‘overload’ Here at the OED, as work on the third edition progresses, we are nothing if not information-rich. The proliferation online of text archives, historical corpora, and searchable facsimiles has vastly enhanced the quantity and depth of linguistic data at our disposal for research. Where the dictionary’s original editors often struggled to find sufficient quotation evidence for common senses (volunteer readers tending naturally to alight on the exotic or unfamiliar), today’s historical lexicographers struggle to deal with the copiousness of evidence. Abundance is—well, abundant. The adverse psychological impact of the information age manifests itself linguistically, in information overload (1962) and in the entry for information fatigue (1991). Although those last two phrases are simply the latest additions to OED’s coverage, for those engaged in any form of online research they could just as well describe the arc of a working day. Perhaps this is why the OED definition of information fatigue, while entirely accurate, also sounds faintly heartfelt: Apathy, indifference, or mental exhaustion arising from exposure to too much information, esp. (in later use) stress induced by the attempt to assimilate excessive amounts of information from the media, the Internet, or at work. In dictionaries, as elsewhere, a statement can be at once plainly factual and profoundly human. ‘Information is a distraction’, President Obama is reported to have said recently. He was commenting specifically on gadgetry’s power to divert us from higher purposes: With iPods and iPads and XBoxes and PlayStations—none of which I know how to work—information becomes a distraction, a diversion, a form of entertainment rather than a tool of empowerment, rather than the means of emancipation. He would probably be dismayed to know there exists a visualized Hierarchy of Digital Distractions—email, text messages, social media, etc.—at David McCandless’s Information is Beautiful. The fact that the President was widely misquoted (or his words decontextualized) perhaps only served to underline the broader point he sought to make: that in an age in which each of us is assailed on all sides with unfiltered information, identifying the reliable sources becomes at once harder and more important. Perhaps that’s where the OED can help. Where next with the OED Online? - The revised entry for information appears as part of the December 2010 update of OED3. Updates are published four times a year, with details of recent additions available. December’s update also includes digital, the subject of another OED Word Story. - With the Historical Thesaurus of the OED you can trace the development of synonyms for the original meaning of information—‘the action of imparting accusatory or incriminatory intelligence against a person’—from peaching to whistle-blowing. - The decade that gave us information overload also saw the first recorded use of answerphone (1963), vox pop (1966), and pager (1968). How do I search for this? With subscriber access, use the Historical Thesaurus to trace how objects, actions, and concepts have been described over time.
Or use Advanced search to find words by subject and date: here choose Browse subject (and select ‘Telecommunications’ under ‘Technology’) along with the range, 1960-1970.
<urn:uuid:55a7010b-485d-4cfd-abf1-bbd2039ee6fb>
CC-MAIN-2021-43
https://public.oed.com/blog/word-stories-information/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585424.97/warc/CC-MAIN-20211021133500-20211021163500-00390.warc.gz
en
0.929386
2,148
2.828125
3
This is the last blog post in the 3-part series on planning, assessment, and learning (you may want to read part 1 and part 2 as they build the broader context for this one, which is more practical). If we are to start any planning we must always bear in mind the following principle: “Learning results from what the student does and thinks and ONLY from what the student does and thinks. The teacher can advance learning only by influencing what the student does to learn.” (Herbert Simon) I think many teachers miss the subtlety in this simple and evident claim. Why am I saying this? Because over and over, teachers focus on *their* teaching sequence and the activities they prepared for the students instead of paying more attention to what the *students* themselves need to THINK about while engaged in those activities. Dylan Wiliam made this point very clear. Say the teacher gave the students a crossword puzzle to practice learning new words. How effective is that, truly? Basically the student is asked to do a low-level thinking task: matching a definition to a word in a particular place. How could the learning of new words actually be more effective? Deepen the thinking the student has to do: give the words and ask them to manipulate them in various contexts (see what I designed for my students and the example for “migration”). Notice the complexity of connections they need to make (contextualizing in a sentence, comparing-contrasting with a similar word, using affixes – prefixes and suffixes – and so on). This not only strengthens later recall but also deepens the understanding of the word, unlike the crossword puzzle activity, where the simple matching leads to superficial encoding in memory and does not equip the student with a genuine understanding of the word, let alone a rich use of it in the future. In any subject, you can find lots of “activities” that have only a brief effect on student understanding and skill. I spoke about that in the previous blog post (e.g. making dioramas after a reading) so I won’t insist on it now. Similarly, Ron Ritchart made the same point in his work, Making Thinking Visible: “If we want to support students in learning, and we believe that learning is a product of thinking, then we need to be clear about what it is we want to support – what kinds of mental activity are we trying to encourage in our students?” (p. 5) “We must first identify what kind of thinking we are trying to elicit from our students…” “Thinking is intricately connected to content. It makes little sense to talk about thinking divorced from context and purpose.” (p. 6) “The opposite of this same coin is a classroom that is all about activity. Playing a version of Jeopardy to review for a test may be more fun than doing a worksheet, but it is unlikely to develop understanding.” (p.9) “To develop understanding of a subject area, one has to engage in authentic intellectual activity. That means solving problems, making decisions, and developing new understanding using the methods and tools of the discipline.” (p.10) David Perkins, in Smart Schools, insists on it as well: “Learning is a consequence of thinking. Retention, understanding, and the active use of knowledge can be brought about only by learning experiences in which learners think about and think with what they are learning. As we think about and with the content that we are learning, we truly learn it.” “Thinking is a largely internal process.
We, as teachers, however, must create opportunities for thinking. For thinking to occur, students must first have something to think about (n.n. content) and be asked to think (n.n. tasks/opportunities).” The second principle I would like to emphasize is actually a quote: “If you fail to plan, you pretty much plan to fail.” It might sound somewhat restrictive and too narrow but, in my view, it is at the core of good teaching. Knowing the why, what, and how of your content is essential in mapping out learning experiences that will enable students to develop the factual, procedural, and conceptual knowledge that are the markers of genuine intellectual rigor. And by “intellectual rigor” I mean an ability to think carefully and deeply when faced with new knowledge and arguments. It requires vibrant engagement with ideas and high standards of excellence while allowing space for questions and explorations. It empowers students to become what we all aspire for them to be: critical thinkers. As for me, 90% of the hardest work I do is actually planning. There are far too many elements to consider when designing rich learning experiences, so I spend days prior to a new unit clarifying, anticipating, combining, and tweaking these factors in order to achieve the best outcome for students. I tried to convey this work in the iceberg model below, but it is much more complex. Take one factor, for instance “assessment”, and consider how many questions I ask myself: What type of assessment should I use in ..? What formative assessment tools are most effective for this task in math at this point in learning? How will I record it? What degree of openness should it have? Should I involve students in designing a rubric for this specific task? etc. Moreover, because I collaborate with other teachers, the initial plan is altered to incorporate other ideas, too. Not to mention that, as the teaching-learning process unfolds, we must make changes when we notice the students are still struggling or have developed misconceptions. In the IB Primary Years Programme we also focus on student agency, which adds to this dynamic process by engaging students themselves in co-planning some inquiries, investigations, and tasks. When I plan I have 5 principles that I developed years ago as guidelines: As a general template, regardless of the system you teach in (more progressive or more traditional), the KDU planning model (Know, Do, Understand) gives you a good, clear start. I loved its simplicity and I kept using it – it brings together all the facets of understanding. I am an adept of the KISS (Keep It Simple, Stupid) approach because our time and energy as teachers are spread over a really big plate, from planning and teaching to staff meetings, in-school professional development, field trips, report cards, and the list is endless… I showed and explained the model in the previous blog (part 2) so it gives context to the practical plans I will further show you:
Learning, as I mentioned before, is an internal process so the only way we can partly capture it is through what the student actually does and we can observe (and sometimes measure more easily). Also, to reiterate a previous idea, what you write in this planner in the blue column is NOT the learning experiences you design for your students but what they can actually DO. See example below: To go back to the planner I made above (Societal decision-making), the next page is straightforward, simple, and to the point. Thus, I turned a 12-page planner into a 2-page one that enables me to keep track of the most important information as the unit unfolds: I also turn the key understandings of the unit into “concept trackers” for students to use throughout the inquiry. I print them on large A3 sheets and students add post-its as they move throughout their learning. They select a stage they think they are in (“bubble”), and explain why they think they are there. It is a simple but powerful way for them to reflect on their learning, to justify their choice, and make that visible to the classroom community. *Gareth Jacobson introduced this idea in one of the schools I worked and I found it very useful. What does this look like in subject-specific content? Here are two examples, one for language and the other for mathematics. *Obviously, you design them according to the age-group you teach (e.g. adding more layers of difficulty and/or complexity). After you design this KDU plan you can then create (or co-create with students) the criteria for assessment, self-assessment and reflection. I combine 3 types of thinking moves – related to factual /procedural fluency, conceptual understanding, and also leave room for open inquiries (see an example below in math). It is important to balance the STRUCTURED tasks with the OPEN-ENDED ones so plan for “open” tasks, too (see my examples here). If you only insist on structured tasks you deprive students of making stronger connections between factual, procedural, and conceptual knowledge. The student *performing* impeccably in a task (say, long division) can give us what is called a “false positive” – I illustrated that in this blog post but do read this one by Robert Kaplinsky or watch the 1-minute videos below. What else is essential? To sequence the teaching in such a way that the students can make increasingly deeper AND more complex connections between actual, procedural, and conceptual knowledge from various disciplines. In primary years that means a transdisciplinary approach while later an inter-disciplinarity should be the focus. It is something that is missing in traditional systems where disciplines become completely separated and students fail to recognize the deeper connections between them (aspect that I touched upon in the first part of this series). What does it look like in the IB PYP practice? It means looking at an inquiry unit through all the lenses that would enhance student learning. That does NOT mean “dumping” every single subject into the unit of inquiry but making connections ONLY with the subjects that actually help students develop strong skills and understanding. In progressive systems this is one obvious error as the units become thematic instead of conceptual – I talked about that here. and here. Therefore, when we plan we need to keep an eye on the best possible connections that would enrich student learning and would allow them to strengthen the web of ideas, concepts, facts and skills pertinent to the core concepts. 
Getting back to the unit planner I showed before on Societal decision-making, this is what I plan: I know it has been a rather long blog post but I felt the need to give some background principles behind these practical examples of planning. If I were to summarize, these would be the main ideas: - Always plan for COGNITIVE WORK - Make sure you BALANCE factual, procedural and conceptual knowledge - Work COLLABORATIVELY with teachers in your grade-level/department - Make planning a TRANSPARENT process - Exercise PROFESSIONAL JUDGMENT Thank you for reading. I will blog next time on ASSESSMENT and FEEDBACK and also answer a question that Jamie House posed last week: “I wonder what your thoughts are regarding language acquisition within the international context, especially ELLs.” It is an important question for teachers who work in an international setting *precisely* because the student population is so diverse in terms of languages they speak, countries they come from and so on.
<urn:uuid:d4df7295-db2d-4274-9cd6-a1c0adf5af6d>
CC-MAIN-2021-43
https://cristinamilos.education/2020/08/30/strategy-vs-tactics-planning-assessment-and-learning-3/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00150.warc.gz
en
0.957427
2,472
3.671875
4
In 2 Kings 18.26-28 [paralleled by Isaiah 36.11-13], 2 Chronicles 32.18 & Nehemiah 13.24, there is mention of Yehūḏîṯ, which literally means "Judaic," "Judaean" or "Jewish [speech]": the language of the kingdom of Yehudah [Judah], which we might say is Hebrew, but which could alternatively be a dialect specific to the tribe of Yehudah (cf. Judges 12.1-7). At any rate there is nothing in any of these passages about God speaking or writing, nor is the term "Hebrew" ever employed (although it is [mis]translated to say that in a few English Bibles). As a matter of fact, in the Bible itself (at least the parts thereof considered to be canon by most Christian groups) the first time that Hebrew is mentioned as a language is actually in the New Testament.1 There is no direct or explicit reference anywhere in the Old Testament to a language called Hebrew nor is there even a phrase like "the language of the Hebrews" which might perhaps at least imply such a phenomenon. A Babylonian in the Land of Canaan Variations of the term "Hebrew" appear 34 times in the Old Testament, always in reference to a person or a group of people. The first such instance is in Genesis 14.13, where Abram is called "the Hebrew", which at the time was not an ethnonym, Abram himself being a Babylonian (or "Chaldean")2 and having friends or acquaintances among the Egyptians (he was a guest of one of the Pharaohs, according to Gen. 12.10-20), the Philistines (he cut a covenant with a Philistine king, Abimelech, in Gen. 21.22-34, to which king God spoke in Ch. 20), the Hittites (from whom he bought his family tomb in Gen. 23) and the Canaanites (consisting of roughly ten goyim/ethnoi, "tribes/nations"3, in whose land he dwelt for the second half of his lifetime). It might be safe to deduce from the above that Abram/Abraham was multilingual, able to speak Akkadian (Middle Babylonian in his case4) and Hittite; the Proto-Arabic of ancient Canaan; and the languages of the Egyptians (perhaps Middle Egyptian at his time) and the Philistines. It may be that he did not even know of a Hebrew language. The alternative to this would be to assume (and it's just as much an assumption as is any other theory) that Abraham necessarily chose to speak Hebrew (wherever he would have acquired it from) and that all these different peoples he encountered necessarily spoke his language of choice, including his Amorite comrades Mamre, Aner and Eshcol (see again Gen. 14.13), and also his God. The Slave Who Named God Abraham's wife Sarah had an Egyptian slave named Hagar, who is the first person in the Bible to ever give God a nickname. In Gen. 16, Hagar also bestows upon the well between Kadesh and Bered the presumably Hebrew name of Be'er-Lahai-Roi, the "Well of the Living One Who Sees Me," because there she had seen God and spoken to him. One might conclude from this that the conversation between God and Hagar took place in Hebrew, but it could just as well have been conducted in Hagar's native Egyptian dialect and, after her naming of the well, either she or its later users translated the name into the local Canaanite dialect (from which eventually Hebrew might have originated). Diverse Cultural and Ethnic Origins In Gen. 42, Joseph, a great-grandson of Abraham and Sarah, meets his brothers in Egypt years after they have sold him into slavery. V. 23 of this chapter indicates that Joseph's brothers, who think he is merely an Egyptian, are also under the impression that Egyptians typically do not understand their speech. 
Joseph helps this assumption along by necessarily using a translator to communicate with them. There is, however, no mention whatsoever of what language is being spoken by anyone in the scene, nor, for that matter, by anyone in Egypt or Canaan. A few generations after Joseph, in the Book of Exodus, it is in this East African environment, filled with points of contact with foreigners of various stripes, that Moses and his fellow Israelites find themselves. If Moses was indeed raised by and as Egyptian royalty5 it stands to reason that he spoke and wrote Egyptian quite fluently. (The Ancient Egyptian name for their hieroglyphical script is Medw Neter, the "Words of God."6) If he didn't already speak the language of the Kenites of Midian before he married the daughter of their priest7 then surely after forty years living among them8 he must have become quite well-versed in that tongue as well. And if Moses' Kushite ["Ethiopian"] wife mentioned in Numbers 12.1 is the same person as the Kenite's daughter, then the Kenites themselves seem to have been a diverse and—it would stand to reason—multilingual population, possibly best described as Afro-Arabian (not unlike many Northeast African, South Arabian and South Indian ethnic groups of the modern era). Leviticus 24 contains the story of one of the Israelites who is sojourning in the wilderness between Egypt and Canaan during the Exodus, whose father is Egyptian and whose mother is from the tribe of Dan. Lest that be taken to be quite the peculiar anomaly, note too 1 Chronicles 4.18, in which the daughter of a Pharaoh is married to an early descendant of Judah! The patriarch Judah himself was married to a Canaanite (Gen. 38.2) and Joseph's wife was the daughter of an Egyptian priest (Gen. 41.45). So Israel was itself a diverse population both at its onset and at the time of the Exodus. Moses himself grew up as an Egyptian, married into a family to which he was only very distantly related through Abraham and which lived a good distance outside Moses's native Egypt (perhaps outside Africa altogether) and he encountered various peoples on his journey with Israel through the wilderness. There is no part of the Torah/Pentateuch or any other portion of the Old Testament that indicates God inscribing anything anywhere in Hebrew (nor in Arabic or Medw Neter or cuneiform or any specific language). That conclusion is an assumption, which may be quite correct but is not based on anything that the Bible actually says. Neither is there any mention in the Old Testament of what language God ever speaks to anyone when he does speak, neither on earth nor above Why Necessarily Only One Language? Going by the demographics of the people who Moses led out of Egypt, let alone Moses' own cultural background and experiences, it should be fair to presume that the stone tablets with the Ten Words themselves catered to a few different dialects or scripts, somewhat like the Rosetta Stone (written in Medw Neter, Demotic Egyptian, and Greek) or the royal Afro-Arabian inscription known as the Monumentum Adulitanum (written in Ge'ez, Sabaic and Greek). Considering the fact that, according to the story, it is the Deity himself who authored the stone tablets, I don't see why they couldn't have contained the Ten Words copied into ten different languages, or even seventy-odd dialects, symbolic of the traditional number of goyim, "nations/peoples," descended from Noah in Gen. 10. 
And when God introduces himself to the shepherd Moses on Mt Horeb, he might be speaking to him in Kenite or Egyptian just as much as Hebrew, or a combination of the above, or perhaps Moses is experiencing a form of the New Testament's glossolalia, hearing the Deity's voice in a tongue completely unknown to him but which he is nonetheless able to comprehend and to converse in. Bala'am of Mesopotamia In Numbers 22 God speaks to the non-Israelite soothsayer Bala'am, who may even be from the same region of Asia as Abraham was. There shouldn't be much reason to expect that this conversation was definitely in Hebrew, but then again perhaps Bala'am was a Hebrew-speaker and God chose to communicate with him using that tongue. Only One Tenuous Instance (in the NT) In the "canonical" Bible the closest to a clear and explicit mention of anything close to God directly speaking to anyone in a particular language occurs in Acts 26.14, wherein the apostle Paul/Saul is making his case to King Herod Agrippa. Herein he tells the king that when Jesus accosted him on his way to Damascus, he questioned him "in the Hebrew dialect". Curiously, however, a few English Bible translations such as the NIV, the NLT and the NET disagree with the Greek texts by saying that it was "in the Aramaic language". Surprisingly, even the TLV, a Messianic Jewish Bible, says "Aramaic". The "Aramaic Bible in Plain English" translation says "Judean Aramaic". The ancient Syriac Aramaic Bible, called the Pəšîṭtâ, says "Hebrew" just like the Greek New Testament and most English translations. 1. See e.g. Luke 23.38, & John 5.2, 19.13-20, & 20.16 2. See Genesis 11.31 3. See Genesis 15.18-21 4. Although Alice C. Linsley argues that it was "Kushitic" [East African] Akkadian. 5. Exodus 2.10 6. See also Acts 7.22 7. Exodus 2.16-3.1, & Judges 1.16 & 4.11 8. Based on Stephen's opinion in Acts 7.23 read together with Exodus 7.7. 9. Am I allowed to quote myself? Heh-heh... :-D
<urn:uuid:fc711a73-6199-4f48-a109-6f6eb658d2c0>
CC-MAIN-2021-43
https://christianity.stackexchange.com/questions/8388/why-did-god-speak-hebrew
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585516.51/warc/CC-MAIN-20211022145907-20211022175907-00630.warc.gz
en
0.973937
2,126
2.8125
3
I recently saw Shakespeare’s King Henry VIII at the Chicago Shakespeare Theater. The play, Shakespeare’s last one performed at the Globe Theater approximately 400 years ago, was very well done. The story line is not as compelling as most of Shakespeare’s works, but the interrelationship of church and state theme struck a chord with me, albeit a discordant one. King Henry the VIII was born into aristocracy. Young Henry was appointed Constable of Dover Castle at age two, Earl Marshal of England and Lord Lieutenant of Ireland at age three, inducted into the Order of the Bath soon after, and a day later he was made the Duke of York. A month or so after that, he was made the Warden of the Scottish Marches. He had the best education available from the best tutors, was fluent in Latin and French and was familiar with Italian. For all of his privilege, he was not expected to become king. His brother, Arthur, Prince of Wales, was the first born and heir to the throne, but Arthur died only 20 months after marrying Catherine of Aragon (daughter of the King and Queen of Spain). Henry VIII was only 10. (Wikipedia) Henry became the Duke of Cornwall and assumed other figurehead duties. His father, the King Henry VII, made sure young Henry was strictly supervised, did not appear in public and was insulated from real authority. Henry VII quickly made a treaty with the King of Spain that included the marriage of his daughter, Catherine, to young Henry – yes the widow of recently deceased brother Arthur. (Wikipedia) From this point begins a history of manipulation, abuse of power, shameless excess and rationalizations twisting biblical and religious notions to serve the king’s self-interest. This is a story that parallels the “marriage” of State and Church. The two are intertwined in an adulterous affair of blasphemous indiscretions. For the marriage to be sanctioned a papal dispensation was required. It was somewhat complicated by the delicate question whether Arthur and Catherine had consummated their marital union. Young Henry was only 11 at this time and not sure he wanted to marry his brother’s widow. By age 14 he decidedly did not like the idea. With relations deteriorating between the English and Spanish monarchs, a way was found to keep Catherine in England. Meanwhile Holy Roman Emperor Maximilian was pressing for the marriage of his Granddaughter, Catherine’s niece, Eleanor to Young Henry. Then King Henry VII died. Henry was not quite 18. (Wikipedia) Weeks after his father was buried, Henry married Catherine, snubbing Eleanor and the Holy Roman Emperor Maximilian. It was a low key affair without the papal dispensation. Unlike the low key marriage, the coronation a month later was a grand affair. Henry became King Henry VIII at just barely 18 years of age. Catherine was at his side. She wrote to her father “our time is spent in continuous festival”. Two days after the coronation, King Henry VIII had two unpopular ministers arrested for high treason, and they were executed months later. They would not be the last executions under the reign of King Henry VIII. (Wikipedia) Shakespeare picks up the story in this early time. Shakespeare paints a picture of a young, unstable man, given to merriment, rash, attracted to women, and conflicted over the complicated relationship with Catherine. Ultimately, however, it was Catherine’s inability to provide King Henry a male successor to his throne that led him to reject her and take on one of two sisters, both mistresses of his, as his second wife. 
Having already shown a proclivity to do as he wished without papal blessing, King Henry VIII rejected the authority of Rome outright to "annul" his marriage to Catherine and to marry Anne Boleyn. Ironically, King Henry VIII anchored the departure from Rome on Biblical grounds. He became convinced, convinced himself, or found it convenient anyway to believe that he should never have married Catherine, based on Leviticus 20:21 – "If a man marries his brother's wife, it is an act of impurity; he has dishonored his brother. They will be childless." Strangely, he had no trouble with the Biblical pronouncements against adultery and no compunction against beheading many, including future wives. So, figured Henry, the Pope could not have given the dispensation to begin with (the dispensation he did not bother to wait for). It was this neatly packaged argument he delivered to the Pope, seeking the Pope's blessing on the annulment – a blessing that was not forthcoming after two years of trying. (Wikipedia)
King Henry married Anne Boleyn in secret. She became pregnant, necessitating a public marriage and a kangaroo court to declare the marriage to Catherine null and void and the marriage to Anne Boleyn valid. Of course, King Henry needed a church to bless these doings: so began the "English Reformation." We know the story. King Henry VIII went through six wives in his life. Poor Anne could not give the King a son, fell from favor and was executed. His third wife, Jane Seymour, did manage to give him a son, but she died in the process. Henry found a new wife, Anne of Cleves, but was quickly disappointed. She agreed to an annulment (which is better than losing her head) when Henry became infatuated with Catherine Howard. Catherine Howard, however, had a number of affairs and lost her head, along with her suitors, when Henry was informed. After dissolving all the monasteries and transferring their properties to the crown, the King married his last wife, Catherine Parr.
The play highlights Cardinal Wolsey as the King's primary confidant in his early years. Shakespeare presents Wolsey as powerful, diplomatic and ambitious, having risen to his high station by skill, cunning and force of will – very much unlike the hapless king. King Henry VIII is a study in the effects of power that do not match with strength of character. Wolsey stands in contrast to the young king, having risen from humble means in a time when station in life was determined by birth. King Henry VIII was handed the throne; Wolsey positioned himself as the king's right-hand man with skillful diplomacy and political maneuvering. Shakespeare makes Wolsey's ambition his undoing. Wolsey had ambitions to ascend to the papacy. The King's impromptu marriage to Catherine without papal dispensation put Wolsey in a tough spot. In the play, King Henry discovers a letter from Wolsey meant for the Pope that does not show the King in a good light. Wolsey's fall from favor was as sheer a drop as his rise to high position. Historically, there may be more evidence that Wolsey was simply the King's scapegoat, but Shakespeare uses the fall from monarchical grace to reveal a repentant heart toward God in fallen Wolsey, in contrast to the hard-hearted, hard-headed and rash Henry, who lops off heads, marries and unmarries women, and maneuvers state, church and kingdom to his own ends. This was a time, of course, when church and state were intertwined. The church became a pawn of the whims and the power struggles of the state.
King Henry's reformation was not a promising spiritual start for the Church of England. In fact, the historical account exposes the church/state problem. I have always thought, since I learned of the conversion of Constantine and the subsequent decree making Christianity the state religion, that the marriage of church and state is never a good thing for the church. Lord Acton's famous words – "Power corrupts and absolute power corrupts absolutely" – are testament to the problem when the church is hitched to state power.
Contrast the early church revealed in the Book of Acts. God worked powerfully in the hearts and minds of the people, which sparked the formation of the "church" and the spread of the Gospel throughout the world. These things occurred without any state power. In fact, it was precisely the persecution of the early Christians that precipitated the scattering of the apostles and the spread of the Gospel. God does not need the power of the state to bring His kingdom on earth in the hearts and minds of people. I believe state power corrupts and gets in the way. I would not be surprised if Satan himself decided that persecution was not working; it was more like pouring water on an oil fire, causing the fire of the Gospel to spread rather than be squelched; and so the tactics turned to a different strategy: make Christianity the state religion, compel people to become Christians, not out of a godly change in the heart but for fear of state reprisal, thereby mixing into the "church" a flood of Christians in name but not real believers. In this way, the church was watered down, corrupted and overrun. It took on the pagan holidays of the time and became like any other human institution.
I do not believe for a moment that God was taken by any surprise at these doings. In fact, Jesus foretold it in the parable of the wheat and the weeds in Matthew 13:24-30:
Jesus told them another parable: "The kingdom of heaven is like a man who sowed good seed in his field. 25 But while everyone was sleeping, his enemy came and sowed weeds among the wheat, and went away. 26 When the wheat sprouted and formed heads, then the weeds also appeared. "The owner's servants came to him and said, 'Sir, didn't you sow good seed in your field? Where then did the weeds come from?' "'An enemy did this,' he replied. "The servants asked him, 'Do you want us to go and pull them up?' "'No,' he answered, 'because while you are pulling the weeds, you may uproot the wheat with them. Let both grow together until the harvest. At that time I will tell the harvesters: First collect the weeds and tie them in bundles to be burned; then gather the wheat and bring it into my barn.'"
One of the main reasons I have heard why people do not believe in the Bible is the history of Christianity. It certainly is a dark history, at least the one told in the history books. The stories of real Gospel truth are there too, but not recounted in the history books, as the history books pick up the story primarily from the point where the church becomes driven by state power. That power play is the history that we know – including the Crusades. Like the parable of the wheat and the weeds, however, God is still working in the hearts and minds of men and women. It simply is not the stuff of history books, newspaper headlines or plays.
<urn:uuid:5dc915da-31f2-4ed3-b620-025e67e8c641>
CC-MAIN-2021-43
https://navigatingbyfaith.com/2013/06/02/lessons-from-king-henry-viii/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587908.20/warc/CC-MAIN-20211026134839-20211026164839-00030.warc.gz
en
0.981242
2,293
2.859375
3
Lesson 21 Stability Analysis II
21.1 Slices Method of Stability Analysis
The method of slices for stability analysis was introduced by Fellenius. In this analysis, the earth forces are considered to have a direction that makes an angle with the vertical sides of the slices, and the water forces acting on the sides of the slices are also taken into account. For example, in the analysis of a sloping-core dam there is an appreciable difference in shear strength between the shell and core materials. The factor of safety computed neglecting earth forces on the sides of the slices is lower than that computed considering earth forces, and an unnecessarily conservative design results. Considering the earth forces on the sides of the slices is necessary when it is desired to analyze the stress conditions point by point along the failure surface; an improper distribution of stresses results when the earth forces on the vertical sides of the slices are neglected. The alternative is to assume that lateral earth forces exist but that their direction is parallel to the base of the slice.
The first step is to divide the sliding mass into a number of vertical slices. The sliding surface may be a circular arc or a combination of arcs and straight lines. The number of slices chosen is usually about 8 to 10, which is consistent with the general accuracy of the method. The width of each slice need not be uniform, and the widths are adjusted so that the entire base of each slice is located on a single material. The forces acting on a typical slice (Fig. 21.1) consist of:
WT = total weight of the slice,
EL, ER = earth forces on the left- and right-hand vertical faces, respectively,
UL, UR, UB = water forces on the left- and right-hand vertical faces and on the bottom of the slice, and
P = resultant earth force on the base of the slice.
The water forces are determined from water-pressure diagrams on the sides and base of the slice, obtained from static water conditions if no seepage occurs or from flow nets if seepage occurs. The directions of the water pressures are perpendicular to the surfaces on which they act. Sometimes a lateral force is used which is a combination of an earth force and a water force on the side of the slice. Pressures generated in the pore water by consolidation and shearing in the embankment are taken into consideration in various ways, depending upon the method used for expressing shear strength.
The resultant force P on the base of the slice can be represented by a component N normal to the base of the slice and a component SD tangential to the base of the slice. The resultant of N and N tan ϕD is Pf. The tangential component can be separated into two parts, namely N tan ϕD and cD (Fig. 21.1), the developed (mobilized) strength parameters being
tan ϕD = tan ϕ / F and cD = c / F,
where ϕD = developed friction angle, cD = developed cohesion, c = cohesion, ϕ = angle of internal friction of the soil, and F = factor of safety. Different values of the factor of safety (F) may be used in the above expressions; however, it is preferred to use the same value for both.
Fig. 21.1. Force polygon for a slice: (a) Slice, and (b) Force polygon. (Source: Davis, 1969)
The polygon for the forces acting on the typical slice of Fig. 21.1a is shown in Fig. 21.1b.
The magnitude and direction of the forces WT, UB, UL, and UR are determined from the geometry of the slice, the unit weight of the soil, and the reservoir and ground-water conditions. The direction of each of the forces EL and ER may be assumed as midway between the directions of the face and the failure surface of the vertical plane on which the force E acts. The values of the soil parameters c and ϕ are known from soil testing.
The solution for the factor of safety is made by trial and error. The analysis is started at the topmost slice, where only one E force is acting. A trial factor of safety is assumed and the force polygon for the topmost slice is constructed. On the basis of the assumed factor of safety, the force cD can be computed. The magnitudes of EL, N and N tan ϕD are unknown, but the directions of EL and of the resultant of N and N tan ϕD are known, and this permits closure of the force polygon. Having determined EL for slice 1, ER for slice 2, which is the reaction of EL on slice 1, is also determined. The force polygons for slice 2 and the remaining slices are then completed in the same manner as for slice 1. For the last slice, as for the first, only one E force exists, and its force polygon is drawn. If, on using the E force obtained from the previous slice, the force polygon for the last slice does not close, a new trial is required using a different value of the factor of safety. When the proper value of the factor of safety has been assumed, the force polygon for the final slice will close.
21.2 Swedish Slip Circle Method
An earth embankment usually fails because of the sliding of a large soil mass along a curved surface. It has been established by actual investigation of slides of railway embankments in Sweden that the surface of slip is usually close to cylindrical, i.e. an arc of a circle in cross-section. The method described here, generally used for examining the stability of slopes of an earthen embankment, is called the Swedish slip circle method or the slices method. The method thus assumes the condition of plane strain, with failure along a cylindrical surface.
The location of the center of the possible failure arc is assumed. The earth mass is divided into a number of vertical segments called slices. These verticals are usually equally spaced, though it is not necessary to do so. Depending upon the accuracy desired, six to twelve slices are generally sufficient. Let O be the center and r the radius of the possible slip surface (Fig. 21.2). Let the total arc AB be divided into slices of equal width, say b meters each. The width of the last slice will generally be different; let it be m×b meters.
Fig. 21.2. Swedish slip circle method. (Source: Garg, 2011)
Let these slices be numbered 1, 2, 3, 4, ... and let their weights be w1, w2, w3, w4, ... The forces between the slices are neglected and each slice is assumed to act independently, as a vertical column of soil of unit thickness and width b. The weight W of each slice is assumed to act at its center and can be resolved into two components, a normal component (N) and a tangential component (T), such that
N = W cos α and T = W sin α (21.2)
where α = the angle which the base of the slice (the tangent to the slip surface at that slice) makes with the horizontal. The normal component (N) passes through the center of rotation (O) and hence does not create any moment on the slice. However, the tangential component (T) causes a disturbing moment equal to (T × r), where r is the radius of the slip circle.
The tangential components of a few slices may create resisting moments; in that case T is considered negative. The total disturbing moment (Md) is equal to the algebraic sum of all the individual tangential moments, i.e.
Md = r ΣT
The resisting moment is supplied by the development of shearing resistance of the soil along the slip surface AB. The magnitude of the shear resistance developed in each slice depends upon the normal component (N) of that slice. Its magnitude will be
c ΔL + N tan ϕ
where c = unit cohesion, ΔL = curved length of the base of the slice, and ϕ = angle of internal friction of the soil. This shear resistance acts at a distance r from O and provides a resisting moment of r (c ΔL + N tan ϕ). The total resisting moment over the entire arc AB is
Mr = r [c (π r θ / 180) + tan ϕ ΣN]
where θ = angle in degrees formed by the arc AB at the centre O, so that π r θ / 180 is the length of the arc AB. Hence, the factor of safety (FS) against sliding is
FS = Mr / Md = [c (π r θ / 180) + tan ϕ ΣN] / ΣT (21.1)
Equation (21.1) can be worked out by evaluating ΣN and ΣT separately. This evaluation of ΣN and ΣT can be simplified as explained below. If y1, y2, y3, ... are the vertical extreme ordinates (boundary ordinates) of the slices 1, 2, 3, ..., then the respective weights can be taken approximately as
wi = γ × b × (mean of the two boundary ordinates of slice i)
where γ = unit weight of the soil, for a slice of unit thickness and width b, and n = total number of slices. The area of the N-diagram will represent ΣN and that of the T-diagram will represent ΣT (Fig. 21.3). As a general case, the values of ΣN and ΣT can be worked out in tabular form (Table 21.1), and the F.S. is then calculated from equation (21.1).
Fig. 21.3. Area of N and T diagram. (Source: Garg, 2011)
Table 21.1. Weight of slices, N and T components. (Source: Garg, 2011)
21.3 Location of the Centre of the Critical Slip Circle
In order to find the worst case, numerous slip circles should be assumed and the factor of safety (F.S.) calculated for each circle, as explained earlier. The minimum factor of safety is obtained for the critical slip circle. In order to reduce the number of trials, Fellenius suggested a method of drawing a line (PQ) representing the locus of the centre of the critical slip circle. The determination of line PQ for the d/s and u/s slopes of an embankment is shown in Fig. 21.4(a) and Fig. 21.4(b), respectively. The point Q is fixed by its coordinates measured from the toe (Fig. 21.4 a). The point P is obtained with the help of the directional angles α1 and α2 for various slopes (Table 21.2).
Fig. 21.4 (a). Locus of critical circle for d/s slope. (Source: Garg, 2011)
Fig. 21.4 (b). Locus of critical circle for u/s slope. (Source: Garg, 2011)
Table 21.2. Directional angles α1 and α2 (in degrees) against embankment slopes. (Source: Garg, 2011)
After determining the locus of the critical slip circle, it can be drawn, keeping in view the following points:
a) Except for very small values of ϕ, the critical arc passes through the toe of the slope.
b) If a hard stratum exists at shallow depth under the dam, the critical arc cannot cross this stratum, but can only be tangential to it.
c) For very small values of ϕ (0 to 15°), the critical arc passes below the toe of the slope if the inclination of the slope is less than 53° (which is generally the case). The center of the critical arc in such a case is likely to fall on a vertical line drawn through the center of the slope (Fig. 21.5).
Fig. 21.5. Center of critical arc. (Source: Garg, 2011)
Keywords: Stability analysis, Slices method, Swedish slip circle, Critical slip circle
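To make equation (21.1) concrete, the short sketch below (not part of the original lesson) evaluates the Swedish slip circle factor of safety for one assumed trial circle. All slice data, soil properties and circle geometry are hypothetical; in practice these values are scaled from the cross-section as in Table 21.1, and the calculation is repeated for many trial circles, the minimum F.S. identifying the critical circle as described in Section 21.3.
```python
import math

# Hypothetical data for one trial slip circle (slice of unit thickness).
gamma = 18.0   # unit weight of soil, kN/m^3 (assumed)
c = 24.0       # unit cohesion, kN/m^2 (assumed)
phi = 20.0     # angle of internal friction, degrees (assumed)
r = 12.0       # radius of the trial slip circle, m (assumed)
theta = 70.0   # angle subtended by arc AB at centre O, degrees (assumed)

# (width b, mean height y, base inclination alpha in degrees) for each slice
slices = [(2.0, 1.5, -8), (2.0, 3.2, 0), (2.0, 4.5, 10),
          (2.0, 5.0, 20), (2.0, 4.0, 32), (2.0, 2.2, 45)]

sum_N, sum_T = 0.0, 0.0
for b, y, alpha_deg in slices:
    W = gamma * b * y                 # weight of the slice, w = gamma * b * y_mean
    a = math.radians(alpha_deg)
    sum_N += W * math.cos(a)          # N = W cos(alpha)
    sum_T += W * math.sin(a)          # T = W sin(alpha); negative alpha gives a resisting T

arc_AB = math.pi * r * theta / 180.0                          # curved length of arc AB
Mr = r * (c * arc_AB + math.tan(math.radians(phi)) * sum_N)   # resisting moment
Md = r * sum_T                                                # disturbing moment
FS = Mr / Md                                                  # equation (21.1)
print(f"Sum N = {sum_N:.1f} kN, Sum T = {sum_T:.1f} kN, F.S. = {FS:.2f}")
```
In the tabular form of Table 21.1 the same quantities (W, N = W cos α, T = W sin α) are accumulated column by column, and Fig. 21.3 simply represents ΣN and ΣT as the areas of the N and T diagrams.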
References:
Davis, C.K. (1969). Handbook of Applied Hydraulics. Second Edition. McGraw-Hill, New York.
Fellenius, W. (1936). Calculation of the stability of earth dams. Transactions of the 2nd Congress on Large Dams, Washington, D.C., Vol. 4, pp. 445-465.
Garg, S.K. (2011). Irrigation Engineering and Hydraulic Structures. Twenty-fourth revised edition. Khanna Publishers.
Suresh, R. (2002). Soil and Water Conservation Engineering. Fourth Edition.
<urn:uuid:435fd284-96b4-4959-8855-8b2a3f301c51>
CC-MAIN-2021-43
http://ecoursesonline.iasri.res.in/mod/page/view.php?id=255
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585671.36/warc/CC-MAIN-20211023095849-20211023125849-00310.warc.gz
en
0.90282
2,517
3.5625
4
Cryptography in Everyday Life
Cryptography used to be an obscure science, of little relevance to everyday life; in our always-online world it is more important than ever. The word comes from the Greek kryptos, meaning "hidden": the prefix "crypt" means hidden and the suffix "graphy" means writing. Cryptography (together with cryptanalysis, the study of breaking ciphers) is the study of encryption from a mathematical perspective, and it is a vital technology that underpins the security of the information we exchange every day. Its history is long; the most ancient text containing elements of cryptography, found in Egyptian hieroglyphic inscriptions, dates back some 4,000 years.
Two broad families of ciphers are in use today, with both hardware and software implementations. Symmetric, or "secret key," schemes such as IDEA, DES and triple-DES use the same secret key for both encryption and decryption; symmetric algorithms execute much faster than asymmetric ones, but the parties must first share the key by some safe means. Asymmetric, or "public key," cryptography, proposed by Whitfield Diffie and Martin Hellman in the mid-1970s, addresses this weakness: every user has two keys, one public and one private. RSA is a well-known example. A message encrypted with someone's public key can be read only with the corresponding private key. Conversely, if you receive a message from me that I have encrypted with my private key and you are able to decrypt it using my public key, you should feel reasonably certain that the message did in fact come from me.
Authentication and digital signatures are a very important application of public-key cryptography. Authentication is any process through which one proves and verifies certain information, such as the identity of a sender or the origin and integrity of a document. A digital signature is typically created through the use of a hash function and a private signing function, and it can be checked by anyone who holds the signer's public key. Kerberos, an authentication service developed by MIT, uses secret-key ciphers for encryption and authentication; it authenticates users to network resources but does not authenticate authorship of documents.
Email encryption is a familiar example. Each person with an email address has a pair of keys associated with that address, and these keys are required in order to encrypt or decrypt an email. When an email is sent, it is encrypted using the recipient's public key, and the contents are turned into a complex, indecipherable scramble that is very difficult to crack. If the message is intercepted, the ciphertext alone is of no use to the thief: only the recipient's private key will decode it. PGP (Pretty Good Privacy), released by Phil Zimmermann, combines this kind of public-key technique with a user-supplied password to encrypt and decrypt email and other plain-text messages; it is now available both as MIT freeware (versions 2.6 and later, for non-commercial use) and in commercial versions from Viacrypt (versions 2.7 and later).
Time stamping is a technique that can certify that a certain electronic document or communication existed, or was delivered, at a certain time. It uses an encryption model called a blind signature scheme. Time stamping is very similar to sending a registered letter through the U.S. mail, but provides an additional level of proof, for instance that a recipient received a specific document.
Electronic money (also called electronic cash or digital cash) is a term that is still evolving. It includes transactions carried out electronically with a net transfer of funds from one party to another, which may be either debit or credit and can be either anonymous or identified. Such systems use cryptography to keep the assets of individuals in electronic form. Anonymous schemes are the electronic analog of cash (DigiCash's Ecash was one example), while identified schemes, which reveal the identity of the customer and are based on more general forms of signature schemes, are the electronic analog of a debit or credit card. Present-day cryptocurrencies such as Bitcoin, Ethereum, Litecoin and Monero likewise rely on everyday cryptography for international finance and value transactions.
Secure network communication is probably the most common use of cryptography in everyday life: navigating to a website using SSL/TLS encryption. The SSL Handshake Protocol authenticates each end of the connection (server and client), with client authentication being optional, and provides message encryption and data integrity. The client checks the server's certificate and its cipher preferences, generates a master key, encrypts it with the server's public key and transmits it; the server decrypts the master key with its private key and then authenticates itself to the client by returning a message encrypted with the master key. Subsequent data is encrypted with keys derived from this master key. The server may also challenge the client, and the client responds by returning its digital signature on the challenge together with its public-key certificate.
Anonymous remailers illustrate another use. A remailer strips off the header information from an electronic message and passes along only the body, so the recipient cannot tell who sent it. Remailers can be chained, each one removing a further layer of addressing, with the final remailer decrypting the message and posting it to the intended newsgroup. Once a message has travelled through the chain to its end point, it is nearly impossible to retrace.
Mobile telephony relies on cryptography as well. Before a GSM SIM may access the network it has to be authenticated: the operator generates a random number and sends it to the mobile phone; the SIM computes a response using its secret key Ki, and a session key KC is derived, after which traffic between the phone and the network is encrypted with keys derived from this exchange.
Finally, steganography hides a piece of information inside other data, for example in images, using techniques such as microdots or merging. Unlike encryption, where an intruder is normally aware that data is being communicated in encrypted form, in steganography an unintended recipient or an intruder is unaware that the observed data contains hidden information at all.
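As a hands-on illustration of the public-key ideas above (a minimal sketch, not part of the original article), the following Python example uses the widely available third-party cryptography package (an assumption; any comparable library would do) to generate an RSA key pair, encrypt a message with the public key and decrypt it with the private key, and then sign and verify the same message, mirroring the email-encryption and digital-signature scenarios described above.
```python
# Minimal, illustrative sketch of public-key encryption and digital signatures.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# 1. Generate a key pair (in practice the private key is kept secret and stored safely).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 2. Confidentiality: anyone can encrypt with the public key,
#    but only the private-key holder can decrypt.
message = b"Meet at noon."
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message

# 3. Authentication: only the private-key holder can sign,
#    but anyone can verify the signature with the public key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```
Real systems such as PGP or TLS layer key management and hybrid (symmetric plus asymmetric) encryption on top of these primitives, but the division of labor between the two keys is exactly the one sketched here.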
<urn:uuid:eeed6462-f15c-4abf-8de0-a4724482282f>
CC-MAIN-2021-43
http://gnfchurch.net/543dzzy/534fcf-examples-of-cryptography-in-everyday-life
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585348.66/warc/CC-MAIN-20211020183354-20211020213354-00710.warc.gz
en
0.90317
3,030
2.734375
3
Gruffudd ap Cynan (aka, Gruffydd ap Cynan, c.1055–1137) is a remarkable historical figure for a number of reasons: his mixed-ethnicity ancestry, his trans-Celtic life, and his ability to re-establish his family’s authority in Gwynedd to the point of holding off the invasions of Norman lords and Henry I. This history of his life was commissioned by his son Owain, originally written in Latin and then translated into Welsh. The following text was adapted by Michael Newton from The History Of Gruffydd Ap Cynan. trans. and ed. Arthur Jones. Manchester: Manchester University Press, 1910. §1. Gruffydd’s Descent from the Royal Houses of Wales, Ireland, and Norway. In the days of Edward King of England and of Toirdelbhach King of Ireland, Gruffydd King of Gwynedd was born in Ireland in the city of Dublin, and he was reared in the commot of Colum Cille, a place which is called among the Irish “Swords” (this is three miles from the place where his mother and his foster-mother lived). His father was Cynan, King of Gwynedd, and his mother was Ragnaillt, daughter of Olaf, King of the city of Dublin and a fifth part of Ireland. Therefore this Gruffydd was a man most nobly born, of royal race and most eminent lineage, as testifies likewise the pedigree and descent of his family. For Gruffydd was a son of King Cynan, son of Iago, son of Idwal, son of Elissed, son of Meuryc, son of Anarawt, son of Rhodri, son of Etill daughter of Cynart of Castell Dindaethwy, son of Idwaire, son of Catwalader Vendigeit, son of Catwallawn, son of Catvan, son of lago, son of Beli, son of Run, son of Maelgwn, Son of Catwallawn Llauhir, son of Einnyawn Yrth, son of King Cuneda, son of Edern, son of Padern Peisrud son of Tagit, son of Iago, son of Guidauc, son of Kein, son of Gorgein, son of Doli, son of Gwrdoli, son of Dwuyn, son of Gorduwyn, son of Anwerit, son of Onuet, son of Diuwng, son of Brychwein, son of Ewein, son of Auallach, son of Aflech, son of Beli Mawr, etc. […] Here is the pedigree of Gruffydd on his mother’s side: King Gruffydd, son of Ragnaillt the daughter of Olaf, king of the city of Dublin and a fifth part of Ireland and the Isle of Man which was formerly of the kingdom of Britain. Moreover he was king over many other islands, Denmark, and Galloway and the Rinns [of Islay], and Anglesey, and Gwynedd where Olaf built a strong castle with its mound and ditch still visible and called “The Castle of King Olaf.” In Welsh, however, it is called Bon y Dom. Olaf himself was a son of King Sitriuc, son of Olaf Cuaran, son of Sitriuc, son of King Olaf, son of King Haarfager, son of the King of Denmark. […] With regard to this, while King Gruffydd is commended by an earthly pedigree and a heavenly one, let us now proceed to the prophecy of Merddin, bard of the Britons, concerning him. Merddin foretold him to us as follows: “A leaping wild animal that shall be the subject of prophecy has gone away to our gain; a waylayer from over the sea; corrupter [is] his name, [for] he shall corrupt many people.” O dearly beloved brother Welshmen, very memorable is King Gruffydd, who is commended by the praise of his earthly pedigree and the prophecy of Merddin as above. And since this is finished, let us hasten to his own particular actions as has been promised by us through ancient history. Let Christ be the author and counsellor in this matter, not Diana or Apollo. §2. Gruffydd defeats Cenwric, Son of Rhiwallon, and Trahaiarn, Son of Caradoc, and becomes King of Gwynedd. 
When Gruffydd was still a boy, well mannered and delicately reared, and attaining to. the years of youth in his mother’s house and moving amidst her people, during this time his mother related to him every day who and what manner of man was his father, what was his patrimony, and what kind of kingdom and what sort of tyrants dwelt in it. When he heard this heaviness seized him and he was sad for many days. Consequently he went to the court of King Murchad and complained to him in particular and to the other kings of Ireland that a strange people were ruling over his paternal kingdom, and in sport besought them to give him help to seek his patrimony. They took pity upon him and promised to help him when the time should come. When he heard the answer he was glad and gave thanks for that to God and to them, immediately embarked in a ship, and raised the sails to the wind, and journeyed over the sea towards Wales, and reached the port of Abermenai. At that time there were ruling, falsely and unduly, Trahaiarn, son of Caradoc, and Cenwric, son of Rhiwallon, Kings of Powys and all Gwynedd, which they had divided between them. Then Gruffydd sent messengers to the men of Anglesey and Arvon and the three sons of Merwyd of Lleyn, Asser, Meirion, and Gwgan, and other noblemen to ask them to come quickly to confer with him. Without delay they arrived and saluted him and said to him, “Your coming is welcome.” Then he besought them with all his might to aid him to obtain his patrimony, for he was their rightful lord, and to fight on his side valiantly with arms to repel their usurping rulers who had come from another place. After the conference was ended and the council dispersed, he went back to the ocean towards Rhuddlan Castle to Robert of Rhuddlan, a baron famous, brave, and strong, nephew to Hugh, Earl of Chester, and besought help of him against his enemies who were in his patrimony. When he [Robert] heard who he was, and wherefore he had come, and what was his request of him, he promised to be his supporter. Hereupon there came a prophetess, Tangwystyl by name, a relation of his, the wife of Llewarch Olbwch, to greet Gruffydd her relation, and to foretell that in the future he would be king, and to present to him the fairest of shirts and the best of tunics made from the pelisse of King Gruffydd, son of King Liewelyn, son of Seisyll (for Llewarch, her husband, was chief chamberlain and treasurer to Gruffydd, son of Llewelyn). Then Gruffydd embarked and returned from his journey to Abermenai. Then he despatched the soldiers of the sons of Merwyd, who were in sanctuary in Clynnog from fear of the men of Powys who were threatening them, and other noblemen and their kinsmen, and sixty picked men of Tygeingl from the province of the above-mentioned Robert, and eighty men from Anglesey to the cantred of Lleyn to fight with King Cenwric, their oppressor. Then they departed by strategy, and came upon him unawares, and slew him and many of his men. Gruffydd at the time was. in Abermenai, that is to say, in the harbour which has been mentioned above, awaiting [to see] what fate should happen to them. 
Then straightway there set out in haste a youth of Arvon, Eineon was his name, the first to tell him the happy tidings, that is, the slaughter of his oppressor, and to request as a particular reward for the news a beautiful woman, Delat by name, formerly King Bleddyn’s mistress: as of old, there came to David to Philistia a certain young man, a son of an Amalekite bearing the sceptre and ring of King Saul and running from the battle that had taken place on Mount Gilboa: and David gave the armlet to him gladly as his reward for the joyful news. Then followed victoriously the troop he had sent to the attack. At once they urged him to advance, upon this good omen, to conquer Anglesey and Arvon and Lleyn and the cantreds of the marches of England, and to receive homage from their inhabitants, and so to go and perambulate all Gwynedd, the true possession of his father which God from his mercy had delivered into their hands. When these things had been done, at their instigation he took a huge host towards the cantred of Meirionydd (where was Trahaiarn) against his other conqueror. A battle took place between them in a narrow valley, a place which is called in Welsh Gwaet Erw “The Bloody Land,” by reason of the battle which took place there: and God granted victory over his enemies in that day, and many thousands fell the part of Trahaiarn, and he, lamenting, escaped with difficulty and a few [men] with him from the battle. Gruffydd and his host pursued him through plain and mountain to the borders of his own land. Therefore Gruffydd was exalted from that day forth, and was rightfully called King of Gwynedd; and he rejoiced as a strong man to run his course, freeing Gwynedd from the rulers who came to it from another place, who were ruling it without a right; as Judas Maccabeus defended the land of Israel against the kings of the pagans and neighbouring nations who frequently made an inroad among them. After so accomplishing everything Gruffydd began to pacify the kingdom and to organize the people and to rule them with a rod of iron gloriously in the Lord. § 3. He Attacks Rhuddlan Castle. Thereupon after a little time had elapsed, at the instigation of the noblemen of the country a great host gathered and advanced to Rhuddlan Castle to fight with Robert the governor of the castle and with other fierce knights of the (Norman) French, who had come lately to England and then came to rule the confines of Gwynedd. After he had assembled them and had raised the flags, he took possession of the bailey and burnt it and took great plunder. Many French knights, armoured and helmeted, fell from their horses in the fight, and many footmen [likewise perished], and a few of them scarcely escaped into the tower. And when the King of Ireland and his barons heard that such good luck as this had come [to] their kinsman and foster-son, they rejoiced mightily.
<urn:uuid:cfe6ed17-c804-4b59-9e1f-ca71775f5a18>
CC-MAIN-2021-43
https://exploringcelticciv.web.unc.edu/prsp-record/text-life-of-gruffud-vab-kenan-welsh/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585380.70/warc/CC-MAIN-20211021005314-20211021035314-00270.warc.gz
en
0.983982
2,456
2.796875
3
- Cavity of the skull in which the eye and its appendages are situated - Contents are made up of the eye, fascia, extraocular muscles, cranial nerves, blood vessels, fat, and lacrimal and eyelid structures. - Flanked by paranasal sinuses: - Frontal sinus (superior to the orbit) - Ethmoid sinus (medial to the orbit) - Maxillary sinus (inferior to the orbit) - Membranous layer arising from the orbital periosteal lining - Extends into the tarsal plates of the eyelids - Two compartments formed: - Preseptal: anterior to the septum - Postseptal: posterior to the septum - Structural support to the eye - Acts as a barrier preventing bacterial spread - Preseptal cellulitis: - Infectious inflammation of the tissue anterior to the orbital septum - Involves the skin and subcutaneous tissue - Orbital cellulitis: - Infectious inflammation of the tissue posterior to the orbital septum - Involves the orbital fat, extraocular muscles, and bony structures - Preseptal cellulitis is more common than orbital cellulitis. - Both conditions are more common in children. - 80% of cases occur in children < 10 years of age. - Staphylococcus aureus: - Group A streptococcus - Streptococcus pneumoniae - Bacteroides species (sinusitis resulting from dental infections) - Polymicrobial infection - Fungal causes: Mucorales and Aspergillus species (in immunocompromised patients) - Direct inoculation: - More common with preseptal cellulitis - Eye/eyelid trauma - Insect/animal bites (infected) - Common causative pathogen: S. aureus - Spread from other infected structures: - Acute sinusitis: - Mostly ethmoiditis, as neurovascular structures penetrate the lamina papyracea and separate the ethmoid sinus from the orbit - Main source of infection for orbital cellulitis - Up to 98% of cases of orbital cellulitis occur with a coexisting sinusitis. - Common causative pathogen: S. pneumoniae or β-hemolytic streptococci - Acute dacryocystitis - Ear/dental infections - Skin infection (impetigo, erysipelas) - Herpes simplex and herpes zoster lesions - Acute sinusitis: - Hematogenous spread: via blood vessels from bacteremia Clinical Presentation and Diagnosis Preseptal and orbital cellulitis have common findings: - Eyelid swelling, pain, and redness - On occasions, eye discharge However, orbital cellulitis involves inflammation and swelling of the extraocular muscles and fatty tissues, which is not found in preseptal cellulitis. Red flags that raise suspicion for orbital cellulitis: - Ophthalmoplegia with diplopia - Pain with eye movement - Visual impairment |Eyelid swelling and redness, eye discharge||Yes||Yes| |Normal pupillary response||Usually||Yes| |Pain with eye movement||Yes||No| |Diplopia||May be present||No| |Ophthalmoplegia||May be present||No| |Proptosis||May be present||No| |Vision impairment||May be present||No| |Chemosis (conjunctival swelling)||May be present||Rare| Both conditions are usually diagnosed clinically. CT scan with contrast: - Confirms the diagnosis of orbital cellulitis in uncertain cases - Detects complications (e.g., orbital abscess) - Also shows opacification of the sinuses in sinusitis Lab tests are generally of low diagnostic value. - Not routine in diagnosing preseptal cellulitis - CBC: Leukocytosis may be present in both conditions. 
- Blood cultures: obtained in orbital cellulitis before administration of antibiotics - Culture of aspirated pus and other material: - Obtained from source of infection or from sinus (if concomitant sinus infection present) - Especially helpful in immunocompromised patients in whom fungal and other rare etiologies can be detected Management and Complications Treatment of preseptal cellulitis - Oral antibiotics with MRSA coverage - Clindamycin or trimethoprim–sulfamethoxazole plus one of the following agents: - Amoxicillin–clavulanic acid - Incision and drainage of eyelid abscess - Use skin marker to outline erythema to evaluate progression and response to antibiotic treatment. - Children < 1 year of age (examination can be limited) or those who are severely ill have to be treated as though they have orbital cellulitis. - Lack of response within 48 hours requires CT scan to evaluate for complications. Treatment of orbital cellulitis - IV broad-spectrum antibiotics - Vancomycin plus one of the following medications: - If there is concern for intracranial extension, add anaerobic treatment (metronidazole). - Ampicillin–sulbactam or piperacillin–tazobactam: - Not an initial option, as these drugs are not effective against MRSA - Not first-line therapy, especially for intracranial extension, owing to suboptimal CNS penetration - Response expected in 48 hours; if no improvement, imaging is done: - Search for complications. - Investigate other noninfectious causes. - Poor response to treatment - Worsening visual acuity - Intracranial extension - Abscess drainage (especially > 10 mm in diameter) - Biopsy to determine pathogen or presence of noninfectious cause - More common with orbital cellulitis - Orbital and subperiosteal abscess: - Collection of pus involving the orbital tissue (orbital abscess) and the bony structures supporting the globe (subperiosteal abscess) - Rapid development - Can result in vision loss - Extraorbital extension: - Cavernous sinus thrombosis: - Rare but life-threatening - Superior and inferior orbital veins drain to the cavernous sinus. - Infection spreads from affected areas through valveless veins. - Presentation can include facial and periorbital edema, ptosis, proptosis, chemosis, pain with eye muscle movement, and loss of vision. - Bacterial meningitis: - Inflammation of the meninges - Results from hematogenous or direct spread - Presents with fever, headache, and signs of increased intracranial pressure (e.g., from vomiting) - Diagnosis by CSF studies (fluid obtained via lumbar puncture) - Empiric antibiotics should be started immediately. - Brain abscess: - Suppurative lesion that may occur in one or more regions of the brain - Caused by the direct spread of sinus, ear, or dental infections - Presents with fever, focal headache, signs of increased intracranial pressure (e.g., from vomiting), and focal neurologic deficits due to mass effect - Cavernous sinus thrombosis: - Erysipelas: bacterial infection of the superficial layer of the skin extending to the superficial lymphatic vessels within the skin: Erysipelas presents as a raised, well-defined, tender, bright red rash that typically appears on the legs or face but can occur anywhere on the skin. Diagnosis is based mostly on the history and physical exam. Management includes antibiotics. - Blepharitis: ocular condition characterized by eyelid inflammation: Blepharitis can affect the eyelid skin, eyelashes, and meibomian glands. 
Blepharitis includes eyelid edema with itching and redness, crusts and scales around the eyelashes, and a gritty sensation. Treatment includes warm compresses, eyelid scrubs, and topical or oral antibiotics.
- Chalazion: firm, nontender mass at the eyelid resulting from obstruction of the Zeis or meibomian glands. A chalazion is usually managed conservatively with warm compresses. Persistence of the lesion requires incision and curettage or glucocorticoid injection by an ophthalmologist.
- Hordeolum: localized infection arising from the gland of Zeis, gland of Moll, or meibomian gland. S. aureus is a common cause of hordeola. Findings of a tender, erythematous, pus-filled nodule help establish the diagnosis. Management is generally conservative, though severe cases may require antibiotics or drainage. Chalazia, on the other hand, are due to sterile, granulomatous inflammation and are not painful.
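The red flags tabulated in the Clinical Presentation section lend themselves to a simple checklist. The sketch below is purely illustrative — the function and feature labels are invented for this example — and is not a clinical decision tool:

```python
# Illustrative only — not a clinical decision tool. Encodes the red flags
# tabulated above; feature labels and function name are hypothetical.

ORBITAL_RED_FLAGS = {
    "pain_with_eye_movement",
    "ophthalmoplegia",
    "diplopia",
    "proptosis",
    "vision_impairment",
    "chemosis",
}

def suspect_orbital_cellulitis(findings: set) -> bool:
    """Return True if any red flag for orbital cellulitis is present.

    `findings` is a set of string labels for features observed on exam;
    eyelid swelling/redness and discharge are common to both conditions
    and therefore do not discriminate.
    """
    return bool(findings & ORBITAL_RED_FLAGS)

# Example: eyelid swelling alone does not raise suspicion,
# but painful eye movements do.
print(suspect_orbital_cellulitis({"eyelid_swelling"}))                            # False
print(suspect_orbital_cellulitis({"eyelid_swelling", "pain_with_eye_movement"}))  # True
```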
One way to recycle biodegradable materials is through organic recycling, which includes industrial composting and anaerobic digestion. Industrial composting is the current recycling method for Sulapac®. Organic recycling is rapidly increasing in volume, but regulation is not keeping up with the change. Although the infrastructure exists worldwide, processes related to organic recycling are not standardized. In order to provide some clarity on the topic and support our clients with the communication of the compostability claim, we have gathered this brief guide on compostability.

According to EU legislation (Directive 94/62/EC), industrial composting and anaerobic digestion are considered a specific form of material recycling. Requirements for the compostability of packaging and packaging materials are specified in the European standard EN 13432. The Seedling certificate is an example of an international certificate that complies with the standard and can be used as independent proof of the industrial compostability of a product. However, the Seedling or a similar certificate is not required by any legislative authority in Europe. Compared to many other recycling methods, the technology for industrial composting is widely available in many countries. By 2023, separate biowaste collection is set to be mandatory in the EU.

Biodegradation under controlled conditions fits into a circular economy through the idea of closing the biological cycle. The organic component is recycled in a way that mimics nature. A major part of the material is turned into CO2 or CH4, and water, and the remaining mineral component, including nutrients, is recycled back into compost or digestate.

Industrial composting process and its benefits

The outcomes of the industrial composting process are CO2, water and compost, which can be used for enhancing the quality of soil.

Industrial composting is an aerobic (oxygen present) process which takes place in controlled conditions. The composting period is governed by a number of factors including temperature (typically 50–60°C), moisture, amount of oxygen, particle size, the carbon-to-nitrogen ratio and the degree of turning involved. Generally, effective management of these factors will accelerate the composting process. The conditions in industrial composting differ from those of home composting, in which the temperature, for example, tends to be lower.

The outcomes of the industrial composting process are CO2, water and compost. The compost includes nutrients and can be used, for example, in agriculture to enhance the quality of soil. The benefits of industrial composting are many. For example, no chemicals are needed in the process. Organic recycling also contributes to greenhouse gas savings, for example via the replacement of mineral fertilizers and carbon sequestration in soil.

The European standard

The European standard EN 13432 defines the requirements for industrially compostable packaging and includes both the criteria and a test scheme.

The European standard EN 13432 defines the requirements packaging has to meet in order to be processable by industrial composting. It includes the test scheme and evaluation criteria for the compostability and anaerobic treatability of packaging and packaging materials in controlled waste treatment plants. EN 13432 can be applied to all packaging materials. For other than packaging applications, the standard EN 14995 is applied. The technical contents of EN 13432 and EN 14995 are identical.
EN 13432 is not applicable to home composting, in which the conditions, such as the temperature, differ from those of industrial composting. As a result, packaging recognized as compostable according to EN 13432 cannot automatically be considered suitable for home composting. EN 13432 also does not take into account packaging waste which may end up in the environment through uncontrolled means, i.e. as litter.

According to the EN 13432 standard, a packaging claimed to be compostable must fulfill the following criteria:

- Contains a minimum of 50% of volatile solids. Volatile solids means 'the amount of solids obtained by subtracting the residues of a known amount of test material or compost after incineration at about 550 °C from the total dry solids content of the same sample.' The volatile solids content is an indication of the amount of organic matter.
- Is inherently and ultimately biodegradable, as demonstrated in laboratory tests. Aerobic biodegradation has been defined as 'breakdown of an organic chemical compound by naturally occurring micro-organisms in the presence of oxygen to CO2, water and mineral salts of any other elements present (mineralization) and new biomass.' In aerobic biodegradation tests, the sample's CO2 production level has to reach 90% of that of the reference material in 6 months.
- Has no negative effect on the biological treatment process. Any negative effects of the test material on the composting process can be detected by direct comparison of process parameters in reactors with and without the test material.
- The packaging or packaging component which is intended for entering the biowaste stream must be recognizable as compostable or biodegradable by the end user by appropriate means.
- Does not contain hazardous substances, e.g. heavy metals. The concentration of the following substances needs to be measured and shall not exceed the maximum values defined: zinc, copper, nickel, cadmium, lead, mercury, chromium, molybdenum, selenium, arsenic, fluorine.
- Disintegrates in a biological waste treatment process. With the term disintegration the standard refers to 'the physical falling apart into very small fragments of packaging and packaging materials'. After 12 weeks, no more than 10% of the original dry weight of test material may fail to pass a > 2 mm fraction sieve.
- Has no negative effect on the quality of the resulting compost. The compost quality shall not be negatively affected by the addition of the packaging, as defined by the following physical-chemical parameters: volumetric weight (density), total dry solids, volatile solids, salt content, pH, and the presence of total nitrogen, ammonium nitrogen, phosphorus, magnesium and potassium. Possible environmental risks attached to the end compost must be evaluated, for example by determination of the ecotoxicological effects of the biodegradation products, or by performing ecotoxicological tests with compost produced with and without the packaging material and comparing the test results. Following the OECD Guideline for Testing of Chemicals 208, "Terrestrial Plants, Growth Test", the sample compost and the blank compost are compared on the basis of germination numbers (number of grown plants) and plant biomass. The growth rate in the test compost must be higher than 90% of that of the blank compost.

A partly compostable package?

The standard outlines that in the case of a packaging formed of different components, some of which are compostable and some not, the packaging as a whole is not compostable.
However, if the components can be easily separated by hand before disposal, the compostable components can be considered and treated as such, once separated from the non-compostable components. What about the contents of the packaging? If in any case the product filled into a compostable packaging could remain in parts or as a whole in the packaging after the normal use, the products should by themselves be compostable and neither toxic nor hazardous. Seedling – certificate for compostability An example of a third-party certification that verifies that a product is industrially compostable and helps to communicate about it. Seedling is an independent third-party certification that verifies the compostability of a product in an industrial composting plant in accordance with the European standard EN 13432. The Seedling certificate does not cover home composting. The certification process is conducted by independent certifiers DIN CERTCO (Germany) and Vinçotte (Belgium). In order to be certified compostable, the product must undergo a stringent test regime carried out by recognised independent accredited laboratories. The certification process includes the following procedures: 1) chemical characterization of the product 2) testing of ultimate biodegradability 3) disintegration under practice-relevant composting conditions and 4) definition of the quality of the compost (ecotoxicity test on two plant species) 5) infrared spectrum recording to enable the identification of the material. Both the evaluation criteria as well as test circumstances and methods comply with the EN 13432. To ensure continuous compliance with the certification requirements regular inspections take place. Products that have successfully passed the strictly defined and documented tests and been formally certified by one of the certification bodies may feature the Seedling logo, a registered trademark owned by European Bioplastics. According to European Bioplastics a product marked with the Seedling logo can be disposed of in the biowaste collection. However, regional specifications for the separate collection of biowaste may exist and detailed information can be obtained from the local municipalities or waste management authorities. Since the establishment of the Seedling certificate approximately 780 products, 110 intermediates and 330 materials have been certified, including the Sulapac Premium Plus material (registration No 7P0762). We are in the process of obtaining the certificate also for our other products. Download seedling certificate for Sulapac Premium Plus Leaves no trace behind A jar made of Seedling certified Sulapac Premium Plus material disintegrates in 12 weeks in an industrial compost. Compostability testing of Sulapac® products Sulapac material is industrially compostable in accordance with the EN 13432, as shown in the test results of an accredited testing laboratory OWS. Sulapac products have been tested by independent, accredited testing laboratory OWS following the test regime applied in the Seedling certification process, in accordance with the EN 13432. OWS is a Belgium based laboratory recognized by all certification bureaus worldwide working in the field of biodegradability and compostability. According to the EN 13432 a controlled pilot-scale test shall be used as the reference test method. A test in a full-scale treatment facility, may, however, be accepted as equivalent. The OWS tests for Sulapac products have been carried out in a pilot-scale setting. 
The test results show that Sulapac® conforms with the criteria set for compostable packaging as defined in EN 13432, and is thus suitable for composting in an industrial compost. Sulapac®'s test results are as follows:

- With a volatile solids content of 98.9%, Sulapac easily fulfills this requirement. A minimum of 50% of volatile solids is required by EN 13432.
- The heavy metal and fluorine levels of the Sulapac® colours Natural, First Snow, Warm Granite, Wild Cloudberry and Summer Strawberry lie well below the maximum levels set in EN 13432.
- Sulapac® fulfills the biodegradability requirements of EN 13432. EN 13432 requires that in 6 months the sample's CO2 production level reaches 90% of that of the reference material.
- Not a single piece of Sulapac material was found after sieving with a 2 mm sieve after 12 weeks of composting. EN 13432 requires that no more than 10% of the original dry weight of test material fail to pass the sieve.

Quality of end compost

No negative effect on the emergence or growth of the plants grown in the test composts (25% and 50% concentrations) was observed. According to EN 13432, the growth rate in the test compost must be higher than 90% of that of the blank compost.

In addition to the pilot-scale compostability simulations by OWS, tests in a full-scale treatment facility have been carried out by Kekkilä. The thermophilic phase of the composting process takes place inside concrete tunnels (6 m x 21 m) under controlled conditions. Temperature and the amount of oxygen are measured continuously, and the mass is rotated weekly. To enable visual examination of the biodegradation of the Sulapac® jars, they were placed into perforated steel tubes (12 cm x 40 cm) together with compost mass. The steel tubes were placed inside the composting tunnel together with the normal waste mass. The steel tubes were collected at approximately one-month intervals and the biodegradation of the jars evaluated. The industrial-scale testing confirmed that Sulapac® complies with EN 13432 and is suitable for industrial composting.

According to EN 13432, a packaging material demonstrated to be compostable in a particular form shall be accepted as being compostable in any other form having the same or a smaller mass-to-surface ratio or wall thickness. The compostability tests have been conducted with Sulapac products with a wall thickness of 4.5 mm. To validate the compostability of an item with a larger maximum thickness, a disintegration test must be rerun.

Communicating about the compostability

A product can be claimed 'compostable' if it meets the criteria in EN 13432. However, there are a few things to bear in mind.

According to EU legislation, a product can be claimed 'compostable' if it has been properly tested in accordance with the EN 13432 standard and all the criteria set in that standard have been met. No third-party certification is required. Communication of the compostability claim is currently not regulated or standardized, which on the one hand gives freedom for creativity but also adds the risk of misinterpretation and confusion. Therefore, clarity and transparency should be at the core when using the compostability claim in communications and marketing. Although 'compostability' is a term used for products which can be processed in an industrial composting plant, it is sometimes used to refer to home composting and even to biodegradation in the natural environment. Hence, to avoid misunderstandings, it is recommended to use the more specific term 'industrially compostable'.
To add even more clarity, 'industrially compostable in accordance with EN 13432' can be used.

Certifying your product

A third-party certificate such as the Seedling may be seen as advantageous in communicating the compostability claim and adding credibility. However, practice has shown that even though the Seedling certificate is appreciated in many contexts, not all stakeholders recognize it. In some contexts the EN 13432 standard is, in fact, more commonly known.

The Seedling certificate is product-specific, meaning that any application of the product, e.g. filling a jar with a cosmetic substance, requires separate testing of the jar. However, references to items that have already been certified may significantly lower the testing expenditure, and the process for certifying a product manufactured solely of materials already registered, with no further additives, is more straightforward. For products such as shopping bags, cutlery and clothing hangers, the certification is valid for three years and includes the right to use the "Seedling" mark. The certification process itself may take up to 12 months.

If you wish to communicate about the compostability of Sulapac®, you can simply download our self-declaration of compostability.

Download Sulapac declaration of industrial compostability

Compostability is a form of material recycling
According to the EU (Directive 94/62/EC), EN 13432 specifies the technical requirements for industrial compostability, also referred to as organic recycling.

Sulapac materials meet all the criteria set in the EN 13432 standard
The material has been tested by independent test laboratories, such as OWS, which is recognized by all certification bureaus worldwide. Sulapac Premium Plus has the Seedling certificate, which is a third-party verification of industrial compostability.

Clear communication is key for end-users
The brand is responsible for its end products and how the recycling is communicated to the end users. The compostability claim is always application-specific. Attention must be paid to, e.g., materials added to Sulapac (printing colors, stickers, sleeves or other parts) and the content of the packaging.

Regulation doesn't exist for composting; instead, standardization sets the criteria
If technological guidance is given in directives, they refer to standards, not certificates.

Bio-waste management may differ regionally
Organic recycling infrastructure exists worldwide; however, processes related to industrial composting are not standardized, and regional specifications for the separate collection of biowaste may exist.
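The pass/fail thresholds described in this guide can be gathered into a single checklist. The following sketch is illustrative only — the function and field names are invented here, and it covers just the four numerical criteria quoted above (volatile solids, biodegradation, disintegration, and plant-growth ecotoxicity); the heavy-metal limits and the qualitative requirements would still need to be checked against the standard itself.

```python
# Illustrative checklist of the numerical EN 13432 criteria quoted above.
# Field names and structure are hypothetical; this is not an official test protocol.

from dataclasses import dataclass

@dataclass
class CompostabilityResults:
    volatile_solids_pct: float        # % of total dry solids
    co2_vs_reference_pct: float       # CO2 evolution relative to reference after 6 months
    retained_on_2mm_sieve_pct: float  # % of original dry weight retained after 12 weeks
    plant_growth_vs_blank_pct: float  # growth rate relative to blank compost

def meets_en13432_numeric_criteria(r: CompostabilityResults) -> bool:
    return (
        r.volatile_solids_pct >= 50.0            # minimum 50% volatile solids
        and r.co2_vs_reference_pct >= 90.0       # >=90% of reference biodegradation in 6 months
        and r.retained_on_2mm_sieve_pct <= 10.0  # <=10% fails to pass a 2 mm sieve after 12 weeks
        and r.plant_growth_vs_blank_pct > 90.0   # growth >90% of that of the blank compost
    )

# Example using the Sulapac figures quoted in the text (98.9% volatile solids,
# full disintegration, plant growth comparable to the blank):
sample = CompostabilityResults(98.9, 90.0, 0.0, 100.0)
print(meets_en13432_numeric_criteria(sample))  # True
```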
|Short description of Indicator ||Percentage of Métis and non-Aboriginal adults (ages 20 and older) in Ontario who report that they are currently smoking, or were non-smokers exposed to second-hand smoke, or consumed vegetables and fruit less than 5 times per day, or were physically inactive during leisure time. Percentage of Métis and non-Aboriginal teens (ages 12 to 19) in Ontario who were non-smokers exposed to second-hand smoke. Percentage of Métis and non-Aboriginal adults (ages 19 and older) in Ontario who exceed cancer prevention recommendations for alcohol consumption. Percentage of Métis and non-Aboriginal adults (ages 18 and older) in Ontario who were obese. Percentage of Métis and non-Aboriginal adolescents (ages 12 to 17) in Ontario who were obese. Percentage of Métis households in Ontario reporting food insecurity in the past 12 months (marginal, moderate or severe, combined) |Rationale for measurement ||Modifiable risk factors are behaviours and exposures that can lower or raise a person’s risk of cancer and that can be changed. Evidence confirms strong associations between major risk modifiers (commercial tobacco use, alcohol, unhealthy eating, body fatness and physical inactivity) and the risk of certain cancers. Reporting on risk factor prevalence in Ontario is important for effectively monitoring trends over time, supporting the development of health promotion strategies and evaluating outcomes of provincial and local interventions. |Evidence/references for rationale ||Evidence supporting association between modifiable risk factors and cancer risk: World Cancer Research Fund and American Institute for Cancer Research [Internet]. 2007. Food, nutrition, physical activity, and the prevention of cancer: a global perspective; [cited 2015 March 9]. Available from: https://www.wcrf.org/dietandcancer/resources-and-toolkit. Parkin DM, Boyd L, Walker LC. 2011. 16. The fraction of cancer attributable to lifestyle and environmental factors in the UK in 2010. Br J Cancer. 105:S77-S81. International Agency for Research on Cancer. IARC monographs on the evaluation of carcinogenic risks to humans. Volume 100E. A review of human carcinogens. Part E: Personal habits and indoor combustions. Lyon: International Agency for Research on Cancer; 2012. |Calculations for the indicator ||Current smoking (adults) = ((Weighted number of adults ages 20 years and older who smoke daily or occasionally) / (Weighted total population ages 20 years and older)) x 100 Second-hand smoke (adults) = ((Weighted number of adults ages 20 years and older who do not smoke daily or occasionally and are exposed to second-hand smoke in their home, vehicle or public spaces) / (Weighted total population age 20 years and older who do not smoke daily or occasionally)) x 100 Second-hand smoke (teens) = ((Weighted number of teens ages 12 to 19 years who do not smoke daily or occasionally and are exposed to second-hand smoke in their home, vehicle or public spaces) / (Weighted total population ages 12 to 19 years who do not smoke daily or occasionally)) x 100 Alcohol consumption (adults) = ((Weighted number of adults ages 19 years and older who exceed the maximum recommended alcohol consumption for cancer prevention) / (Weighted total population ages 19 years and older)) x 100 Obese (adults) = ((Weighted number of adults ages 18 years and older with BMI 30.0 or greater) / (Weighted total population ages 18 years and older)) x 100 - Respondents who were pregnant at the time of the survey were excluded. 
- The calculation of BMI excluded respondents less than 3 feet (0.914 m) tall or those greater than 6 feet 11 inches (2.108 m). - BMI is categorized using standard international weight cutoffs. Obese (adolescents) = ((Weighted number of adolescents ages 12 to 17 years with BMI classified as obese by the Cole Classification System) / (Weighted total population ages 12 to 17 years)) x 100 Vegetable and fruit consumption - less than 5 times per day (adults) = ((Weighted number of adults ages 18 years and older eating vegetables (excluding potatoes) and fruit less than 5 times per day) / (Weighted total population ages 18 years and older)) x 100 - Respondents who reported consuming fruit juice more than once daily were considered as having consumed it only once. Physical inactivity during leisure time = ((Weighted number of adults ages 20 years and older whose average daily expenditure in leisure time physical activities in the past 3 months is less than 1.5kcal/kg/day) / (Weighted total population ages 20 years and older)) x 100 - All calculations excluded respondents in the non-response categories (refusal, don't know, and not stated) for required questions. General analytic notes: - All estimates of proportion for adults (apart from those for specific age groups) are age-standardized to the age distribution of the Ontario Aboriginal identity population (on- and –off reserve) in the 2006 Census, using age groups 18 to 24, 25 to 44, 45 to 64, and 65 and older. This technique adjusts for the differing age distributions of Métis and non-Aboriginal Ontarians (Métis being younger), allowing us to compare estimates between the 2 populations without bias due to the differing age structures. - Bootstrapping techniques were used to obtain variance estimates and 95% confidence intervals of all estimates. Statistics Canada requires estimates with coefficients of variation of 16.6% to 33.3% to be noted with a warning to users to interpret with caution, and estimates with coefficients of variation greater than 33.3% to be suppressed. - Health Canada. Canadian Guidelines for Body Weight Classification in Adults. Health Canada: Ottawa. 2003. - Statistics Canada. 2005. Bootvar: User Guide (Bootvar 3.1 – SAS version) (accessed February 10, 2015). Ottawa, Ontario. - Statistics Canada. “Canadian Community Health Survey (CCHS) Annual component." Definitions, data sources and methods. Last updated June 17, 2011.https://www.statcan.gc.ca/eng/statistical-programs/document/3226_D74_T1_V1 (accessed February 10, 2015). |Standardized Rate Calculation ||Direct standardization (by age) Standard population: 2006 Canadian census, Ontario Aboriginal Identity population Trend estimates, by Aboriginal identity, age-standardized: - Current smoking, adults (ages 20 years and older), Ontario, both sexes combined, by Aboriginal Identity 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014. Sex estimates, by Aboriginal identity, age-standardized: - Alcohol consumption, adults (ages 19 years and older), by sex and Aboriginal identity, 2007 to 2014 General analytic notes: Adult modifiable risk factor estimates presented for Ontario were age-standardized to the 2006 Aboriginal Identity population using the age groups from the 2006 Census: 20 to 24, 25 to 44, 45 to 64, 65 and over (exceptions for the lowest age range are overweight and obesity and alcohol consumption, where 18 to 24, and 19 to 24 were used, respectively). 
Risk factor estimates presented by education and income were age-standardized to the 2006 Aboriginal Identity population for adults ages 25 and older. ||Canadian Community Health Survey half-survey annual waves 2007–2014. Statistics Canada, Ontario Share File, Ontario Ministry of Health and Long-Term Care. - A person was classified as Métis if they self-identity as a Métis, or Métis in combination with any other Aboriginal identity (First Nation or Inuit), and are born in Canada, United States, Greenland, or Germany - Non-Aboriginal Ontarians were categorized as non-Aboriginal if they did not identify as Aboriginal or if they were not born in Canada, United States, Germany, or Greenland. Geography: boundaries for North and South Ontario were based on the Local Health Integration Networks (LHINS). LHINs 13 and 14 (North East and North West, respectively) represented “North residents". LHINs 1 to 12 represented “South residents." Income quintile: Reported or derived household income for each respondent adjusted for household size and community, sorted from highest to lowest and divided into 5 categories (“quintiles") so that about the same number of Ontario households is in each category (about 20% in each). Quintile 1 includes approximately 20% of households with lowest incomes, and quintile 5 includes the approximately 20% of households with highest incomes. Education: highest level of education attained by the respondent, according to 3 categories: less than secondary school graduation; secondary school graduation and/or some post-secondary education; and post-secondary graduation. - Education and income were analyzed for adults ages 25 and older to restrict the sample to those who have likely completed their education and reached their adult socio-demographic status. Residence (based on LHIN) was analyzed for adults ages 20 and older. - For obesity, BMI classifications used here may be limited in determining health risks for muscular adults, naturally lean adults, young adults who have not reached full growth and seniors. - The definition of “adult" applies to individuals age 20 and over, with the exceptions of overweight/obesity at age 18 and over to match BMI classifications, and alcohol consumption for which the legal age for consumption is 19. - Confidence limits are another measure of statistical variation and are calculated using a bootstrap technique. A difference in 2 percentages is statistically significant if the 95% confidence intervals of the 2 estimates do not overlap. This is a conservative approach to significance testing, but non-overlapping confidence intervals indicate that it is unlikely that the difference observed between the 2 groups is due to chance alone. - Trends in percentages over time were analyzed using Joinpoint regression software (v.4.1.1). Survey Questions – Canadian Community Health Survey Aboriginal Identity (Socio-demographics characteristics module): - Are you an Aboriginal person, that is, First Nations, Métis or Inuk/Inuit? First Nations includes Status and Non-Status Indians. - Are you: First Nation? - Are you: Métis? - Are you: Inuk/Inuit? - In what country were you born? Non-Aboriginal Identity (Socio-demographics characteristics module): - Derived variable about Aboriginal identity (sdcdabt) - In what country were you born? Smoking (Smoking module): - At the present time, do you smoke cigarettes daily, occasionally or not at all? 
Second-hand smoke exposure (Smoking module): - Including both household members and regular visitors, does anyone smoke inside your home, every day or almost every day? - In the past month, were you exposed to second-hand smoke, every day or almost every day, in a car or other private vehicle? - In the past month, were you exposed to second-hand smoke, every day or almost every day, in public places (such as bars, restaurants, shopping malls, arenas, bingo halls, bowling alleys)? Obesity (Height and weight module): - How tall are you without shoes on? - How much do you weigh? - Are you pregnant? Alcohol consumption (Alcohol use module): - Questions on alcohol use during the past year and during the past week. - Are you pregnant? Vegetable and fruit consumption (Fruit and vegetable consumption module): - How often do you usually drink fruit juices such as orange, grapefruit or tomato? - Not counting juice, how often do you usually eat fruit? - How often do you usually eat green salad? - How often do you usually eat carrots? - Not counting carrots, potatoes or salad, how many servings of other vegetables do you usually eat? Physical inactivity (Physical activities module): - Questions about whether an individual participated in any of a list of more than 20 specified physical activities, or any other leisure time physical activities, in the past 3 months, number of times the individual did the activity and amount of time spent. - Statistics Canada calculates a Leisure Time Physical Activity Index (PACDPAI) with respondents classified as being "active," "moderately active" or "inactive" based on the total daily energy expenditure values (kcal/kg/day): - Active - respondents who average 3.0 or more kcal/kg/day of energy expenditure - Moderately active - respondents who average 1.5 to 2.9 kcal/kg/day - Inactive - respondents with energy expenditure levels less than 1.5 kcal/kg/day - Health Canada. 2003. Canadian Guidelines for Body Weight Classification in Adults. Last updated June 24, 2013. http://www.hc-sc.gc.ca/fn-an/nutrition/weights-poids/guide-ld-adult/qa-qr-prof-eng.php (accessed February 25, 2014). - Statistics Canada. 2005. Bootvar: User Guide (Bootvar 3.1 – SAS version) (accessed September 30, 2014). Ottawa, Ontario. - Joinpoint Regression Program, Version 4.1.1. August 2014; Statistical Research and Applications Branch, National Cancer Institute. |Data availability & limitations - As of 2011, the CCHS restricted the question about Aboriginal identity to those born in Canada, the U.S., Germany or Greenland. Therefore, an individual was considered 'Aboriginal' only if they were born in one of these countries and self-identified as Aboriginal for all survey years (2007 to 2014). Respondents in survey years prior to 2011 who identified as Aboriginal and were born outside these countries are included with 'non-Aboriginal Ontarians'. - The Canadian Community Health Survey (CCHS) excludes individuals living on Indian Reserves and on Crown Lands, institutional residents, full-time members of the Canadian Forces, and residents of certain remote regions. - CCHS data on modifiable risk factors are self-reported. Respondents of self-reported surveys tend to under-report behaviours that are socially undesirable or unhealthy (e.g., tobacco use) and over-report behaviours that are socially desirable (e.g., vegetable and fruit consumption).
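The direct age standardization described in the analytic notes above can be sketched as follows. This is purely illustrative — the age-group weights and counts are invented, not the actual 2006 Census Ontario Aboriginal-identity distribution or CCHS survey weights — but it shows the mechanics: estimate prevalence within each age group, then take the weighted average using the standard population's age distribution.

```python
# Minimal sketch of direct age standardization of a prevalence estimate.
# The weights below are placeholders, NOT the actual 2006 Census
# Ontario Aboriginal-identity population distribution.

STANDARD_WEIGHTS = {   # proportion of the standard population in each age group
    "20-24": 0.12,
    "25-44": 0.42,
    "45-64": 0.33,
    "65+":   0.13,
}

def age_standardized_prevalence(cases_by_age: dict, population_by_age: dict) -> float:
    """Weighted average of age-specific prevalences, using the standard population weights."""
    total = 0.0
    for age_group, weight in STANDARD_WEIGHTS.items():
        prevalence = cases_by_age[age_group] / population_by_age[age_group]
        total += weight * prevalence
    return 100.0 * total  # expressed as a percentage

# Example with made-up survey-weighted counts:
cases = {"20-24": 210, "25-44": 530, "45-64": 610, "65+": 140}
pop   = {"20-24": 1200, "25-44": 3400, "45-64": 3100, "65+": 900}
print(round(age_standardized_prevalence(cases, pop), 1))
```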
Aquarium Lighting

Lighting is one of the more important factors for the aquarium. Our focus is saltwater aquariums and more specifically a basic mixed reef tank. The animals usually include what are generally considered "easy-to-keep" corals, fish, and invertebrates (shrimp, crabs, anemones, snails, etc.). Many organisms such as plants, algae, corals, and some bacteria synthesize food directly from carbon dioxide using energy from light – photosynthesis. It is thought that fish depend on light, much like humans, for immune function and general health. Lastly, reef keepers enjoy the bright look and varied colors and shapes in their tanks illuminated by the right lighting.

Since most of the animals kept in the hobby live in and are collected from shallow, atoll coral reefs on and near the equator, we start by analysing the quantity and quality of natural sunlight found there. Basic physics teaches us that light has a wave-particle duality: wave properties are measured in wavelength and frequency (spectrum, color), and particle properties in photons (intensity).

Quality of light – spectrum, color, appearance, Kelvin (K), CRI, CCT

There are two systems of measurement commonly used to describe the color properties of a light source: "color temperature" (K for Kelvin rating), which expresses the color appearance of the light itself, and "color rendering index" (CRI), which suggests how an object illuminated by that light will appear in relation to its appearance under other common light sources. Sunlight is said to have a CRI of 100.

Photosynthetic light below the water surface

There is some evidence, though, and a belief (a quite strong belief for some) that there are two spectrum "spikes" most or more important for coral growth and color: one broad spike in the purple/blue 400 to 500 nm range and a narrower red spike around 660–670 nm. Lastly, some adhere almost dogmatically to the belief that the blue range is the most important spike, in part because blue spectrum light best penetrates water, and a belief that they are keeping "deep water" corals – those that live in and are collected from depths of 15 to 60 feet.

"This experiment's results suggest information potentially valuable for hobbyists – that rates of photosynthesis were essentially the same under these two distinctly different light sources. Other than aesthetic value, there appears to be no advantage, photosynthetically speaking, in using high Kelvin lamps." – Dana Riddle, Advanced Aquarist Magazine, Feb 2002

Now, let's turn to our typical mixed reef system and the available light sources in the hobby. Artificial lighting companies and hobbyists try to mimic Mother Nature and even try to do her one better. In general, we are looking for at least 100 PAR (or 3000 lux) intensity on the sand bed (bottom of the tank) and the required color spectrum for the animals. Stagger corals by placing them on your live rock aquascape at depths (low/sand bed, mid tank, and high) according to their needs – higher in the tank for more light intensity. Of course, the height of the lighting system above the water surface will influence PAR values and coverage. Most clams, most SPS corals, and carpet anemones, for example, require high light intensity. The various wavelengths within a given light source – its "color makeup" – can vary greatly and it will still appear white. A 250W MH covers a tank surface area of 36″x30″.
A 36″ T5 (6-tube x 39-watt) fixture covers an area of 36″x24″. A 24″ LED fixture with 90 bulbs covers an area of 24″x18″.

Many reefkeepers employ a 12-hour photoperiod. With multi-light systems, you can use timers to vary the intensity by varying the number of lights on at any one time. Usually, one bulb comes on for an hour, then all bulbs for 10 hours, then one light is left on for an additional hour while the others are turned off. This is one method to duplicate the sun passing over the reef.

Fixtures are made up of bulbs, reflectors, and electronics. Lighting fixtures can be built into canopies, hung from the ceiling or light stands, or placed directly on the tank walls with fixture stand attachments. Combo systems have mixes of bulb types, intensities, and colors in the same fixture. Consider PAR per watt, PAR per dollar, operating costs, coral growth rates, and your personal preference for the appearance of your tank under a certain lighting scheme. Also consider usable PAR, PUR, PPFD, and CRI.

The color temperature of light is the ratio of red to blue light waves measured in degrees Kelvin (K). At 6000 degrees (K), the ratio between red and blue is equal. The higher the content of blue light waves, the higher the color temperature. Blue light penetrates saltwater best, and corals benefit most from 400-420 nm (more violet and near UV) and 440-470 nm wavelengths of blue (blue at 470 nm, and royal blue at 450 nm).

Coral Health and Color: In nature, many corals have made adaptations to the effects of harmful UV-A and UV-B rays. Corals have developed protective pigments that are often blue, purple, or pink in color. Most corals that contain these pigments come from shallow waters where the amount of UV-A and UV-B light is higher than in deeper areas of the reef. Corals can lose their color due to the low light levels and blocked UV in an aquarium — this doesn't mean they are unhealthy. Corals may grow more brown under low light intensity and lighten under higher light levels. Corals may show more colors with exposure to blue spectrum lighting. Some corals are collected from depths of 15 to 65 feet where mostly all but blue wavelengths have been filtered out. Why not all blue then for our tanks? Many just don't like the look, so a popular combination is part bright daylight wide-spectrum white and part blues (aka actinic).

General System Comparison

LED (light emitting diodes)

LED Directory – System Manufacturers, Retailers, Resellers and DIY (We will try to identify the OEMs. A basic, then more detailed comparison matrix is in the works. Please check back.)
- Orbitec (note, patent issues)
- PFO (Solaris, Galileo models)

Bulbs and chips from Taiwan: The Taiwan government-sponsored Industrial Technology Research Institute (ITRI) has joined with the 14 makers of LEDs and LED chips. The companies are: Epistar, Formosa Epitaxy, Arima Optoelectronics, Opto Tech, Tyntek, Ledtech Electronics, Unity Opto Technology, Para Light Electronics, Everlight Electronics, Bright LED Electronics, Kingbright, Lingsen Precision Industries, Ligitek Electronics and Lite-On Technology.

Spectrum and color

Note that blue and royal blue are significantly different. 3500K is only slightly bluer than halogen and feels like you're next to a cozy fireplace. 6500K feels like you're under a cloudless arctic sky.

Optics affect coverage and spot-lighting. Some fixtures have interchangeable lenses. Narrow optics can produce high PAR but also spot-lighting, shadows, color variation, and even the chance of burned corals.
Also, coverage/spread can be poor, with high PAR directly under the lights but unacceptably low PAR on the fringes of the light pattern.

Common LED OEM models. AquaIlluminations LED (AI 70 watt) test, recently reported by a hobbyist, 2 fixtures with slightly overlapping light:
- 6″ – 1650
- 12″ – 1370 (water surface)
- 16″ – 915 (4″ water depth)
- 19″ – 690 (7″ water depth)
- 24″ – 560 (12″ water depth)

ReefKoi says, "When we did real world PAR testing over a reef aquarium we mounted the lights ~5″ above water and got 965 at the surface, 500+ 5″ down and 200+ 19″ deep and 115 at the bottom 23″ down… I think it's respectable, considering the wattage used? I mean a 150 HQI would be lucky to get 20 PAR at 23″ deep I'm guessing."

ReefLEDLights says, "A 250 watt 10K XM light produces about the same PAR as a 400 watt Radium. This is why I claim such a range. The hard numbers are 48 XR-E LEDs on an 8″x24″ heatsink using 80 degree optics will produce 296 PAR at 24″, when driven at 700mA. When driven at 800mA the same fixture will produce 323 PAR. A 400 watt Radium produced 310 PAR at that distance and the 250 watt 10K XM produced 305 PAR."

Sanjay Joshi says, "Based on my experience with light measurements and 20+ years of keeping corals, I have found that light levels of 100 PAR at the bottom of the tank are usually more than enough to allow keeping a wide range of corals successfully. Incidentally, my personal 500G (84″L x 48″W x 30″H) tank, lit by 3 400W Ushio 14000K lamps in a Lumenarc reflector mounted about 12″ from the water surface, has PAR readings at the bottom ranging from 80-120. With PAR levels averaging 100 at the bottom of the tank, there is enough of a light gradient to allow keeping high-light-loving corals in the top ½ to 2/3 of the tank, with lower-light corals scattered in the lower half. Most acropora and other light-loving corals will thrive at light levels of 300-400."

Coverage or spread (refer to manufacturer charts):
- A 250W MH covers an area of 36″x30″.
- A 36″ T5 (6-tube x 39-watt) fixture covers an area of 36″x24″.
- A 24″ LED fixture with 90 1-watt bulbs covers an area of 24″x18″.

Spectrum and Color
- 1 Royal Blue to 1 Cool White will give you a 10-12K look.
- 2 Royal Blue to 1 Cool White will give you an 18-20K look.
- Blue and royal blue LEDs are significantly different from each other.

CREE manufactures a range of different LEDs, and the term "binning" refers to the method that the company uses to sort all these LEDs in terms of their dominant wavelength (color) and luminous flux (brightness). The resultant "bin number" is then used by manufacturers of LED products to specify the color and brightness of the LEDs they wish to use.

Metal Halide (MH)

An older, mature technology already covered by many sources.

Plasma Arc Lighting

In our KIS system, NOTHING comes close to natural sunlight. Like flowers opening up, the coral polyp extension and detail that can be easily seen by the naked eye is far greater than under any artificial light source. More in-depth treatment of lighting here.
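The depth-versus-PAR readings quoted above fall off roughly exponentially, which is how light attenuation in water is usually modeled. The sketch below is illustrative only: it assumes a simple Beer–Lambert-style decay, and the decay constant is chosen to roughly match the hobbyist readings above (1370 PAR at the surface falling to about 560 PAR at 12″); that constant bundles together absorption and beam spread, so it will differ for every fixture and tank. Treat it as a rough planning aid, not a substitute for a PAR meter.

```python
import math

# Rough, illustrative estimate of PAR at depth assuming exponential decay.
# k_per_inch is an assumed constant fitted loosely to the readings quoted above;
# it is NOT a physical property of seawater and varies with fixture and clarity.

def par_at_depth(par_at_surface: float, depth_in: float, k_per_inch: float = 0.075) -> float:
    return par_at_surface * math.exp(-k_per_inch * depth_in)

# Example: a fixture reading 1370 PAR at the water surface
for depth in (4, 7, 12):
    print(f'{depth}" deep: ~{par_at_depth(1370, depth):.0f} PAR')
```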
The three Giza pyramids are located at 30° North, within an arcminute. (1) The sides of the Great Pyramid point North within two or three arcminutes. (2) Its slope angle is close to one-seventh of a circle, i.e. 51.4°. More exactly, it is the angle A for which cos A = 1/Φ, where Φ is the golden ratio. So the slope angle is 51° 50'. A 'golden triangle' defines the slope angle, with sides 1 for the base, √Φ for its height and Φ for its long side (hypotenuse); thus its sides increase or 'grow' in the same proportion. We may write the 'Pythagoras' theorem' of this right-angled triangle as Φ² = Φ + 1. That was two thousand years before Pythagoras.

Diagram reproduced with permission from Heaven's Mirror by Graham Hancock and Santha Faiia

This implies that the square on the height equals the area of a pyramid face. To make a replica of the Great Pyramid, draw a 'map' where two golden rectangles are added to each of the four sides, and the pyramid sides drawn as triangles upon them.

The slope angle is also such that tan A = 4/π, i.e. the perimeter of the square base equals the circumference of a circle whose radius equals the pyramid height: 51° 51'. Thus phi and pi are integrated at this unique slope angle. The four sides in fact concur within about an arcminute on this slope angle. The pyramid height is 280 royal cubits and its base 440, so base to height are in the ratio 11 to 7 – an early expression of the π ratio? Here, tan A = 14/11, giving 51° 51'.

The 'King's Chamber' has its length twice its breadth, and its height is half the diagonal of that rectangle. Taking the width of this chamber as unity, phi Φ is traced out by the height plus half the width. A 3:4:5 Pythagoras triangle is contained in the diagonal plane of this otherwise-empty chamber: if its length is 4 units, the main diagonal is 5 and the diagonal across the end wall, 3. The actual size of this integer Pythagoras triangle in the King's Chamber establishes the units used for the Great Pyramid's exterior, the triangle sides being exactly 20, 15 and 25 royal cubits.

Flinders Petrie proposed that the adjacent 'Khafre' pyramid at Giza had its slope angle defined by the 3-4-5 triangle (3). Wikipedia cites this mean slope angle as 53° 10', and arcsin(4/5) = 53° 8' (i.e., the angle whose sine is 0.8) – so this modern estimate of the mean slope angle of 'Khafre' lies within a couple of arcminutes of that 3-4-5 triangle base-angle. The third Giza pyramid of 'Menkaure' has a slope angle estimated as 51° 20' (again, from Wikipedia), much closer to the 1/7th angle, which is 51° 25'. The three Giza pyramid slope angles seem to cluster around this value. The significance of this angle is confirmed by the way the ascending and descending passages of the Great Pyramid each form just half of that angle to the horizontal, so that there is a 1/7th angle between them. (4)

Summarising, the sublime geometry of this building integrates π and Φ as follows: its height is the radius of a circle whose circumference equals the base perimeter, and its height is the side of a square whose area equals that of each of the four faces. Pythagoras' theorem was used in defining the Great Pyramid geometry, the earliest record of its use, but this is ignored or dismissed in histories of mathematics. The philosophical meaning of the coming together of the two transcendental terms pi and phi in the Giza slope angle is well discussed in John Ivimy's The Sphinx and the Megaliths (1974).
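The near-coincidence of the different definitions of the slope angle (cos A = 1/Φ, tan A = 4/π, and the 280:220-cubit proportions) is easy to check numerically. A quick sketch, purely as a verification of the arithmetic above:

```python
import math

phi = (1 + math.sqrt(5)) / 2

def dms(angle_deg: float) -> str:
    d = int(angle_deg)
    m = (angle_deg - d) * 60
    return f"{d} deg {m:.1f}'"

# Three ways of defining the Great Pyramid's slope angle
a_phi   = math.degrees(math.acos(1 / phi))       # cos A = 1/phi
a_pi    = math.degrees(math.atan(4 / math.pi))   # tan A = 4/pi
a_cubit = math.degrees(math.atan(280 / 220))     # height 280, half-base 220 cubits: tan A = 14/11

for label, a in [("cos A = 1/phi", a_phi), ("tan A = 4/pi", a_pi), ("tan A = 14/11", a_cubit)]:
    print(f"{label:>14}: {dms(a)}")
# All three come out between about 51 deg 49' and 51 deg 51' —
# within a couple of arcminutes of one another.
```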
A slope angle as defined by a 3:4:5 triangle (Khafre) is something you or I might dream up, whereas the pi/phi concordance and the one-seventh slope angle of the Great pyramid seems more, kind of – divine. Ancient Egyptian texts lack any concept of an angle, but do have the 'seked.' This seked was a ratio, that between half the base-side of a pyramid and its height. That is the cotangent of our angle 'A': thus the 'seked' of the Great Pyramid was л/4, while that of the adjacent Khafre pyramid was 3/4. (5) Wonderfully, this Seked value of the Great Pyramid is equal to 1 – 1/3 + 1/5 -1/7 + 1/9 – 1/11… etc. a series which goes on forever, using all the odd numbers (Leibniz found it in 1674). 'Herodotus wrote that the pyramid was built so that the area of each lateral face would equal the area of a square that had one side as long as the pyramid was tall.' (6) Whether Herodotus ever wrote such a thing seems doubtful (7), but that remark is widely quoted! This gives us the ratio pyramid height / half base-length equal to √Φ. (8) The 'Seked' (i.e., cotA, as in (5) and (6) above) here equals 1/√Φ. Maybe Thoth-Hermes understood these things, long ago. Half of the Base Area The King's chamber, as William Petrie pointed out in 1883, 'was placed at the height in the Great Pyramid at which the area of the horizontal section is equal to one-half the area of the base'. (9) That height implies use of the square root of two – how exact was that? In the figure, √2 = AB / BC Where the height of the pyramid AB is 280 cubits = 146.64 metres (its theoretical height, as if it still had the capstone), and the height of the floor of the King's Chamber floor AC is 82.09 cubits = 42.99 metres (10). That equation would then be exact to 99.8%. That has to be intentional. Experts have surmised that the King's Chamber floor height was intended to be just 82 cubits, (11) which would make this ratio exact to four figures! Thus the scale chosen for the building made the units of measure used chime in with this 'irrational' ratio, 1.414… The Victorian astronomer Richard Proctor wrote a book proposing that the Great Pyramid was first only half-built, up as far as what became the King's Chamber floor. So, this has been regarded as quite an important juncture. (It was then used as an observatory, he argued). He did not comment upon its exact mathematical placement. The yellow square in the diagram represents this floor at the King's Chamber level, and this may be a good visual method of 'seeing' the root two relationship, whereby it is half the area of the outer square, which represents the base. (12) * Half of an Angle The Ascending Passage leading up to the 'King's Chamber' has a slope angle of 26° 2' (13). This angle bisects that of the Great Pyramid's outer slope angle, within arcminutes. Therefore, this slope angle represents a one-fourteenth division of a circle. This mysteriously re-emphasises the number seven, within the Great Pyramid. The lovely star-heptagon (see figure) has this angle at its corners. If those who built this pyramid were able to bisect a one-seventh angle within arcminutes, that would tend to indicate their use of angular measure. - This 1' error may have been due to atmospheric refraction: Peter Lancaster-Brown, Megaliths, Myths and Men, 1976, p271. 'The apparent (not true) altitude of the pole is 30° 00' (1/12 of a circle) as seen from all three Giza pyramids:' Dio, a history of astronomy journal, Baltimore, Dec. 2003. Hugh Thurston, 'Orientation of Early Egyptian Pyramids', p.6. 
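Again the arithmetic is easy to check. A short sketch — the 82.09-cubit floor height and the 280/440-cubit dimensions are those quoted in the passage above:

```python
import math

height = 280.0   # pyramid height, royal cubits
base = 440.0     # base side, royal cubits
floor = 82.09    # King's Chamber floor height, royal cubits (82 if the round figure was intended)

# Side of the horizontal section at the chamber-floor level (similar triangles)
section_side = base * (height - floor) / height
print(section_side**2 / base**2)      # ~0.4996 — very nearly half the base area
print(height / (height - floor))      # ~1.4148, compared with sqrt(2) ~ 1.41421

# Half of the outer slope angle versus the Ascending Passage
slope = math.degrees(math.atan(2 * height / base))   # ~51.84 degrees
print(slope / 2)                                      # ~25.9 degrees, close to the passage's 26 deg 2'
```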
[back to text] - For comparison, the adjacent Khafre pyramid points due North with an error of about 6', but in the other direction, possibly indicating a later construction. [back to text] - Petrie wrote, 'Here there can scarcely be any doubt that the 3:4:5 triangle was the design for the slope.' [back to text] - Discussed by Robin Heath in Sun, Moon and Stonehenge, 1998, p.178. [back to text] - In the Rhind Papyrus, c. 2000 BC: Eli Maor, Trigonometric Delights (Princeton, 1998). [back to text] - The Joy of Pi, David Blatner, 1997 p.11. Blatner errs in saying that this definition gives pi, rather than phi. [back to text] - The History of Herodotus, Trans. Rawlinson, NY,1936, Book II, p.125. [back to text] - We can write 'Herodotus' condition' as h2 = l√(h2+l2) where l is half the base length and h the pyramid height. Mathematically-inclined readers will solve this, to obtain h/l = Φ, because Φ = (1+ √5)/2. [back to text] - W.M.F. Petrie, The Pyramids and Temples of Gizeh (London, 1883), 186-7. Graham Hancock, Fingerprints of the Gods, 1995, p.358. [back to text] - Data from www.repertorium.net/rostau/measures.html; [back to text] - See John Legon, www.legon.demon.co.uk/greatpyr.htm [back to text] - See William Glyn-Jones on GH Forum, http://www.grahamhancock.com/forum/GlynJonesW3.php?p=10 [back to text] - Hancock, Fingerprints, p.337. [back to text]
Image via iStock.com/dimarik When pet owners are asked what they dread most about the summer months, the topic that invariably comes up most is fleas! These small, dark brown insects prefer temperatures of 65-80 degrees and humidity levels of 75-85 percent—so for some areas of the country, fleas on dogs are more than just a summer problem. In many areas of the southern United States, fleas can survive and bother your pet year-round. Dogs often get infested with fleas through contact with other animals or contact with fleas in the environment. The strong back legs of this insect enable it to jump from host to host or from the surrounding environment onto the host. (Fleas do not have wings, so they cannot fly.) The flea’s bite can cause itching for the host, but for a sensitive or flea-allergic animal, this itching can be quite severe. It can lead to severe scratching and chewing that causes hair loss, inflammation and secondary skin infections. Some pets can be hypersensitive to the flea's saliva and will itch all over from the bite of even a single flea. How to Spot Fleas on Dogs How do you know if fleas are causing all that itching (pruritus in veterinary terms)? Generally, unlike the burrowing, microscopic Demodex or Scabies mites, fleas can be seen scurrying along the surface of the skin. Fleas are a dark copper color and about the size of the head of a pin. They dislike light, so your best chance of spotting fleas on a dog is to look within furry areas and on the belly and inner thighs. "Flea dirt" can also signal that there are fleas on a dog. Flea dirt looks like dark specks of pepper scattered on the skin’s surface. If you see flea dirt—which is actually flea feces that is composed of digested blood—pick some off the pet and place on a wet paper towel. If the tiny specks spread out like a small bloodstain after a few minutes, it's definitely flea dirt, and your pet has fleas. What Is the Best Way to Get Rid of Fleas on a Dog? If you've discovered that your dog has fleas, here are a few things you can do to provide your pet with relief. Oral and Topical Flea Control Fleas are annoying and persistent. However, dog flea and tick pills and other spot-on dog flea and tick treatments have proven to be some of the fastest ways to rid your pet of fleas. Some only target adults, while others target flea eggs, larvae and adult fleas, so it's important to buy the right one. Others will combine flea control and heartworm prevention in one treatment. You’ll notice that some require a prescription, while others do not. So, what is the best oral flea treatment for dogs? It will depend on your individual dog's needs. Talk to your vet about which option is the best for your pet. Prescription Flea Medications There are a wide variety of flea products on the market today, but the newer prescription flea and tick products are finally taking the frustration out of flea control with popular and highly effective brands. Talk to your veterinarian about preventative flea and tick medicine for dogs, as many are prescription products. Prescription treatments present one of the best ways to kill fleas fast. Bravecto (fluralaner) begins to kill fleas within two hours and lasts for three months, while products containing spinosad (Comfortis, Trifexis) begin to work within 30 minutes and last for one month. Some of these flea products do not harm the adult flea but instead prevent her eggs from hatching, thus breaking the life cycle of the flea. 
With no reproduction, the flea population eventually dissipates as long as the pet isn't coming in contact with new fleas continually. In warm climates, prescription flea and tick treatment for dogs is typically a year-round endeavor, but in other climates, treatment should begin in early spring before the flea season starts. For animals that are allergic to flea saliva (have flea bite hypersensitivity), choose a product that targets adult fleas as well, since they are still able to bite the animal. For dogs with flea hypersensitivity, products containing a flea repellent (Seresto collar, Vectra 3D) are the best choice so that the fleas never bite. Nonprescription Medication to Treat Fleas on Dogs There are also many other products which will kill fleas on the pet and for which no prescription is needed. The drawback, however, is that these products may be less effective than the prescription products. These nonprescription flea products include flea shampoos, flea powders, flea sprays, flea collars, oral flea treatment and spot-on products. Many veterinarians are reporting that their patients still have fleas after use of these over-the-counter products, but there are also good reviews from pet parents for some of these products. Capstar, for instance, is a tablet that kills adult fleas and is taken orally. It begins to work within 30 minutes, and kills more than 90 percent of all fleas within four hours. It is used to treat flea infestations. Dog Flea Shampoos There are several dog flea and tick shampoo options for dogs and cats on the market that can be quite effective when used properly. Flea dog shampoos may contain a variety of ingredients that are more or less effective. Small puppies should only be bathed in nontoxic dog shampoo. You’ll need to consider whether or not your pet can stand getting soaking wet and being lathered up for five to 10 minutes, though, since that's how long the shampoo takes to sink in. Following a nice warm bath, you'll have killed the fleas and will be able to use a dog flea and tick comb to remove the dead fleas from your dog. However, flea shampoos do not protect your dog from continued infestation with fleas. WARNING: Tea tree oil is toxic. Do NOT use tea tree oil as a flea repellent in cats or dogs. Understanding the Flea Life Cycle But your quest to eliminate fleas isn’t over just yet—you also have to treat the environment. Simply sprinkling some flea powder on your pet will not work; simply vacuuming the home vigorously will not work, simply placing a dog flea collar or using a flea topical on your pet will not work. In order to understand how each treatment options works and why you must also treat the environment, we must first understand the flea’s life cycle. The various treatment and prevention products work on different parts of this life cycle. There are several stages to the flea life cycle: egg, larva, pupa (cocoon) and adult. The length of time it takes to complete this cycle varies depending upon the environmental conditions, such as temperature, humidity and the availability of a nourishing host. The life cycle can take anywhere from two weeks to a year. The flea's host is a warm-blooded animal such as a dog or cat (or even humans). The various flea stages are quite resistant to freezing temperatures. The adult female flea typically lives for several days to weeks on its host. During this time period, she will suck the animal’s blood two to three times and lay 20 to 30 eggs each day. She may lay several hundred eggs over her life span. 
These eggs fall off of the pet and into the yard, bedding, carpet and wherever else the animal spends time. These eggs then proceed to develop where they have landed. Since they are about 1/12 the size of the adult, they can even develop in small cracks in the floor and between crevices in carpeting. The eggs then hatch into larvae. These tiny worm-like larvae live among the carpet fibers, in cracks of the floor and outside in the environment. They feed on organic matter, skin scales and even the blood-rich adult flea feces. The larvae grow, molt twice and then form a cocoon and pupate, waiting for the right time to hatch into an adult. These pupae are very resilient and are protected by their cocoon. They can survive quite a long time, waiting until environmental conditions and host availability are just right. Then they emerge from their cocoons when they detect heat, vibrations and exhaled carbon dioxide, all of which indicate that a host is nearby. The newly emerged adult flea can jump onto a nearby host immediately. Under optimal conditions, the flea can complete its entire life cycle in just 14 days. Just think of the tens of thousands of the little rascals that could result when conditions are optimal. Knowing this life cycle allows us to understand why it has always been important to treat both the host animal and the indoor and outdoor environment in order to fully control flea numbers. You must also treat the home and surrounding area. How to Treat Fleas in the Environment With any flea treatment, it is necessary to treat all of the animals in the home in order to achieve complete success. In addition, you will likely need to treat the indoor and outdoor environment. Treating the Home When treating the indoor environment, it is important to wash all bedding in soapy, hot water. All of the carpeting should be vacuumed thoroughly, and the vacuum bag thrown away or canister emptied and trash bag taken outside. Steam cleaning the carpet can kill some of the larvae as well. Remember, though, that vacuuming and shampooing a carpet will still leave a good percentage of live fleas, so some sort of chemical treatment may be necessary. The entire house is now ready to treat for fleas. Several choices are available including highly effective foggers. Boric acid-based products may be a safer option for homes with small children or other situations where chemical residues are a concern. The most effective products are those which contain both an ingredient to kill adult fleas and an ingredient to kill the other life cycle stages. The latter is called an insect growth regulator. Methoprene is one such growth regulator. Aerosol foggers may not penetrate well enough, in some cases, to kill all the hiding fleas and larvae. Another option for indoor control is a sodium borate product that is applied to carpeting. You should consider calling a local exterminating company for an estimate and a guarantee that their procedure will rid your premises of fleas. Flea eradication won't be cheap, but what price will you put on living free from flea infestations? Outdoor Flea Control As for outdoor control, sprays and pelleted insecticides are generally used after dog houses and dog kennels are cleaned thoroughly. An insect growth regulator is a good choice here as well. Pyriproxifen is more stable in sunlight and lasts longer outdoors than methoprene. It is important to know that the Environmental Protection Agency (EPA) has banned the insecticide chlorpyrifos (Dursban). 
Production of that insecticide ceased in December 2000. Diatomaceous earth, a nontoxic option, can be very effective and is safe to use in and around vegetable gardens and children’s outdoor play equipment. When choosing a diatomaceous earth product, look for a food-grade product like DiatomaceousEarth Food Grade Powder, which is safe for use around pets. Certain nontoxic nematodes (tiny worms) can also be spread in areas of the yard that are warm and moist and that pets and fleas frequent; the nematodes feed on flea larvae. Once there is a cover of snow on the ground, a major outdoor source of fleas is eliminated as well. Be sure to consult your veterinarian regarding which methods and products will be best for you and your pets. Your veterinarian will be your best source for current flea information.
The Win32 API is the fundamental interface to the capabilities of Windows XP. This section describes five main aspects of the Win32 API: access to kernel objects, sharing of objects between processes, process management, interprocess communication, and memory management.

Access to Kernel Objects

The Windows XP kernel provides many services that application programs can use. Application programs obtain these services by manipulating kernel objects. A process gains access to a kernel object named XXX by calling the CreateXXX function to open a handle to XXX. This handle is unique to the process. Depending on which object is being opened, if the Create() function fails, it may return 0, or it may return a special constant named INVALID_HANDLE_VALUE. A process can close any handle by calling the CloseHandle() function, and the system may delete the object if the count of processes using the object drops to 0.

Sharing Objects Between Processes

Windows XP provides three ways to share objects between processes. The first way is for a child process to inherit a handle to the object. When the parent calls the CreateXXX function, the parent supplies a SECURITY_ATTRIBUTES structure with the bInheritHandle field set to TRUE. This field creates an inheritable handle. The child process is then created with the value TRUE passed to the CreateProcess() function's bInheritHandles argument. Figure 22.11 shows a code sample that creates a semaphore handle inherited by a child process. Assuming the child process knows which handles are shared, the parent and child can achieve interprocess communication through the shared objects. In the example in Figure 22.11, the child process gets the value of the handle from the first command-line argument and then shares the semaphore with the parent process.

The second way to share objects is for one process to give the object a name when the object is created and for the second process to open that name. This method has two drawbacks: Windows XP does not provide a way to check whether an object with the chosen name already exists, and the object name space is global, without regard to object type. For instance, two applications may each create an object named "pipe" when two distinct—and possibly different—objects are desired. Named objects have the advantage that unrelated processes can readily share them. The first process calls one of the CreateXXX functions and supplies a name in the lpszName parameter. The second process gets a handle to the shared object by calling OpenXXX() (or CreateXXX) with the same name, as shown in the example of Figure 22.12.

The third way to share objects is via the DuplicateHandle() function. This method requires some other form of interprocess communication to pass the duplicated handle. Given a handle to a process and the value of a handle within that process, a second process can get a handle to the same object and thus share it. An example of this method is shown in Figure 22.13.

Process Management

In Windows XP, a process is an executing instance of an application, and a thread is a unit of code that can be scheduled by the operating system. Thus, a process contains one or more threads. A process is started when some other process calls the CreateProcess() routine. This routine loads any dynamic link libraries used by the process and creates a primary thread. Additional threads can be created by the CreateThread() function. Each thread is created with its own stack, which defaults to 1 MB unless specified otherwise in an argument to CreateThread().
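The code of Figure 22.11 is not reproduced in this excerpt, so the following is a minimal sketch of the same handle-inheritance pattern. The child executable's name (child.exe) and the convention of passing the handle value as the first command-line argument are illustrative assumptions, not the book's exact code.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SECURITY_ATTRIBUTES sa;
    sa.nLength = sizeof(sa);
    sa.lpSecurityDescriptor = NULL;   /* default security descriptor */
    sa.bInheritHandle = TRUE;         /* make the semaphore handle inheritable */

    /* Create an anonymous semaphore with initial and maximum count 1. */
    HANDLE hSem = CreateSemaphore(&sa, 1, 1, NULL);
    if (hSem == NULL) {
        fprintf(stderr, "CreateSemaphore failed: %lu\n", GetLastError());
        return 1;
    }

    /* Pass the handle's numeric value to the (hypothetical) child program
       on its command line. */
    char cmdLine[64];
    sprintf(cmdLine, "child.exe %p", (void *)hSem);

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;

    /* bInheritHandles = TRUE lets the child inherit every inheritable handle. */
    if (!CreateProcessA(NULL, cmdLine, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        CloseHandle(hSem);
        return 1;
    }

    /* Parent and child can now coordinate through the shared semaphore. */
    WaitForSingleObject(pi.hProcess, INFINITE);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    CloseHandle(hSem);
    return 0;
}

A hypothetical child.exe would parse the handle value from its first command-line argument and could then call WaitForSingleObject() and ReleaseSemaphore() on it directly, since an inherited handle has the same value and access rights in the child as in the parent.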
Because some C run-time functions maintain state in static variables, such as errno, a multithreaded application needs to guard against unsynchronized access. The wrapper function _beginthreadex() provides appropriate synchronization.

Instance Handles

Every dynamic link library or executable file loaded into the address space of a process is identified by an instance handle. The value of the instance handle is actually the virtual address where the file is loaded. An application can get the handle to a module in its address space by passing the name of the module to GetModuleHandle(). If NULL is passed as the name, the base address of the process is returned. The lowest 64 KB of the address space are not used, so a faulty program that tries to dereference a NULL pointer gets an access violation.

Priorities in the Win32 API environment are based on the Windows XP scheduling model, but not all priority values may be chosen. The Win32 API uses four priority classes:

1. IDLE_PRIORITY_CLASS (priority level 4)
2. NORMAL_PRIORITY_CLASS (priority level 8)
3. HIGH_PRIORITY_CLASS (priority level 13)
4. REALTIME_PRIORITY_CLASS (priority level 24)

Processes are typically members of the NORMAL_PRIORITY_CLASS unless the parent of the process was of the IDLE_PRIORITY_CLASS or another class was specified when CreateProcess() was called. The priority class of a process can be changed with the SetPriorityClass() function or by passing an argument to the START command. For example, the command START /REALTIME cbserver.exe would run the cbserver program in the REALTIME_PRIORITY_CLASS. Only users with the increase-scheduling-priority privilege can move a process into the REALTIME_PRIORITY_CLASS; administrators and power users have this privilege by default.

When a user is running an interactive program, the system needs to provide especially good performance for the process. For this reason, Windows XP has a special scheduling rule for processes in the NORMAL_PRIORITY_CLASS. Windows XP distinguishes between the foreground process that is currently selected on the screen and the background processes that are not currently selected. When a process moves into the foreground, Windows XP increases the scheduling quantum by some factor—typically by 3. (This factor can be changed via the performance option in the system section of the control panel.) This increase gives the foreground process three times longer to run before a time-sharing preemption occurs.

A thread starts with an initial priority determined by its class. The priority can be altered by the SetThreadPriority() function. This function takes an argument that specifies a priority relative to the base priority of its class:

• THREAD_PRIORITY_LOWEST: base − 2
• THREAD_PRIORITY_BELOW_NORMAL: base − 1
• THREAD_PRIORITY_NORMAL: base + 0
• THREAD_PRIORITY_ABOVE_NORMAL: base + 1
• THREAD_PRIORITY_HIGHEST: base + 2

Two other designations are also used to adjust the priority. Recall from the earlier discussion of scheduling that the kernel has two priority classes: 16–31 for the real-time class and 0–15 for the variable-priority class. THREAD_PRIORITY_IDLE sets the priority to 16 for real-time threads and to 1 for variable-priority threads. THREAD_PRIORITY_TIME_CRITICAL sets the priority to 31 for real-time threads and to 15 for variable-priority threads. As discussed earlier, the kernel adjusts the priority of a thread dynamically depending on whether the thread is I/O bound or CPU bound.
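As a small, hedged illustration of the priority facilities just described (not code from the book's figures), the sketch below raises the calling process's priority class and then sets the current thread one level above its class's base priority; the particular choice of HIGH_PRIORITY_CLASS and THREAD_PRIORITY_ABOVE_NORMAL is arbitrary.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Move the current process into the high-priority class.
       REALTIME_PRIORITY_CLASS would additionally require the
       increase-scheduling-priority privilege. */
    if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
        fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());

    /* Nudge the current thread one level above its class's base priority. */
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL))
        fprintf(stderr, "SetThreadPriority failed: %lu\n", GetLastError());

    printf("priority class: 0x%lx, thread priority: %d\n",
           GetPriorityClass(GetCurrentProcess()),
           GetThreadPriority(GetCurrentThread()));
    return 0;
}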
The Win32 API provides a method to disable this adjustment via the SetProcessPriorityBoost() and SetThreadPriorityBoost() functions.

A thread can be created in a suspended state; the thread does not execute until another thread makes it eligible via the ResumeThread() function. The SuspendThread() function does the opposite. These functions set a counter, so if a thread is suspended twice, it must be resumed twice before it can run.

To synchronize concurrent access to shared objects by threads, the kernel provides synchronization objects, such as semaphores and mutexes. In addition, synchronization of threads can be achieved by using the WaitForSingleObject() and WaitForMultipleObjects() functions. Another method of synchronization in the Win32 API is the critical section. A critical section is a synchronized region of code that can be executed by only one thread at a time. A thread establishes a critical section by calling InitializeCriticalSection(). The application must call EnterCriticalSection() before entering the critical section and LeaveCriticalSection() after exiting it. These two routines guarantee that, if multiple threads attempt to enter the critical section concurrently, only one thread at a time will be permitted to proceed; the others will wait in the EnterCriticalSection() routine. The critical-section mechanism is faster than using kernel synchronization objects because it does not allocate kernel objects until it first encounters contention for the critical section.

A fiber is user-mode code that is scheduled according to a user-defined scheduling algorithm. A process may have multiple fibers in it, just as it may have multiple threads. A major difference between threads and fibers is that whereas threads can execute concurrently, only one fiber at a time is permitted to execute, even on multiprocessor hardware. This mechanism is included in Windows XP to facilitate the porting of legacy UNIX applications that were written for a fiber-execution model. The system creates a fiber by calling either ConvertThreadToFiber() or CreateFiber(). The primary difference between these functions is that CreateFiber() does not begin executing the fiber that was created. To begin execution, the application must call SwitchToFiber(). The application can terminate a fiber by calling DeleteFiber().

Repeated creation and deletion of threads can be expensive for applications and services that perform small amounts of work in each. The thread pool provides user-mode programs with three services: a queue to which work requests may be submitted (via the QueueUserWorkItem() API), an API that can be used to bind callbacks to waitable handles (RegisterWaitForSingleObject()), and APIs to bind callbacks to timeouts (CreateTimerQueue() and CreateTimerQueueTimer()). The thread pool's goal is to increase performance. Threads are relatively expensive, and a processor can only be executing one thing at a time no matter how many threads are used. The thread pool attempts to reduce the number of outstanding threads by slightly delaying work requests (reusing each thread for many requests) while providing enough threads to effectively utilize the machine's CPUs. The wait and timer-callback APIs allow the thread pool to further reduce the number of threads in a process, using far fewer threads than would be necessary if a process were to devote one thread to servicing each waitable handle or timeout.
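To make the synchronization discussion above concrete, here is a minimal sketch (not taken from the book's figures) in which two threads increment a shared counter inside a critical section and the main thread waits for both with WaitForMultipleObjects(). CreateThread() is used for brevity; as noted earlier, code that relies on per-thread C run-time state should prefer _beginthreadex().

#include <windows.h>
#include <stdio.h>

static CRITICAL_SECTION g_cs;
static long g_counter = 0;

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        EnterCriticalSection(&g_cs);   /* only one thread may proceed at a time */
        g_counter++;
        LeaveCriticalSection(&g_cs);
    }
    return 0;
}

int main(void)
{
    InitializeCriticalSection(&g_cs);

    HANDLE threads[2];
    threads[0] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    threads[1] = CreateThread(NULL, 0, worker, NULL, 0, NULL);

    WaitForMultipleObjects(2, threads, TRUE, INFINITE);

    printf("counter = %ld\n", g_counter);   /* expect 200000 */

    CloseHandle(threads[0]);
    CloseHandle(threads[1]);
    DeleteCriticalSection(&g_cs);
    return 0;
}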
Interprocess Communication

Win32 API applications handle interprocess communication in several ways. One way is by sharing kernel objects. Another way is by passing messages, an approach that is particularly popular for Windows GUI applications. One thread can send a message to another thread or to a window by calling PostMessage(), PostThreadMessage(), SendMessage(), SendThreadMessage(), or SendMessageCallback(). The difference between posting a message and sending a message is that the post routines are asynchronous: they return immediately, and the calling thread does not know when the message is actually delivered. The send routines are synchronous: they block the caller until the message has been delivered and processed.

In addition to sending a message, a thread can send data with the message. Since processes have separate address spaces, the data must be copied. The system copies data by calling SendMessage() to send a message of type WM_COPYDATA with a COPYDATASTRUCT data structure that contains the length and address of the data to be transferred. When the message is sent, Windows XP copies the data to a new block of memory and gives the virtual address of the new block to the receiving process.

Unlike threads in the 16-bit Windows environment, every Win32 API thread has its own input queue from which it receives messages. (All input is received via messages.) This structure is more reliable than the shared input queue of 16-bit Windows because, with separate queues, it is no longer possible for one stuck application to block input to the other applications. If a Win32 API application does not call GetMessage() to handle events on its input queue, the queue fills up, and after about five seconds the system marks the application as "Not Responding".

Memory Management

The Win32 API provides several ways for an application to use memory: virtual memory, memory-mapped files, heaps, and thread-local storage. An application calls VirtualAlloc() to reserve or commit virtual memory and VirtualFree() to decommit or release the memory. These functions enable the application to specify the virtual address at which the memory is allocated. They operate on multiples of the memory page size, and the starting address of an allocated region must be greater than 0x10000. Examples of these functions appear in Figure 22.14. A process may lock some of its committed pages into physical memory by calling VirtualLock(). The maximum number of pages a process can lock is 30, unless the process first calls SetProcessWorkingSetSize() to increase the maximum working-set size.

Another way for an application to use memory is by memory-mapping a file into its address space. Memory mapping is also a convenient way for two processes to share memory: both processes map the same file into their virtual memory. Memory mapping is a multistage process, as you can see in the example in Figure 22.15. If a process wants to map some address space just to share a memory region with another process, no file is needed. The process calls CreateFileMapping() with a file handle of 0xffffffff (INVALID_HANDLE_VALUE) and a particular size. The resulting file-mapping object can be shared by inheritance, by name lookup, or by duplication.

Heaps provide a third way for applications to use memory. A heap in the Win32 environment is a region of reserved address space. When a Win32 API process is initialized, it is created with a 1-MB default heap. Since many Win32 API functions use the default heap, access to the heap is synchronized to protect the heap's space-allocation data structures from being damaged by concurrent updates by multiple threads.
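The multistage memory-mapping example of Figure 22.15 is not reproduced here, so the sketch below shows only the no-file case described above: a pagefile-backed mapping created with the INVALID_HANDLE_VALUE pseudo-handle and shared by name. The object name "Local\DemoSharedRegion" and the 4-KB size are illustrative assumptions.

#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Create (or open) a 4-KB named file-mapping object backed by the pagefile. */
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE, 0, 4096,
                                     "Local\\DemoSharedRegion");
    if (hMap == NULL) {
        fprintf(stderr, "CreateFileMapping failed: %lu\n", GetLastError());
        return 1;
    }

    /* Map a view of the object into this process's address space. */
    char *view = (char *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
    if (view == NULL) {
        fprintf(stderr, "MapViewOfFile failed: %lu\n", GetLastError());
        CloseHandle(hMap);
        return 1;
    }

    /* Any other process that maps the same named object sees the same bytes. */
    strcpy(view, "hello from the first process");

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    return 0;
}

A second process would obtain the same region by calling OpenFileMappingA() (or CreateFileMappingA() with the same name) followed by MapViewOfFile().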
The Win32 API provides several heap-management functions so that a process can allocate and manage a private heap. These functions are HeapCreate(), HeapAlloc(), HeapRealloc(), HeapSize(), HeapFree(), and HeapDestroy(). The Win32 API also provides the HeapLock() and HeapUnlock() functions to enable a thread to gain exclusive access to a heap. Unlike VirtualLock(), these functions perform only synchronization; they do not lock pages into physical memory.

Thread-Local Storage

The fourth way for applications to use memory is through a thread-local storage mechanism. Functions that rely on global or static data typically fail to work properly in a multithreaded environment. For instance, the C run-time function strtok() uses a static variable to keep track of its current position while parsing a string. For two concurrent threads to execute strtok() correctly, they need separate "current position" variables. The thread-local storage mechanism allocates global storage on a per-thread basis. It provides both dynamic and static methods of creating thread-local storage. The dynamic method is illustrated in Figure 22.16. To use a thread-local static variable, the application declares the variable as follows to ensure that every thread has its own private copy:

__declspec(thread) DWORD cur_pos = 0;
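Figure 22.16 is likewise not reproduced in this excerpt; the following minimal sketch shows the dynamic thread-local-storage method under simple assumptions: two worker threads store different values under the same TLS index obtained from TlsAlloc() and read them back independently.

#include <windows.h>
#include <stdio.h>

static DWORD g_tlsIndex;

static DWORD WINAPI worker(LPVOID arg)
{
    /* Each thread keeps its own private value in the shared TLS slot. */
    TlsSetValue(g_tlsIndex, arg);
    Sleep(10);   /* other threads run here without disturbing our value */
    printf("thread %lu sees value %p\n",
           GetCurrentThreadId(), TlsGetValue(g_tlsIndex));
    return 0;
}

int main(void)
{
    g_tlsIndex = TlsAlloc();
    if (g_tlsIndex == TLS_OUT_OF_INDEXES)
        return 1;

    HANDLE t1 = CreateThread(NULL, 0, worker, (LPVOID)(INT_PTR)1, 0, NULL);
    HANDLE t2 = CreateThread(NULL, 0, worker, (LPVOID)(INT_PTR)2, 0, NULL);
    HANDLE threads[2] = { t1, t2 };
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);

    CloseHandle(t1);
    CloseHandle(t2);
    TlsFree(g_tlsIndex);
    return 0;
}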
RUSSIAN FINANCIAL CRISIS The Russian financial crisis, which began in 1998, was caused by both internal and external economic weaknesses. The crisis made underlying economic problems more evident. Pre-existing vulnerabilities included exposure to exchange rate volatility through issuance of United States (US) dollar-denominated bonds, and dependence on an export-oriented economy. During the crisis, halting of foreign demand for Russian metals and energy led to a severe downturn and a sudden liquidation of Russian assets. External shocks from the Asian financial crisis exacerbated the crisis and eventually necessitated an IMF bailout. Causes of the Crisis The Russian financial crisis was one in a series of crises after Asia, but was also rooted in fiscal shortcomings that began prior to 1998. The Soviet Union had disintegrated in large part due to bankruptcy of the Soviet state, and Russia continued to struggle with the debt crisis that this created (Vavilov 2010). Without fiscal reform, the government struggled to operate effectively. The cash tax collection of September 1996 was disastrous and hence government wages and social expenditures could not be paid, resulting in a vicious cycle of non-payments (Gilman 2010). Government expenditures meanwhile only increased through 1996. Tax arrears were essentially subsidies to the debtor institutions. Weak fiscal performance contributed to high interest rates and political uncertainty. The inability to collect sufficient taxes to fund government spending repressed further economic reform. Poorly designed tax rules and tax administration, and the pervasiveness of criminal gangs who both collected “taxes” and provided protection, led to severe fiscal shortfalls. Government budgetary expenditure was also undisclosed, preventing external advisors from helping matters. Tax revenues in 1997 were again disappointing, leaving the government in a quandary over its budgeted expenditures. Financial liberalization did not help matters. Current account convertibility was introduced in 1996, while capital controls were easily averted. A large amount of foreign money flowed into the stock market in 1996 as investors expected high returns. The formal granting of permission to foreigners to purchase Russian government bonds prompted a surge in foreign investment in 1996 and 1997 (Buchs 1999). GKO1 government bonds were purchased in large amounts, at $1.6 billion in 1996, and more than $4 billion in both Q1 1997 and Q2 1997. Banks, weak institutions that lacked true independence, acted as a conduit for government debt investment (Pinto and Ulatov 2010). Banking liabilities accumulated (Perotti 2002). What is more, through 1997, political instabilities mounted as President Boris Yeltsin’s health deteriorated and many government officials were sacked (Gilman 2010). President Yeltsin’s ratings were low to begin with, as Communists and Nationalists opposed Reformers. Most observers were aware that decisions were made (funds and projects appointed, state assets distributed) according to insider preference rather than economic or politi?cal efficiency. In addition, corporate governance was very poor due to the privatization process, and firms were still in a process of adjustment to the new economic circumstances. 
These destabilizing events occurred even though macroeconomic fundamentals improved: the trade surplus was moving toward balance, the IMF and World Bank continued to provide aid (after rigorous negotiations) to stabilize the economy and prevent ruble devaluation, inflation had fallen, and output was rising (Chiodo and Owyang 2002). As the Asian crisis had shown, macroeconomic fundamentals were no longer sufficient for economic growth or even stability. Hence the domestic conditions were ripe for crisis. In addition, the Russian financial crisis of 1998 was triggered in part by contagion from Southeast Asia. Contagion from the Asian financial crisis threw the country into a downturn. In late September 1997, Korean and Brazilian investors, experiencing crisis at home, withdrew from Russian assets to cover their positions at home (Gilman 2010). By October 1997, investors nervous about contagion from the Asian crisis began to pull out of the stock and bond (GKO) markets. Foreign bondholders began to abandon GKOs. Most owners of the GKOs were foreign investors and the large Russian domestic banks (Sutela 1999). At the end of 1997, yields began to rise on Russian debt as the government significantly increased the amount auctioned. Events of the Crisis In November 1997, the Russian central bank had lost 25 percent of its foreign reserves. Investors started pulling their investments out of the Russian stock market, which depressed equity prices and put further downward pressure on the currency. Due to concerns about emerging markets caused by the Asian financial crisis, the ruble went under a speculative attack at the end of 1997 and the beginning of 1998, and a net outflow of funds from the government bond market occurred, causing rating agencies to downgrade Russia’s outlook. In response, the Russian Central Bank (CBR) raised interest rates to boost investor confidence and help defend the ruble against external pressure, but bankers opposed this tightening of monetary policy. The stock of government securities became larger than the ruble money stock by 1998. Sberbank held up to 40 percent of the GKO stock and most household ruble savings, which were used to pay the public deficit. The outflow of funds from the bond market continued after President Yeltsin limited foreign ownership in the national electricity company in May 1998, and after the anti-crisis plan was opposed in the State Duma (Buchs 1999). To make matters worse, falling oil prices reduced Russia’s oil revenue (Chiodo and Owyang 2002). Through 1998, some large banks were undertaking extensive risks, borrowing large amounts in foreign exchange from abroad to make profits from high-yielding GKOs and purchase the foreign exchange on maturity to repay the loan (Gilman 2010). The current account balance fell and then turned negative in the first half of 1998 (Desai 2003). The political situation within Russia continued to deteriorate, with the Duma rejecting policies that would conform to IMF loan covenants. Government churning brought in Sergey Kiriyenko as Prime Minister with an inexperienced new team. Fiscal imbalances continued, and the government attempted to collect more taxes in cash, reducing banks’ and firms’ liquidity. The central bank attempted to stave off a potential devaluation crisis by raising the lending rate to banks and decreasing the growth of the money supply, both of which had unintended adverse consequences on government revenues and liquidity. 
Without a concrete solution to Russia’s financial troubles, investors started to grow impatient and withdraw their funds. This led yields on three-month GKOs to rise to 50 percent in 1998, and to 90 percent later that same month. New Russian debt was issued at successively higher interest rates, which further undermined investor confidence. Banks came under scrutiny after Tokobank found itself unable to meet margin calls against collateral held to secure foreign credits. Interbank loan defaults ensued as several large banks, including Tokobank and SBS-Agro, became insolvent (Perotti 2002). The large-scale loss of confidence in Russia’s economy put the ruble under serious pressure. In response, the exchange rate had to be fiercely defended, and the Russian stock market fell 20 percent. Soon after, Russia and the IMF were able to reach an agreement to release $670 million to bolster the economy. In July 1998, facing a weighted average interest rate on GKOs of 126 percent, Kirienko canceled GKO auctions and offered to convert outstanding bonds into medium- and long-term notes denominated in dollars. The conversion had the following features: it was to be voluntary and market-based, allowing swaps only on GKOs maturing before July 1, 1999. Those wanting to convert their bonds could receive an equal amount in terms of market value of 7- and 20-year dollar eurobonds (Pinto and Ulatov 2010). The conversion restored a degree of confidence in the economy and prompted the IMF, the World Bank, and the Japanese government to offer $22.6 billion in assistance. The weighted average yield of outstanding GKOs fell to 53 percent. The reforms agreed to as conditions for the IMF loan were again stalled by the Duma, leading the IMF to scale back assistance. The failure to push reforms through the Duma demonstrated Russia’s political weakness. The Russian economy then had to deal with a liquidity crisis. Russian banks received loans from abroad, and in exchange posted GKOs as collateral. As Russian banks began to sell off the government debt to exchange for foreign currency to meet the margin calls, global markets became nervous. Sberbank itself redeemed all of its GKO holdings falling due in July for 12.4 billion rubles ($1.28 billion) (Gilman 2010). Foreign currency reserves continued falling, from $19.5 billion in July 1998 to $16.3 billion in August 1998. The ruble was still imperiled from loss of foreign investor confidence (Buchs 1999), and Russian-era external debt had increased by more than $16 billion between June 1 and July 24, 1998 (Pinto and Ulatov 2010). On August 13, George Soros wrote in the Financial Times that Russia’s crisis was in the “terminal” stage and called for a devaluation of the currency and the creation of a currency board to keep the ruble pegged to the dollar or a European currency (Gilman 2010). This caused panic among global investors and Russia’s sovereign foreign debt was downgraded to junk bond status. Despite the fact that the central bank extended emergency credits to banks, the stock exchange and the ruble collapsed. The ruble, which had remained relatively stable for three years beforehand, lost most of its value. On August 23, the Russian cabinet resigned, effectively annulling any outstanding agreements with the IMF. This frightened markets, and the sell-off continued. On August 31, 1998, $1 could be exchanged for 7.905 rubles. On September 9, 1998, $1 could be exchanged for 20.825 rubles. In 1998, the Russian stock market lost 89 percent of its value. 
Figure 7.1 shows the sharp increase in the ruble-dollar exchange rate. The sharp ruble devaluation exacerbated the banking crisis and household deposits were frozen to prevent further bank runs. The Central Bank shifted private deposits to Sberbank. The lack of bank transparency contributed to a liquidity crisis (Sutela 1999). To exacerbate matters, the collapse of the GKO assets wiped out bank assets, which caused a solvency crisis. Insolvent banks were not declared bankrupt, and bank owners engaged in asset-stripping (Perotti 2002). As costs rose, imports of consumer goods came to a halt. Consumers hoarded food as Russians panicked.

Outcomes of the Crisis

The crisis lowered living standards even further and added to the personal woes of the population. Although exporters gained from the currency devaluation, there was a further sharp decline in real wages. In real terms, household income fell by 20 percent due to the crisis, while the average amount of government transfers fell by 18 percent, and help from relatives declined by 40 percent. The poverty rate increased from 22 percent to 33 percent. Using household survey data, Lokshin and Ravallion (2000) confirm that welfare declined as a result of the crisis. Problems with wage and other payments remained.

Russia quickly recovered from the crisis as world oil prices rose in 1999 and 2000. In addition, the new administration under Yevgeny Primakov used monetary financing and currency controls to restore basic financial services (Shppel 2003). The administration also engaged in aggressive fiscal tightening. Rapid import substitution occurred as domestic costs fell in comparison to those of international competitors. This shifted up the merchandise trade surplus. Output rebounded, inflation slowed, interbank payments were restored, and federal government revenue collection quickly rebounded. As a result of the crisis, Russia’s privatization process was stalled and the need for tax reform was highlighted. Some viewed the economic liberalization process as a mistake, while most agreed that better reform practices were in order. Clearly, Russia’s difficult transition from a planned to a market economy was made even more difficult by weaknesses imposed due to financial globalization, in which the economy was exposed to external capital flows and global contagion (Pinto and Ulatov 2010). The Russian crisis also underscored the premise that sound macroeconomic fundamentals were insufficient for a positive investment climate; microeconomic and structural economic conditions also matter.

Political Economy of the Russian Crisis

The Russian crisis was seen as a turning point in Russia’s development after the break-up of the Union of Soviet Socialist Republics (USSR). Some believed, at the time, that Russia would enter a longer period of crisis due to severe economic fragilities, although this did not come about (Robinson 2007). The economy was moving away from outright dysfunction, with negative value-added production, to a market-based system. Economic churning occurred alongside political churning. The political elite was increasingly divided, especially between center and local leaders, as some reforms failed. Indeed, the Russian crisis was exacerbated by the sharp turnover in the Russian government in 1998, when President Boris Yeltsin fired the entire government and appointed Sergey Kiriyenko Prime Minister.
Kiriyenko was in office for only a short period, from March 1998 until August 1998, when he was fired. Prime Minister Kiriyenko was known as a reformer, and was necessarily at odds with the oligarchs in power. Yet it was the oligarchs who supported President Yeltsin, and their presence in the parliament halted legislation. President Yeltsin in return began to legislate by decree. Within this period of conflict, the executive branch, the Duma, and the Central Bank of Russia were all at odds. The Duma was forced to confirm Kiriyenko as Prime Minister in April 1998, the Central Bank Chair Sergei Dubinin signaled a potential debt crisis which was read as impending devaluation, Kiriyenko claimed that the government was “quite poor now,” and Lawrence Summers, Deputy Treasury Secretary, was turned away from meeting with Kiriyenko by his aide in a political gaffe (Chiodo and Owyang 2002). By the time the IMF left Russia without reaching an agreement on an austerity plan in May 1998, investor sentiment had taken a sharp blow. Prime Minister Viktor Chernomyrdin was reappointed by President Yeltsin after Kiriyenko was dismissed, but the parliament rejected him and nominated their own candidate, Yevgeny Primakov. This defeat for the President exacerbated the political crisis, especially because Primakov lacked experience in managing economic affairs. However, young reformers continued to comprise about half of the ministries, maintaining the path of reform. Political volatility continued even as the crisis subsided. We now turn to the Brazilian financial crisis, which was triggered by contagion from the Asian and Russian financial crises.
Subhash Chandra Bose Subhash Chandra Bose (23 January 1897 – 18 August 1945*) Respected in India as Netaji (Hindustani: "Respected Leader"), was an Indian nationalist and prominent figure of the Indian independence movement, who attempted during World War II to rid India of British rule with the help of Nazi Germany and Japan. Bose was a twice-elected President of the Indian National Congress, founder and President of the All India Forward Bloc, and founder and Head of State of the Provisional Government of Free India, which he led alongside the Indian National Army from 1943 until his presumed death in 1945. Bose is best known for his advocacy and leadership of an armed struggle for Indian independence against the British Empire, as well as his early calls for Purna Swaraj, or complete self-rule, for the people of India. In 1964, the CIA still believed that Bose was alive! According to media reports, declassified documents showed that the Central Intelligence Agency was told in 1964 that Bose survived an air crash of 1945. The documents also showed that the US spy agency was not convinced of the veracity of the official Japanese version. The reports said that in May 1946, a CIA agent wrote to the US secretary of state saying he had been told that "should (Bose) return to the country, trouble would result which would be extremely difficult to quell". The CIA document said: "There now exists a strong possibility that Bose is leading a religious group undermining the current Nehru government." |Indian Schoolgirl paying homage to Subhash Chandra Bose, the National Hero in Punjab| Leader of Indian National Congress Bose came fourth in the Indian Civil Services (ICS) examination in England but he did not want to work under the occupying British government. He resigned from his civil service job on 23 April 1921 and returned to India. He started the newspaper Swaraj and took charge of publicity for the Bengal Provincial Congress Committee. His mentor was Chittaranjan Das who was a spokesman for assertive nationalism in Bengal. In the year 1923, Bose was elected the President of All India Youth Congress and also the Secretary of Bengal State Congress. In a roundup of nationalists in 1925, Bose was arrested and sent to prison in Mandalay,Burma where he contracted tuberculosis. In 1927, after being released from prison, Bose became general secretary of the Congress party. |Bose as Leader of Bengal Congress| Bose was again arrested and jailed for civil disobedience; this time he emerged to become Mayor of Calcutta in 1930. In this period, he also researched and wrote the first part of his book The Indian Struggle, which covered the country's independence movement in the years 1920–1934. Although it was published in London in 1935, the British government banned the book in the colony out of fears that it would encourage unrest. By 1938 Bose had become a leader of national stature and agreed to accept nomination as Congress President. Disagreement with Gandhi- Nehru Bose stood for unqualified Swaraj (self-governance), including the use of force against the British. He was vocal in his opposition to Gandhi's appeasing diplomacy with the British. This meant a confrontation with Mohandas Gandhi, who in fact opposed Bose's presidency, splitting the Indian National Congress party. |Bose and Gandhi| The rift also divided Bose and Nehru. Jawahar Lal Nehru was becoming increasingly envious for Bose's popularity. Bose was elected president again over Gandhi's preferred candidate Pattabhi Sitaramayya. 
However, due to the manoeuvrings of the Gandhi-led clique in the Congress Working Committee, Bose found himself forced to resign from the Congress presidency. |Bose and Nehru| World War II an India On the outbreak of war, Bose advocated a campaign of mass civil disobedience to protest against Viceroy Lord Linlithgow's decision to declare war on India's behalf without consulting the Congress leadership. Having failed to persuade Gandhi of the necessity of this, Bose organised mass protests in Calcutta calling for the 'Holwell Monument' commemorating the Black Hole of Calcutta, which then stood at the corner of Dalhousie Square, to be removed. He was thrown in jail by the British, but was released following a seven-day hunger strike. |Bose in Lahore, Punjab| Bose, who had been arrested 11 times by the British in India, had fled the Raj with one mission in mind. That was to seek Hitler's help in pushing the British out of India. Bose's arrest and subsequent release set the scene for his escape to Germany, via Afghanistan and the Soviet Union. Bose escaped from under British surveillance at his house in Calcutta on 19 January 1941, accompanied by his nephew Sisir K. Bose in a car that is now on display at his Calcutta home. Meeting with Hitler Supporters of the Aga Khan III helped him across the border into Afghanistan assuming the guise of a Pashtun insurance agent ("Ziaudddin"). After reaching Afghanistan, Bose changed his guise and traveled to Moscow on the Italian passport of an Italian nobleman "Count Orlando Mazzotta". From Moscow, he reached Rome, and from there he traveled to Germany. The link between Nazi Germany and ancient India, goes deeper than the swastika symbol. The Nazis venerated the notion of a “pure, noble Aryan race,” who are believed to have invaded India thousands of years ago and established a society based on a rigid social structure, or castes. Perhaps the most fervent Nazi adherent to Indian Hindu traditions was Heinrich Himmler, one of the most brutal members of the senior command. Neta ji met Adolf Hitler on May 29, 1942 at the Reich Chancellery. Hitler shared hie East strategy and how Germany could liberate India after defeating Russia. |Bose with Hitler| Netaji met the higher officials of the Foreign Department on April 3, 1941, and expressed his desire to form an 'Indian Government in Exile' and expected its immediate diplomatic recognition from the Axis Powers. He was keen to form an Indian Army with the Indian prisoners of war from North Africa. As requested, he submitted a draft proposal on April 9, 1941 which contained the following: - The Axis Powers would sign a treaty with the ‘Free Indian Government in Exile’ guaranteeing India's independence from British rule once the war was won - The Indian Army would consist of 50,000 soldiers of Indian origin - After liberating India, Germany would hand over responsibility to the Government in Exile headed by Netaji himself. In Germany, Bose founded the Free India Center in Berlin, and created the Indian Legion (consisting of some 4500 soldiers) out of Indian prisoners of war who had previously fought for the British in North Africa prior to their capture by Axis forces. The Indian Legion was attached to the Wehrmacht, and later transferred to the Waffen SS. |Indian Legion of Axis Forces| Bose lived in Berlin from 1941 until 1943. During his earlier visit to Germany in 1934, he had met Emilie Schenkl, the daughter of an Austrian veterinarian whom he married in 1937. Their daughter is Anita Bose Pfaff. 
|Bose with his wife Emilie Schenkl| Japan Enters WWII The Japanese declaration of war against Great Britain and the US on December 7, 1941, coupled with the advance of the Japanese army towards the Indian frontier radically altered the war situation. Bose traveled with the German submarine U-180 around the Cape of Good Hope to the southeast of Madagascar, where he was transferred to the I-29 for the rest of the journey to Imperial Japan. |Bose on his journey to Japan on a submarine| On arrival in Japan, he was appointed as the leader of Azad Hind Fauj or Indian National Army (INA) founded by General Mohan Singh and Pritam Singh Dhillon in consultation with Indian revolutionary Rash Behari Bose. . The Indian National Army (INA) was the brainchild of Japanese Lieutenant-General Iwaichi Fujiwara, head the Japanese intelligence unit Fujiwara Kikan and had its origins in Indian Independence League, founded by Pritam Singh Dhillon. Fujiwara's mission was "to raise an army which would fight alongside the Japanese army. After the initial proposal by Fujiwara the Indian National Army was formed as a result of discussion between Fujiwara and Mohan Singh in the second half of December 1941, and the name chosen jointly by them in the first week of January 1942. |Captain Mohan Singh and Rash Behari Bose with INA| Bose was able to reorganize the fledgling army and organize massive support among the expatriate Indian population in south-east Asia, who lent their support by both enlisting in the Indian National Army, as well as financially in response to Bose's calls for sacrifice for the independence cause. |Bose with INA| Spoken as a part of a motivational speech for the Indian National Army at a rally of Indians in Burma on 4 July 1944, Bose's most famous quote was "Give me blood, and I shall give you freedom!" In this, he urged the people of India to join him in his fight against the British Raj. Spoken in Hindi, Bose's words are highly evocative. The troops of the INA were under the aegis of a provisional government, the Azad Hind Government, which came to produce its own currency, postage stamps, court and civil code, and was recognized by nine Axis states—Germany, Japan, Italy, the Independent State of Croatia, Wang Jingwei regime in Nanjing, China, a provisional government of Burma, Manchukuo and Japanese-controlled Philippines. |Bose in Japan| INA took active part in defeating British Army in Singapore, Malaysia, Burma, and occupied Indian islands Andaman and Nicobar. The INA's final commitment was in the Japanese thrust towards Eastern Indian frontiers of Kohima in Manipur. |Bose inspecting INA Forces with Capt. Mohan Singh| After the atomic bomb attack on Hiroshima and Nagasaki, the Japanese surrendered to the Americans. The Japanese funding for the INA army diminished, and Commonwealth forces held their positions in Kohima and then counter-attacked, in the process inflicting serious losses on the besieging forces. A large proportion of the INA troops surrendered under Lt Col Loganathan and Bose was forced to retreat to Burma and then to Malaysia. Mystery Surround Death of Bose On 16 August 1945, Bose left Singapore for Bangkok, Thailand. On the 17th morning, he flew from Bangkok to Saigon, now Ho Chi Minh City. On the 17 August afternoon, he flew from Saigon to Tourane, French Indo-China, now Da Nang, Vietnam. Early next morning at 5 AM, he left Tourane for Taihoku, Formosa, now Taipei, Taiwan. 
At 2:30 PM on 18 August, he left for Dairen, Manchukuo (now Dalian, China), but his plane crashed shortly after takeoff. Lieutenant-General Tsunamasa Shidei, the Vice Chief of Staff of the Japanese Kwantung Army, who was to have conducted the negotiations for Bose with the Soviet army in Manchuria, was also killed. Bose, along with the other survivors, was treated in a Japanese military hospital. |Obituary for Bose and Shidei| In spite of the treatment, Bose went into a coma. A few hours later, between 9 and 10 PM (local time) on Saturday 18 August 1945, Subhas Chandra Bose, aged 48, was dead. Bose's body was cremated in the main Taihoku crematorium two days later, on 20 August 1945. On 23 August 1945, the Japanese news agency Domei announced the deaths of Bose and Shidei. On 7 September a Japanese officer, Lieutenant Tatsuo Hayashida, carried Bose's ashes to Tokyo, and the following morning they were handed to the president of the Tokyo Indian Independence League, Rama Murti. On 14 September a memorial service was held for Bose in Tokyo, and a few days later the ashes were turned over to the priest of the Renkōji Temple of Nichiren Buddhism in Tokyo. There they have remained ever since. |Bose Monument in Renkōji Temple, Tokyo| Among the INA personnel, there was widespread disbelief, shock, and trauma. Most affected were the young Tamil Indians from Malaya and Singapore, both men and women, who comprised the bulk of the civilians who had enlisted in the INA. The professional soldiers in the INA, most of whom were Sikhs, faced an uncertain future, with many expecting reprisals and courts-martial from the British. The Indian National Congress's official line was succinctly expressed in a letter Mohandas Karamchand Gandhi wrote to Rajkumari Amrit Kaur: "Subhas Bose has died well. He was undoubtedly a patriot, though misguided." Gandhi and Nehru had not forgiven Bose for questioning Gandhi and for collaborating with the enemies of the British. The British Raj tried 300 INA officers for treason in the INA trials, but the proceedings were eventually overtaken by Indian independence. According to one popular version of events, Netaji Subhas Chandra Bose died in an air crash in Taiwan in 1945. But many of his relatives, friends and followers have disagreed with this narrative, forcing the Indian government to commission three different inquiries into the event between 1956 and 1999. Most of Bose's lieutenants who had accompanied him on his travels were not allowed to get on the plane with him. They never saw a body. No photographs were taken of Bose after the crash, there are no photos of the body, and there is no death certificate. So it is possible to argue that the Japanese faked his death to allow him to escape the advancing British army. A few years after Netaji's disappearance, reports emerged that he had returned to India and lived in the disguise of a sadhu in north India. Some reports even claimed that this sadhu was sighted at Jawaharlal Nehru's funeral, though no such claim could ever be substantiated. Though the sadhu story was never proved, it resurfaced when the Mukherjee Commission (1999-2005), led by Supreme Court judge M K Mukherjee, explored the possibility of Netaji living in the guise of a hermit in India. The report brought to light a sadhu named Gumnami Baba or Bhagwanji living in Uttar Pradesh. The Mukherjee Commission's report, which questioned the claim that Netaji died in a plane crash, was, however, rejected by the government.
This conspiracy theory has become a popular topic of discussion in newsrooms and book titles. Dr. Purabi Roy, a scholar of Russian studies, alleges in her book The Search for Netaji: New Findings that Bose was captured by the USSR during the Second World War, since the Allied powers viewed him as a war criminal because of his close relationship with Japan, and that he later died in Siberia in Soviet captivity. Another version of the theory holds that an Indian monk named Bhagwanji, or Gumnami Baba, who lived in Faizabad near Ayodhya, bore a very close resemblance to Subhas Chandra Bose and was in fact Netaji living incognito. Bhagwanji died on September 16, 1985. The truth about Netaji is out there. Subhash Chandra Bose remains in the hearts of Indians as a heroic figure. We Punjabis have a special love for the man who presented a masculine response to the British occupation of India, compared to the impotent non-violence of the Gandhians. The TRUTH will emerge one day. As I see it, the starting point is Russia. The endgame could be in Russia or Faizabad. This is the point we have reached in the veritable rabbit hole of modern India's longest-running mystery.
This is Part 1 of Henry George’s life story. Part 2 can be found here. For more information about how land reform can create meaningful work, restore our ecology, and bring more wealth into our local communities, I invite you to read my book Land: A New Paradigm for a Thriving World. “I came near starving to death, and at one time I was so close to it that I think I should have done so but for the job of printing a few cards which enabled [me and my family] to buy a little corn meal.” The young Henry George was no stranger to suffering and destitution. He was intimately familiar with abject poverty and misery. The year was 1864, and America was reeling from a civil war, yet was also at the brink of establishing itself as a worldwide industrial powerhouse; for some, untold fortunes were to be had, while for the majority of people—millions—one recession after another left them out of money, out of jobs, and out of homes. Economic recessions and depressions were as much a part of reality then as they are today. The recession of 1864 wasn’t the first that threw Henry George into the proverbial gutter, nor would it be the last. And as he struggled, he asked himself why economic recessions and depressions happened in the first place when society as a whole was continuously becoming more and more prosperous. He resolved to find out why. He began a years-long deep and abiding search for the real causes of poverty. In his investigation, he left no stone unturned, no assumption unquestioned, and no established authority held sacred. During these years of grueling search, he recalled a casual remark once made by an older coworker: “in a new country wages are always high, while in an old country they are always low.” He realized this observation was correct: he knew from first-hand experience that wages were generally higher in the United States and Australia than in England, and the pattern held in the newer parts of the same country—wages were higher in Oregon and California, for example, than in New York and Pennsylvania. This was the first of several key insights that began to shape an economic theory that would become so revolutionary and groundbreaking that it would be praised for its clarity and wisdom by statesmen, economic Nobel laureates, and freethinkers from around the world. Little by little, during his dark years of searching and struggling for an answer to the seemingly unending cycles of scarcity and lack, several other unrelated experiences came together in his mind and formed a coherent model of socioeconomic reality. Once, while on a mining expedition, Henry George asked his fellow miners, who were complaining about the immigration of cheap labor from China, why they had an issue with the Chinese working the mines that were not commonly worked by Americans. The Chinese, in his view, didn’t pose any competition to established miners such as the group he was with. “No harm now,” responded one of the miners, “but wages will not always be as high as they are today in California.
As the country grows, as people come in, wages will go down, and some day or other white people will be glad to get those diggings that the Chinamen are working.” Another revolutionary thought stuck with him from that day forward: with progressive economic development, wages for the lower economic classes of people don’t rise in relation to the cost of living, while the higher economic classes tend to become wealthier over time. It wasn’t until many years later that he—in a flash of inspiration—finally discovered the ultimate secret at the root of this widely-observable economic pattern that is responsible for most of the wealth inequality we have in our world today. Henry George—who at various times in his life had been a day laborer, a deckhand, a miner, a printer, and a journalist—eventually formulated his thoughts into Progress and Poverty, the book that became the bestselling book of his time. His economic explanations were so logically coherent that, in the words of his biographer, “he had not met with a single criticism or objection that was not fully anticipated and answered in the book itself. For years he debated its basic positions with anyone who cared to try, and was never worsted.” Prominent men and women from around the world have endorsed George and his work. Albert Einstein once remarked that “men like Henry George are rare, unfortunately. One cannot imagine a more beautiful combination of intellectual keenness, artistic form, and fervent love of justice.” Helen Keller, too, praised Henry George: “Who reads shall find in Henry George’s philosophy a rare beauty and power of inspiration, and a splendid faith in the essential nobility of human nature.” Woodrow Wilson, the former U.S. President, once said that “this country needs a new and sincere thought in politics, coherently, distinctly and boldly uttered by men who are sure of their ground. The power of men like Henry George seems to me to mean that.” Many prominent economists to this day endorse Henry George’s teachings: Joseph Stiglitz, economist and Nobel laureate, said that “the main, underlying idea of Henry George… is an argument that makes an awful lot of sense.” In 1871, on the day Henry George experienced the revelation that was later to become systematically formulated in his first book Progress and Poverty, he went on an afternoon horseback ride in the San Francisco Bay area. It was on that ride that he had his sudden flash of revelation and realized the solution to his life’s most puzzling enigma—and to humanity’s most dire misappropriation. He describes it as such: “I asked a passing teamster, for want of something better to say, what land was worth there. He pointed to some cows grazing so far off that they looked like mice, and said, ‘I don’t know exactly, but there is a man over there who will sell some land for a thousand dollars an acre.’ Like a flash it came over me that there was the reason of advancing poverty with advancing wealth.
With the growth of population, land grows in value, and the men who work it must pay more for the privilege.”

There it was: whenever a population converges around a certain location, the land, of which there is only a limited supply for each location, becomes more expensive to live on; people have to increasingly pay to live on land, and this in turn affects the entire economy. George’s insight that day articulated one of the root causes not only of economic inequality, but of a great number of social ailments that still plague society today, from booms and busts to widespread unemployment, environmental destruction, urban sprawl, suburban dystopia, and rural wastelands.

He was not the only economist who realized that the root cause of economic injustice was to be found in the theft of land: others before him had independently arrived at similar conclusions—specifically, in the words of Albert Jay Nock, one of Henry George’s biographers, there was “the French school known as the Physiocrats, which included Quesnay, Turgot, du Pont de Nemours, Mirabeau, le Trosne, Gournay. They even used the term l’impôt unique—the single tax—which George’s American disciples arrived at independently, and which George accepted. The idea of confiscating rent [i.e. sharing the value of land] also occurred to Patrick Edward Dove at almost the same time that it occurred to George. It had been broached in England almost a century earlier by Thomas Spence, and again in Scotland by William Ogilvie, a professor at Aberdeen. George’s doctrine of the confiscation of social values was also explicitly anticipated by Thomas Paine, in his pamphlet called Agrarian Justice.”

At one point in his life, Henry George was considered the third most famous American, behind Mark Twain and Thomas Edison. His book Progress and Poverty had by then sold into the millions, and he was traveling around the world, deftly expounding upon his visionary insights with the fierce passion of a man touched by a transcendent realization that he would devote the rest of his life to sharing. And yet, his renown diminished soon after his death; today, Henry George—once known as “the prophet of San Francisco”—is no longer well known to the general consciousness of the public, despite the significance of his discoveries.

Sources: Albert Einstein’s letter to Anna George De Mille, Henry George’s daughter, in 1934, reprinted from Land and Freedom, May-June 1934; Land & Liberty: Monthly Journal for Land Value Taxation and Free Trade, 1935; Woodrow Wilson: Life and Letters, by Ray Stannard Baker; October 2002 interview with Christopher Williams of the Robert Schalkenbach Foundation, published in Geophilos, Spring 2003.

MARTIN ADAMS is a systems thinker and author. As a child, it pained him to see most people struggling while a few were living in opulence.
This inspired in him a lifelong quest to co-create a fair and sustainable world in collaboration with others. As a graduate of a business school with ties to Wall Street, he opted not to pursue a career on Wall Street and chose instead to dedicate his life to community enrichment. Through his social enterprise work, he saw firsthand the extent to which the current economic system causes human and ecological strife. Consequently, Martin devoted himself to the development of a new economic paradigm that might allow humanity to thrive in harmony with nature. His book Land: A New Paradigm for a Thriving World is the fruit of his years of research into a part of this economic model; its message stands to educate policymakers and changemakers worldwide. Martin is technical director at Progress.org.
Fire Alarm Systems Having the proper fire safety systems is crucial to providing safety to your facility and its occupants. The first line of defense in a fire is an active alert or alarm system. It is important as a building owner to understand the difference between a smoke alarm and smoke detector. A smoke alarm is single unit that is disconnected from an overall system. It has an alarm and detector all in one. A smoke detector is connected to an overall fire alarm system is usually works in conjunction with other detectors throughout your building. It is recommended to have a complete fire alarm system within your building for optimal safety. The following information is an excerpt from the National Electrical Manufacturers Association’s Guide for Proper Use of System Smoke Detectors: Smoke detectors offer the earliest warning of fire possible. They have saved thousands of lives in the past and will save more in the future. For this reason, detectors should be located on every level of a building. Ionization detectors are better at detecting fast and flaming fires than slow, and smoldering fires. Photoelectric smoke detectors sense smoldering fires better than flaming fires. To provide effective early warning of a developing fire, fire (smoke) detectors should be installed in all areas of the protected premises. Total (complete) coverage, as described by NFPA 72, should include all rooms, halls, storage areas, basements, attics, lofts, and spaces above suspended ceilings, including plenum areas utilized as part of the HVAC system. In addition, this should include all closets, elevator shafts, enclosed stairways, dumbwaiter shafts, chutes, and other subdivisions and accessible spaces. Fire detection systems installed to meet local codes or ordinances may not be adequate for early warning of fire. A user should weigh the costs against the benefits of installing a total (complete) fire detection system when any detection system is being installed. “Total Coverage,” as described in NFPA 72, is a complete fire detection system. In some of the specified areas of coverage, such as attics, closets, and under open loading docks or platforms, a heat detector may be more appropriate than a smoke detector. Careful consideration should be given to the detector manufacturer’s instructions and the following recommendations: Smoke detectors shall not be installed if any of the following ambient conditions exist: - Temperature below 32°F (0°C) - Temperature above 100°F (38°C) - Relative humidity above 93 percent - Air velocity greater than 300 ft/min (1.5 m/sec) Detector Placement—Air Supply and/or Return Placement of detectors near air conditioning or incoming air vents can cause excessive accumulation of dust and dirt on the detectors. This dirt can cause detectors to malfunction. Detectors should not be located closer than 3 feet from an air supply diffuser or return air opening. Where Not to Place Detectors - In damp or excessively humid areas, or next to bathrooms with showers. Tiny water droplets can accumulate inside the sensing chamber and make the detector overly sensitive. - Do not place in or near areas where combustion particles are normally present, such as in kitchens or other areas with ovens and burners; in garages, where particles of combustion are present. - Do not place in or near manufacturing areas, battery rooms, or other areas where substantial quantities of vapors, gases, or fumes may be present. - Do not place near fluorescent light fixtures. 
Electrical noise generated by fluorescent light fixtures may cause unwanted alarms. Install detectors at least 6 feet (1.8 meters) away from such light fixtures To read the full guide, which includes technical guidance on proper practices for fire alarm systems, you may download it for free Prevention of Arcing Ignited Fires The following information is intended to help prevent undesirable electrical arcing from potentially starting a building fire. Fire investigation studies indicate that electrical arcing between wires or between wires and grounded metal surfaces cause 20-40% of all electrically ignited fires. Electrical arcing can occur through loose wire connections, by physical damage to extension cord insulation, wire insulation damaged by long term exposure to moderate or high heat, electrical surges, or even from a misplaced drywall screw or picture hanger nail. Arc Fault Circuit Interrupter (AFCI) protection can detect and interrupt arcing to help prevent fire ignition. The National Electrical Code® requires AFCI protection in new construction, but existing buildings are more prone to faults due to wear and tear on electrical systems. AFCI protection can be installed at the breaker panel or in the first receptacle of each branch circuit. To get the best arc fault protection from AFCI circuit breakers, install combination-type AFCI circuit breakers, not branch/feeder-type AFCI circuit breakers. AFCI devices should be regularly tested for proper operation using the test and reset buttons clearly marked on the devices. AFCI receptacles should be installed within reach so you can easily test and reset the devices on a regular basis to ensure they are functioning properly. To identify if your circuit breakers or outlets contain AFCI protection, look for the “AFCI” identification mark on the circuit breaker or outlet face. If you do not have AFCI protection in your building, install it as soon as possible for added fire protection. Many new buildings will have these advanced arc fault circuit interrupters in place, but even new buildings can increase their level of protection. Updating the circuits from standard devices to those that are designed to detect arcing and sparking that may cause electrical fires provides that additional level of protection against arcing faults. Use a qualified electrician to ensure a proper, safe installation. In photovoltaic (PV) solar electricity generation systems, loose connections have caused a number of building fires. Safe PV systems should be protected with AFCI in the inverter to open the output circuit if arcing occurs. The best solution is not to only detect arcing with AFCI technology, but to identify faults before they begin to arc and cause damage. You can do so by regularly inspecting for hazards and following these tips: Hire a qualified electrical contractor who is trained on the equipment and on electrical safety to help you perform a safety check in your building. When plugging and unplugging appliances, inspect the cords. Look for any signs of damage due to wear and tear including cracked, cut or crushed insulation, discoloration or melting due to heat. Sometimes plugs can be crushed between furniture and the wall, weakening the conductor insulation. Cords can also be damaged through abuse by improperly removing them from the receptacle outlet, being stepped on, or placed under the leg of chairs or other types of furniture. Extension cords should never by placed underneath carpets or area rugs or tucked under baseboard molding. 
Extension cords are not designed as permanent wiring for appliances or equipment. Compare the sum of all electrical ratings of appliance loads with the extension cord capacity. If the load exceeds the cord capacity, then reduce the load for that cord. Inspect cords for visible damage such as strain at plug connections, cuts or crushed insulation, and discoloration or melting caused by high temperature exposure. Dispose of any cords with these dangerous signs.

Regularly inspect appliance cords, connections and parts for signs of damage. Heat damage can appear as pitted or corroded electrical contacts, discolored wire insulation or plastic, and melted or deformed plastic. Ensure you are using appliances in the manner intended by the manufacturer. Consult the manufacturer if needed. Repair or replace questionable appliances.

An arc fault is an unintended arc caused by current flowing through an unintended path. Usually this is caused by damage to the electrical conductor insulation. These arcs can cause intense heat at the point of the arc, resulting in property damage and fire. These fires can cause extensive damage to property as well as loss of life. According to the National Fire Protection Association (NFPA), arc-faults are “the principal electrical failure mode resulting in fire.” Arc-fault circuit interrupters (AFCIs) come in both outlet and circuit breaker forms and operate by using advanced technology to detect dangerous arcing conditions while allowing normal arcing such as is caused by a light switch or electric motor. These devices provide an increased level of protection from electrical fires for your building.

An AFCI can detect an arc-fault, which creates high-intensity heat that could ignite a building’s inner structural walls or insulation. Arc-faults can occur from damaged, overheated, or stressed electrical wiring, worn or old electrical insulation, and damaged appliances. When an arc-fault is detected, the AFCI can immediately shut off the power to that circuit before an electrical fire has a chance to start. While AFCIs are not typically required for commercial buildings, many architects, engineers, and building owners recognize the added safety that they provide. AFCIs can be found at electrical distributors, hardware stores, and home centers. Even though AFCIs cost a little more than traditional outlets and circuit breakers, they provide an extra level of protection to your building and business. Make sure to test your AFCI devices once a month.

Available in both single- and multiple-gang versions, in both steel and nonmetallic fabrications, fire-rated floor boxes preserve the two-hour fire rating of floors in which they have been installed. When properly installed, Fire Classified Floor Boxes can save time and money for general contractors and installers by eliminating the need for spraying to fireproof floors. Be sure to only install floor boxes that meet or exceed the UL Fire Classification Standard for Floor Boxes (look for the UL or Warnock Hersey mark on the product).

In order to protect the integrity of a firewall, without adding caulk or putty after cables are installed, you can use a fire-rated, thru-wall fitting. These fittings meet UL tests for flame, temperature, and smoke, as well as for use in air handling spaces (plenums), and are made with fire-stopping intumescent material with an enclosed thru-wall penetration.
Once installed, these fittings let you add or remove cables easily, without risking the unseen and potentially dangerous gaps or voids that can occur with caulks or putty. If the temperature reaches approximately 375°F, the material expands, creating a hard char that fills voids around the cables and stops flame from penetrating the opening. This prevents further damage to the cabling and your networks. Be sure to only install devices that meet or exceed UL standards (look for the UL or Warnock Hersey mark on the product).
A short YouTube version is available here.

In the previous lesson, lesson 26, we looked at the passive voice. The passive voice conjugation uses the passive sign “ya य”, which is added to a modified root. The verb is then conjugated like a thematic verb (for example “labh”), but only with the middle voice endings. Using this stem, all the tenses and modes can be conjugated – the present indicative, the imperfect, the optative and the imperative. Regular present participles in “māna मान” can also be formed.

In this lesson, we will look at the simple future, the periphrastic future and the conditional. The simple future is called lr̥ṭ लृट्, the periphrastic future is called luṭ लुट् and the conditional is called lr̥ṅ लृङ् by the Sanskrit grammarians.

The simple future (lr̥ṭ लृट्)

The simple future stem is formed by adding sya स्य or iṣya इष्य to the guṇa-strengthened root. This is true for both thematic and athematic verbs. Thus from the root √bhū √भू “be”, we get bhaviṣya भविष्य; from √labh √लभ्, we get labhiṣya लभिष्य; from √dā √दा “give”, we get dāsya दास्य; from √i √इ “go”, we get eṣya एष्य; from √duh √दुह् “milk”, we get dhokṣya धोक्ष्य (see Grassmann’s Law at work here); from √r̥dh √ऋध् “thrive”, we get ardhiṣya अर्धिष्य etc. Once the stem is formed, it takes the thematic endings to form the conjugation.

Simple future active (परस्मै पदम् parasmai padam) of root √bhū √भू

| Person | Singular | Dual | Plural |
| 3rd | bhaviṣyati भविष्यति | bhaviṣyataḥ भविष्यतः | bhaviṣyanti भविष्यन्ति |
| 2nd | bhaviṣyasi भविष्यसि | bhaviṣyathaḥ भविष्यथः | bhaviṣyatha भविष्यथ |
| 1st | bhaviṣyāmi भविष्यामि | bhaviṣyāvaḥ भविष्यावः | bhaviṣyāmaḥ भविष्यामः |

Simple future middle (आत्मने पदम् ātmane padam) of root √labh √लभ्

| Person | Singular | Dual | Plural |
| 3rd | labhiṣyate लभिष्यते | labhiṣyete लभिष्येते | labhiṣyante लभिष्यन्ते |
| 2nd | labhiṣyase लभिष्यसे | labhiṣyethe लभिष्येथे | labhiṣyadhve लभिष्यध्वे |
| 1st | labhiṣye लभिष्ये | labhiṣyāvahe लभिष्यावहे | labhiṣyāmahe लभिष्यामहे |

Similarly from the root √dā √दा “give”, we get dāsyati, dāsyataḥ, dāsyanti etc. and dāsyate, dāsyete, dāsyante etc.

Note: Even if the root is an athematic one (in the present), it still uses the thematic endings in the future, like we saw for √dā √दा “give” above.

Participles can also be formed from the future stem just like for the present stem – by adding nt न्त् to the active stem and māna मान to the middle stem. So bhaviṣyant भविष्यन्त् (“which will be”) and labhiṣyamāna लभिष्यमान (“which will obtain”).

Note: The passive of the simple future is identical to the middle form. So labhiṣye लभिष्ये could mean “I will obtain” or “will be obtained by me”.

[Using the same stem and by adding the optative and imperative endings, the optative and imperative futures can be formed, but these are very rare and so need not be learned.]

Use of the simple future

The simple future is used to indicate indefinite future time (including future continuous).
- rāmo rāvaṇaṃ haniṣyati रामो रावणं हनिष्यति – Rama will kill Ravana
- rāmeṇa rāvaṇo haniṣyate रामेण रावणो हनिष्यते – Ravana will be killed by Rama
- haniṣyan rāmo rāvaṇaṃ paśyati हनिष्यन् रामो रावणं पश्यति – Rama, who will kill, sees Ravana
- rāvaṇo haniṣyaṃ rāmaṃ paśyati रावणो हनिष्यं रामं पश्यति – Ravana sees Rama, who will kill

The periphrastic future luṭ लुट्

This paradigm has a single active tense. [The middle tense is very rare and so we need not learn it.] There are no modes or participles. There is no passive. The paradigm consists of derivations from an agent noun (nomen agentis).
The appropriate conjugational form of the verb “as” “to be” is added to the nominative form of the agent noun. The form rāmo hantāsti रामो हन्तास्ति “Rama is a killer” came to mean “Rama will kill”. ahaṃ hantāsmi अहं हन्तास्मि “I am a killer” came to mean “I will kill” In the third person, the form of the verb “as” is dropped and the agent noun takes the nominative singular, dual and plural appropriately. So, rāmo hantā रामो हन्ता “Rama will kill”; rāmau hantārau रामौ हन्तारौ “Two Ramas will kill”; and rāmā hantāraḥ रामा हन्तारः “Ramas (more than two) will kill” In the first and the second person, the agent noun is always in the nominative singular. The number is indicated by the form of the verb “as”. So, (tvaṃ) hantāsi (त्वं) हन्तासि; “You will kill” (ahaṃ) hantāsmi (अहं)हन्तास्मि “I will kill” (vayaṃ) hantāsmaḥ वयं हन्तास्मः “we will kill” etc. Uses of the periphrastic future The usage of the periphrastic future is similar to the simple future. The periphrastic future is normally used to indicate a more distant future than the simple future. It is commonly used with the word “śvaḥ श्वः” meaning tomorrow. The conditional lr̥ṅ लृङ् The augment preterit of the simple future, equivalent to the imperfect can be formed and is called the conditional. The conditional is formed exactly as the imperfect is made corresponding to the thematic present stem. So, abhaviṣyat abhaviṣyatām abhaviṣyan अभविष्यत् अभविष्यताम् अभविष्यन् etc. Example: “If Ravana had gone to Lanka, Rama would not have killed him.” yadi rāvaṇo laṅkam agamiṣyat tadā rāmaḥ taṃ nāhaniṣyat यदि रावणो लङ्कम् अगमिष्यत् तदा रामः तं नाहनिष्यत् [Note that both the verbs are in the conditional] This is the end of lesson 27. In this lesson, we looked at the future tenses and the conditional. Translate into Sanskrit - He is going to the city [in both futures] - We two will come tomorrow to the forest - If I had come to the forest, I would have seen Sita.
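The stem-plus-endings pattern described above is regular enough to be generated mechanically. The following Python sketch is purely illustrative and is not part of the original lesson: it attaches the thematic active endings to a romanized future stem such as bhaviṣya, with a simplified merge rule standing in for proper sandhi; the function and variable names are my own.

```python
# A minimal sketch (not a full sandhi engine): builds simple-future active forms
# by attaching the thematic endings to a future stem ending in -a, as described
# in the lesson. Stems such as "bhaviṣya" or "labhiṣya" are taken as given.

ACTIVE_ENDINGS = {
    ("3rd", "sg"): "ti",  ("3rd", "du"): "taḥ",  ("3rd", "pl"): "anti",
    ("2nd", "sg"): "si",  ("2nd", "du"): "thaḥ", ("2nd", "pl"): "tha",
    ("1st", "sg"): "āmi", ("1st", "du"): "āvaḥ", ("1st", "pl"): "āmaḥ",
}

def conjugate_simple_future(stem: str) -> dict:
    """Attach the thematic active endings to a future stem ending in -a."""
    forms = {}
    for (person, number), ending in ACTIVE_ENDINGS.items():
        base = stem
        # The stem-final -a merges with a following a/ā of the ending
        # (bhaviṣya + anti -> bhaviṣyanti, bhaviṣya + āmi -> bhaviṣyāmi).
        if ending[0] in ("a", "ā"):
            base = stem[:-1]
        forms[(person, number)] = base + ending
    return forms

print(conjugate_simple_future("bhaviṣya")[("3rd", "sg")])  # bhaviṣyati
print(conjugate_simple_future("bhaviṣya")[("3rd", "pl")])  # bhaviṣyanti
```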
Credit: Save the Children / Jonathan Hyams Disparities in educational attainment and achievement are not random. Nor can these disparities be addressed only through actions within education. Rather they are often rooted in deep structural inequalities in societies that determine the education options of boys and girls, women and men. Entrenched norms can weaken even political and legal commitments to gender equality, which are intended to provide political accountability in the protection of human rights, including the right to education for all. International recognition of gender inequality in education is based on the 1979 Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW), which has been ratified by 189 countries. While most countries have ratified CEDAW, many have included reservations on some of its articles, thus undermining their commitment to truly eliminate discrimination against women and advance gender equality. For instance, 12 countries have included reservations in Article 2 that calls on parties to the convention to adopt legal and policy measures to eliminate discrimination against women. India, the Federated States of Micronesia, Niger and Qatar disagree with Article 5 on challenging and eliminating gender stereotypes and discriminatory cultural practices, including those based on general acceptance of women’s subordination and disadvantage. Political and legal commitments to gender equality must be subject to no exceptions or reservations. They should be translated into concrete and effective actions to protect the rights of all, and particularly women and girls. UNEQUAL AND HARMFUL SOCIAL NORMS AND VALUES PERSIST Gender norms are rules that apply differently to men and women, dictating expected behaviours or attributes (Heslop, 2016). They are based on power relations and traditional views of roles and positions of men and women in society. They shape social attitudes, behaviours and practices; affect laws and policies; and prevent changes in education. CEDAW provides clear guidance on the type of actions and policies countries must implement to address gender- based discrimination, including in education. It stresses that the discrimination girls and women face in education is both ideological and structural. It calls on parties to modify social and cultural patterns of conduct that are based on ‘stereotyped roles for women and men’ (Articles 5 and 10c). Unless the negative gender norms, values and practices that permeate the very fabric of some societies are challenged, girls and women will continue to face discrimination, preventing them, as well as boys and men in certain cases, from exercising their right to education. For instance, one common view is that women’s primary role is to be wives, housewives and caregivers. Such views influence education in several ways, including how boys and girls view school. Analysis of the sixth round of the World Values Survey, carried out between 2010 and 2014 in 51 countries, showed that half of respondents agreed or strongly agreed that ‘when a woman works for pay, the children suffer’. The idea was widespread in India and in Western Asian countries such as Jordan and Palestine, where more than 80% of respondents agreed or strongly agreed with the statement. The view that ‘being a wife is just as fulfilling as working for pay’ was held by 63% of respondents. 
More than 80% held this belief in countries of Central Asia, including Kazakhstan and Uzbekistan; Northern Africa and Western Asia, including Egypt and Yemen; Eastern Europe, including the Russian Federation and Ukraine; and Eastern and South- eastern Asia, including Japan and the Philippines. Such beliefs can lead to a vicious circle of reduced opportunities in employment and education. As the next section relates, Japan has one of the lowest shares of women in school leadership positions, further fueling unequal perceptions of gender roles. In the case of migration, there is an expectation that women who migrate, as many Filipino women do, should enter domestic or home care work, even if this results in a loss of skills ( Box 6 ). Patriarchal norms that place little or no value on girls’ and women’s education restrict their chance of equal access to education. About 27% of World Values Survey respondents agreed that ‘a university education is more important for a boy than a girls’, with shares ranging from 2% in Sweden to 56% in Pakistan and 59% in Haiti ( Figure 12 ). On average, men were about 10 percentage points more likely to agree with the statement, rising to 19 percentage points in countries including Algeria and Palestine, even though women are by far the majority of graduates. In the two countries with the most negative views of girls’ education, there was no gender gap in opinions held in Pakistan but a 35 percentage point gap between men’s and women’s views in Haiti. Such discriminatory attitudes can constrain girls’ education opportunities, though the relationship is not straightforward. Some of the strongest negative beliefs are indeed held in countries with highly unequal access to tertiary education, such as Uzbekistan. But they are also held in countries that have recently expanded education opportunities to women, such as India. In other words, a move towards equalizing education opportunities in societies with unequal norms may or may not be a lever for shifting these norms. Challenging gender norms means working with adolescent girls and boys on gender role issues. In Haryana, India, a multi-year secondary school-based experiment aims to change adolescents’ gender attitudes and erode support for restrictive gender norms. The programme involves regularly holding classroom discussions on gender equality, with some sessions teaching communication skills to help students convince others, or, for example, to persuade parents to let them marry at a later age. A randomized controlled trial showed that the programme had improved adolescents’ gender attitudes. Participants reported more gender-equitable behaviours, with boys reporting that they helped out more with household chores (Dhar et al., 2018). CHILD DOMESTIC WORK IS A GENDER DISCRIMINATORY PRACTICE THAT AFFECTS EDUCATION Child domestic workers are among the most vulnerable to exclusion from education. In 2012, around 17.2 million children and adolescents aged 5 to 17 were in paid or unpaid domestic work in an employer’s home, two-thirds of them girls (ILO, 2017c). In more than half the countries with data from the Demographic and Health Surveys and Multiple Indicator Cluster Surveys over 2010-2015, the percentage of 12- to 14-year-olds involved in domestic work at least 28 hours a week was less than 2%. However, the percentage rose to 19% in Benin and 16% in Chad in 2014. In most countries, girls are more than twice as likely as boys to be involved in domestic work. 
The gap is larger in countries where the overall prevalence of child domestic work is high, such as Senegal (12%), where girls are 3.5 times more likely to be domestic workers, as well as in countries where the prevalence is low, such as El Salvador (1%), where girls are 7 times more likely to be domestic workers. Girls who spend 28 hours or more per week in domestic and care work spend 25% less time at school than those involved less than 10 hours a week (ILO, 2009). Protecting child domestic workers requires various policies and interventions, including protecting their right to education via awareness campaigns, ensuring high-quality public education and social protection, and carrying out interventions to curb child labour and prevent entry into hazardous work (ILO, 2015, 2017a). This applies particularly to poor rural girls who migrate to cities out of poverty, often unaccompanied, end up in domestic work and see their education opportunities compromised (Box 7). LAWS ON EARLY MARRIAGE CAN HELP FULFIL THE RIGHT TO EDUCATION Globally, some 650 million girls and women today were married when they were children. In 2010–2017, 21% of women aged 20 to 24 were married before age 18. In 2018, 16% of adolescents aged 15 to 19 were married before age 18 worldwide, compared with 19% in 2012. Sub-Saharan Africa is the region with the highest prevalence of child marriage: 38% of women aged 20 to 24 married before 18. Next is the Southern Asia subregion (30%), followed by Latin America and the Caribbean (25%) and the Eastern Europe and Central Asia subregions (11% each) (UNICEF, 2018b). Unless trends accelerate, it will take more than 100 years to eradicate girls’ child marriage (OECD, 2018). To achieve the SDG target of ending child marriage by 2030, progress would need be 12 times as fast as the rate observed over the past decade (UNFPA and UNICEF, 2018). In many low-income countries where child marriage is prevalent, girls are withdrawn from school to get married once they reach puberty. They then face practical barriers to education, including stigma, forced exclusion from school, and social and moral norms that confine them to their homes. Most countries with high early marriage rates are fragile, experiencing humanitarian crises and displacement. In such contexts, the early marriage trap is perpetuated because families see it as a way to protect their daughters, while going to school exposes them to risks, and the outcomes of schooling are highly uncertain (Box 8). Paragraph 2 of CEDAW Article 16 prohibits forced and child marriage, but 20 countries – including many with a high prevalence of child marriage, such as Bangladesh and Niger – have reservations on the article (United Nations, 2014). In Bahrain, where Ministerial Order No. 45 of 2007 fixed the minimum marriage age for Shiite Muslims at 18 for boys and 15 for girls, conservative lawmakers argued that increasing the marriage age violated sharia (Freedom House, 2010). At least 117 of 198 countries and territories allow children to marry (Pew Research Center, 2016). In 153 countries and territories, reaching the age of majority is ostensibly required before marriage is legal, but many exceptions to that requirement exist; for instance, in Uruguay, children can legally marry if they have parental permission. In all but one of the 38 countries where the minimum age differs between boys and girls, the lower age is that for girls. 
In the United Arab Emirates, while the legal age of marriage for both women and men is 18, child marriages continue to occur due to deeply rooted cultural and tribal traditions (Musawah, 2015). However, numerous efforts have been made to address the issue, and preventive programmes and strategies have been implemented to facilitate education and employment for girls and women before they enter into marriage and family life (CEDAW, 2015). In addition, Ministry of Justice regulations prohibit marriage officers from issuing marriage licences to underage girls and boys (OHCHR, 2015). Since 2014, 15 countries have strengthened their legal frameworks to delay the age of first marriage (OECD, 2018). Gambia’s Children’s Amendment Bill of 2016 criminalized child marriage and child betrothal, with conviction carrying a prison sentence of up to 20 years. In Ghana, a national campaign to end child marriage led to the establishment in 2016 of the Child Marriage Coordinating Unit of the Ministry of Gender, Children and Social Protection. A 10-year national strategic framework, which included the Ending Child Marriage Campaign, was launched. But even when legislation exists to protect women’s and girls’ rights and advance gender equality, it can be weakened by the existence of plural legal systems. Both Gambia and Ghana, as well as countries such as Mauritania and Nigeria, have customary and religious laws that continue to allow early marriage (Bouchama et al., 2018). Bangladesh is another case where norms contradict and oppose international and even national commitments. The country has ratified CEDAW, but with reservations on Articles 2 and 16 (para. 1), as they conflict with personal law, which governs family matters and differs within each of the country’s religious communities, Muslim, Hindu and Christian. Personal law contradicts the legal marriage ages of 18 for women and 21 for men, established by the Child Marriage Restraint Act, and thus Bangladesh tolerates child marriage despite it being a legally punishable offense. When the government sought in 2017 to amend the act, the parliament adopted a controversial amendment with a provision allowing child marriage in ‘special circumstances’, making it de facto legal (De silva de Alwis, 2018). DISCRIMINATORY INSTITUTIONS CAN PERPETUATE GENDER INEQUALITY, INCLUDING IN EDUCATION In addition to social norms and values, institutions can include or exclude women as regards resources and activities, and can protect them from or expose them to discriminatory practices. Achieving gender equality in education will not occur without a strong political commitment at the institutional level. Where political and legal commitments to gender equality are not translated into real change for girls and women, this is often linked to lingering discrimination within social institutions. The OECD’s Social Institutions and Gender Index (SIGI) is an attempt to document discrimination in social and economic institutions. It focuses on four dimensions: Women’s rights in the family (e.g. child marriage), physical integrity (e.g. female genital mutilation, violence, sexual and reproductive health and rights), access to productive and financial assets (e.g. access to land, workplace rights) and civil rights (e.g. political representation). SIGI looks at the extent to which laws, attitudes and practices fail to respect and protect women’s and girls’ rights (OECD, 2018). 
Its value ranges from 0%, when the same rights are guaranteed to men and women, to 100%, when there is profound or deep discrimination against women and girls. The 2019 edition classifies 120 countries by their level of discrimination in social institutions, from Switzerland (8%) to Yemen (64%). It shows that one-quarter of countries had high or very high discrimination levels, including Afghanistan, Bangladesh, Cameroon, Guinea and the Philippines. In most of these countries, women are even more discriminated against than the 2014 edition indicated. The countries are characterized by highly discriminatory legal frameworks, very poor implementation measures, and customary practices and social norms which weaken and deny women’s rights. SIGI also covers discrimination in family codes: Half the countries in its database have high or very high levels of such discrimination, with scores in excess of 40%, and 23 have scores above 80%, including Bahrain and Qatar at 92% (Bouchama et al., 2018). The persistence of discrimination in social institutions inevitably permeates education systems, including tacit understanding of what may be acceptable in curricula and textbooks. Abolition of discriminatory laws is critical given the backlash against gender equality observed in many countries and regions, including countries in central and eastern Europe such as Austria, Romania and Slovakia. In Poland, a parliamentary group called Stop Gender Ideology was formed in 2014. Among its targets was a pre-school teacher education guide on gender equality. Institutions providing education on gender equality have experienced harassment and hostility from local authorities (Juhasz and Pap, 2018).
There are two schools of thought when it comes to the definition of smart systems. The first is what those who manufacture and design solutions consider as ‘smart’. This is generally established through comparisons with other available systems and the features and functions these include. The second is what the end user considers to be ‘smart’, and this is the important definition. When systems are able to use available data to make credible, effective and accurate decisions, that is often what the customer is seeking from a solution. Smart technology is becoming ever more prolific in a wider number of applications, and is being driven by developments across a number of sectors. While some of the technology deployments are solving very specific problems for niche applications, a definition needs to be made between one-off intelligent bespoke designs and everyday or mainstream smart options. Few businesses or organisations are currently wishing to invest significant amounts of their budget in high end AI-based systems, no matter how clever they are, just to be innovators. While the emerging technologies are being adopted by some, and certainly are indicative of the good things to come in the future, today’s end users want solutions which solve everyday problems in an effective and efficient manner. It is remarkably easy to chase the next wave of intelligence instead of addressing today’s problems. There are plenty of people out there who are more than willing to dish out healthy doses of blue sky thinking when it comes to intelligent buildings, the IIoT (industrial internet of things) and smart cities, but do the benefits of a connected metropolis offer a realistic solution for today’s customers? The short answer is yes; they do. Of course, the short answer isn’t the most helpful unless you have a good understanding of how today’s smarter options function. The basic building blocks of a complex smart campus have much in common with a simple system designed to switch peripheral devices when a pre-defined set of criteria occur. While many businesses and organisations might view the emergence of smart solutions are something excessive for their needs, this misunderstanding usually occurs because those promoting the technologies in question focus on the biggest and best solutions they can deliver. However, just because a solution can provide a range of services for a commercial airport doesn’t mean it cannot offer benefits for a small retail unit or a commercial office. It is all a question of scale, and smart solutions are fully scalable to meet a variety of needs. An excellent example of a similar situation occurs within the IT sector. Smart software and advanced hardware are used for the most critical of applications, and many high-end organisations are reliant on their IT-driven solutions to operate. The software can control cities, or run nuclear reactors, or manage the control of air traffic (and even keep the planes in the sky). However, that doesn’t mean IT software and hardware isn’t beneficial for small enterprises or for organisations with very basic needs such as writing letters, tracking invoices or calculating balance sheets. The reality is that more end users utilise software for these basic tasks than for the smarter operations, and because of this few ignore the benefits on offer. 
However, because smart solutions are still in their infancy – and partly because the marketing hype is geared towards the ‘biggest and best’ use cases – the business arguments for mainstream smart applications haven’t been made as forcefully as they have for IT-based systems.

The middle ground

While the spike in interest relating to smart cities, AI-based advanced technologies and a fully connected future may have positioned smart technologies at the niche end of the market, simpler solutions have also flooded into the consumer market, offering (but not always delivering) automated actions and data-driven services. Just as the bespoke city-wide projects don’t really make the business case for smart technology in mainstream businesses and organisations, neither do many of the consumer-level options. Many household names have also been quick to bang the ‘smart’ drum, eager to capitalise on the trends associated with advanced technologies such as AI and machine learning.

If anyone has any doubt about the proliferation of smart technology in the consumer market, the situation is best summed up by looking at where it is being sold. The outlets peddling the technology are often not specialist companies, high technology outlets or advanced IT engineering professionals. Smart technology can be purchased from utility companies, department stores, DIY sheds, electrical outlets, catalogue-based cut-price stores and chain bargain goods shops on virtually every high street. These outlets are dedicating space to these products because they are selling in large numbers, despite not always delivering in terms of performance, reliability or accuracy. Such is the hunger for smart devices that many of the weaknesses are overlooked by customers.

Somewhere between the two extremes of smart connected cities and consumer-based home automation devices lies the mainstream sector for smart solutions. These deliver effective building management systems dedicated to specific sectors – security, safety, lighting, communications, HVAC, process control, power management, building automation, etc. – but can also use collected data to provide additional benefits and business efficiencies. These benefits and efficiencies are often created by utilising data captured for other purposes. While the mining of big data has become an important part of business operations, the approach is not always replicated when it comes to facilities management. This might be because the end user isn’t aware of how flexible their systems can be, or because the integrator is focused on delivering what the customer has requested, rather than exploring the possibilities when proposing a solution.

A mechanism exists in a wide variety of systems which allows a simple approach to implementing advanced benefits. Earlier, the world of IT was used as an example of how software can provide benefits for a wide range of tasks of varying complexity, as an illustration of how smart technologies can deliver efficiencies to numerous businesses and organisations. Considering IT and smart systems together was not accidental, as there is a link which underlines how flexible smart solutions can be. For many end users, the feature which elevates a smart system above those which perform only their core tasks is something called Rules. Rules go by many names, depending upon which sector is using them.
Sometimes referred to as Cause and Effect Programming, Logical Rules, Action Rules, And/Or Logic, Boolean Logic, IFTTT (if this, then that), etc., Rules enable a high degree of control over machine decisions. Rules are similar to Macros, but without a need to learn archaic data strings to perform tasks. Most Rules engines use drop-down menus and simple buttons to establish criteria and resultant actions. The link with IT is that most programming languages make use of Boolean Logic, which shows how flexible it is, and how it can manage complex scenarios. Despite this, it is an incredibly simple method of programming that delivers complex and bespoke results. Its implementation is very basic, in that all conditions have one of two values: TRUE or FALSE. Despite being a core concept in algebra, mathematics and computer programming, Boolean Logic is easy to understand. It offers increased flexibility over typical hardware-based inputs and outputs, but without a need for the replacement of legacy hardware-based edge devices. While having only two possible values might seem something of a limitation, what gives Boolean Logic a significant degree of flexibility is the inclusion of ‘operations’. There are many possible ‘operations’ but the three most basic are also the most powerful, and are also the ‘operations’ that are predominantly used in smart solutions. These are AND, OR and NOT. In smart building management systems, data gathered from a host of sub-systems is used to allow the creation of Rules based on Boolean Logic. The data can come from many sources: video and detections from security systems, real-time on-site personnel information from access control and time and attendance systems, triggers and reports from environmental sensing, outputs from process control systems, communications networks, status updates from power management, etc.. Data can also be collected from third party sources such as weather reports, traffic updates, transportation data, and used to drive appropriate decisions and system automations. Because of the way Rules are constructed, the available options will be limited by the edge devices connected to the system and the data being collected. For example, a system with monitored doors will be able to include conditions in Rules based on whether doors are open or closed, but the option will not be available if the doors are not monitored. When it comes to Operations, there are a number which can be used in Boolean Logic, but the two most common are AND and OR. These, when combined with TRUE and FALSE values, allow advanced functionality to be realised. AND operations require that multiple TRUE or FALSE values occur concurrently and as prescribed in order for an event to be valid. For example, a site might receive regular deliveries on weekday mornings before the start of the working day. Lorries need to access a loading area for this purpose. A Rule could be created to manage this using AND Operations. If an analytics-enabled camera detects a large vehicle entering the site (a TRUE condition) AND it is a weekday between 6am and 9am (another TRUE condition), the system should automatically open the Gate to the loading area as it is an expected delivery. It could also send a push notification to a relevant member of staff so they can accept the delivery. However, if the lorry is detected (TRUE) AND it is not a weekday between 6am and 9am (FALSE), the system will not open the Gate, instead activating a call point so the driver can talk to staff. 
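To make the logic of that delivery example concrete, here is a minimal Python sketch of the same Rule. It is only an illustration of the AND filtering described above, not code from any particular building management product; the function names, the returned action strings and the weekday 06:00-09:00 window are hypothetical.

```python
from datetime import datetime

def is_delivery_window(now: datetime) -> bool:
    """TRUE when it is a weekday between 06:00 and 09:00."""
    return now.weekday() < 5 and 6 <= now.hour < 9

def evaluate_delivery_rule(large_vehicle_detected: bool, now: datetime) -> str:
    """Both conditions TRUE -> open the loading-area gate and notify staff;
    vehicle TRUE but time FALSE -> keep the gate shut and activate the call point."""
    if large_vehicle_detected and is_delivery_window(now):
        return "open_loading_gate_and_notify_staff"
    if large_vehicle_detected and not is_delivery_window(now):
        return "activate_call_point"
    return "no_action"

# Example: a lorry detected at 07:30 on a Tuesday triggers the gate to open.
print(evaluate_delivery_rule(True, datetime(2021, 10, 26, 7, 30)))
```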
An AND Operation allows a filter to be included which can either verify a condition or trigger a specific action, based upon two or more status conditions occurring together or within a defined time period. In the example, if either the vehicle does not meet the criteria in terms of size, or the times are not correct, the gate will not open. Instead, alternative measures are implemented. Multiple AND operations can be used to build more complex scenarios. If a specific manager is tasked with handling deliveries but has not arrived to work when the lorry arrives, as indicated from access control data, the vehicle can be sent to a waiting area. The gate to the loading area is only opened if the vehicle meets set criteria AND the time is valid AND a relevant member of staff is on site. The gate to the waiting area is opened instead if the vehicle meets set criteria AND the time is valid AND there is not a relevant member of staff on site. AND Operations enable multiple combinations of TRUE or FALSE (or a combination of both) values to create an action or result when combined. OR Operations differ, in that one of a list of criteria needs to occur in order for an action to be taken. Using the previous example, the gate to the loading area could be configured to automatically open if the time meets the prescribed criteria OR if a relevant member of staff is on-site. OR Operations would be more commonly used in the example application if there were a number of access routes, multiple permitted time zones, or diverse criteria relating to relevant personnel on site. The main differentiation between AND and OR Operations is AND Operations allow filtering or the addition of necessary criteria, whereas OR Operations tend to provide a wider range of triggers for actions. Where AND/OR Operations become even more flexible is when ‘bracketed statements’ are used. Basic AND/OR operations are simple to define, but can lack flexibility. To address this, bracketed statements can often be implemented. These allow the use of AND/OR statements grouped together. For example, bracketed AND operations can be separated by an OR operation. This means that either set of the defined AND operations must occur for the Rule to be actioned. Consider the following Rule: (Lorry Detected AND Correct Time Zone) OR (Lorry Detected AND Relevant Personnel On-site) = Gate Opened. In this instance, the action is generated if either set of AND Operations occurs. In short, bracketed AND statements allow groups of AND operations to result in specified actions. Bracketed OR operations enable a slightly different scenario to be created, as shown in this rule: (Lorry Detected on Road 1 OR Lorry Detected on Road 2) AND (Correct Time Zone OR Relevant Personnel On-site) = Gate Opened. Finally, bracketed statements can be mixed with standard AND/OR operations, allowing a single flexible Rule to encompass a wide range of criteria to trigger actions, delivering a solution that meets even the most complex needs of the end user. The use of Rules based on AND/OR logic can be realised via a wide range of systems including VMS and NVRs for video-based systems, intruder detection control panels and software, as well as in advanced management systems in the site protection and access control spaces. Rules can make use of a wide range of data sources such as detectors and sensors, video and video-based analytics, access transactions, system status reports, operator or visitor actions, time and date, environmental conditions, power status, etc.. 
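The two bracketed Rules above translate directly into Boolean expressions. The sketch below is illustrative only; the variable and function names are hypothetical and do not come from any specific system.

```python
def gate_rule_bracketed_and(lorry_detected: bool, correct_time_zone: bool,
                            relevant_staff_on_site: bool) -> bool:
    """(Lorry Detected AND Correct Time Zone) OR
    (Lorry Detected AND Relevant Personnel On-site) = Gate Opened"""
    return (lorry_detected and correct_time_zone) or \
           (lorry_detected and relevant_staff_on_site)

def gate_rule_bracketed_or(lorry_on_road_1: bool, lorry_on_road_2: bool,
                           correct_time_zone: bool, relevant_staff_on_site: bool) -> bool:
    """(Lorry Detected on Road 1 OR Lorry Detected on Road 2) AND
    (Correct Time Zone OR Relevant Personnel On-site) = Gate Opened"""
    return (lorry_on_road_1 or lorry_on_road_2) and \
           (correct_time_zone or relevant_staff_on_site)

# A lorry on Road 2 outside the permitted time zone still opens the gate
# if a relevant member of staff is on site.
print(gate_rule_bracketed_or(False, True, False, True))  # True
```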
Some end users believe that because AND/OR relationships are key to computer programming languages, they must be complicated to implement. However, implementation is simple because smart system manufacturers have devised simple GUIs (graphical user interfaces), allowing integrators and users to exploit the full potential of the technology. While all smart solutions have variations in their methodology when it comes to cause and effect programming, all use a selection of icons, drop-down menus or clickable links to enable configuration. Because the available options are limited to devices and data sources attached to the system, this ensures that Rules are not created which cannot function due a lack of relevant data. Well implemented cause and effect programming is foolproof, and delivers a high level of flexibility which adds value for the business or organisation deploying the solution. In the early days of cause and effect programming, some systems used macros or the addition of specific code snippets. Because of this, a number of implementations required users to have specific IT skills, and as a result these weren’t greatly intuitive. Today’s systems thankfully take a different and far simpler approach, allowing the configuration of even complex events and actions to be created in minutes. Smart systems using cause and effect programming are increasingly common, and forward-thinking integrators and consultants have been quick to use the technology to create bespoke value-added solutions for end users seeking an enhanced return on investment. As the functionality is typically built in, many have been able to do so while ensuring the systems are competitive in terms of total cost of ownership. Despite a wide range of smart systems including AND/OR logic, the depth of benefits sometimes go unused. The reason is often integrators and consultants sell systems as solely focused on basic functionality. Unless the end user is made aware of the flexibility on offer, they will not be able to realise the full range of benefits on offer, and as a result they may be unwilling to invest. It is important end users do not compare the purchase of a smart system against a standard system which only manages core tasks in its given sector. If so, a smart system can appear to be less than competitive, because the full range of benefits and business efficiencies are not being considered. Often decisions to reject smart solutions are made not because the user doesn’t want to invest in the system on offer, but because the integrator or consultant simply hasn’t highlighted the full range of added-value benefits the smart technology offers. Simply stating a system is ‘smart’ isn’t enough. By explaining the benefits and showing the end user examples of how it can create efficiencies in their business or organisation, their ability to understand how the system can add value is stimulated. This often results in a more in-depth exploration of how the technology can be used to increase the return on investment. By linking other types of events, such as those generated by devices on an operational system or from management-based devices, cause and effect programming in an integrated building system can add significant benefits when a site is active. The inputs don’t have to be facilities-related. Increasingly, smart solutions can include an interface to data-generating devices such as POS systems in retail applications, ATMs in financial institutions and logging systems in warehouses and logistics operations. 
The information flow can be used as event triggers and will contribute to core operations. Indeed, cause and effect programming can create Rules or scenarios that trigger other Rules and scenarios when the situation demands it. This allows the site status and ongoing events to control system attributes automatically, opening up a layer of flexibility that is difficult to achieve when systems are effectively ‘siloed’ to a single task. If an integrator, consultant or end user limits their thinking about events to a simple on/off concept, then achieving a bespoke solution using cause and effect programming will not be possible. However, by embracing the potential on offer from systems that use this approach, the potential to deliver flexible and bespoke smart solutions is available today, even for small and mainstream businesses.
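As a rough illustration of Rules triggering other Rules, the following sketch chains rule sets through a ‘scenario’ state, so that the outcome of one Rule changes which Rules are evaluated next. It is a simplified, hypothetical example rather than a description of any real product’s engine; all event, action and scenario names are invented.

```python
# A minimal sketch of rule chaining: the action of one rule switches the site
# into a different "scenario", which changes which rules are evaluated next.

RULES = {
    "day_mode": [
        # (trigger event, resulting action, scenario to switch to or None)
        ("site_closed_signal", "arm_intruder_detection", "night_mode"),
    ],
    "night_mode": [
        ("motion_in_warehouse", "record_video_and_alert_keyholder", None),
    ],
}

def run(events: list) -> list:
    """Feed a sequence of event names through the chained rule sets."""
    scenario, actions = "day_mode", []
    for event in events:
        for trigger, action, next_scenario in RULES[scenario]:
            if event == trigger:
                actions.append(action)
                if next_scenario:  # one rule's outcome activates another rule set
                    scenario = next_scenario
    return actions

print(run(["site_closed_signal", "motion_in_warehouse"]))
# ['arm_intruder_detection', 'record_video_and_alert_keyholder']
```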
Workshop to be held at ESSLLI 2017 July 17th-21st 2017 Motivation and description Natural language use involves drawing information from different sources and fitting it together. For example, to understand the utterance “Take this and this and that and put it there” one has to be able to track the pointing device and be clear about the different referents of the deictic expressions. Similarly, for questions, a syntactic structure and an intonation contour must be aligned. In conversation, phenomena such as split utterances and other-repairs show that several speakers can co-produce single dialogue acts - even using non-standard phonetic, morphological and syntactic components. Language is a key component of interaction, and, as work in a variety of fields such as psycholinguistics (e.g. Pickering and Garrod, 2004; 2013) and conversation analysis (CA, see e.g. Schegloff, 2007) has emphasised, an account of interaction is also crucial in the analysis of language. In the examples above, as with uses of natural language generally, different information - often in different modalities as in a gesture-speech context or in a speech-vision context - must be incorporated and produced or interpreted as and when it is encountered. This poses challenges for formal approaches to language, which have traditionally abstracted away from the problems presented by the dynamic nature of linguistic interaction. Some researchers have therefore concluded that formalisation is inappropriate as a tool for the analysis of natural language (e.g. Cowley, 2011; Linell, 2009). However, formal approaches are not just desirable but necessary - not only for a precise understanding of language phenomena, but also in order to enable the development of technology e.g. to meet the demands of language instruction necessary in an increasingly globalised society or to create conversational agents and robots. Taking interaction seriously means acknowledging the importance of the dynamics in accounts of language. Languages can no longer be conceived of as static systems of individual processes with modules operating independently. This has consequences for the way we think about language at all levels - including phonological, lexical, syntactic, semantic and pragmatic components. Not only must we consider the interaction between modules within an individual, but we must also take into account the changes brought about by the interactions between speakers and communities. Dynamic approaches are therefore crucial in handling multiparty interaction for example, where patterns of interaction can result in different levels of understanding between different participants in the same conversation (Eshghi and Healey, 2016). Taking interaction dynamics into account is also necessary to explain language change - including diverse phenomena such as diachronic change (Bouzouita, 2008) and semantic adaptation within a conversation (Mills and Healey, 2008). Theories of dynamics in linguistic interaction are also essential for accounting for language learning. This is the case for both first language acquisition in children, where interaction (in the form of e.g. turn-taking) precedes the acquisition of specific words (Bullowa, 1979) and second language learning where previously learned languages affect the learning of a new language (Ellis, 2008). Formal approaches must model both the different types of information to be individuated and their interactions, setting up the structures algorithmically in a principled manner. 
The validity of formal mechanisms to relate the different types of information and compute the interactions can be evaluated against corpus data, experimental data or intuitions. Here, simple mappings will not do. Instead we need dynamic tools such as update rules, joint building of incremental structure or shifting of information to structurally relevant places. Recent work is beginning to tackle these issues from a formal perspective in a number of disciplines, for example, models of diachronic change (Kempson et al., 2016); speaker-hearer coordination (Howes et al., 2011; Poesio and Rieser, 2010; 2011; Healey et al., 2014); semantic update (Larsson and Cooper 2009; Cooper, 2012); language acquisition (Fernandez et al., 2011; Fernandez and Grimm, 2014); syntax for dialogue (Cann et al., 2005; Gregoromichelaki et al., 2013); information state models of dialogue (Ginzburg, 2012); embodied interaction (Hunter et al., 2015); human-agent interaction (Peltason et al., 2013; Purver et al., 2011; Schlangen, 2016); reasoning (Breitholtz, 2013; Piwek, 2015); the speech-gesture interface (Rieser, 2010; 2015; Lücking et al., 2012; 2015; Healey et al., 2015; Howes et al., 2016) and the semantics of gesture and prosody (Lascarides and Stone, 2006; 2009; Schlöder and Lascarides, 2015). This workshop aims to bring together researchers working on different formal approaches to the dynamics of interaction to foster cross-disciplinary collaboration around these issues. We encourage contributions dealing with material from typologically different languages and with different contexts of language use, to address a linguistic public with a variety of interests and working within different paradigms. Due to its formal orientation the workshop will also be relevant to participants with a focus on logic and computation. The organisers have extensive experience in working on dialogue theories, HCI, multimodal corpora and speech-gesture integration. Bouzouita, M. (2008). At the syntax-pragmatics interface: Clitics in the history of Spanish. In Cooper, R., and Kempson, R. (editors) Language in flux: Dialogue coordination, language variation, change and evolution, 221-263 College publications, London. Breitholtz, E. (2014). Reasoning with topoi–towards a rhetorical approach to non-monotonicity. In Proceedings of the 50th Anniversary Convention of the AISB. Bullowa, M. (1979). Before speech: The beginning of interpersonal communication. CUP Archive. Cann, R., Kempson, R., & Marten, L. (2005). The Dynamics of Language: An Introduction. Syntax and Semantics. Volume 35. Academic Press. Cooper, R. (2012). Type theory and semantics in flux. Handbook of the Philosophy of Science, 14, 271-323. Cowley, S. J. (Ed.). (2011). Distributed language (Vol. 34). John Benjamins Publishing. Ellis, N. C. (2008). The dynamics of second language emergence: Cycles of language use, language change, and language acquisition. The Modern Language Journal, 92(2), 232-249. Eshghi, A., & Healey, P. G. (2016). Collective Contexts in Conversation: Grounding by Proxy. Cognitive science, 40(2), 299-324. Fernández, R., & Grimm, R. M. (2014). Quantifying categorical and conceptual convergence in child-adult dialogue. In Proceedings of the 36th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. Fernández, R., Larsson, S., Cooper, R., Ginzburg, J., & Schlangen, D. (2011). Reciprocal learning via dialogue interaction: Challenges and prospects. 
In Proceedings of IJCAI 2011 Workshop on Agents Learning Interactively from Human Teachers (ALIHT), Barcelona, Spain Ginzburg, J. (2012). The Interactive Stance. Oxford University Press. Gregoromichelaki, E., Kempson, R., Howes, C., & Eshghi, A. (2013). On making syntax dynamic: the challenge of compound utterances and the architecture of the grammar. In Wachsmuth et al. (editors) Alignment in Communication. Towards a New Theory of Communication, pp. 57-86, John Benjamins. Healey, P. G. T., Plant, N., Howes, C., and Lavelle M. (2015). When words fail: Collaborative gestures during clarification dialogues. In AAAI Spring Symposium Series: Turn-Taking and Coordination in Human-Machine Interaction. Healey, P. G. T., Purver, M., and Howes, C. (2014). Divergence in dialogue. PLoS ONE, 9(6):e98598. Howes, C., Purver, M., Healey, P. G. T., Mills, G. J., & Gregoromichelaki, E. (2011). Incrementality in dialogue: Evidence from compound contributions. Dialogue and Discourse, 2(1), 279-311. Howes, C., Lavelle, M., Healey, P. G. T., Hough, J. and McCabe, R. (2016). Helping hands? Gesture and self-repair in schizophrenia. In LREC-2016 Workshop: Resources and processing of linguistic and extra-linguistic data from people with various forms of cognitive/psychiatric impairments (RaPID-2016). Portoroz, Slovenia. Hunter, J., Asher, N., & Lascarides, A. (2015). Integrating non-linguistic events into discourse structure. In Proceedings of the 11th International Conference on Computational Semantics (IWCS 2015), 184-194. Kempson, R., Cann, R., Gregoromichelaki, E., & Chatzikyriakidis, S. (2016). Language as mechanisms for interaction. To appear in Theoretical Linguistics. Larsson, S., & Cooper, R. (2009). Towards a formal view of corrective feedback. In Proceedings of the EACL 2009 Workshop on Cognitive Aspects of Computational Language Acquisition (pp. 1-9). Association for Computational Linguistics. Lascarides, A. and Stone, M. (2006). Formal semantics of iconic gesture. In Schlangen D. and Fernández, R., (editors), Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue (Brandial), Potsdam. Universitätsverlag Potsdam, pp. 64–71. Lascarides, A. and Stone, M. (2009). A formal semantic analysis of gesture. Journal of Semantics, 26(4), pp. 393–449. Linell, P. (2009). Rethinking language, mind, and world dialogically: Interactional and contextual theories of human sense-making. IAP. Lücking, A., Bergmann, K., Hahn, F., Kopp, S., & Rieser, H. (2012). Data-based analysis of speech and gesture: The Bielefeld speech and gesture alignment corpus (SaGA) and its Applications. Journal on Multimodal User Interfaces 7(1-2), pp. 5-18. Lücking A., Pfeiffer T., Rieser H. (2015). Pointing and reference reconsidered. Journal of Pragmatics 77, pp. 56-79 Mills, G. J., & Healey, P. G. (2008). Semantic negotiation in dialogue: The mechanisms of alignment. In Proceedings of the 9th SIGdial Workshop on Discourse and Dialogue (pp. 46-53). Association for Computational Linguistics. Peltason, J., Rieser, H., Wachsmuth, S. (2013). "The hand is no banana!" On communicating natural kind terms to a robot. In Wachsmuth et al (eds.) Alignment in Communication. Towards a New Theory of Communication, pp. 167-193, John Benjamins. Pickering, M. J., & Garrod, S. (2004). Toward a mechanistic psychology of dialogue. Behavioral and brain sciences, 27(02), 169-190. Pickering, M. J., & Garrod, S. (2013). An integrated theory of language production and comprehension. Behavioral and Brain Sciences, 36(04), 329-347. Piwek, P. (2015). 
Two accounts of ambiguity in a dialogical theory of meaning. In Interactive Meaning Construction A Workshop at IWCS 2015 (p. 19). Poesio, M. & Rieser, H. (2010). Completions, coordination, and alignment in dialogue. Dialogue and Discourse, 1, 1–89. Poesio, M., & Rieser, H. (2011). An incremental model of anaphora and reference resolution based on resource situations. Dialogue and Discourse, 2(1), 235-277. Purver, M., Eshghi, A., & Hough, J. (2011). Incremental semantic construction in a dialogue system. In Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011) pp. 365-369. Association for Computational Linguistics. Rieser, H. (2010). On factoring out a gesture typology from the Bielefeld speech-and-gesture-alignment corpus (SAGA). In:Kopp, S., Wachsmuth, I. (editors) Gesture in Embodied Communication and Human-Computer Interaction. Springer, Berlin/Heidelberg Rieser, H. (2015). When hands talk to mouth. Gesture and speech as autonomous communicating processes. In Howes, C. and Larsson, S., (editors), Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue (goDIAL), Gothenburg, pp. 122-131 Schegloff, E. A. (2007). Sequence organization in interaction: Volume 1: A primer in conversation analysis (Vol. 1). Cambridge University Press. Schlangen D. (2016) Grounding, justification, adaptation: Towards machines that mean what they say To appear in: Proceedings of the 20th Workshop on the Semantics and Pragmatics of Dialogue (JerSem). Schlöder, J. & Lascarides, A. (2015). Interpreting English pitch contours in context. In Howes, C. and Larsson, S., (editors), Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue (goDIAL), Gothenburg, pp. 131-140
JOY: The Journal of Yoga
May 2003, Volume 2, Number 5

Yoga is the name of the Teaching of the means of achieving spiritual perfection. Translated from Sanskrit, this word means "union", "merging", "becoming one". Implied is the removal of one's personal separateness and the harmonious merging of the individual with the macro-ecosystem. Yoga is a religious teaching, but separate chapters of yoga may be considered "neutral" with respect to religion and can be used by atheists too. Such are, for example, hatha-yoga (a teaching of bringing one's body into a harmonious state) and raja-yoga (in the past known in the West, and in our country, mainly in modified forms titled "autogenic training" or "psychic self-regulation").

Yoga includes a number of methodically independent parts: the above-mentioned hatha-yoga and raja-yoga, jnana-yoga ("yoga of wisdom", the theoretical substantiation of the path of yoga), karma-yoga (an ethical teaching, including, among other things, notions about man's destiny and the influences on it), bhakti-yoga (a higher stage of mastering ethics than karma-yoga, emphasizing the development of love), and buddhi-yoga (a teaching of perfecting consciousness). Some authors also call the systems of outlook they expound "yogas" - for example, the agni-yoga of the Roerichs (in fact a manifesto, a call to spiritual awakening) and so on. The term "yoga" is inherent in Hinduism and integrated into the vocabulary of Buddhism. Sometimes, when a wide interpretation of the term is used, different ethical teachings, including the Teaching of Jesus Christ, are referred to as "yogas".

The basic literary source of yoga is the Indian Bhagavad-Gita, which was composed a few thousand years ago. But both the term "yoga" and the people who trod this path appeared long before the creation of the Bhagavad-Gita. As mentioned above, the goal of yoga is religious in nature. Through a prolonged evolution, yoga seeks the merging of individual human consciousness with the Creator's Consciousness. Attaining Divine Perfection and merging with the Creator's Consciousness (God-Father, in Christian terminology) is called in the Bhagavad-Gita "the Supreme Abode". "Submerge your consciousness (buddhi) into Me (God) - truly you will then live in Me" - so Krishna formulates in the Bhagavad-Gita the final goal of the yogic seeker. But if you are still unable to do this now, says He, practice preliminary meditations. And if the technique of meditation is still difficult for you, learn to perform everything that you do in your life not for your personal benefit, but devote these actions to God. In other words, first of all, do not think about what you personally will get from it (Bhagavad-Gita, 12:9-10). In these simple words, briefly speaking, is the essence of the yogic path.

Now, let us consider who is capable of treading the yogic path, and how far. Are there many among us who are capable of working selflessly for the sake of others? And among those who are, how many can also meditate? Meditation, as Rajneesh, one of the leading masters of modern yoga, formulated it, is the "state of no-mind". Before one is capable of practicing high forms of meditation, one must develop the intellect. After developing the intellect, the yogi learns how to master and govern the mind. Only then does the yogic consciousness begin to develop and mature while passing through the consecutive stages of buddhi-yoga.
The first stage consists of an experience of Samadhi, which passes into a state of Nirvana, and after a "crystallization" of the Nirvana experience, the height of Nirvana is realized in a yogic merging with the God-Father. These stages cannot be mastered without first bringing one's body to the necessary level of perfection. And, apart from other considerations, advancement in the highest yoga requires a deep comprehension of the laws of spiritual growth.

Are there many of us who can at least formulate clearly what "mind" (manas) and "consciousness" (buddhi) are? Those who are not trained in the subtle distinctions between these two terms often conflate these absolutely different meanings. Consciousness composes the basic essence of man. And it is purposeful work aimed at developing one's consciousness through yogic methods that allows separate people to realize the highest human possibilities in themselves. Why separate people, and who are they? They are those, firstly, who, owing to a number of objective and subjective causes, are able to make sense of all this; secondly, whose aspiration for self-perfection was strong enough to renounce primitive "earthly" pleasures and quarrels with other people, and who for many years made endless efforts and super-efforts in the practice of yoga; and, thirdly, who endured many ethical trials on this path.

In this way we see that the highest yoga is not meant for everyone. The objective laws of man's development allow only those who have overcome in themselves the passion for such values as exquisite food, money, fame and so on to direct their efforts towards intensive and conscious self-perfection. Only for the one who is ready to renounce selfish attitudes towards love, and who is ready to easily and naturally sacrifice his own interests for the sake of others - only for such a one is the highest yoga.

It must not be inferred that people who do not practice yoga spend their time in vain. No: besides the benefits that many of them give to society with their work, they also develop in themselves those skills and habits that will be needed in the future, when they will "grow" in their psychogenesis to the manifestation of an irrepressible need to devote themselves to the understanding of yoga. And then yoga will provide them an opportunity to perform a "breakthrough" in their personal evolution.

The foundation of yoga is ethics. The basic principle of yoga's ethics is Love in the highest sense of this word. The word "love" denotes attraction and aspiration towards union. And that which divides along any of the common lines of division - national, religious, and so on - is, for love and for yoga, an occasion to evoke kinship and union. (Let us note, incidentally, that the meanings of the words "love" and "yoga" are quite close.) Those schools which, whatever systems of training they use, base their world outlook on ethical principles other than love are not entitled to call themselves schools of yoga.

Love is the basis and pivot of spiritual development. But it is far from easy to master. To attain true Love, most of us require long, strenuous work on ourselves. Consider such an example: the vast majority of people in this country, who were subjected to perverted ideological "treatment" over a period of decades, now find themselves capable of grasping the notion of love only as that of sexual passion and sex itself.
"Love" as selfish sexual passion is not representative of the highest summit of yogic union. True love, although often misrepresented in contemporary and traditional forms of media, embraces the total self- physical, mental, and spiritual. Additionally, yogic love by its very nature implies union with the cosmos and God. In Hindu yoga the tenet of Love is termed "bhakti". For the first time it was formulated in the Bhagavad Gita where Krishna expounds fundamentals of yoga to His disciple Arjuna. In particular, Krishna promises more successful advancement to those beginners in yoga, who develop in themselves love to the concrete Divine Teacher - in contrast to those who worship the "non-manifested" (Ch.12). It is "only Love (in a seeker)" (11:54) that is capable of achieving the final Goal. Let us consider in detail the meaning of these lovely poetic words. Let us think why Krishna devotes so much attention in His religious teachings to the necessity to learn how to love. Krishna's conception of love is integral in nature for he says that love ought to be directed to the "pure fragrance of the earth" rather than merely towards the Creator. The answer is the following: there is little value in love that is purely "from mind." Such intellectual love is good only as a precondition for developing true love- a heartfelt and emotional form of love. And how is it possible to develop emotional love? Krishna suggests one achieves this through admiration, getting touched, attuned, and changed by what is best that exists around us in nature and in people. Jesus also proclaimed that love of humankind is a necessary prerequisite for the formation of love of God. His whole Teaching set forth in the New Testament is filled with indications of how to do this. So the basis of yoga is Love. One must develop it in oneself by all possible means - through communication with people, communion with nature, through the arts, studying ethical principles, fighting with one's own vices and so on. A fine additional method which accelerates the development of love is the use of special techniques of work which develops the emotional sphere of consciousness. It must be noted, however, that such techniques do not guarantee stable progress in development of the highest spiritual features if a practitioner lacks strong aspiration for acquiring them and also if he or she does not supplement the practice of the mentioned exercises with the transformation of his whole existence. High rates of advancement along the path of spiritual development in yoga may be possible only in the case of the complex employment of various methods and techniques. This implies intellectual work- the expansion of knowledge, ethical self-analysis, aspiration to understand one's path and the paths of other people with the aim of helping them and so on, transformation and development of one's emotional sphere, preparatory and auxiliary exercises of the body, and finally, work with consciousness itself. It is also desirable to purposefully use everyday activity (including professional work) in combination with specific methods of yoga training. The rate of a disciple's advancement directly depends on the quantity of the techniques combined. The pivot of the whole discipleship is ethical behavior based on the principle of Love; this pivot gets enveloped with fruits of other practical methods. 
Among the latter, emphasis falls first on the development of the body and the bioenergetic structures of the organism (chakras, meridians), and, on higher planes and stages, on the work with consciousness. Efficient development of consciousness is possible only with the help of buddhi-yoga techniques. Each successive stage of yogic study is conquered by considerably fewer students than the previous one. The highest stages of yoga are accessible only to a few out of the thousands who started the practice. It is not possible to quickly satisfy even the strongest aspiration to Perfection: advancement consists of "ups" and "downs". The latter are necessary for the consolidation of new stages, for the accumulation of power for mastering new "ups", for intellectual understanding of the parts of one's Path, and for the resolution of arising ethical problems. Periods between "ups" may last from days to years, or much longer, depending on the student. Attempts to artificially accelerate a student's development by teaching him new techniques of training usually yield negative results. One should always keep in mind another important principle of teaching yoga: a student must be absolutely free to leave the teacher; the latter has no right to compel a student to continue the study, nor to draw the student towards himself even in thought.
“The task is…not so much to see what no one has yet seen; but to think what nobody has yet thought, about that which everybody sees.” ― Erwin Schrödinger

Here are the biases discussed so far:
1. Confirmation Bias
2. Hindsight Bias
3. Negativity Bias
4. Impact Bias and The Inaccurate Simulator
5. The False Consensus Bias or “most people are like me bias”
6. Attention Bias And The Tunnel Visioning Effect
7. Optimism Bias or the Wishful Thinking Bias
8. Distinction Bias
9. Anchoring Bias

Let us get on with the cognitive biases:

10. The Endowment Effect

“Impossibility only lasts until you find new unbelievable hard evidences.” ― Toba Beta

The endowment effect or bias happens when you demand a lot more value to give up what you own. The value you place on giving something up is out of proportion to your willingness to pay for it. In other words, we demand more than we would be willing to pay for something because we own it. A research article titled “The Endowment Effect, Loss Aversion, and Status Quo Bias” by Kahneman, Knetsch, and Thaler explains this bias (Journal of Economic Perspectives, 1991). The authors describe the bias by giving the example of a wine-loving economist friend. The economist purchased some high-quality Bordeaux wines at $10 a bottle, a low price point. Soon thereafter, the wines increased in value to $200 a bottle at auction. The interesting point about this example is that the economist would occasionally taste his own wines. But he refused to sell his wines at the auction price, and he was not willing to buy more wines at the auction price either.

In 1980, Thaler called this bias “the endowment effect.” This effect was also called “the status quo bias” by Samuelson and Zeckhauser in 1988. Let us look at the economist example above. The bias places the economist in a state of status quo, or a preference for the current state. This preference makes him unwilling to either buy or sell the wine. The authors also point out that, as described by Kahneman and Tversky (1984), this bias is a possible manifestation of loss aversion. Loss aversion happens when the value of getting an object is less than the disutility of letting it go. We find it hard to sell something because we perceive a greater value in the object than what it may be worth.

What can we do about this bias? The authors suggest: “The amendments are not trivial: the important notion of a stable preference order must be abandoned in favor of a preference order that depends on the current reference level. A revised version of preference theory would assign a special role to the status quo, giving up some standard assumptions of stability, symmetry and reversibility which the data have shown to be false. But the task is manageable.”

1. Understand that we may have a problem letting go of things because we perceive a greater value in them than is real.
2. We may also have a loss aversion mechanism in place that bolsters the endowment effect and keeps us locked in the current status quo.
3. Ask yourself whether the status quo, and being locked into the perceived value and endowment, is enhancing your quality of life. Perhaps this bias is making you feel stuck. After all, what is the use of having things when you cannot enjoy them or do not allow others to enjoy them?
4. Imagine the freedom that you would gain by getting a reasonable price for your object. You can then use the resources to enjoy other pursuits that you fancy.

“The most important thing in communication is hearing what isn’t being said.
The art of reading between the lines is a life long quest of the wise.” ― Shannon L. Alder 11. Functional Fixedness “Human beings tend to be unable to estimate how biased they are.” ― Jean-François Manzoni In my post on Motivation, I wrote the following: “In his TED talk, Dan Pink, the author of Drive, presents you with Karl Duncker’s candle problem. You are given a candle, thumbtacks in a box and matches. The objective is to attach the candle to the wall and prevent the dripping of wax on the table. Many people try to attach the candle to the wall using thumbtacks but it does not work. Some people try to melt the side of the candle and attach it to the wall but that does not work either. The idea is to overcome “functional fixedness” and eventually people figure out a creative way to thumbtack the box to the wall and place the candle inside. The trick is to have the insight to use the box as a candle-holder instead of just as a holder for tacks.” Let us look at this cognitive problem in greater detail. In a study titled “Innovation Relies on the Obscure: A Key to Overcoming the Classic Problem of Functional Fixedness” by Tony McCaffrey in the journal Psychological Science (2012), the author analyzes this idea further. The author says that insight problems demand that we see a different solution, something that is normally overlooked. He gives the example of a toy insight problem where you have two steel rings that are weighty, a long candle, match and a two-inch cube of steel. Your goal is to fasten the two steel rings with the available materials. What should you do? The wax melted from the candle cannot hold the rings together because it is not strong enough to do the job. To come to a solution, you will have to notice that the wick of the candle can function as a rope to fasten and tie the rings together. This is an unconventional use for the wick. Once you figure this new function for it, you will find ways to scrape the wax away at the edge of the provided cube. He adds that this kind of insight based problem solving happens in real life. Mechanical flight is a classic example. Humans could not achieve flight so long as they attempted to emulate the flight movement of a bird. But once they got beyond this functional fixedness, flight became possible. McCaffrey also mentions Challoner’s work on insight problems and also real-world inventions. Challoner’s work shows us that innovative problem solving requires two steps. The first step is to notice a new feature of the problem that is not frequently understood or used. The second step then would be to build a solution on that feature that remains obscure or hidden from plain sight. In fact, he calls it the obscure-features hypothesis of innovation. “The classic obstacle is functional fixedness, which has been described as the tendency to fixate on the typical use of an object or one of its parts (Duncker, 1945). On the basis of my examination of many inventions and insight problems, however, I characterize functional fixedness as the tendency to overlook four types of features possessed by a problem object (parts, material, shape, and size) because of the functions closely associated with the object and its parts.” How can we get beyond functional fixedness? Many solutions are offered, including some from previous studies. Here are a few of them: 1. You add new information on the old problem, a process that Ohlsson calls “elaboration” or you reinterpret old information or what is called “re-encoding.” 2. 
You can use Knoblich’s technique of breaking the materials of the problem into smaller parts, or what is termed “chunk decomposition.” The author argues that this is not a complete solution to fixedness because even after decomposing objects, you will still need to see their function in a new way. You will need to get beyond the idea that the wick emits light in the example above.
3. The author suggests a method called “the generic parts technique.” You will need to create a parts diagram and ask two questions throughout the process. The first question is simply whether you can decompose or break the object further into smaller hierarchies. The second is whether this new level and description can suggest a new use and provide a solution. This analysis creates a generic description based on the shape and the material of the object. The result is a tree of descriptions and potential uses. These descriptions and uses may allow you to see beyond functional fixedness.

So for the example above, create a diagram breaking the candle into wax and wick. Then describe the wax and list its potential uses. Then describe the wick, which is a string, and list its uses. Ask questions such as: a wick is made of long interwoven strands that can be used for what? The decomposition allows you to describe the hierarchies of an object's parts. It also allows you to observe and analyze the generic descriptions and shapes of the parts of objects.

The results of the study indicated the following:
- Using GPT allowed subjects to solve significantly more insight problems in which functional fixedness is a limiting factor. The control group solved fewer problems.
- Subjects were able to find and list additional features that would have remained obscure without GPT. This included the key feature that led to an innovative solution.
- Parts, material, shape, and size are our allies in the quest away from functional fixedness. Describing them and finding uses for them is beneficial.
- On the whole, GPT is a great innovation-promoting technique to have in our toolbox for solving obscure, insight-based problems.

“The rules of the universe that we think we know are buried deep in our processes of perception.” ― Gregory Bateson, Mind and Nature: A Necessary Unity

12. Projection Bias

“I prefer to rely on my memory. I have lived with that memory a long time, I am used to it, and if I have rearranged or distorted anything, surely that was done for my own benefit.” ― Leon Festinger

In a research article titled “Projection Bias in Predicting Future Utility” by Loewenstein, O’Donoghue and Rabin (2003) in The Quarterly Journal of Economics, the authors describe this bias: “People exaggerate the degree to which their future tastes will resemble their current tastes.” The authors say that if you want to make an optimal decision, you may have to make a prediction about future states. The problem is that we are not able to accurately predict future states because of several factors. Among them are daily mood fluctuations and the possibility of social influences. We may also misjudge how the environment might change. An example they give is making vacation plans. You are making summer vacation plans and it is still winter. Will you choose overly warm places based on your current weather? We may engage in this bias while making purchases too. Since satisfaction from purchases can fluctuate, we can overvalue or undervalue a product based on the day and our states.
In general, we may over-predict the usefulness of an object. For example, we are more likely to overbuy groceries if we go to the grocery store hungry than if we go after a meal.

What are some ways that you can get beyond this bias? Based on the article:
1. Become aware of this bias through experience, though awareness alone may not be enough to get beyond it. We know that going to the store hungry can make us overbuy. But the current emotional state might just be overpowering, and we still overbuy.
2. The authors say that one decision in a certain state may not be enough to diagnose this bias. We may be able to observe many of our decisions, comparing our plans with our actual behaviors. This may allow us to come to a conclusion about the patterns.
3. Setting up rules such as “never shop when hungry” is a demonstration of awareness of this bias. These rules provide moment-by-moment awareness and a way of dealing with specific situations.

And finally, this quote by Adam Smith, also quoted in the article, is a great description of the bias: “The great source of both the misery and disorders of human life, seems to arise from over-rating the difference between one permanent situation and another. Avarice over-rates the difference between poverty and riches: ambition, that between a private and a public station: vain-glory, that between obscurity and extensive reputation.” —Adam Smith, The Theory of Moral Sentiments

Now over to you. Let me know in the comments below if these biases sound familiar and how you get beyond them.
Toothpaste sales worldwide are a billion-pound industry, with supermarket shelves stacked high with every conceivable option. For a substance that performs such a basic function as cleaning teeth, toothpaste formulations can be incredibly complex and controversial. The modern toothpaste is manufactured and marketed to do so much more than just help to clean plaque off of our teeth. There are so many toothpastes available that claim to have amazing medicinal, healing and cosmetic powers that you could be forgiven for thinking that you may never need to see a dentist again! But are we being blinded by science and clever marketing? Do we really need these toothpastes, or can they do more harm than good?

Do we need toothpaste to clean teeth?

Plaque is actually very soft and easily removed, provided it is not allowed to harden and become tartar. Cleaning teeth regularly and effectively is far more important than what brand of toothpaste you use. The general rule of thumb is to brush teeth twice a day for 2 minutes and floss at least once a day. By following a strict cleaning routine like this, you will go a long way towards achieving great oral health regardless of what toothpaste you use. It is perfectly possible to clean away plaque with nothing but a soft or medium bristle toothbrush. Alternatively, you can easily make your own toothpaste by mixing baking soda with water to create a fine paste and then adding a couple of drops of peppermint oil for taste.

Eating a healthy diet will also do far more than any toothpaste for preserving your teeth and gums. A nutritious diet full of fresh vegetables and fruit is the best way to keep teeth healthy and strong. If you can avoid regularly eating foods and beverages high in sugar, you can effectively starve the harmful bacteria in your mouth that cause decay.

In a perfect world, tooth decay, sensitive teeth and gum disease would be the exception and not the rule. But that is not the world we live in, nor would we want it to be! Here are some of the most common reasons that many people eventually experience poor oral health:
- Most of us like to eat and drink nice things that are often not kind to our teeth. Sweets, cake, soft drinks, energy drinks, and alcohol are just some of the things that can cause havoc with our teeth.
- Many medicines can cause “dry mouth” as a side effect. Saliva is nature’s way of cleansing the mouth and teeth, and without it the mouth can become too acidic, resulting in tooth decay, gum disease, and bad breath.
- Smoking is not only very bad for our general health, it is also very bad for our teeth and gums. It dries out the mouth, causes plaque to build up quicker, releases chemicals and toxins that can be absorbed by the lining of the mouth and exacerbates gum infections.
- Many people find flossing too time-consuming and awkward, and as a result they rarely if ever floss the areas between their teeth. This will eventually lead to tooth decay and gum disease.
- Research has shown that many people overestimate how long they spend brushing their teeth every day, and as a result they don’t spend enough time cleaning every surface on every tooth.
- It is vitally important for everybody of all ages to visit a dentist at least twice a year. The dental team will check for tooth decay, gum disease and any problems with the oral soft tissue. The number of cases of oral cancer continues to rise, with Cancer Research UK predicting a further 33% rise by 2035. Regular routine checks could literally save your life.
Despite this, many people still put their health at risk by not scheduling regular dentist appointments.

What toothpaste options are available?

There are now toothpastes to cater for practically every dental condition and consumer demographic you can think of, and probably some you could never have imagined. Here are some of the most popular:

Sensitive teeth can occur for many reasons, and if you experience pain when eating hot food, cold food or beverages, biting down or eating sweet foods, you should have a conversation with your dentist to find out the cause of the sensitivity. Usual causes of tooth sensitivity are cracked or decayed teeth, thinning or erosion of tooth enamel and receding gums. It is also very common to experience sensitivity after undergoing a teeth whitening procedure that uses hydrogen peroxide or carbamide peroxide. Another reason that teeth can become oversensitive is toothbrush erosion: by brushing too hard or with a bad technique it is possible to wear away the enamel and expose the dentin underneath. For people whose brushing technique is not up to scratch, switching to an electric toothbrush would be a good idea. Electric toothbrushes like the Oral-B Pro 2000 have built-in pressure sensors that alert the user if excessive force is being applied and tooth enamel is in danger of being damaged.

Will a sensitive toothpaste really help reduce teeth sensitivity?

Yes: Choosing a good sensitive toothpaste will be beneficial to anyone who experiences pain because of eroded or thinning enamel. The science behind modern sensitive toothpaste is sound, and brushing with a good brand regularly will definitely help. Sensitive toothpastes like Sensodyne Repair & Protect use effective ingredients like NovaMin to protect the tooth nerve.

Teeth can be discolored for many reasons:
- Eating and drinking highly pigmented foods.
- Genetics; we are all born with different colored teeth.
- Some medicines discolor teeth.

Will whitening toothpaste actually whiten teeth?

No: Whitening toothpastes can be useful for removing surface stains or soft deposits on the surface of the tooth, but they will never actually lighten the shade of the teeth. The Vita Shade Guide is universally used to determine teeth color, and it would be impossible for a toothpaste to lighten the teeth by even one single shade. The only way a toothpaste could lighten a tooth by any noticeable degree would be if it contained a high percentage of hydrogen peroxide or carbamide peroxide, which would make it both inefficient and illegal. Toothpaste like Nuskin’s AP24 Whitening Fluoride toothpaste retails for about 5 times as much as other whitening toothpastes but will still not noticeably lighten the shade of the teeth at all.

Having said that, whitening toothpaste can be useful for people who regularly smoke or eat highly pigmented foods and beverages. The abrasive ingredients will help to scrub off stains and return the teeth to their default color. The strength of the abrasives in toothpaste is rated on the Relative Dentin Abrasivity (RDA) scale, which stretches from 0 to 250. Using a whitening toothpaste like Sensodyne Extra Whitening, which falls in the 70-100 range, would be a good choice. Toothpaste like Colgate Tartar Protection Whitening has a rating of over 165 and should only be used occasionally because it may actually damage the tooth enamel. It is important to point out that it is impossible for tooth enamel to regrow or regenerate parts of the teeth that are already missing.
Having said that, it is possible to remineralize, repair and strengthen teeth that have been subject to minor erosion. There is some exciting science behind some of the latest regenerating toothpaste technologies, with brands like REGENERATE Enamel Science™ using calcium silicate and sodium phosphate to create a crystal structure that emulates natural tooth enamel.

Can regenerating toothpaste remineralize tooth enamel?

Yes: Research does support the manufacturer’s claims and there are real benefits to using a toothpaste like REGENERATE. The downside is that they are expensive and retail for a premium price.

It is widely accepted that fluoride plays an important role in the prevention of tooth decay, and most dentists will strongly recommend brushing with a fluoride-based toothpaste. Even though about 95% of all toothpaste contains some form of fluoride, there is still a significant number of people who are strongly opposed to its use. The reason fluoride has become so controversial is that it can be highly toxic if ingested or absorbed at high enough levels. Excessive exposure to fluoride can have a number of negative effects on the human body and, in the worst case scenario, can result in death.

Will fluoride toothpaste reduce the risk of dental decay?

Yes: It is clear that if fluoride is available to the teeth during remineralization it will help to make them stronger and more resistant to decay. The beneficial effects of fluoride are more noticeable in children and teenagers between the ages of 5 and 16; this is when the teeth are developing and maturing and can gain the maximum benefit from the availability of fluoride.

High Fluoride Toothpaste

For people who are at high risk of developing dental decay, there is the option of using high fluoride toothpaste. Most toothpaste contains between 1350ppm and 1500ppm of fluoride, but there are more powerful options like Colgate Duraphat, which has a massive 5000ppm fluoride content. High fluoride toothpaste will usually only be available by prescription and must be used carefully because of its toxic nature.

Will high fluoride toothpaste reduce the risk of tooth decay?

Yes: But toothpaste that contains between 2800ppm and 5000ppm fluoride would only be used by prescription and under the guidance of a dentist or doctor.

SLS Free Toothpaste

Sodium lauryl sulfate (SLS) has gained a bad reputation in recent years. Viral internet rumors have claimed that SLS is highly toxic and can cause cancer, blindness, hair loss and skin irritation. SLS is used as a surfactant in toothpaste to create the foaming action that we are all used to. Despite the rumors, over 85% of all kinds of toothpaste still use SLS, mainly because it is very effective and cheap to manufacture. Consumers are much savvier regarding the ingredients in personal care products today, so many manufacturers are now producing SLS-free toothpaste for people who are concerned about the side effects.

Children’s Toothpaste and Low Fluoride Toothpaste

Persuading children to clean their teeth regularly can be challenging for parents. In an effort to make the toothbrushing experience more enjoyable for kids, many manufacturers market their children’s toothpaste with sweet tasting flavors and cartoon characters on the packaging. Some companies like Jack and Jill are dedicated to producing toothpaste formulations that are free from any harsh or potentially toxic ingredients such as sodium lauryl sulfate and fluoride.
Parents may prefer to use children’s toothpaste because they find their child responds better to the sweet flavors and colorful tubes. However, it is important to make sure that young children are supervised when using a fluoridated toothpaste, especially one that tastes nice. For people who do not approve of the use of fluoride, there are 100% natural toothpastes that are safe even if swallowed.

Many people are particular about the kinds of synthetic chemicals used in their personal care products. It does make sense to take note of the ingredients in toothpaste because the lining of the mouth is sensitive and delicate and can absorb chemicals into the body. If you are looking for a natural or organic toothpaste, check to see if the company has any organic or ethical certifications. Green People are an excellent example of a company dedicated to producing products that are free from synthetic chemicals. Their range provides adult and children’s toothpastes that are 100% natural.

Animal welfare groups claim that almost 40 thousand animals die needlessly each year during the testing of cosmetic ingredients. This statistic is harrowing for many people, and it is natural to seek out cruelty-free products where no animal has been harmed in their manufacture. However, it can be very difficult to know which brands are truly cruelty-free. To find a toothpaste that is verified cruelty-free, check PETA’s website.

There are definitely toothpastes available that have therapeutic value to the consumer, but there are also some that are pure marketing hype and not worth the money.
What is coronavirus?

COVID-19 (COronaVIrus Disease) is the illness caused by a virus first discovered in late 2019. It is generally referred to as ‘coronavirus’ in the media. The virus that causes it is one type of coronavirus (CoV), part of a large family of viruses causing illnesses that have emerged in the last few decades, such as Middle East Respiratory Syndrome (MERS-CoV) and Severe Acute Respiratory Syndrome (SARS-CoV). COVID-19 affects your lungs and airways and can lead to health complications such as pneumonia (a lung infection that causes inflammation). Coronaviruses are ‘zoonotic’, which means that they can be spread from animals to humans. COVID-19 is caused by what we would call a ‘novel coronavirus’, since it isn’t one that we have seen before and we don’t have existing immunity (our ability to fight off the virus) to it.

Viruses from the coronavirus family are usually transmitted through the spread of aerosols or droplets released when people sneeze or cough. These droplets can then be breathed in, or can land on surfaces we touch and then be transferred to the mouth, nose and eyes by our hands. However, as COVID-19 is caused by a new virus, we don’t know exactly how it is spread from person to person.

The symptoms of coronavirus are:
- a cough
- fever (high temperature)
- shortness of breath

But these symptoms do not necessarily mean you have the illness, as they are similar to those of other viruses like the cold and flu.

The best ways to prevent spreading the virus are to:
- Cover your mouth and nose with a tissue or sneeze/cough into your elbow – bin any tissues straight away.
- Wash your hands with soap and water often – washing them for at least 20 seconds. When soap and water are not available, use alcohol-based hand sanitiser.
- Try to avoid close contact with people who are unwell and limit your own contact with other people if you suspect any of the symptoms.
- Avoid touching your face if your hands are not clean.
- Clean and disinfect any frequently touched objects and surfaces – including your phone!

People who do not have symptoms may still have the virus and be able to pass it on to others. It is important that we all take care to reduce the spread of viruses. Some people with existing health conditions are at greater risk from the virus – this means people who have existing respiratory conditions or ones affecting their immune system.

Worried about coronavirus?

Viruses (such as the common cold, flu and hep C) and bacteria (such as E. coli and Staph) can be spread when people take drugs with unclean or shared equipment. To help prevent the spread, good hygiene practices are essential. The following advice can help reduce the risk of spreading infections all year round but is especially important during this outbreak of COVID-19.

If you’re taking drugs remember to:
- Rest well before and after
- Stay hydrated
- Eat nutritious, well-balanced meals before and after

This can all help keep your immune system healthy. It is also a good idea, especially in the autumn and winter months in Scotland, to take a vitamin D supplement.

All drug use has risks. This page is for information only and does not constitute or replace medical advice. If you have medical concerns about your drug use, please speak to a medical professional. Please bear in mind that now is a particularly risky time to take drugs. Despite the myths, drugs like cocaine and mephedrone are not shown to kill the virus! Cutting down on or avoiding tobacco can also help keep your lungs prepared to fight off any illness.
Your local Stop Smoking Service can offer resources and advice if you want to stop or cut down.

- Wash your hands for at least 20 seconds before and after you handle, prepare or take drugs.
- Clean surfaces with alcohol wipes before preparing drugs.
- Crush substances down as fine as possible before use to reduce soft tissue abrasions (cuts can increase the likelihood of disease transmission).

Noticed changes to the way people take, buy or sell drugs? Has your drug use changed since the outbreak of COVID-19? We’re running a short survey – give us your insights here.

How are you taking drugs?

There are many ways you can reduce the harm from drugs depending on how you take them. Full info can be found on our website.
- Inhaling drugs can damage the mouth, throat and lungs and can cause breathing difficulties, wheezing, chest pain and shortness of breath. Smoking drugs during times of respiratory infection is discouraged as this will most likely make the infection worse and slow down healing.
- If smoking from foil, use clean foil each time.
- Keep all pipes and bongs clean and disinfect them regularly.
- Avoid sharing pipes, joints, cigarettes and vapes.
- Avoid sharing snorting tools – use colour coded straws so you don’t get mixed up.
- Avoid sharing the same card to crush up drugs.
- Avoid using notes or keys, which can harbour viruses and bacteria – use a clean straw, post-it or piece of paper and bin it after use.
- Rinse your nose out with clean water at the end of a session.
- Only use clean needles and supplies. Free, clean needles are available from needle exchange services. If needle exchange services are disrupted, you can buy injecting equipment online.
- Wash injection sites (before and after).
- Avoid sharing equipment (including needles, filters, containers, spoons and water) – use colour coded equipment so you don’t get mixed up.
- If mixed into a drink, avoid sharing bottles/cups. Make sure it is marked so no one accidentally drinks it, and never leave your drink unattended.
- Wash your hands before each ‘dab’.
- Avoid ‘dabbing’ from shared bags of drugs.
- Ensure all equipment is clean and sterile before use – this includes washing your hands.
- Add lube to the outside of the syringe to allow for easier entry and to prevent soft tissue damage.
- Avoid sharing water, mixing cups, syringes, straws, lube launchers and lube.

Visit our website for more harm reduction information.

Interruption to supply

Travel and work restrictions may cause an interruption to the supply of drugs, meaning that the people selling drugs might not have stock for everyone who wants to buy them. This means that the drugs you are buying could be more likely to be cut with something unexpected, or may not contain only or any of the drug you expect. It is important to test the drugs if you can – services like WEDINOS offer free testing to find out what the contents of a drug are. If you don’t have access to a drug testing service, reagent testing kits are available online and can give a greater understanding of what the drug contains, but they may not be suitable for identifying newer compounds or adulterants and can tell you nothing about purity or strength.

If you are concerned about your supply of prescribed medicines, please speak to your medical provider. If you don’t have access to the drugs you usually take, you could experience symptoms of withdrawal. Withdrawal symptoms can include seizures, sickness and diarrhoea, headaches, pains and hallucinations.
The severity of the symptoms will vary depending on the type and amount of drug used but most symptoms will ease after a few weeks. Taking benzos, GHB/GBL or drinking alcohol? If you take these drugs on a regular basis it is important to avoid sudden withdrawal. If you think you will be short in supply try to taper (reducing the amount you take each day) slowly and seek help and advice from your local drug service. If symptoms become too much seek medical help and in an emergency call 999. Drugs like heroin and Valium slow down your central nervous system, reducing your heart rate and breathing. If you are taking these drugs during times of respiratory infection be aware that these drugs could reduce your breathing to a dangerous level. If you require medical assistance be honest about the drugs you are taking. It may be tempting to stock up on drugs to keep in the house if you are worried about running out. If you have stocked up on drugs be careful with the temptation to binge. Worried about your use? Have a look at our check it out tool. Thinking about sex? If you are having sex, then you are likely to be getting up close and personal with someone so the risk of passing on the virus is high. You should still think about safe sex if you do choose to get friendly during this outbreak! We don’t know if COVID-19 can be passed on through sexual fluids but using barrier methods to reduce that risk is always a good idea. Have you been tested for sexually transmitted infections and blood borne viruses recently? If you already have an infection this will reduce your body’s ability to fight off other illnesses. Get tested, get treated! Stock up on condoms, dams and lube – we have loads in the Crew Drop-in.* Lube can prevent tears and abrasions– small cuts and tears can increase the risk of infection. Do you take PreP? Contraceptives? Ensure you have a good supply in case access to services is interrupted. *Please note the Crew Drop-in is currently closed. To find out how you can access sexual health services in Lothian visit www.lothiansexualhealth.scot. Are you a sex worker? - Stock up on safer sex materials – come to Crew! - Be more careful about clients washing their hands or showering before the start of the session or meeting. - Wipe down all surfaces, change sheets and disinfect all sex toys between clients. - Do you have funds prepared if you have to take time off work? - Are there people who can offer you support with essentials (like food, rent/housing) during this time? - Do you know of any emergency funds available to you? Get in touch with Scot-Pep for any other advice around sex work. The Red Umbrella Fund could be a good way to self-organise emergency funding for sex workers who aren’t able to work and the Scottish, Umbrella Lane has also organised a fund to help sex workers during this time. Some services may reduce the hours that they are open or some services may close. Keep an eye on their social media accounts and look for any local or national announcements. If you receive on-going support from an organisation, ask them about what will happen if they close. Will online appointments be available to you? Staying indoors and not seeing your usual social group can feel lonely and frustrating. It could be an idea to stock up on books from the library or try out Borrow Box which allows you to borrow books digitally. Think about having a plan for people you can contact if you are feeling down. 
There are many helplines that you can call to chat including: - Breathing Space | 0800 83 85 87 - Samaritans | 116 123 | email@example.com - Scottish Families Affected by Alcohol and Drugs | 08080 101011 Where to get information It can be stressful to read about COVID-19 in the media. Sometimes the information from some news sources can create feelings of panic. Think about the sources of information that you are reading. A good way to keep up to date with the facts about COVID-19 would be to stick to trusted sources like the ones listed below: You can also visit fullfact.org which is an organisation that fact-checks any claims made in the media – in newspapers, magazines and on TV, including info on COVID-19 The Mental Health Foundation has produced some excellent advice about managing your mental health during this time. Talk about any worries you have with people you trust. We’re all in this together If you come from an area where there is an outbreak or a nearby area, avoid going to parties or clubs at this time. This helps to protect the health of others. We have a responsibility to look after ourselves and each other. Read our info for venues – pubs and clubs.
<urn:uuid:155dc931-6fa6-4a2c-96b4-3b5886385c56>
CC-MAIN-2021-43
https://www.crew.scot/coronavirus-general-hygiene-tips/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585696.21/warc/CC-MAIN-20211023130922-20211023160922-00590.warc.gz
en
0.945869
2,616
4.15625
4
1. Franklin Roosevelt was related to 11 other presidents.

It seems like every day there is a new report tracing the genealogical roots of the American presidents: Abraham Lincoln and George W. Bush were seventh cousins (four times removed), and Jimmy Carter and George Washington were ninth cousins (six times removed). No president, however, can boast as many commander-in-chief connections as Franklin Delano Roosevelt who, by blood or marriage, was related to 11 other former presidents: John Adams, James Madison, John Quincy Adams, Martin Van Buren, William Henry Harrison, Zachary Taylor, Andrew Johnson, Ulysses S. Grant, Benjamin Harrison, William Howard Taft and, of course, Theodore Roosevelt, FDR's fifth cousin. Roosevelt's famous family tree doesn't end at the White House. He was also reportedly related to several other historic figures, including Winston Churchill, Douglas MacArthur and two famed Confederate leaders: Jefferson Davis and Robert E. Lee.

2. Another famous relative? His wife, Eleanor.

Fifth cousins (once removed), Franklin and Eleanor had met briefly as children—although neither remembered the occasion. Though both were Roosevelts, they had grown up in competing New York branches of the family, Franklin from Hyde Park and Eleanor from Oyster Bay on Long Island. A chance meeting in 1902, shortly before Eleanor's debutante ball, reacquainted the pair, who began dating later that year after a New Year's reception at the White House hosted by Eleanor's uncle, President Theodore Roosevelt. Though the outgoing Franklin and introverted Eleanor seemed to have little in common, they had both grown up in households seemingly haunted by illness. Franklin's father James was 54 when his son was born, and chronic heart problems eventually rendered him an invalid until his death when Franklin was a teenager. Eleanor's mother and brother both died early from diphtheria, and her alcoholic father Elliot (Teddy's younger brother) died a few years later, leaving her orphaned at the age of 10. Whether or not it was this sad shared bond that united them, their relationship progressed quickly, and less than a year later they became engaged, when he was 22 and she was 19.

3. When Franklin and Eleanor married, Teddy Roosevelt gave the bride away.

In fact, the wedding date itself was selected with the sitting president in mind: March 17, 1905, when he was already scheduled to be in New York for the St. Patrick's Day parade. Teddy, who by all accounts adored his niece, was thrilled to be there, but perhaps inevitably it was the Rough Rider who garnered almost all the attention. The president's attendance at the ceremony was front-page news (including in the New York Times), leaving Eleanor convinced that more people had come to see her uncle than her and Franklin. TR stole the show again when he met with reporters before leaving the reception. When asked for his thoughts on the Roosevelt-Roosevelt union, he quipped, "It is a good thing to keep the name in the family."

4. Sara Delano Roosevelt was a domineering mother-in-law.

Not everyone was thrilled with the marriage. Franklin's domineering mother Sara had opposed it from the start. She thought the couple was too young to marry, was far from pleased with Eleanor's family history and was unimpressed with the shy, retiring bride-to-be herself. She went so far as to whisk Franklin away on a foreign vacation in the hopes of changing his mind. She lost that battle, but Sara went on to wage familial war with her daughter-in-law for the rest of her life.
Her gift to the newlyweds (a brownstone on Manhattan's Upper East Side) may have seemed a generous gesture, but it came with powerful strings attached: Sara bought the adjoining building for herself, had connecting doors installed on every floor and proceeded to pop over whenever she pleased. She even hired (and fired) Eleanor and Franklin's staff and eventually took control of much of the upbringing for their five children. Eleanor, naturally upset with the situation, found Franklin unsympathetic to her plight. This is not surprising when you realize that Sara had kept her only child on just as tight a leash for his entire life. In fact, until her death in 1941—after FDR was already president—it was Sara who handled the Roosevelt family finances, doling out allowances to Franklin (and Eleanor) as she saw fit.

5. Franklin Roosevelt had a unique connection to the USS Arizona.

In 1913, FDR became Assistant Secretary of the U.S. Navy (a post previously held by cousin Teddy). The following year, he attended a keel-laying ceremony at the Brooklyn Navy Yard for a Pennsylvania-class battleship officially known as BB-39. Fifteen months later, when the ship was launched, it was christened the USS Arizona, after America's newest state. On December 7, 1941, the Arizona was bombed during the attack on Pearl Harbor, and 1,177 of its men went down with the ship. The next day, Roosevelt appeared before Congress asking for a declaration of war against Japan. Few people had noted Roosevelt's connection to the Arizona's beginning and end until staffers at the National Archives discovered photos of Roosevelt's 1914 appearance in 2012. The images show a smiling Roosevelt sauntering down the gangplank, just seven years before he was stricken with polio and permanently paralyzed from the waist down.

6. The 1944 presidential election pitted Franklin Roosevelt against one of his neighbors.

In his campaign for an unprecedented fourth term in office, Roosevelt faced Republican Thomas E. Dewey, a former federal prosecutor and Manhattan District Attorney. Dewey had been born in Michigan, but made his home north of New York City, in a rural part of Dutchess County. In fact, he lived less than 30 miles from the Roosevelt family home at Hyde Park. This marked the last time that both major-party candidates for president lived in the same state until the 2016 election between Hillary Clinton and Donald Trump. Roosevelt and Dewey also shared another bond: both had served as governors of New York, with Dewey elected 10 years after Roosevelt had left the office to assume the presidency.

7. FDR was an avid stamp collector.

Roosevelt's passion for stamps began when he was a small child and continued throughout his life, resulting in a collection of 1.2 million pieces. Wherever he traveled, his stash of albums went with him in a special trunk. While Roosevelt himself admitted that his collection was large but not necessarily selective or valuable, he did have several unique pieces created expressly for him by foreign heads of state. Roosevelt was so enthusiastic about his philatelic pursuit that he met regularly with Postmaster General James A. Farley to go over plans for upcoming releases, even sketching a few designs himself. While president, Roosevelt spent much of his downtime working on his collection, a welcome respite from the difficult burdens of leading the nation through both the Great Depression and World War II. It turns out it made for good PR, too.
The White House released dozens of photos of a tranquil, focused FDR at work, seemingly "putting the world in order." After his death, his collection was sold at auction, attracting significant interest and selling for more than three times its estimate—one collector even paid $500 for a simple catalogue in which Roosevelt had indicated which stamps he already owned. Roosevelt would no doubt be thrilled that more than 80 countries have released stamps bearing his image.

8. Eleanor Roosevelt held the first press conference by a first lady.

In fact, between 1938 and 1945 she held 348 of them. Encouraged by both her husband and good friend Lorena Hickok, an AP reporter, Eleanor became a shrewd manager of her public image, using it to further the cause of women's rights. Female reporters, who were by tradition excluded from press conferences held by her husband, found a welcome audience with the first lady—only women were invited to attend. If a news organization wanted to cover Eleanor, who was now increasingly creating her own headlines, they had to keep women on their payroll, no small comfort in the midst of the Great Depression. Her support of female reporters also led her to create the "Gridiron Widows," a rebuke to Washington's Gridiron Club for their refusal to admit women as members, for which she organized and hosted several high-profile benefits. Her interest piqued by the time she spent with these writers, Eleanor started a side career as a journalist, writing a daily syndicated column (which continued until her death in 1962) and contributing more than 50 articles to some of the nation's leading magazines.

9. Franklin Roosevelt narrowly avoided disaster on his way to the Tehran Conference.

The USS William D. Porter might be the unluckiest ship in U.S. naval history. Commissioned in 1943, its first assignment was as escort for several other vessels, including the battleship USS Iowa, when they crossed the Atlantic that November. Who was on board the Iowa? President Roosevelt, Secretary of State Cordell Hull and several high-ranking military officials, on their way to a top-secret summit in Iran with Joseph Stalin and Winston Churchill. The Porter's bad luck started early, when it rammed into another ship while still in the dock. The next day saw another accident. While performing a routine drill (during which disarmed weapons were to be used), a fully operational depth charge fell off the ship and detonated, sending the rest of the convoy into a near panic, sure that Axis submarines were nearby. But it was the events of the following day, November 14, that sealed the ship's fate. The Porter was once again performing drills, this time using what were supposed to be fake torpedoes. The problem was that the fourth round fired wasn't a fake; it was live, and it was aimed directly at the Iowa. However, the whole convoy was under strict orders to maintain radio silence, so the Porter instead sent light signals to try to warn the Iowa. After several mistaken messages, word finally got through and the Iowa safely maneuvered out of harm's way. While many on board the Iowa were terrified at the prospect of an attack, FDR took it all in stride, ordering his Secret Service agents to wheel him ship-side, so he could watch the events unfold. In the aftermath of the incident, the Porter's entire crew was arrested (a naval first), with most demoted to shore duty. But when one of the men was assigned to hard labor for his role in the torpedo disaster, FDR had the sentence reduced.
10. Amelia Earhart was supposed to teach Eleanor Roosevelt how to fly.

The Roosevelts met famed aviator Amelia Earhart at a White House state dinner in April 1933, and she and the first lady quickly hit it off. Near the end of the night, Amelia offered to take Eleanor on a private flight, that night if she wanted to. Eleanor agreed, and the two women snuck away from the White House (still in evening clothes), commandeered an aircraft and flew from Washington, D.C., to Baltimore. After their nighttime flight, Eleanor got her student's permit, and Earhart promised to give her lessons. When Earhart went missing in 1937, both Roosevelts were shocked by the news. Franklin immediately authorized a massive search effort covering more than 250,000 square miles of the Pacific and costing more than $4 million. However, Earhart was never found, and Eleanor Roosevelt never got her flying lessons.
<urn:uuid:4a10a97e-218d-4a66-a284-c87f49bc5b25>
CC-MAIN-2021-43
https://preview.history.com/news/10-things-you-may-not-know-about-the-roosevelts
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585322.63/warc/CC-MAIN-20211020152307-20211020182307-00430.warc.gz
en
0.982435
2,446
2.90625
3
By the end of this section, you will be able to:
- Define electric current, ampere, and drift velocity.
- Describe the direction of charge flow in conventional current.
- Use drift velocity to calculate current and vice versa.

The information presented in this section supports the following AP® learning objectives and science practices:
- 1.B.1.1 The student is able to make claims about natural phenomena based on conservation of electric charge. (S.P. 6.4)
- 1.B.1.2 The student is able to make predictions, using the conservation of electric charge, about the sign and relative quantity of net charge of objects or systems after various charging processes, including conservation of charge in simple circuits. (S.P. 6.4, 7.2)

Electric current is defined to be the rate at which charge flows. A large current, such as that used to start a truck engine, moves a large amount of charge in a small time, whereas a small current, such as that used to operate a hand-held calculator, moves a small amount of charge over a long period of time. In equation form, electric current is defined to be

$$I = \frac{\Delta Q}{\Delta t},$$

where $\Delta Q$ is the amount of charge passing through a given area in time $\Delta t$. (As in previous chapters, initial time is often taken to be zero, in which case $\Delta t = t$.) (See Figure 20.2.) The SI unit for current is the ampere (A), named for the French physicist André-Marie Ampère (1775–1836). Since $I = \Delta Q / \Delta t$, we see that an ampere is one coulomb per second:

$$1\ \text{A} = 1\ \text{C/s}.$$

Not only are fuses and circuit breakers rated in amperes (or amps), so are many electrical appliances.

Calculating Currents: Current in a Truck Battery and a Handheld Calculator

(a) What is the current involved when a truck battery sets in motion 720 C of charge in 4.00 s while starting an engine? (b) How long does it take 1.00 C of charge to flow through a handheld calculator if a 0.300-mA current is flowing?

We can use the definition of current in the equation $I = \Delta Q / \Delta t$ to find the current in part (a), since charge and time are given. In part (b), we rearrange the definition of current and use the given values of charge and current to find the time required.

Solution for (a)

Entering the given values for charge and time into the definition of current gives

$$I = \frac{\Delta Q}{\Delta t} = \frac{720\ \text{C}}{4.00\ \text{s}} = 180\ \text{A}.$$

Discussion for (a)

This large value for current illustrates the fact that a large charge is moved in a small amount of time. The currents in these "starter motors" are fairly large because large frictional forces need to be overcome when setting something in motion.

Solution for (b)

Solving the relationship $I = \Delta Q / \Delta t$ for time $\Delta t$, and entering the known values for charge and current gives

$$\Delta t = \frac{\Delta Q}{I} = \frac{1.00\ \text{C}}{0.300 \times 10^{-3}\ \text{C/s}} = 3.33 \times 10^{3}\ \text{s}.$$

Discussion for (b)

This time is slightly less than an hour. The small current used by the hand-held calculator takes a much longer time to move a smaller charge than the large current of the truck starter. So why can we operate our calculators only seconds after turning them on? It's because calculators require very little energy. Such small current and energy demands allow handheld calculators to operate from solar cells or to get many hours of use out of small batteries. Remember, calculators do not have moving parts in the same way that a truck engine has with cylinders and pistons, so the technology requires smaller currents.

Figure 20.3 shows a simple circuit and the standard schematic representation of a battery, conducting path, and load (a resistor). Schematics are very useful in visualizing the main features of a circuit. A single schematic can represent a wide variety of situations.
The schematic in Figure 20.3 (b), for example, can represent anything from a truck battery connected to a headlight lighting the street in front of the truck to a small battery connected to a penlight lighting a keyhole in a door. Such schematics are useful because the analysis is the same for a wide variety of situations. We need to understand a few schematics to apply the concepts and analysis to many more situations.

Note that the direction of current in Figure 20.3 is from positive to negative. The direction of conventional current is the direction that positive charge would flow. In a single loop circuit (as shown in Figure 20.3), the value for current at all points of the circuit should be the same if there are no losses. This is because current is the flow of charge and charge is conserved, i.e., the charge flowing out from the battery will be the same as the charge flowing into the battery.

Depending on the situation, positive charges, negative charges, or both may move. In metal wires, for example, current is carried by electrons—that is, negative charges move. In ionic solutions, such as salt water, both positive and negative charges move. This is also true in nerve cells. A Van de Graaff generator used for nuclear research can produce a current of pure positive charges, such as protons. Figure 20.4 illustrates the movement of charged particles that compose a current. The fact that conventional current is taken to be in the direction that positive charge would flow can be traced back to American politician and scientist Benjamin Franklin in the 1700s. He named the type of charge associated with electrons negative, long before they were known to carry current in so many situations. Franklin, in fact, was totally unaware of the small-scale structure of electricity.

It is important to realize that there is an electric field in conductors responsible for producing the current, as illustrated in Figure 20.4. Unlike static electricity, where a conductor in equilibrium cannot have an electric field in it, conductors carrying a current have an electric field and are not in static equilibrium. An electric field is needed to supply energy to move the charges.

Find a straw and little peas that can move freely in the straw. Place the straw flat on a table and fill the straw with peas. When you pop one pea in at one end, a different pea should pop out the other end. This demonstration is an analogy for an electric current. Identify what compares to the electrons and what compares to the supply of energy. What other analogies can you find for an electric current? Note that the flow of peas is based on the peas physically bumping into each other; electrons flow due to mutually repulsive electrostatic forces.

Calculating the Number of Electrons that Move through a Calculator

If the 0.300-mA current through the calculator mentioned in Example 20.1 is carried by electrons, how many electrons per second pass through it?

The current calculated in the previous example was defined for the flow of positive charge. For electrons, the magnitude is the same, but the sign is opposite, $I_{\text{electrons}} = -0.300 \times 10^{-3}\ \text{C/s}$. Since each electron has a charge of $-1.60 \times 10^{-19}\ \text{C}$, we can convert the current in coulombs per second to electrons per second. Starting with the definition of current, we have

$$\frac{\Delta Q}{\Delta t} = -0.300 \times 10^{-3}\ \text{C/s}.$$

We divide this by the charge per electron, so that

$$\frac{-0.300 \times 10^{-3}\ \text{C/s}}{-1.60 \times 10^{-19}\ \text{C/electron}} = 1.88 \times 10^{15}\ \text{electrons/s}.$$

There are so many charged particles moving, even in small currents, that individual charges are not noticed, just as individual water molecules are not noticed in water flow.
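The two worked examples above each come down to one or two lines of arithmetic. The short Python sketch below reproduces the numbers; the function and variable names are illustrative choices, not taken from the textbook.

```python
# Sketch: checking the current, time, and electron-count examples numerically.

ELECTRON_CHARGE = 1.60e-19   # magnitude of the charge on one electron, in coulombs

def current(delta_q_coulombs, delta_t_seconds):
    """Electric current I = delta_Q / delta_t, in amperes."""
    return delta_q_coulombs / delta_t_seconds

# Truck battery: 720 C of charge moved in 4.00 s
print(current(720, 4.00))            # 180.0 A

# Handheld calculator: time for 1.00 C at 0.300 mA, from delta_t = delta_Q / I
delta_t = 1.00 / 0.300e-3
print(delta_t, delta_t / 60)         # about 3333 s, i.e. just under an hour

# Electrons per second carrying the 0.300-mA current
print(0.300e-3 / ELECTRON_CHARGE)    # about 1.88e+15 electrons per second
```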
Even more amazing is that they do not always keep moving forward like soldiers in a parade. Rather, they are like a crowd of people with movement in different directions but a general trend to move forward. There are lots of collisions with atoms in the metal wire and, of course, with other electrons.

Electrical signals are known to move very rapidly. Telephone conversations carried by currents in wires cover large distances without noticeable delays. Lights come on as soon as a switch is flicked. Most electrical signals carried by currents travel at speeds on the order of $10^{8}\ \text{m/s}$, a significant fraction of the speed of light. Interestingly, the individual charges that make up the current move much more slowly on average, typically drifting at speeds on the order of $10^{-4}\ \text{m/s}$. How do we reconcile these two speeds, and what does it tell us about standard conductors?

The high speed of electrical signals results from the fact that the force between charges acts rapidly at a distance. Thus, when a free charge is forced into a wire, as in Figure 20.5, the incoming charge pushes other charges ahead of it, which in turn push on charges farther down the line. The density of charge in a system cannot easily be increased, and so the signal is passed on rapidly. The resulting electrical shock wave moves through the system at nearly the speed of light. To be precise, this rapidly moving signal or shock wave is a rapidly propagating change in electric field.

Good conductors have large numbers of free charges in them. In metals, the free charges are free electrons. Figure 20.6 shows how free electrons move through an ordinary conductor. The distance that an individual electron can move between collisions with atoms or other electrons is quite small. The electron paths thus appear nearly random, like the motion of atoms in a gas. But there is an electric field in the conductor that causes the electrons to drift in the direction shown (opposite to the field, since they are negative). The drift velocity $v_d$ is the average velocity of the free charges. Drift velocity is quite small, since there are so many free charges. If we have an estimate of the density of free electrons in a conductor, we can calculate the drift velocity for a given current. The larger the density, the lower the velocity required for a given current.

Good electrical conductors are often good heat conductors, too. This is because large numbers of free electrons can carry electrical current and can transport thermal energy. The free-electron collisions transfer energy to the atoms of the conductor. The electric field does work in moving the electrons through a distance, but that work does not increase the kinetic energy (nor speed, therefore) of the electrons. The work is transferred to the conductor's atoms, possibly increasing temperature. Thus a continuous power input is required to maintain current. An exception, of course, is found in superconductors, for reasons we shall explore in a later chapter. Superconductors can have a steady current without a continual supply of energy—a great energy savings. In contrast, the supply of energy can be useful, such as in a lightbulb filament. The supply of energy is necessary to increase the temperature of the tungsten filament, so that the filament glows.

Find a lightbulb with a filament. Look carefully at the filament and describe its structure. To what points is the filament connected?
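A quick way to feel the size of the gap between the signal speed and the drift speed is to compare how long each takes to cover one meter of wire. The sketch below uses the order-of-magnitude values quoted above, so the results are rough by construction.

```python
# Sketch: signal speed versus drift speed over one meter of wire.
SIGNAL_SPEED = 1e8     # order of magnitude of the electrical signal speed, m/s
DRIFT_SPEED = 1e-4     # order of magnitude of the electron drift speed, m/s

length = 1.0           # meters of wire

print(length / SIGNAL_SPEED)   # ~1e-8 s: the signal crosses essentially instantly
print(length / DRIFT_SPEED)    # ~1e4 s: an individual electron takes hours to drift that far
```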
We can obtain an expression for the relationship between current and drift velocity by considering the number of free charges in a segment of wire, as illustrated in Figure 20.7. The number of free charges per unit volume is given the symbol $n$ and depends on the material. The shaded segment has a volume $Ax$, so that the number of free charges in it is $nAx$. The charge $\Delta Q$ in this segment is thus $qnAx$, where $q$ is the amount of charge on each carrier. (Recall that for electrons, $q$ is $-1.60 \times 10^{-19}\ \text{C}$.) Current is charge moved per unit time; thus, if all the original charges move out of this segment in time $\Delta t$, the current is

$$I = \frac{\Delta Q}{\Delta t} = \frac{qnAx}{\Delta t}.$$

Note that $x/\Delta t$ is the magnitude of the drift velocity, $v_d$, since the charges move an average distance $x$ in a time $\Delta t$. Rearranging terms gives

$$I = nqAv_d,$$

where $I$ is the current through a wire of cross-sectional area $A$ made of a material with a free charge density $n$. The carriers of the current each have charge $q$ and move with a drift velocity of magnitude $v_d$.

Note that simple drift velocity is not the entire story. The speed of an electron is much greater than its drift velocity. In addition, not all of the electrons in a conductor can move freely, and those that do might move somewhat faster or slower than the drift velocity. So what do we mean by free electrons? Atoms in a metallic conductor are packed in the form of a lattice structure. Some electrons are far enough away from the atomic nuclei that they do not experience the attraction of the nuclei as much as the inner electrons do. These are the free electrons. They are not bound to a single atom but can instead move freely among the atoms in a "sea" of electrons. These free electrons respond by accelerating when an electric field is applied. Of course as they move they collide with the atoms in the lattice and other electrons, generating thermal energy, and the conductor gets warmer. In an insulator, the organization of the atoms and the structure do not allow for such free electrons.

Calculating Drift Velocity in a Common Wire

Calculate the drift velocity of electrons in a 12-gauge copper wire (which has a diameter of 2.053 mm) carrying a 20.0-A current, given that there is one free electron per copper atom. (Household wiring often contains 12-gauge copper wire, and the maximum current allowed in such wire is usually 20 A.) The density of copper is $8.80 \times 10^{3}\ \text{kg/m}^3$.

We can calculate the drift velocity using the equation $I = nqAv_d$. The current $I = 20.0\ \text{A}$ is given, and $q = -1.60 \times 10^{-19}\ \text{C}$ is the charge of an electron. We can calculate the area of a cross-section of the wire using the formula $A = \pi r^2$, where $r$ is one-half the given diameter, 2.053 mm. We are given the density of copper, and the periodic table shows that the atomic mass of copper is 63.54 g/mol. We can use these two quantities along with Avogadro's number, $6.02 \times 10^{23}\ \text{atoms/mol}$, to determine the number of free electrons per cubic meter.

First, calculate the density of free electrons in copper. There is one free electron per copper atom. Therefore, $n$ is the same as the number of copper atoms per $\text{m}^3$. We can now find $n$ as follows:

$$n = \frac{1\ e^-}{\text{atom}} \times \frac{6.02 \times 10^{23}\ \text{atoms}}{\text{mol}} \times \frac{1\ \text{mol}}{63.54\ \text{g}} \times \frac{1000\ \text{g}}{\text{kg}} \times \frac{8.80 \times 10^{3}\ \text{kg}}{\text{m}^3} = 8.34 \times 10^{28}\ e^-/\text{m}^3.$$

The cross-sectional area of the wire is

$$A = \pi r^2 = \pi \left( \frac{2.053 \times 10^{-3}\ \text{m}}{2} \right)^2 = 3.310 \times 10^{-6}\ \text{m}^2.$$

Rearranging to isolate drift velocity gives

$$v_d = \frac{I}{nqA} = \frac{20.0\ \text{A}}{(8.34 \times 10^{28}\ \text{m}^{-3})(-1.60 \times 10^{-19}\ \text{C})(3.310 \times 10^{-6}\ \text{m}^2)} = -4.53 \times 10^{-4}\ \text{m/s}.$$

The minus sign indicates that the negative charges are moving in the direction opposite to conventional current. The small value for drift velocity (on the order of $10^{-4}\ \text{m/s}$) confirms that the signal moves on the order of $10^{12}$ times faster (about $10^{8}\ \text{m/s}$) than the charges that carry it.
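Because the drift-velocity calculation chains several unit conversions, it is easy to slip a factor of ten. The sketch below redoes the calculation numerically with the same input values; the constant names and rounding are illustrative choices, not part of the textbook.

```python
import math

# Sketch: drift velocity in 12-gauge copper wire carrying 20.0 A.
AVOGADRO = 6.02e23          # atoms per mole
Q_ELECTRON = -1.60e-19      # charge per carrier (electron), in coulombs
DENSITY_CU = 8.80e3         # density of copper used in the example, kg/m^3
MOLAR_MASS_CU = 63.54e-3    # atomic mass of copper, kg/mol

current = 20.0              # A
diameter = 2.053e-3         # m

# Free-electron density: one free electron per copper atom
n = (DENSITY_CU / MOLAR_MASS_CU) * AVOGADRO       # ~8.34e28 electrons per m^3

# Cross-sectional area of the wire
area = math.pi * (diameter / 2) ** 2              # ~3.31e-6 m^2

# Drift velocity from I = n * q * A * v_d
v_drift = current / (n * Q_ELECTRON * area)
print(f"{v_drift:.3g} m/s")                       # ~ -4.53e-4 m/s
```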
<urn:uuid:ff7e45b6-f54a-431a-be53-6b4962de026f>
CC-MAIN-2021-43
https://openstax.org/books/college-physics-ap-courses/pages/20-1-current
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588257.34/warc/CC-MAIN-20211028034828-20211028064828-00390.warc.gz
en
0.933789
2,884
4.28125
4
The 6th Amendment Right to Counsel Clause guarantees that if you are ever charged with a crime, you are entitled to the assistance of a lawyer, or "counsel." It is also referred to as the "assistance of counsel clause." The Courts have determined that this clause means you can hire your own attorney if you have the means, or that you must have one appointed by the government and paid for by the government, if you do not have personal means to hire an attorney. The Right to Counsel Clause also gives you the right to represent yourself in court if you want.

The Right to Counsel Clause reads like this: "In all criminal prosecutions, the accused shall enjoy the right... to have the Assistance of Counsel for his defence."

The Right to Counsel Clause is considered by some to be the most important right that is protected by the 6th Amendment. The main purpose of the entire 6th Amendment is to protect the rights of a person who is accused of a crime by the government. If a person is accused, he must be able to defend himself before the court of jurisdiction. Since legal matters are often confusing and foreign to the average person, most people are not prepared to adequately defend themselves in court. Therefore, the Supreme Court decided that people must be allowed to have an experienced attorney to advise them and represent them in legal matters.

In a famous Supreme Court case called Powell vs. Alabama, in 1932, Justice George Sutherland wrote this very meaningful statement about the importance of the Right to Counsel Clause: "The right to be heard would be, in many cases, of little avail if it did not comprehend the right to be heard by counsel. Even the intelligent and educated layman has small and sometimes no skill in the science of law. If charged with crimes, he is incapable, generally, of determining for himself whether the indictment is good or bad. He is unfamiliar with the rules of evidence. Left without the aid of counsel he may be put on trial without a proper charge, and convicted upon incompetent evidence, or evidence irrelevant to the issue or otherwise inadmissible. He lacks both the skill and knowledge adequately to prepare his defense, even though he have a perfect one. He requires the guiding hand of counsel at every step in the proceedings against him. Without it, though he be not guilty, he faces the danger of conviction because he does not know how to establish his innocence."

Most observers believe that the Founding Fathers did not originally intend that the Right to Counsel Clause meant that people who could not afford an attorney must have one appointed for them and paid at government expense. Instead, the Founding Fathers meant that if someone could afford to hire a private attorney, he could not be barred by law from doing so. This is a very different interpretation of the 6th Amendment Right to Counsel Clause than the one generally adhered to by the courts today. It wasn't until 1932 that the Courts began to find a right to have the government appoint a lawyer for a criminal defendant in the Right to Counsel Clause.

The roots of all American laws are found in English law. In England, people who were charged with felonies had no right to hire a private attorney, though it was allowed sometimes in special circumstances. After the Glorious Revolution in 1688, Parliament passed a law allowing people accused of treason the right to be represented by an attorney at trial, but this right did not extend to any other classes of crime.
All the way up until 1836, with the passage of the Prisoners' Counsel Act, this right was denied to people charged with nearly all serious crimes in England. The early American colonies generally brought English law with them, so most colonies also barred serious criminal defendants from obtaining a lawyer. This practice varied from colony to colony, with some colonies appointing lawyers in some circumstances. Sometimes people were represented by an outside attorney, but this was usually done for free by attorneys as an act of good will, for trial experience and for personal publicity. In some cases, these attorneys were paid at the public's expense.

By the time of the Revolutionary War, most of the educated class believed that a person should have the right to hire an outside attorney or even to represent himself at trial, if he chose to do so and had the financial means. Representing oneself at trial was very common in these days. Hiring an outside attorney to represent oneself was rarer and did not become prevalent until the first half of the 1800s.

Many Americans were dissatisfied with the United States Constitution as it was originally written, believing it did not adequately safeguard basic individual rights. Consequently, a movement to add amendments to the Constitution was successful in getting a Bill of Rights added to it. A Bill of Rights is a list of rights the government cannot interfere with. James Madison proposed twenty amendments to the US Constitution on June 8, 1789, during the first session of the First Congress. These amendments were later debated, altered and whittled down to ten amendments, which were ratified by the States. These first ten amendments became law on December 15, 1791, and became known as "The Bill of Rights." You can read more about the purpose of the Bill of Rights here.

People are allowed to hire an attorney if they want one and can afford to do so. If they cannot afford their own attorney, the court must appoint an attorney for them. The court-appointed attorney must be in good standing with his local bar association, the organization that accredits attorneys, must give his undivided loyalty and attention to the defendant and must make a good faith effort to assist the defendant. People do not have the right to choose their own court-appointed attorney. This right is left to the court. If an attorney is court appointed and the defendant has some means, the court may require the defendant to pay a part of the government's costs.

Attorneys must be acquired or appointed in all cases where an incarceration of any length of time is the actual punishment received, no matter how insignificant the crime. If incarceration is a possible punishment, but not the actual punishment given, then an attorney is not required. The Right to Counsel Clause takes effect the moment the government initiates adversarial criminal proceedings, such as when formal charges are filed. The Right to Counsel Clause also applies during any critical part of the criminal trial procedure, such as sentencing, jury selection, participation in a criminal lineup or preliminary hearings. There are some parts of the criminal litigation proceedings that do not require the presence of counsel. For example, scientific analysis of blood samples, hair, fingerprints, clothing and handwriting and voice samples does not require the defendant's attorney to be present at the time of analysis. The use of this evidence in court would require the presence of the defendant's attorney, though.
The Supreme Court has determined that the Right to Counsel Clause guarantees not only the right to have an attorney in a criminal proceeding, but also to have an effective attorney. This doesn't mean that the attorney has to be perfect, but that he must adequately ensure that the defendant receives a fair trial. Courts can replace attorneys if they believe it is in the best interest of the defendant. Some reasons that a court may conclude that an attorney is ineffective include lack of knowledge of judicial and legal proceedings, conflicts of interest that prevent the attorney from being fully loyal to the interests of the defendant or a breakdown of communication between the defendant and his lawyer.

Criminal defendants can waive their right to have an attorney in some cases if they are believed to be competent enough to understand what denying the right to counsel means. If a person is not knowledgeable enough to understand what giving up this right means, for example in the case of a minor or a mentally handicapped person, the court can deny them the right to refuse counsel and can appoint them an attorney anyway. If a person chooses to deny counsel and represent himself in court, he must be informed that defending himself is not merely a matter of explaining what happened. He must also have some knowledge of court procedures, the ability to adequately examine and cross-examine witnesses and communicate his side of the story efficiently and effectively.

When a person takes advantage of the Right to Counsel Clause guarantee to represent himself in court, he is said to be representing himself pro se. Pro se is a Latin term meaning "for self." If a person proceeds pro se in a court case, it is usually because either he is a lawyer himself, he believes he can adequately navigate the court system and represent himself well, or because he is for some reason unable to obtain a lawyer. People rarely proceed in a court case pro se because they cannot afford to hire an attorney, since most criminal cases allow a court-appointed attorney.

It is generally understood that the Founding Fathers intended this clause to mean that if a person wanted to hire an outside attorney and they were able to afford it, they were to be allowed to do so. The Founding Fathers did not necessarily intend that the right meant that anyone who could not afford an attorney must be given one at the government's expense. In colonial days, it was common for people to represent themselves in court and this was the understanding that the Founders had of court procedures. When they passed the 6th Amendment and the Right to Counsel Clause, they merely meant to ensure the right of people to hire an outside attorney if they chose to do so and could afford it. This understanding was generally held until 1932 in a case called Powell vs. Alabama, in which a right to counsel was determined to exist in the 6th Amendment whether or not one could afford it. Through a series of rulings, the right became more and more established in American law, until today, when it is considered to be a universal right anytime someone faces serious criminal charges.

As with all of the Amendments in the Bill of Rights, the 6th Amendment applied originally to the Federal government only and not to the state governments. After the Civil War and the addition of the 14th Amendment to the Constitution, the Supreme Court gradually applied all of the provisions of the Bill of Rights, including the Right to Counsel Clause, against the states as well.
The Court did this through its interpretation of the 14th Amendment's Due Process Clause, which says that the states must give equal rights to all people.
<urn:uuid:a5ea7312-74bb-4225-8cc6-66b7994b2462>
CC-MAIN-2021-43
https://www.revolutionary-war-and-beyond.com/right-to-counsel-clause.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00030.warc.gz
en
0.973775
2,286
3.046875
3
Advertising, television, film and video, entertainment, game design, architecture, education, and businesses, including law firms and insurance companies, are among the industries that make use of design, illustration, and modeling skills developed by graduates of the animation programs. Entry-level opportunities such as storyboard artist, character designer, special effects artist, background painter, clean-up artist, animator, modeler, and video post-production artist are at the forefront of a field that is repackaging information in creative new ways. The animation professional is a skilled and specialized visual communicator who combines individual artistic talent with technological expertise to create impressions in a moving-image format.

Students in the Animation Art & Design program all begin with a foundation in drawing, color design, video production, and computer applications. From this foundation, students develop advanced skills in various aspects of computer graphics and animation. Students learn to use the tools of the animation profession, ranging from computer operating systems to three-dimensional modeling and desktop video production. In addition to software applications, equipment also includes scanners, printers, video, audio, and classroom presentation equipment. These tools enhance students' flexibility and creativity, and enable them to produce an individualized digital portfolio that demonstrates their practical and technical abilities to potential employers. The Animation Art & Design program helps students attain a foundation in animation art and design. The program also provides a hands-on approach to education that develops students' strengths in art, computer animation operations, model building, storyboarding, character and object development, and 2-D and 3-D animation.

The Digital Film & Video Program provides an intensive study of digital production focusing on digital film, corporate and commercial video production. New tools for content creation are continually rising on the digital landscape. Today's content developer must be able to navigate this world with confidence. This program will provide the student with the skills and organizational thinking necessary for a safe, creative, and productive journey. Expanding digital markets have presented, and will continue to present, new challenges for the workforce. With this in mind, the Digital Film & Video program will offer the student an ever-expanding curriculum to meet the needs of industry, while creating an environment conducive to helping students grow intellectually and creatively to meet the demands of tomorrow's marketplace. Students will be prepared for entry-level positions, depending on their motivation and skill level, in a variety of settings including production houses, film sets, film and documentary companies, television stations, advertising agencies, and corporate video production facilities. This program is best suited for students who are highly motivated self-starters who want to learn about digital video technology.

The Game Art & Design program concentrates on the artistic side of games, not computer programming. This unique program is the first step toward becoming an artist and designer in the multi-billion-dollar game design industry.
Students will strengthen their basic art and design skills, then learn how to design game play and backgrounds, create characters and their environments, and apply knowledge of video and computer games to evaluate game products so as to plan game environments and determine attributes for game characters. Graduates will have the training and skills necessary to compete for entry-level positions in the game industry, such as game-play tester, 2D conceptual artist, 3D character builder, 3D object modeler, interactivity designer and background artist.

The Visual Effects program trains students in two major areas: motion graphics and digital compositing. These interrelated fields deal with design, layering and movement of digital elements and imagery. Motion graphics is graphic design for broadcast and film, requiring additional skills in television technology, audio, video, animation and experimental graphics. A motion graphics specialist makes type, colours and images move, to communicate, educate, entertain, or build brand value. Examples of motion graphics work include film credits and television network identifiers, ranging from the CBS "eye" and the NBC peacock to the complex moving visuals that precede news or sports broadcast specials. Digital compositing uses computer software to assemble various component images into a single integrated, believable scene. The components that are digitally "layered" could be live action shots, digital animations or still images; combining them requires expertise in colour and lighting adjustment, motion tracking and other related skills. Examples of digital compositing range from broadcast post-production to feature film visual effects, where imaginary animated elements are combined seamlessly with real world shots. As technology and software are constantly evolving, students will be trained in diagnostic and problem-solving techniques designed to orient them quickly to unfamiliar software environments and solve common technical problems. Finally, students will learn how to communicate an idea or tell a story effectively, as well as how to work in a collaborative environment.

The Visual & Game Programming program helps to create game programmers. Game programmers must not only have artistic talent and abilities but, more importantly, be well-versed in the technical aspects of the game, and thus be capable of comprehending the intent of the artistic creator and the technical needs and challenges involved in achieving the intended results of the game designers. With that unique understanding, the game programmer can customize the programming tools in a computer software application to best meet the needs of an individual game. An intensely hands-on program that combines an introduction to animation skills with technical programming skills, the Visual & Game Programming program focuses on the student's ability to create and modify programs/scripts for game levels. Students will be introduced to the principles of programming, which enables them to enter into the world of shading development, graphic dynamics, and pipeline streamlining. They will learn programming tools such as Perl, C++, C-shell, MEL scripting, MaxScript, DirectX & OpenGL. Students in this program will become very familiar with different operating systems while focusing on Unix-type platforms.

The program includes all course work in the Professional Recording Arts diploma program with additional academic and project requirements and higher expectations regarding academic achievement.
Extra courses include Media Studies and Technology, and Directed Studies. The program requires the completion of a variety of written papers through Directed Studies courses. In addition, students are required to complete a major collaborative project.

Surround sound, interactive CDs and DVDs: audio has come a long way since the early days of vinyl. For individuals interested in professional engineering in the digital age, or just learning how to work with and create sound, the Independent Recording Arts Program at The Art Institute of Vancouver – Burnaby campus is the perfect educational vehicle. Students train in digital and analogue recording studios using linear and non-linear recording technologies. Curriculum includes topics such as professional session engineering, microphone techniques, outboard equipment, MIDI, signal flow, critical listening and audio/acoustic principles. Instruction includes linear and non-linear digital audio theory, surround sound, system integration and synchronization (e.g., analogue, digital, video, machine control, automation) and advanced recording techniques. Practice and integration are emphasized along with technical expertise, production and project management, and problem-solving and troubleshooting skills.

The Fashion Design and Merchandising program offers the best of both worlds – the ability to transform design ideas into garments and accessories as well as knowledge of the business side of the fashion industry. In the design segment of the program, students are introduced to basic skills of construction in sewing, tailoring, flat pattern drafting, and draping to provide a solid foundation in the fundamentals of apparel engineering. In the merchandising segment, students learn to use textiles, colours and design to create visual merchandising campaigns. Business courses are added to teach students how to develop, analyze and implement effective sales strategies. The faculty nurtures creativity and teaches hands-on skills using traditional tools as well as industrial equipment similar to that found in the fashion design field. The combination of professional marketing skills and technical knowledge helps students prepare for successful entry into the industry as junior designers, pattern graders, management trainees, visual merchandisers, and assistant merchandise buyers.

Fashion retail and merchandising is a thriving industry worth billions of dollars worldwide. Globalization is creating a demand for fashion marketers and merchandisers with sensitivity to other languages, cultures and tastes. The industry is increasingly requiring that professionals possess a basic understanding of the elements of international business. To operate within the global market, professionals need a skill-set that is flexible and includes training specific to marketing and visual merchandising design basics, quantitative skills, a keen sales orientation, and a sense of creativity.

The Fashion & Retail Management diploma program at The Art Institute of Vancouver blends individual creativity with a keen sales orientation. Marketing, visual merchandising, manufacturing, buying and merchandising, retail management, publicity, and fashion publishing: these are just a handful of the entry-level positions that may await graduates. In the first quarter of the program, students are introduced to foundation skills such as colour theory, fashion sketching, costume history, digital imaging and introductory retail skills.
In the second and third quarters, they will move on to topics such as concepts and trends in apparel, merchandise management, textiles and fabrics, advertising and marketing, elements of retail operations and technology, manufacturing and retail mathematics. By the fourth and fifth quarters, students are ready to tackle courses in consumer behaviour, business ownership, media buying, human resources, accounting, store planning and lease management. Finally, in the last quarter they will concentrate on developing a portfolio, securing an internship, and exploring topics in e-commerce and web marketing. Students in the Fashion & Retail Management program will be taught to cultivate the creative talents that bring their ideas to life, and will acquire the business skills needed to take them to market. Entry-level jobs vary widely, from retail store manager, department retail sales professional, visual merchandiser and sales supervisor to merchandising manager.

The 9-month Residential Design diploma program provides a hands-on approach to education that develops students' strengths in the design of three-dimensional space. Students will learn the basics of residential design and space planning as well as the use of fixtures and furnishings. In addition to these design elements, students will also focus on the communications skills and professionalism that are necessary to succeed in this field. The Residential Design diploma program was born out of feedback from industry professionals who indicated a need for graduates with the skills taught in this course. Residential design businesses are growing concurrently with national prosperity. There are many types of specialty stores that employ residential designers in positions that typically require some skills in interior design. This diploma provides the hands-on instruction needed to prepare students for the growing entry-level opportunities available. This diploma program is a prerequisite for the Advanced Diploma program in Interior Design.

A graduate of the Residential Design Diploma Program will be prepared for entry-level positions such as: Facilities Planner; Visual Merchandiser; Residential Planner; Draftsperson; Manufacturer's Representative; Design Consultant; and Showroom Coordinator.
<urn:uuid:edb66ee9-7852-45e5-b396-075fec0a5f3c>
CC-MAIN-2021-43
https://wherecreativitygoestoschool.ca/author/admin/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00029.warc.gz
en
0.926702
2,224
2.796875
3