Andreas T. Zanker, Greek and Latin Expressions of Meaning: The Classical Origins of a Modern Metaphor. Zetemata, 151. München: Verlag C. H. Beck, 2016. Pp. 274. ISBN 9783406688454. €88.00 (pb).
The book contains nine chapters. The first three (pp. 22-89) deal with the polysemy of expressions of meaning, which are classified in complex groups such as desiring and wanting (e.g. βούλεσθαι and velle), thinking (e.g. διανοεῖν and sentire), speaking (e.g. λέγειν and dicere), passive constructions (e.g. τὰ ὑποτεταγμένα and intellectus), equivalence (e.g. δύνασθαι and valere), and finally showing and sign-giving (e.g. σημαίνειν and significare). This classification reveals the broad spectrum of the semantic field of meaning. One might question, however, the category of passive constructions, since all of Zanker's other groups have a semantic basis whilst this one is built on a syntactic one, and the same lemmata treated elsewhere are considered here, but in their passive form. Thus the argument that the verbs for "signifying" occur in both the active and the passive voice is surely important, but the classification is not coherent and, at the very least, questionable when presented in this way.
Chapter four (pp. 90-103) turns to the second important argument developed in the work: that the polysemy of expressions of meaning originally issued from metaphorical and metonymical transference. Using various (ancient and modern) theories of metaphor Zanker argues that the terminology of meaning established itself in Greek and Latin (Latin being influenced by Greek) in three ways: through new coinages, borrowing from a foreign language, and metaphor and metonymy. Further, the usages of all semantic fields discussed above, with the exception of those of equivalence (εἶναι, δύνασθαι etc.), were transferred from animate to inanimate subjects.
Chapter five (pp. 104-122) goes back ad fontes and analyzes Archaic poetry. Zanker argues that the vocabulary of meaning is present neither in Homer nor in Hesiod, but the vocabulary of interpretation of signs and dreams is well developed. Thus the verb σημαίνειν is used in the sense of "making a sign", "commanding", "marking"; the verb ὑποκρίνεσθαι means "to interpret, to explain"; and emphatic affirmative expressions such as ἦ μάλα "surely" stand for "it means" (Il. 18, 6-15, Od. 19, 474-475 Eurycleia saying ἦ μάλ' Ὀδυσσεύς ἐσσι). Further the genitive absolute can stand in for A means B in such expressions as "A happening…B will happen" (Hes. Op. 383-384). Approaches to the concept of signifying are to be seen as well in epic allegorical interpretations: in order to make an allegorical interpretation it is necessary to separate the apparent meaning of the words from what the author really meant. Thus, according to Zanker, the historical period (specifically, the 7th – 5th c. BCE) provides a fitting context for the creation of the subsequently attested vocabulary of meaning, vocabulary being transferred in order to investigate a new set of questions. Writing and reading culture and the new technology of tablets find their reflection in the development of the Greek terminology of meaning (established by the beginnings of the 4th c. BCE), which facilitated a different type of interaction with language than had been possible before.
In chapter six (pp. 123-145) Zanker further develops his main argument on the transference from animate to inanimate subjects and examines the metaphor "text as person" in Greek and Latin texts from the earliest inscriptions through to late antiquity. He shows that text is depicted as a human being from the Archaic period onwards, and that books and poems were regularly treated in Greek and Roman culture as sentient entities. Various kinds of texts are examined here, examples being dedicatory epigrams on inscribed objects speaking for themselves, "X set me up" (μ' ἀνέθεκε and med feced), in both the Greek and Italian Archaic periods, as well as Hellenistic and Roman epigrams. Further personifications of text are considered both in the metaphor "books as the author's children" and in direct addresses to a written medium (such as Cat. Carm. 35, 2 velim Caecilio, papyre, dicas). This treatment of text as a human being led to the establishment of fixed terminology, transferred from parts of the human body to stand in for parts of texts (e.g. κῶλα and membra, κεφαλή and caput) and registered in prose and technical writings.
If metaphor is a substitution (child for book), then the other cognitive phenomenon, metonymy, represents an association, and thus the next chapter (pp. 146-163) concerns the metonymy of author for text. Roman authors made great use of metonymy and it was common to refer to a piece of literature by the name of its author. Thus verbs of reading take both texts and authors as their direct objects, and the trope further enables an elision of the distinction between human beings and texts (such as legar in Horace and Ovid). Zanker argues that the metonymy was available in Greek from Plato onwards, and gives as an example Socrates noting that Phaedrus is carrying a speech by Lysias under his cloak (Pl. Phdr. 228d-e). At this point the objection that this metonymy is present in Greek thought during the course of the 5th c. BCE seems appropriate, as there are a number of earlier parallels to this process in comedy. In mocking his contemporaries and explicitly naming them, is Aristophanes referring to their personalities or to their work? Whereas one might dispute whether Aeschylus and Euripides in the Frogs should be considered pre-metonymies for their plays, the tragic poet Theognis was reproached for his 'frigid' style and thus himself called ψυχρός (Ar. Th. 170, cf. Ach. 138-140; on the 'frigidity' of style see Arist. Rhet. 1405b35-1406b14), and the dithyrambist Cinesias was mocked as a bird flying up to extract the preludes from the clouds for his poetry lacking substance (Ar. Av. 1375-1391). As far as the identification of texts with animate beings is concerned, the satyroi used for satyr drama in Greek (cf. Ar. Th. 157) constitutes a further example of metonymy from the 5th c. BCE.
Chapter eight (pp. 164-190) deals with (meta-)metaphor and with the spatial metaphorical vocabulary of metaphor (μεταφορά and translatio). According to Zanker, metaphor itself makes a significant contribution to the creation of the classical vocabulary for meaning. Thus for example spatial metaphor contributed to the creation of a vocabulary for a number of grammatical notions. Examples include circumitio "circumlocution", compound names (in other words two elements placed together in order to make a new whole, τὸ σύνθετον "placed together"), and ὁρίζειν and finire "to bound, to enclose" (which served as verbs for defining a word). The ancient metaphors for metaphor exemplified three functions of the trope mentioned by the ancient theorists: a) its ability to provide a name for a hitherto unnamed thing, b) the way in which it made vivid otherwise abstract concepts, c) its role in adorning prose and poetry. In this context Zanker cites ancient and modern theories of metaphor. Among the ancients, Aristotle, Cicero and Horace reveal a consciousness of the metaphorical quality of the vocabulary for metaphor. Among the moderns, basic works on metaphor such as I.A. Richards (1936), Paul Ricoeur (1977), Max Black (1954-55), George Lakoff and Mark Johnson (1980), Gilles Fauconnier and Mark Turner (2003) are referred to. Zanker draws the conclusion that there is hardly any break between ancient and modern terminologies for metaphor.
The last chapter (pp. 191-204) demonstrates the importance of ancient approaches to metaphor for an analysis of modern criticism. The Intentional Fallacy theorists (W.K. Wimsatt and Monroe Beardsley, 1946), with their position that the mental life and intentions of the author are of minor interest, take us back to ancient personifications of the text and to the ancient penchant for detaching an author's work from the author (Plato's Phaedrus and Protagoras deal with this issue par excellence). In the mid 20th c. text and author become fused again, with the result that it is difficult to determine to what the terms "text" and "author" refer. Critics (for example Hans-Georg Gadamer in Wahrheit und Methode (1960) and Roland Barthes in Le plaisir du texte (1973)) frequently transferred verbs and expressions of intentionality from the author to his text.
Finally, it is worth noting that the book contains an important appendix with a useful analysis of the terminology of "signifying" in Herodotus (pp. 212-224).
Though there is much to praise in this work, two crucial methodological problems should be mentioned. First, a monograph dealing with the development of vocabulary should not simply trust the LSJ and the OLD. The reader expects philological analysis of real textual parallels; quotation from the LSJ lemma (or reference to some secondary literature) is disappointing. Thus in a very important argument, the discussion of the verbs of reading in Greek, Zanker lists (only) six verbs, neither explaining his choice nor clarifying which of these occur earlier or later, which are more or less common, or which are used in which contexts or registers (pp. 146-148). The only reference point provided is the LSJ. Information presented in this way (with additional oversights such as the missed ἀνανέμειν, attested in the early 5th c. BCE in Epich. fr. 232 PCG, SEG 35, 1009 and GVI 1210, 2) is not particularly useful, unless the reader is sufficiently interested in the issue to complete the philological work from the beginning for himself. Other examples for both Greek and Latin might be mentioned (e.g. pp. 63-64, 69, 88).
A second methodological problem concerns the work's ambitious task of charting the development of the ancient vocabulary of meaning from Homer to late antiquity. Zanker even attempts an analysis of the vocabulary of metaphor in a number of modern languages. The author posits 'one' coherent ancient (or even diachronically 'European'!) way of thinking and of dealing with language from Archaic through Hellenistic, Roman and modern times. On both chronological and content levels this approach is problematic. To give only one example, important evidence from Plutarch on the phenomenon of metonymy (Plut. Mor. 379a) might more valuably be quoted alongside Quintilian's discussion of metonymy (Quint. 8, 6, 26) than alongside Plato's Phaedrus, since contemporaries such as Plutarch and Quintilian probably reflected and contributed to the same grammatical and rhetorical discourses (pp. 150-152).
Notwithstanding these criticisms, Zanker achieves his aims: his work surveys the archaeology of the vocabulary of meaning in Greek and Latin terminology and its influence on modern languages, and, secondly, explores that vocabulary's metaphorical and metonymical origins. The work makes a substantial and invaluable contribution to our understanding of the cognitive processes by which metaphor and metonymy supplied Greek and Roman scholarly language with new terminologies.
1. See, on Greek theories of meaning, Giovanni Manetti's Theories of the Sign in Classical Antiquity (1993) and Knowledge through Signs: Ancient Semiotic Theories and Practices (1996), Max Hecht's Die griechische Bedeutungslehre: eine Aufgabe der klassischen Philologie (1888), and George Kennedy's chapter "Language and meaning in archaic and classical Greece" in The Cambridge History of Literary Criticism, vol. 1 (1989); for Latin concepts of meaning, see the volume edited by Marc Baratin and Claude Moussy, Conceptions latines du sens et de la signification (1999). In addition, there are a few works on individual ancient authors' approaches to meaning. Zanker's bibliography contains all the relevant items.
Source: http://www.bmcreview.org/2017/05/20170514.html
A University of Houston (UH) scientist and his team are working to develop the next generation of prostate cancer therapies, which are targeted at metabolism.
With approximately one out of six American men being diagnosed and nearly a quarter of a million new cases expected this year, prostate cancer is the most common malignancy among men in the U.S. Since prostate cancer relies on androgens for growth and survival, androgen ablation therapies are the standard of care for late-stage disease. While patients initially respond favorably to this course of treatment, most experience a relapse within two years, at which time limited treatment options exist.
At this stage, known as castration-resistant prostate cancer, androgen-deprivation therapies are no longer effective, but interestingly, androgen receptor signaling is still active and plays a large role in the progression of the cancer. Because of this, both androgen receptors and the processes downstream of the receptor remain viable targets for therapeutic intervention. Unfortunately, it is unclear which specific downstream processes actually drive the disease and, therefore, what should be targeted.
Daniel Frigo, an assistant professor of biology and biochemistry with the UH Center for Nuclear Receptors and Cell Signaling (CNRCS), has set his sights on a particular cascade of biochemical reactions inside the cell. Focusing specifically on an enzyme known as AMPK, which is considered a master regulator of metabolism, Frigo and his team have demonstrated that androgens have the capacity to take control of this enzyme’s molecular signals.
“The androgen signaling cascade is important for understanding early and late-stage prostate cancer progression,” Frigo said. “We found that when androgens activated this signaling pathway, it hijacked normal conditions, allowing the tumor to use diverse nutrients to the detriment of the patient. These results emphasize the potential utility of developing metabolic-targeted therapies directed toward this signaling cascade for the treatment of prostate cancer, and we look forward to exploring this and other metabolic pathways further in order to develop the next generation of cancer therapies.”
In their studies, Frigo’s team showed that prostate cancer cells respond to androgens not only by increasing the breakdown of sugars, a process known as glycolysis that is commonly seen in many cancers, but also escalating the metabolism of fats. While much of the research on cancer metabolism has historically focused on glycolysis, the researchers say it’s now becoming apparent that not all cancers depend solely on sugars.
Their findings further indicate that the metabolic changes brought about by the AMPK enzyme result in distinct growth advantages to prostate cancer cells. They say, however, that our understanding of how androgen receptor signaling impacts cellular metabolism and what role this has in disease progression remains incomplete.
The Frigo lab is one of several within the CNRCS concentrated on the role of nuclear receptors in cancer prevention and treatment, and his team has long studied the androgen receptor, which turns on or off various signaling pathways. Frigo believes these pathways hold the potential for better cancer treatments. By targeting these underexplored metabolic pathways for the development of novel therapeutics, Frigo ultimately aims to unlock more effective and less harmful cancer treatment alternatives.
With funding from the Department of Defense, National Institutes of Health, Texas Emerging Technology Fund and Golfers Against Cancer, Frigo's latest research appears in Nature's Oncogene. One of the world's leading cancer journals, Oncogene covers all aspects of the structure and function of genes that have the potential to cause cancer and are often mutated or expressed at high levels in tumor cells.
Source: https://www.uh.edu/nsm/biology-biochemistry/news-events/stories/2014/0303-targeting-metabolism.php
While biographies typically follow a chronological order, some do not. The first major type of biography details the life of a single person. These are written by a third party and are generally intended for a broad audience. Popular examples include the lives of Shakespeare and the Beatles, as well as Steve Jobs and The Immortal Life of Henrietta Lacks. These, too, are considered "biographies" of great men and women.
The second type of biography involves re-creating the subject's life in the present. Students must try to reconstruct the subject's environment, explain how he or she operated within it, and answer the reader's questions about the person's life. This type of biography requires a high degree of research and creativity. A student should choose a historical figure to study and use as a model. For example, a historical figure might be renowned, but a biographical account of an artist or a scientist can be far more precise.
When composing a biography, students need to be able to find information about the subject's life. They must gather details about the person's childhood, family, legacy, and achievements. In addition, they should infuse their own personality into the writing by including dialogue and quotes. Throughout the research process, students should look for hidden gems and open questions to fuel the writing. After a thorough review of the sources, they can present their timelines to the class.
To start a biography, students should make a timeline of the subject's life. A family member can be a good practice subject. Students should research key events in the person's life and arrange them on a timeline; afterwards, they can add captions to photos and present the timeline to the class. This is an integral part of the planning process, and the timeline should be completed within a week.
After making a timeline, students should review their notes and timeline, identifying key events and activities that occurred during the person's life. Depending on the subject, these events can be grouped into categories, each labeled with a single word or thematic concept. A thematic statement will help students bring more meaning to the biography. Once they have sorted out the key events and themes, they can begin writing a narrative.
Once students have picked a subject, they should begin by creating a timeline, using a family member as a practice subject if needed. They should research key events in the person's life, arrange them on the timeline, and include pictures with captions to make the timeline more engaging and informative. After completing the timeline, students should present it to the class to provide context. A good biography is a well-written, well-researched paper that reflects the subject's personality and way of life.
The first step in composing a biography is to learn more about the subject. A biography should be detailed and accurate, containing information about the person's childhood, family, and legacy. Including dialogue and quotes can enrich a biographical essay, and new questions that arise along the way can help students shape the direction of their writing. So, the next time you're looking for a subject to write about, do a little research first and then choose a topic.
The next step is to create an outline. Once you have a rough outline, build out the narrative: include a timeline and notes for each event. When you've outlined the main events, write a first draft of the biography. Expect to revise the essay several times; this will help you produce a biography that is engaging and informative. It's important to provide as much information about the person as you can.
Thematic ideas can help students write a biography. Usually, themes suggest themselves during the research process, and you can use them to structure the biography. Remember to use both print and online sources, and if you can't find anything written about the person, try to interview people who were close to him or her; their stories can supply an outline of the subject's life. This will give you a better understanding of the subject and help you write an accurate and engaging biography.
The second step in the writing process is to pick a subject that interests students. This can be anything from a famous historical figure to an ordinary person, ideally someone they have read about before. Using the sources, the writer should also come to understand the subject's life and the times in which he or she lived, since a life must be interpreted in the context of its era.
During the research stage, the student should choose a subject that fascinates him or her, perhaps based on the themes he or she found important, and should note down the details from the sources. The final draft should be as detailed as possible. When the draft is ready, students should annotate it, marking sections with a single word; thematic statements help students add deeper meaning to the biography.
While a biographer's goal is to write the truth, there are cases where the writer invents facts or misses important details. Ideally, a biographer should be able to write in the first person and objectively, but this is difficult in many cases; sometimes the author simply does not have the best information on the subject. The student should take care not to repeat such errors when evaluating a biography.
It is important to remember that the subject of a biography is supposed to be true. However, it can be difficult to discern the truth in a work of art. The writer needs to examine the subject's point of view carefully and make sure the facts are true; otherwise, the biography becomes a work of fiction. If the author is not a historian, the writing risks being amateurish. For a biographer, the subject's life is the focus of the book.
Writing a biography is a complex process. The student must be able to rewrite a paragraph using his or her own viewpoint and commentary, turning a detached account into an engaging story. While it is necessary to research the subject of a biography, students should choose a subject that is interesting to them. In addition, students need to be able to judge the availability of information online and evaluate the credibility of online sources.
Source: http://educatorswitnessprotectionprogram.com/2022/03/28/the-shocking-revelation-of-bio/
The 10 cyber trends Australian businesses must consider in 2019
The ever evolving cyber risk landscape has led to an increasing awareness at senior management and board level of cyber exposures, and the need for it to be treated not only as a technology exposure but an overall enterprise risk.
The 2019 Global Risks Report confirmed that technological instabilities remain an elevated concern for businesses across the globe. Utilising data collated from 1,000 multi-stakeholder members who responded to the World Economic Forum ‘Global Risks Perceptions Survey’, this year’s Global Risks Report showed that “massive data fraud and theft” was ranked the number four global risk by likelihood over a 10-year horizon, with “cyber-attacks” at number five.
With this as our starting point, we’re pleased to be able to share with you our list of Top 10 cyber considerations and predictions for Australian businesses in 2019.
1. Creating a strong cyber security culture
A strong cyber security culture should not only focus on the training of employees to build awareness of common forms of threats (phishing emails, social engineering scams) but should also empower individuals to understand their responsibility and the critical role they play in the success of their company’s cyber risk management framework.
2. Cyber Coverage Under Traditional Insurance Policies
There is growing attention from insurers regarding the provision of unintended ‘silent cyber’ coverage within non-cyber insurance policies. We are at a point in time where these policy wordings are being closely reviewed with a view to adding affirmative / non-affirmative language that clarifies instances where cover will / won’t be provided for a cyber event.
3. Data Encryption Legislation
In early December 2018, the Australian Senate passed new laws aimed at providing law enforcement agencies access to encrypted communications.
In the event of a cyber-attack that causes widespread disruption to business networks, software applications that are traditionally intended for personal use can become invaluable. This is certainly what one multinational law firm found when impacted by the NotPetya malware in 2017, relying on messaging application WhatsApp for several weeks to keep their business running. What happens however when the encrypted communications within these messaging applications lose their encryption protections?
4. Contractual Requirements to Purchase Cyber Insurance
There has been notable growth in the caution displayed by companies on how their business partners and suppliers handle sensitive and confidential information. Organisations, especially government associated entities are seeking to include in their contracts a requirement for a contractor or supplier to hold cyber risk and data breach related insurance.
5. Cyber and Business Interruption
All types of organisations, even if they do not hold large volumes of sensitive or valuable data, need to consider and account for potential risk associated with a cyber event rendering operating systems ineffective or inaccessible.
6. Blockchain
Within the insurance industry, insurers have traditionally been reluctant to provide coverage to this newer risk class. As the use of Blockchain and digital asset currencies grows, and governments establish protocols for regulating their use, we anticipate the insurance market will rapidly evolve to provide alternate risk transfer solutions to the corporate world.
7. IoT Devices Increase the Risk of Security Incidents
The vulnerabilities that exist in IoT devices are substantial, and there is certainly heightened awareness that cyber criminals will continue to target IoT devices as a gateway to larger computer networks. Despite these exposures, organisations can successfully position themselves to take advantage of powerful new technologies made available using IoT devices. This can be achieved by proactively identifying the potential risks exposures of using these machines, and implementing robust security policies, procedures and a strong cyber risk culture to counter the potential cyber risks they carry.
8. Social Engineering Fraud
This type of fraud doesn’t require sophisticated software or a high level of technical knowledge. It only takes a basic understanding of a company’s organisational structure and key employees, which can be found through a quick internet search. Given the relative ease of conducting social engineering fraud when compared to carrying out a sophisticated hack or targeted ransomware attack, it should come as no surprise that this form of cybercrime is expected to continue, and even escalate, this year.
9. Government incentives – grants for micro/small business to conduct a health check
In late 2018 the Australian Government announced that applications were open for its Cyber Security Small Business Program. This initiative underscores growing recognition from Government bodies that clearer and more stringent privacy protection legislation can only be effective if companies take an active role in the management of their cyber risk. It is not only larger companies that can be significantly impaired by a cyber event; organisations of any size are at risk.
10. Less about security, more about resiliency
While money can be invested in preventing cyber events from occurring, the nature of operating a company in today's highly technological and connected world means that cyber risks will always be there. The cyber security conversation should therefore also include a focus on resiliency and a holistic approach to protecting your company, considering factors both to prevent an attack and to ensure that the organisation can respond to and recover from one.
Source: https://www.marsh.com/au/insights/research/cyber-trends-considerations-2019.html
Every year someone dies on a bicycle. Cars hit bicyclists every year, and every year there are people in our community, friends, family, co-workers, that suffer injuries and worse due to being struck by vehicles. There are many instances of DUI involvement and many more of clean and sober moving violations on either party's side. The part that pisses me off the most is the surge of bike hate that causes malicious acts against cyclists for the simple reason of their being too slow, taking up space or being presumed to have some superiority. There are bloggers that write about being so angry with the slow-moving cyclists downtown that they want to encourage people to spray them in the face with super soakers filled with hot sauce concoctions. Garbage and slushies are thrown at cyclists in and out of bike lanes. Burning cigarettes too. Any cyclist who has read a post about a bicycle accident knows the standard responses include "Were they wearing a helmet?" "Were they in their lane?" "Why do they have to be in the road when there is a sidewalk?" and other such responses working to assign blame to the cyclist.
I am not saying every cyclist is perfect and without fault. Just last night I was walking my toddler and a cyclist was riding down one of the busiest streets in town in the wrong direction. He was staying as close to the side as he could get. There was no bike lane and no shoulder and he was blatantly a danger. I get this. I understand how the driver of a vehicle could be upset by this but what it comes down to for me is the fact that this man is unprotected. He is a life. I am tired of everyone being in such a hurry that the value of a life is reduced.
Safely moving around a cyclist takes less than a minute most of the time. What if it takes 5 or 10 or even 15 minutes? What is the value of that life versus getting to work on time, or reaching whatever destination your vehicle is taking you to? Is putting that person in danger worth those minutes?
To all the drivers out there I'd like to say stay off the hooch, put down the phone, slow down and start seeing the life over the bike and the destination.
To all the cyclists out there I'd like to say stay safe, keep riding, play defense and enjoy your ride.
9.0 on the Mohs scale.
Myanmar (Burma), Thailand, India, Sri Lanka (Ceylon), Tanzania, Afghanistan, Australia, Brazil, Cambodia, Malagasy Republic, Malawi, Pakistan, Zimbabwe (Rhodesia), United States (Montana, North Carolina).
Slightly more brownish and purple than transparent ruby. These gemstones are nevertheless very attractive.
Star Ruby is not known to be enhanced.
More information on gemstone enhancements.
Indian Star Ruby is a member of the Ruby gemstone family.
Source: http://gemhut.com/staruby.htm
# Geometrical-optical illusions
Geometrical-optical illusions are visual illusions, also optical illusions, in which the geometrical properties of what is seen differ from those of the corresponding objects in the visual field.
## Geometrical properties
In studying geometry one concentrates on the position of points and on the length, orientation and curvature of lines. Geometrical-optical illusions then relate in the first instance to object characteristics as defined by geometry. Though vision is three-dimensional, in many situations depth can be factored out and attention concentrated on a simple view of a two-dimensional tablet with its x and y co-ordinates.
## Illusions are in visual space
Whereas their counterparts in the observer's object space are public and have measurable properties, the illusions themselves are private to the observer's (human or animal) experience. Nevertheless, they are accessible to portrayal by verbal and other communication and even to measurement by psychophysics. A nulling technique is particularly useful in which a target is deliberately given an opposing deformation in an effort to cancel the illusion.
## Categories of visual illusions
Visual or optical illusions can be categorized according to the nature of the difference between objects and percepts. For example, the differences can be in brightness or color, called intensive properties of targets, e.g. Mach bands; or they can be in location, size, orientation or depth, called extensive. When an illusion involves properties that fall within the purview of geometry it is geometrical-optical, a term coined in the first scientific paper devoted to the topic, published in 1854 by J.J. Oppel, a German high-school teacher. It was taken up by Wilhelm Wundt, widely regarded as the founder of experimental psychology, and is now universally used. That the 1972 first edition of Robinson's book devotes 100 closely printed pages and over 180 figures to these illusions attests to their popularity.
## Examples of geometrical-optical Illusions
The easiest to explore are the geometrical-optical illusions that show up in ordinary black and white line drawings. A few examples are drawn from the list of optical illusions. They illustrate illusions of position (Poggendorff illusion), of length (Müller-Lyer illusion), of orientation (Zöllner illusion, Münsterberg illusion or shifted-chess-board illusion and its café wall illusion variant), of rectilinearity or straightness of lines (Hering illusion), of size (Delboeuf illusion) and of vertical/horizontal anisotropy (Vertical-horizontal illusion), in which the vertical extension appears exaggerated.
## Related phenomena
Visual illusions proper should be distinguished from some related phenomena. Some simple targets, such as the Necker cube, are capable of more than one interpretation, which are usually seen in alternation, one at a time. They may be called ambiguous configurations rather than illusions, because what is seen at any time is not actually illusory. Configurations of the Penrose or Escher type are illusory in the sense that only on detailed logical analysis does it become apparent that they are not physically realizable. If one thinks of an illusion as something out there that is misinterpreted, and of a delusion as a case where a demonstrable substrate is lacking, the distinction breaks down for such effects as the Kanizsa triangle and illusory contours.
## Explanations
Explanations of geometrical-optical illusions are based on one of two modes of attack:

- the physiological or bottom-up, seeking the cause of the deformation in the eye's optical imaging or in signal misrouting during neural processing in the retina or the first stages of the brain, the primary visual cortex; or
- the cognitive or perceptual, which regards the deviation from true size, shape or position as caused by the assignment of a percept to a meaningful but false or inappropriate object class.
The first stage in the operations that transfer information from a visual target in front of an observer into its neural representation in the brain and then allow a percept to emerge, is the imaging by the eye and the processing by the neural circuits in the retina. Some components of geometrical-optical illusions can be ascribed to aberrations at that level. Even if this does not fully account for an illusion, the step is helpful because it puts elaborate mental theories in a more secure place. The moon illusion is a good example. Before invoking concepts of apparent distance and size constancy, it helps to be sure that the retinal image hasn't changed much when the moon looks larger as it descends to the horizon.
Once the signals from the retina enter the visual cortex, a host of local interactions are known to take place. In particular, neurons are tuned to target orientation and their responses are known to depend on context. The widely accepted interpretation of, e.g., the Poggendorff and Hering illusions as manifestations of the expansion of acute angles at line intersections is an example of a successful implementation of a "bottom-up," physiological explanation of a geometrical-optical illusion.
However, almost all geometrical-optical illusions have components that are at present not amenable to physiological explanations. The subject, therefore, is a fertile field for propositions based in the disciplines of perception and cognition. To illustrate: instead of interpreting the sloping lines as just a pair of converging lines within which one feature is seen as smaller than an identical one nearer the point of convergence, the Ponzo pattern may be taken for a railroad track rendered as a perspective drawing. A barrel lying within the rails would have to be physically wider to cover the increased portion of the width of the track if it were farther away. The consequence is the judgment that the barrels differ in diameter, whereas their physical size in the drawing is equal.
A scientific study will include the recognition that a representation of the visual world is embodied in the state of the organism's nervous system at the time the illusion is experienced. In the discipline of experimental neuroscience, a top-down influence has the meaning that signals originating in higher neural centers, the repository of memory traces, innate patterns and decision operations, travel down to lower neuronal circuits where they cause a shift of the excitation balance in the deviated direction. Such a concept is to be distinguished from the bottom-up approach which would look for aberrations that are imposed on the input in its path through the sensory apparatus. Top-down neural signaling would be a fitting implementation of the gestalt concept enunciated by Max Wertheimer that the "properties of any of the parts are determined by the intrinsic structural laws of the whole."
## Mathematical transformation
When objects and associated percepts, in their respective spaces, correspond to each other albeit with deformations describable in terms of geometry, the mathematically inclined are tempted to search for transformations, perhaps non-Euclidean, that map them on each other. Application of differential geometry has so far not been notably successful; the variety and complexity of the phenomena, significant differences between individuals and dependence on context, previous experience and instruction set a high bar for satisfying formulations. | https://en.wikipedia.org/wiki/Geometrical-optical_illusion |
If everyone who recently migrated to cities in the region were a country unto themselves, it would be the world's sixth largest.
Almost 200 million people in East Asia migrated from rural areas to cities between 2001 and 2010. If all these people were to band together and declare themselves a country, it would be the world's sixth largest. That's a lot of people.
A new World Bank report uses satellite imagery and geospatial mapping to compare this influx of people with the expansion of urban areas from Mongolia and Myanmar on up to Japan. It finds that although cities in the region expanded at breakneck speeds during this period, they couldn't keep up with ballooning urban populations. New residents poured into cities at an average of 3 percent per year, while land expansion happened at a 2.4 percent annual rate, according to the report.
China rules the roost, accounting for 80 percent of the region's urban land growth. But while China is dominating in absolute numbers, it's smaller countries like Laos and Cambodia that are transforming at the fastest rates: 7.3 and 4.3 percent urban expansion per year, respectively. Because these countries have historically been rural, they literally have more ground to cover, explains Judy Baker, author of the report.
The report also examines the link between urbanization and a country's GDP. High-income countries, like Japan and Korea, had the lowest growth rates because they're already quite urbanized. Cities in middle-income countries, like Indonesia and China, spread out more quickly. Lower-income countries, like Cambodia, showed the fastest population growth. Whatever the current GDP of the country, more urbanization means more economic activity, which leads to a higher national income, says Baker. The chart below shows that all countries experienced a jump in GDP with urbanization.
The other thing the report backs up is that, on the whole, urban density in the entire region is really high. In fact, the density of the region's urban areas is 1.5 times the average density of cities around the world. This is a good thing, says Baker, as compact urban development, done right, can lead to walkable, livable cities with affordable transport.
Of course, the extent of urban density varies by country. Indonesia, for example, has seen the most growth in density because its growth in population surpasses its growth in urban space. On the other hand, China's urban density remained stable because its spatial expansion rate is keeping up with the rate of urban migration.
The point of the whole study is to give planners a lay of the urban land across East Asia so that they can tackle problems like urban fragmentation in the region. Baker also hopes that people combine the released dataset with others to reveal new insights that help plan for the future. At the moment, people who live in cities only make up 36 percent of the region's population, and only 1 percent of the total area is urbanized. This is just the beginning.
Correction: megacities are home to 10 million residents (not 1 million). | https://www.citylab.com/equity/2015/01/east-asias-massive-urban-growth-in-5-infographics/384960/ |
Annual interest is the rate of interest paid by an investor or lender over a twelve-month period. It is usually expressed as a percentage of the principal and is typically compounded, meaning that it accumulates over time. Annual interest is earned on money in savings accounts, investments, or credit cards that are charged as debt for borrowing money. Compounding, repayment terms, and the amortization plan are the primary characteristics of annual interest.
Compounding occurs when accrued interest is added to the loan’s principal, along with any outstanding sums due for each period. The repayment terms are normally specified by the loan agreement and state how frequently payments must be made, and when payments must be received to avoid additional interest accruing. An amortization schedule is a chart that shows how much of each payment goes toward repaying the loan’s principal and how much goes toward any accumulated interest.
Annual interest is important in a variety of ways. First, it is used to calculate the cost and benefit of taking out a loan, as borrowers need to pay back the loan amount plus any accrued interest. In addition, annual interest is one of the key elements used in calculating total returns on investments: if a company or individual earned 1% of its assets per year, that 1% is its total annual return (after taxes) on assets. Finally, annual interest rates serve as an indicator of economic growth; if rates are low, money is not being borrowed as much, and fewer people are making investments in the economy at large.
The formula is A = P × (1 + r/n)^(nt), where A is the total amount due at the end of one year, P is the original principal (the initial sum borrowed or invested), r is the annual rate of interest, n is the number of times per year that interest compounds, and t is the time in years. The types of loans that use annual interest rates are personal loans, home equity loans, car loans or auto loans, small business loans, student loans, and payday loans.
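As a concrete illustration, the compound interest formula can be evaluated directly in a few lines. The figures below ($10,000 at 5%, compounded monthly for three years) are hypothetical, chosen only to show the mechanics:

```python
def compound_amount(principal, annual_rate, periods_per_year, years):
    """Total amount due, using A = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# Hypothetical example: $10,000 at 5% annual interest, compounded monthly for 3 years
total = compound_amount(10_000, 0.05, 12, 3)   # ≈ $11,614.72
interest_earned = total - 10_000               # ≈ $1,614.72 of interest
```

Note that monthly compounding yields slightly more than the $1,500 that three years of simple 5% interest would produce, because each month's interest itself earns interest.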
Contents
- 1 What is Annual Interest Rate?
- 2 What is an Example of Annual Interest Rate?
- 3 What Types of Loans Use Annual Interest Rate?
- 4 What is the Formula for Annual Interest Rate?
- 4.1 How to Calculate Annual Interest?
- 4.2 Who benefits from an Annual Interest Loan?
- 4.3 What are the Limitations of Annual Interest?
- 4.4 Is Annual interest better for investing?
What is Annual Interest Rate?
An annual interest rate is a form of interest rate determined by dividing the amount of interest payable on an investment by the principal or balance on the loan over the course of a year. It is usually used to refer to loans and bank accounts, but it applies to investments such as certificates of deposit (CDs). The APR allows consumers to evaluate different types of loans and credit cards because it includes all costs associated with borrowing money. The annual interest rate is the return on an investment or loan that an individual earns in a year. It is computed as a percentage by dividing the interest paid over a year by the total principal debt. Interest rates substantially impact capital investment and borrowing decisions because they determine how much money is made or spent over time.
How Annual Interest Rate Work?
The Annual Interest Rate (AIR) is the rate set by a lender that determines how much interest borrowers pay over one year. The AIR includes both the principal and interest payments and is expressed as a percentage of the amount borrowed. The Annual Interest Rate helps borrowers understand how much is owed annually on top of the principal loan payment. It helps people compare different types of loans, so borrowers decide which is the most cost-effective.
What are the other terms for Annual Interest?
Other terms for Annual Interest include Annual Percentage Rate (APR), Nominal Interest Rate, and Effective Annual Rate (EAR). The APR is the total percent of interest a borrower pays on a loan, including fees and additional costs. The nominal interest rate is the annual interest rate that does not take into account compounding or other fees. The EAR, sometimes called the true cost of borrowing, reflects how much a loan or investment actually costs per year when compounded.
What is an Example of Annual Interest Rate?
An example of an annual interest rate is the interest rate on a loan a borrower takes out, such as a credit card or auto loan. The rate is typically expressed as an annual percentage and ranges from 0% to upwards of 20%. The higher the rate, the more the borrower pays in total interest throughout the loan. The main difference between credit cards and auto loans is the payment type. A borrower typically makes monthly payments with a credit card based on the amount purchased that month. However, with an auto loan, borrowers are expected to make fixed monthly payments for a predetermined period until the loan is paid in full. Credit cards tend to have higher interest rates than auto loans, so careful management of both is essential to ensure solid financial health.
What Types of Loans Use Annual Interest Rate?
Listed below are the types of loans that use annual interest rates.
- Personal Loans. Personal loans are unsecured loans given to individuals, often for personal expenses such as home improvements, debt consolidation, and car purchases. Personal loans come with a fixed annual interest rate that must be paid back over the loan term.
- Home Equity Loans. Home equity loans use the equity of the borrower’s home as security for the loan. Borrowers access up to 80% of the home’s value at a fixed annual interest rate. The interest rate is usually much lower than other personal or business loans due to the loan being secured against the borrower’s property.
- Car Loan or Auto Loan. An auto loan is given by a lender to purchase a vehicle and pays off in installments with an annual interest rate set for the entire duration of the loan term. Auto loan interest rates are slightly higher than mortgages or other secured debt because no collateral is used for repayment assurance except for the car itself.
- Small Business Loan. Small business owners access capital with either long-term or short-term loans from banks and other financial institutions at an annual interest rate determined by factors including creditworthiness, collateral offered by borrowers, and assets available to be pledged against loan repayment in case of defaulting on payments.
- Student Loan. Student loans have become one of the most common forms of education financing today. Students borrow money from both private lenders and government sources at annual interest rates that fall within parameters set by individual programs and by federal regulation, which aims to reduce the student debt burden while protecting borrowers’ rights.
- Payday Loan. A payday loan is a type of short-term loan that grants fast access to cash but carries very high annual interest rates as part of its terms and conditions. The speed makes it attractive for emergency funds, yet the extremely costly periodic repayments make it heavy on the pocket, so caution is highly advised before using a payday loan as a debt-relief option when alternatives have not produced the desired results.
1. Consumer Loans
Consumer loans are personal loans that are taken out by individuals and used to fund items such as automobiles, vacations, home upgrades, or debt consolidation. The conditions of consumer loan contracts often include the date by which, or the amount in which, repayment is expected and the interest rate charged; various lenders have differing requirements that must be met to get a loan. Some consumer loans require collateral to protect the lender against default. Annual interest on consumer loans is typically calculated as the interest due over one year, expressed as a percentage. It is usually stated in terms of an annual percentage rate (APR), which accounts for additional fees charged with loan origination or servicing. The APR often includes processing fees and other costs associated with acquiring the loan, giving prospective borrowers a more accurate picture of the total borrowing cost.
2. Credit Card Loans
Credit card loans are short-term loans that customers take out against a credit card. Customers borrow up to the amount of the available credit limit in exchange for a fixed interest rate. Credit card loans are often used as a way to finance personal purchases such as vacations or home improvements. Annual interest is used in credit card loans to calculate the yearly cost of borrowing money. The rate is typically expressed as a percentage of the total amount borrowed and is used to estimate how much borrowers pay each year for the loan. The amount of interest owed is usually calculated monthly and added to the balance due on the credit card statement each month.
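As a simplified sketch of the monthly calculation described above — real card issuers often use a daily periodic rate instead, and the balance and APR here are made up for illustration:

```python
def monthly_interest_charge(balance, apr):
    """One month's interest under a simple monthly periodic rate (APR / 12)."""
    return balance * (apr / 12)

# Hypothetical: a $2,000 balance carried at 18% APR
charge = monthly_interest_charge(2_000, 0.18)  # $30 added to next month's balance
```

Carrying the balance forward and repeating this calculation each month is what compounds a card's cost over a year.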
What is the Formula for Annual Interest Rate?
The annual interest formula is used to calculate the interest accrued on a loan or other type of debt over a period of one year, as well as to determine the total amount of payment due at the end of the year. The formula is A = P × (1 + r/n)^(nt), where A is the total amount due at the end of one year, P is the original principal (the initial sum borrowed or invested), r is the annual rate of interest, n is the number of times per year that interest compounds, and t is the time in years.
How to Calculate Annual Interest?
To calculate annual interest, first, determine the interest rate. The first step in calculating annual interest is to determine the interest rate for the loan or investment. The APR determines how much a borrower pays in interest each year, and it varies greatly depending on factors like credit score and loan terms. Second, compute the interest payment. Once the applicable interest rate is determined, calculate the total interest chargeable by multiplying it with the principal balance payable annually. For example, if a borrower borrows $10,000 at 3% annual interest, then the interest chargeable for that particular year is 0.03 × 10,000 = $300 in total. Third, calculate the annual rate of return.
To calculate the annual rate of return (ARR) from an investment such as stocks, bonds, or mutual funds, start by taking the total dividends earned within that time frame and divide by how much money was initially invested in the fund: Dividends / Initial Investment = Annual Rate of Return. Fourth, convert compound interest to simple interest. If an APR is listed as compound, it must be converted to simple interest before annual interest charges can be compared accurately with other loans.
Simply take the compound APR and divide it by 12 months to see each month’s exact cost, then multiply that number by 12 months to find the actual cost of borrowing over a year. Fifth, calculate the total cost over time. To find the actual financial impact of any loan over time, including inflation adjustments, use future value calculators, which give a detailed calculation based on different payment schedules at certain APRs over a year’s worth of data points.
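The first few steps above can be sketched in a few lines of code. The loan and investment figures are hypothetical, matching the $10,000-at-3% example:

```python
def interest_payment(principal, annual_rate):
    """Step 2: simple interest due over one year (principal x rate)."""
    return principal * annual_rate

def annual_rate_of_return(dividends, initial_investment):
    """Step 3: ARR = dividends / initial investment."""
    return dividends / initial_investment

payment = interest_payment(10_000, 0.03)    # $300, matching the example above
arr = annual_rate_of_return(500, 10_000)    # 0.05, i.e. a 5% annual return
```

For real monetary calculations, a decimal type is usually preferable to binary floating point to avoid rounding surprises.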
How much can an Annual Interest Rate vary?
Annual interest rates vary from zero percent to beyond thirty percent, depending on the type of product or service and the lender. Different lenders offer different interest rates, so shopping around for the best deal is essential. For instance, a mortgage or auto loan’s APR is significantly lower than a credit card’s. The typical range for interest rates is between 1% and 30%.
How often do Annual Interest Rates change?
Annual interest rates change depending on various factors, such as the state of the economy, the level of inflation, and the central bank’s policies. In general, interest rates tend to rise when the economy is doing well and there is high demand for borrowing, and tend to fall when the economy is weak and there is less demand for borrowing. The frequency at which annual interest rates change varies. In some cases, interest rates change daily or even more frequently. For example, the interest rate on a credit card or a short-term loan changes every day, depending on the lender’s policies and market conditions. In other cases, interest rates change less frequently, such as once a month or once a year. For example, the interest rate on a mortgage or a long-term loan changes once a year, depending on the loan agreement terms and market conditions.
Who benefits from an Annual Interest Loan?
Both the borrower and the lender benefit from an annual interest loan. For the borrower, an annual interest loan provides access to the funds that borrowers need to make a purchase or invest in a project. The borrower uses the loan to finance a large purchase, such as a home or a car, or to fund a business venture. In return, the borrower agrees to pay back the loan principal plus the interest over a certain period of time, as agreed upon in the loan contract. For the lender, an annual interest loan provides a source of income. The lender charges the borrower interest for the use of the funds, which is a significant source of revenue for the lender. The lender takes on some risk by lending the money, so the interest rate on the loan reflects the level of risk involved. Overall, an annual interest loan is a useful financial tool for both borrowers and lenders, as it allows both parties to access the funds needed and earn a return on the investment, respectively. However, it is important for borrowers to carefully consider the terms of the loan and the potential consequences of not being able to make timely payments.
What are the Limitations of Annual Interest?
Listed below are the limitations of annual interest.
- Lack of Investment Diversity. When using annual interest, the investor is limited to earning a fixed interest rate over the course of one year and is not able to take advantage of any other investment opportunities such as stocks, bonds, real estate, or other investments that offer more return than a traditional savings account.
- Inflation Risk. The yearly returns from an annual interest savings account do not usually keep up with inflation rates. Therefore the real value of money decreases rather than increases over time even though a nominal amount is credited to the account.
- Low Returns. Annual interest rate returns typically remain fairly low due to the association with less risky investments such as CDs and money market accounts; it does not allow for significantly large rewards for investors looking for a higher return on the investment capital.
- Limited Time Frame. There is only a one-year period in which investors receive the returns, making it hard to grow wealth slowly and steadily through long-term investments that generate higher returns over multiple years when investing in an annual interest savings account.
- Liquidity Issues. Savings accounts cannot be sold or exchanged like stocks or bonds, so once funds have been locked away for an entire year, investors have no way of generating additional income without penalties or fees, making it hard to convert assets into cash in an emergency where quick access to funds is needed.
Is Annual interest better for investing?
Yes, annual interest is good for investing. It is attractive to investors looking for a steady stream of income from investments. Annual interest often offers a higher interest rate compared to other types of investments, such as savings accounts or money market funds, which makes it more attractive to investors seeking a higher return on investment. However, annual interest loans carry some risks. The borrower may default on the loan, which can result in the investor losing some or all of the investment. Additionally, the value of the loan fluctuates over time, depending on various factors, such as changes in market conditions or the borrower’s creditworthiness, making it more difficult for investors to accurately predict the potential returns on the investment.
What types of investments make use of Annual Interest?
Listed below are the types of investments that make use of annual interest.
- Savings Accounts. Most savings accounts offer interest earned yearly and deposited or withdrawn at the owner’s discretion, done through a bank, credit union, or an online platform such as an ETF or CD. The return on savings account type of investment is modest but is often seen as a safe strategy due to the stability of the funds involved.
- Certificates of Deposit (CD). A CD is a fixed-term account that pays an annual rate of interest that is set at the time of purchase and remains consistent for the duration of the term. CD investment has limited liquidity since it cannot be accessed until maturity. However, there are usually options for penalty-free early withdrawal before its end date.
- Bonds. When investing in bonds, investors lend money to government entities and corporations in exchange for an annual rate of interest plus repayment of principal at maturity, which ranges from one month up to thirty years, depending on the bond being purchased.
- Annuities. Annuities are investments that provide financial protection against life events such as retirement or disability and guarantee an annual interest rate throughout their term. When bought by a consumer, annuities often provide tax-deferred growth so long as they are held until the annuitant’s death or surrendered after five years have passed, whichever comes first.
Savings accounts, certificates of deposit, bonds, annuities, and investment funds use annual interest to pay out a percentage of the total amount invested over time. Annual interest helps an investor keep the money growing each year, allowing it to compound over time and increase in value. With annual interest payments from these investments, investors make gains on the total principal without having to reinvest actively.
What is the difference between Annual Rate and Interest Rate?
The cost of borrowing money over the course of a year is expressed in terms of a percentage that is referred to as the annual rate (or APR for short). An interest rate, on the other hand, is a percentage charged for borrowing money that does not take into account any additional fees or charges levied by the lender.
Kathy Jane Buchanan has more than 10 years of experience as an editor and writer. She currently works as a full-time personal finance writer for PaydayChampion and has contributed work to a range of publications as an expert on loans. Kathy graduated in 2000 from Iowa State University with a BSc in Finance. | https://www.paydaychampion.com/what-is-annual-interest-rate/ |
Q:
TSP Brute Force Optimization in Python
I am currently working on a Python 3.4.3 project which includes solving the Traveling Salesman Problem using different algorithms. I have just written a brute force algorithm and I would love some feedback. Now I understand that the number of operations increases by the factorial of the route length so brute forcing is not considered feasible, but for my project I need a standard brute force.
My overarching question is: How can I make this as fast and efficient as possible?
import itertools
def distance(p1, p2):
#calculates distance from two points
d = (((p2[0] - p1[0])**2) + ((p2[1] - p1[1])**2))**.5
return int(d)
def calCosts(routes, nodes):
travelCosts = []
for route in routes:
travelCost = 0
#Sums up the travel cost
for i in range(1,len(route)):
#takes an element of route, uses it to find the corresponding coords and calculates the distance
travelCost += distance(nodes[str(route[i-1])], nodes[str(route[i])])
travelCosts.append(travelCost)
#pulls out the smallest travel cost
smallestCost = min(travelCosts)
shortest = (routes[travelCosts.index(smallestCost)], smallestCost)
#returns tuple of the route and its cost
return shortest
def genRoutes(routeLength):
#lang hold all the 'alphabet' of nodes
lang = [ x for x in range(2,routeLength+1) ]
#uses built-in itertools to generate permutations
routes = list(map(list, itertools.permutations(lang)))
#inserts the home city, must be the first city in every route
for x in routes:
x.insert(0,1)
return routes
def main(nodes=None, instanceSize=5):
#nodes and instanceSize are passed into main() using another program
#I just gave them default values for this example
#The Node lookup table.
Nodes = {
'1': (565.0, 575.0),
'2': (25.0, 185.0),
'3': (345.0, 750.0),
'4': (945.0, 685.0),
'5': (845.0, 655.0),
'6': (880.0, 660.0),
'7': (25.0, 230.0),
'8': (525.0, 1000.0),
'9': (580.0, 1175.0),
'10': (650.0, 1130.0),
'11': (1605.0, 620.0),
'12': (1220.0, 580.0),
'13': (1465.0, 200.0),
'14': (1530.0, 5.0),
'15': (845.0, 680.0)
}
routes = genRoutes(instanceSize)
shortest = calCosts(routes, Nodes)
print("Shortest Route: ", shortest[0])
print("Travel Cost: ", shortest[1])
if __name__ == '__main__':
main()
A:
Let's have a little style review before some code refactoring, and finish off with some performance comparison.
Style and code review
I suggest you read up on PEP8, which is the official style guide for Python.
Not using snake_case for variable and function names - You are using mostly camelCase, and in some cases one letter variables
Good use of singular vs plural – I like combinations like for route in routes. They make sense
Good use of if __name__ ... - This is good construct
Many temporary variables – You have a little too many temporary variables, which are only used once. These can often be skipped
Avoid using indexes in loops – In general, it is better to loop over the elements themselves rather than using indexes. Especially bad is using str(i) to get the key, when the Nodes dictionary should have used integers directly as keys.
Avoid building large lists in memory – Your code builds the routes list in memory. When calling this for a route length of 9, your original code uses approx 6.7MB to store the routes. My optimised routine below, uses 0.02MB because it uses generators instead...
Avoid recalculating distances – Python can memoize the output from a function for each given input by using decorators. This means that you can calculate the distance between two points once, and the next time you need it the memoize function picks your previous calculation of the result.
Use docstrings instead of ordinary comment to document functions – If you use docstrings, some of the editors will provide interactive help when you use your functions, and it is according the style guide.
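As a side note (my addition, assuming Python 3): the standard library's functools.lru_cache gives this memoisation for free, without a hand-rolled decorator. A minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def distance(p1, p2):
    """Euclidean distance between two points, cached per argument pair."""
    return int(((p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2) ** 0.5)

# tuples are hashable, so point pairs work directly as cache keys
print(distance((0, 0), (3, 4)))          # 5
print(distance.cache_info().currsize)    # 1
```

Repeated calls with the same pair of points are then served from the cache instead of recomputing the square root.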
Performance comparison
I made some variants of the code: your original, the one by SuperBiasedMan (with a correction, as his code currently has a bug), some versions using only generators, itertools, sum and min, and finally a version using for loops instead of list comprehensions and a slightly different optimisation to find the minimum.
I used the following version of memoize, which most likely can be further optimised:
def memoize(f):
    """ Memoization decorator for functions taking one or more arguments. """
    class memodict(dict):
        def __init__(self, f):
            self.f = f

        def __call__(self, *args):
            return self[args]

        def __missing__(self, key):
            ret = self[key] = self.f(*key)
            return ret

    return memodict(f)

@memoize
def distance(p1, p2):
    """Calculates distance between two points, memoizes result"""
    d = (((p2[0] - p1[0])**2) + ((p2[1] - p1[1])**2)) ** .5
    return int(d)
One version taking list comprehensions and generators to the extreme is this one:
def route_generator(route_length):
    """Generate all possible routes of length route_length, starting at 1."""
    for route in permutations(xrange(2, route_length + 1)):
        yield (1, ) + route

def main_holroy_v2(instance_size=INSTANCE_SIZE):
    print min((sum(distance(NODES[start], NODES[stop])
                   for start, stop in pairwise(route)), route)
              for route in route_generator(instance_size))
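Note that main_holroy_v2 depends on a pairwise helper that is never shown in the answer; presumably it is the standard itertools recipe, which in Python 3 form looks roughly like this:

```python
from itertools import tee

def pairwise(iterable):
    """s -> (s0, s1), (s1, s2), (s2, s3), ..."""
    a, b = tee(iterable)
    next(b, None)   # advance the second iterator by one step
    return zip(a, b)

print(list(pairwise((1, 2, 3, 4))))   # [(1, 2), (2, 3), (3, 4)]
```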
I made the Nodes into a global dict with integers as keys, named NODES. After some testing I found that the extreme variant was slower than expected, so I fumbled around and found this to be the fastest in my environment:
def find_shortest_route3(nodes, route_length):
    """Find shortest route of length route_length from nodes."""
    minimum_distance = None
    for route in permutations(xrange(2, route_length + 1)):
        current_distance = 0
        prev = nodes[1]
        for next in route:
            current_distance += distance(prev, nodes[next])
            prev = nodes[next]
            if minimum_distance and current_distance > minimum_distance:
                break
        else:
            if not minimum_distance or current_distance < minimum_distance:
                minimum_distance = current_distance
                minimum_route = route
    return minimum_distance, minimum_route

def main_minimum3(instance_size=INSTANCE_SIZE):
    distance.clear()
    cost, route = find_shortest_route3(NODES, instance_size)
    print('Shortest route: {}'.format((1, ) + route))
    print('Travel cost : {}'.format(cost))
Some comments on this code:
It turns out that list comprehensions are slightly slower than plain for loops in this case.
Instead of completing the sum for a route, I break out of the inner loop if the sum has crossed an earlier minimum. The current route is then no longer interesting.
The actual route returned does not include the starting point. It can easily be added back, as done in the print statement. This eliminates the need either to add the starting point to all routes, or to remove all routes not starting at the correct point. In short, it is more efficient.
If the inner for loop completes ordinarily (no breaks), it goes into the else block of the for loop. A slightly strange construct, but ideal for this purpose, as we can then compare the minimum distance against the current route distance.
The distance.clear() is used to reset the memoisation between test runs done using timeit in IPython. Memory profiling was done using a module named memory_profiler.
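The for…else behaviour mentioned above is general Python, not specific to this code; a standalone illustration:

```python
def first_factor(n):
    """Return the smallest factor of n, or None when n is prime."""
    for candidate in range(2, n):
        if n % candidate == 0:
            break              # leaving the loop early skips the else block
    else:
        return None            # the loop ran to completion: no factor found
    return candidate

print(first_factor(9))   # 3
print(first_factor(7))   # None
```

The else block runs only when the loop finishes without hitting a break, exactly the property find_shortest_route3 exploits.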
Here are some of the timing results I got:
In [325]: %timeit main_org_str(9) # Your original code
1 loops, best of 3: 529 ms per loop
In [326]: %timeit main_org(9) # Using int as keys
1 loops, best of 3: 304 ms per loop
In [327]: %timeit main_holroy_v2(9)
1 loops, best of 3: 420 ms per loop
In [328]: %timeit main_superbiasedman(9)
1 loops, best of 3: 362 ms per loop
In [330]: %timeit main_minimum3(9)
10 loops, best of 3: 146 ms per loop
In [330]: %timeit main_minimum3(9) # After removing `**0.5`
10 loops, best of 3: 116 ms per loop
As can be seen here, your original version took around 0.5 seconds, but just changing to ints as keys and adding memoisation shaved off 0.2 seconds. My extreme version and the version by SuperBiasedMan actually take longer than your version when optimised. The fastest version is still the minimum3 version, clocking in at 146 milliseconds. (The last number, 116 ms, is what you get if you remove the **0.5 in distance(); taking the square root is somewhat expensive.)
So there you have it: your code is not too bad time-wise, but it used way too much memory. When coding in the most Pythonic way, using generators, list comprehensions, sum and min, you lower the memory consumption and change the readability, but don't always gain that much in reduced time.
Of the alternatives I tested, the optimal combination for a brute-force approach runs in approximately 30% of the time, and uses almost no memory. Some other timings for my optimal method: (10: 1.26 seconds; 11: 11.7 seconds; 12: 1 min 54 seconds). So better algorithms should be applied if you want to calculate the shortest route for more than 11 or 12 nodes.
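One classic example of such a better algorithm is the Held–Karp dynamic programme, which replaces the O(n!) brute force with O(n² · 2ⁿ) work. This open-path sketch (my addition, not part of the original answer) matches the fixed-start-city variant solved above:

```python
from itertools import combinations

def held_karp_path(dist):
    """Cheapest open path that starts at city 0 and visits every city
    exactly once; dist is an n x n distance matrix."""
    n = len(dist)
    # C[(S, j)]: cost of the best path that starts at 0, visits exactly
    # the cities in frozenset S, and currently ends at city j.
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                C[(fs, j)] = min(C[(fs - {j}, k)] + dist[k][j]
                                 for k in S if k != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] for j in range(1, n))

print(held_karp_path([[0, 1, 9], [1, 0, 2], [9, 2, 0]]))   # 3 (path 0 -> 1 -> 2)
```

Exponential memory is the price, so this too runs out of steam eventually, but far later than enumerating every permutation.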
Dear colleagues:
This program was a top priority in our budget planning to enable much-deserved recognition for employees who received no increases this year due to the financial fallout of the COVID-19 pandemic.
In addition, our three universities have been asked to establish their own guidelines for separate programs to address compression, market, equity and retention issues. Commitments for existing contractual agreements, promotions and equity adjustments will also be honored as part of this program. I have asked chancellors to monitor carefully any individual increases amounting to more than 7 percent at their universities and I will do the same at the system level.
Pay adjustments for faculty and staff in bargaining units will be determined by negotiations with the bargaining representative and/or applicable contract settlements, and are not subject to unilateral adjustments unless authorized or permitted pursuant to the collective bargaining process.
I hope that the salary program, announced today, serves to illustrate just how much we value your hard work and sacrifice during the ongoing COVID crisis. Thanks to you, a year of historic challenges has also seen one of our greatest triumphs over adversity. You have made the U of I System a model for the nation, sustaining our world-class teaching and research while dealing with the many unique challenges that the pandemic brought to our institution and to your own lives and families.
You are the foundation of our excellence, and I am deeply grateful for your hard work, your flexibility and creativity, your dedication and your loyalty.
This mailing approved by:
Office of the President
sent to: | https://massmail.illinois.edu/massmail/55641457.html |
Not too long ago John Langford stopped by and gave a fascinating talk. There were a lot of take-aways from the talk but here's one that really got my noodle going: A lot of times we get really high-dimensional data (say text) and we want to use it to make some prediction. The high-dimensionality makes the problem intractably difficult, in part because of the curse of dimensionality, and in part because high-dimensional feature spaces lead to high-dimensional parameter spaces, which is computationally (and space) taxing.
So what do we do? Dimensionality reduction or feature selection. A lot of effort has gone into doing these well (some popular ones these days are LDA and L1 regularization). JL came up with another way of doing dimensionality reduction: he ignores hash conflicts. At first blush this seems silly: you’ll get a form of dimensionality reduction wherein different features will be additively combined in some arbitrary way. But his results show that this actually works quite well. I’m not entirely sure why, but that got me thinking: are there simple ways of reducing feature spaces that make for very good !/$ propositions? I decided to find out by running some experiments on a scale/domain I’m a little more familiar with.
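A toy version of that collision-ignoring idea (my own sketch, far simpler than anything JL's actual system does): hash every token into a fixed number of buckets and just let colliding features add together.

```python
from zlib import crc32

def hashed_features(tokens, n_buckets=16):
    """Map a bag of tokens onto a fixed-length count vector; tokens
    whose hashes collide are simply summed, conflicts ignored."""
    vec = [0] * n_buckets
    for tok in tokens:
        vec[crc32(tok.encode()) % n_buckets] += 1
    return vec

v = hashed_features(["the", "cat", "sat", "the"], n_buckets=8)
print(len(v), sum(v))   # 8 4
```

However large the vocabulary, the output dimensionality stays fixed at n_buckets.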
Here’s the experimental setup: I took the Cora data set (thanks to LINQS.UMD for making a munged version of this data available) which is a database of scientific papers each one with a subject categorization. In order to keep things simple, my goal will be to predict whether or not a paper is labeled as “Theory.” I’ll try to make these predictions using binary word features (i.e., the presence or absence of a word in a paper’s abstract). I’ll take these features and put them through a logistic regression model with a little bit of an L2 penalty for good measure (n.b. I also tried an SVM; the results are about the same but a little worse). My data’s been cleaned up but there are still around 1.5k features and only 2.7k data points. This is less than ideal because 1.) the dimensionality to data ratio is not too good, and 2.) storing that matrix is still pretty taxing for my poor laptop.
What I want to do is reduce the number of features to a fixed size (in the plots below, I do the experiments using ). I’m going to try five methods:
- random – Uniformly randomly select features from the feature space.
- corr – Select the features which have the highest individual correlation with the dependent variable.
- cream – Select the most frequently occurring features (the cream of the crop; they rise to the top).
- crust – Like "cream," except skip the first features, where is the dimensionality of the feature space. These are the "crust" features, i.e., those that are too frequent or not frequent enough to carry much information.
- merge – Randomly additively combine features until there are just features left.
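For concreteness, the corr selector can be sketched in a few lines of plain Python (my reconstruction of the idea, not the author's actual code):

```python
def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def corr_select(X, y, k):
    """Pick the k column indices of X most correlated (in absolute
    value) with the label vector y."""
    scores = [(abs(pearson([row[j] for row in X], y)), j)
              for j in range(len(X[0]))]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

X = [[1, 0, 1], [1, 1, 0], [0, 0, 1], [0, 1, 0]]
y = [1, 1, 0, 0]
print(corr_select(X, y, 1))   # [0]: column 0 tracks y exactly
```

It scores each feature independently, so it is cheap, but it cannot notice features that are only useful in combination.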
Here’s the pretty dotchart (the red line is how well you’d do using random guessing) of the 0-1 error estimated using 10 bootstrap replications:
Predictive performance using different feature selection mechanisms.
A few things that surprised me:
- You don’t really need that many features. Using 50 features doesn’t really do substantially worse than 200 if you use corr.
- Corr was just something I threw together on the fly; I really didn’t expect it to work so well and I haven’t actually used it in practice. I guess I should start.
- Cream does better than crust. Crust was motivated by the fact that when you put data into topic models like LDA, you get better looking topics when you remove overly-frequent words (since they tend to trickle to the top of every topic). I guess good looking topics don’t necessarily make for good predictions. Food for thought.
- Random does poorly as one might expect.
- Merge is just about as bad as random. I don't know how to reconcile this with what I said at the outset except to say that 1.) merge is not necessarily combining features the exact same way a hash would, 2.) different features, 3.) different number of features, 4.) different amounts of data, 5.) different data sets, 6.) a different classifier.
Ok, so the data didn’t show me what I was hoping and half expecting to see. Chalk it up to different strokes for different domains. But I did find something that, while obvious in retrospect, will probably be useful going forward. | https://pleasescoopme.com/2009/03/19/some-simple-attempts-at-feature-selection/ |
Today’s topic is about our philosophy on Language Arts, or English. It's being posted early because I get to go have fun at the US Science and Engineering X-STEM Symposium happening today!
In retrospect, my personal philosophy about how English (aka Language Arts) should be taught was defined in Middle School when I took GT Humanities (combined English/American History) from the one teacher that had the most influence in my educational success, then reinforced through my experiences in the workplace. In her class, English wasn’t learned in isolation. My reading level was stretched through the texts that we read, both fiction and nonfiction. My writing and analysis was developed through a series of assignments that included creating a newspaper, creating a diorama, drawing a political cartoon, as well as completing the traditional book report or writing assignment. Most importantly, she taught me how to read a newspaper and understand bias and she challenged me to research in order to support my opinions. Finally, she believed enough in me to take a chance on me, this kid who had just learned how to speak English five years before, and asked me to be the Editor-in-Chief of the school newspaper.
Goal
In short, we view the primary goal of Language Arts in primary and secondary education, as learning how to communicate effectively both orally and through writing. This deceptively simple statement is made up of a myriad of physical and mental skills and processes that need to come together, which I find easier to explain using a diagram.
This process is so complex and is individual to each child’s experiences, strengths, weakness, likes and dislikes. Additionally, when viewed from an academic standpoint, it is interrelated with other academic subjects, such as science, history, and art.
How
In line with our philosophy in other subjects, where the work is hands-on and has a purpose, we look at the study of Language Arts, like in Math and Engineering, from a “So what?” point of view. With that in mind, Language Arts is integrated into Humanities, Science and Engineering to give it purpose and context, but is also being done with the child’s unique strengths and weaknesses in mind. Let’s look at how that is done.
Humanities: Creative and Analytical Communications
The clearest and easiest point of integration is with History and Social Studies. When one studies American History or World History and Cultures, to truly understand what shapes that history, one has to understand the culture. In order to understand the culture, one needs to understand its people. And to understand its people, one needs to understand their literature, their music and their art. This can be done through analysis of primary source documents, reading of biographies and autobiographies, reading fiction or historical fiction, reading poetry, as well as news articles. Through continued exposure to challenging material, children’s vocabulary will continue to grow.
Output can come in many forms. It can be a written analysis, it can be a documentary, it can be a poem, it can be a Literature Circle, or it can be a political cartoon. All of these types of output require understanding the topic. By giving the teacher the opportunity to work in a small classroom with 8-10 kids, the teacher will have the opportunity to learn the strengths and weaknesses of each child, and to give kids choices in how to present output. We fully expect that kids will gravitate to what is easiest for them, and that’s ok. We’re first going to focus on building trust with the teacher and with each other. Then, when that trust has been built, children can then be challenged to expand on choosing output outside of their comfort zones. When the individual child is ready, then coaching in writing and grammar will begin for that student.
To close out how Language Arts is included in the Humanities curriculum, the only homework all students will have every day is to read from (or listen to) a challenging book of their choice for at least 30 minutes.
Science: Technical Communications
Another major integration point is between Language Arts and Science. While there’s a lot of experimentation that occurs with science, scientists need to know how to present their results. Scientists also need to know to justify their idea. And even more basic than that, scientists need to be able to research a topic in order to understand it.
The type of writing and communication that you do in a natural science classroom or laboratory is technical writing. Sentence structure, vocabulary, organization of ideas and message are all still important, but the audience is different and the tone is different. In science, students will experience a different, yet very important, type of Language Arts education.
Engineering: Persuasive and Technical Communications
Our Engineering classes present yet another opportunity to hone students' communication skills. From technical design specifications to a website that describes their product, students will get additional education in technical communications, as well as in persuasive language. Will 4th-grade students entering the school know how to do this? We don't think so. It's our job to expose them to the methods for communicating, memorializing, and selling their projects.
I could go on and on. In Art, the Graphic Novels unit integrates storytelling in a unique way and in Community Service, students will practice (or learn) about correspondence and current events.
In conclusion, the teaching opportunities for Language Arts are tremendous. It is our responsibility as a school to find those teaching opportunities and incorporate them in a way that is interesting, meaningful, and that makes sense to the kids.
Resources specific to Language Arts, that we like:
If you have others you like, please share them! I love learning about other learning options and creative teaching methods out there.
Author
Juliana Heitz is co-founder of Ideaventions Academy and is very excited to share the thinking behind the Academy. | https://www.ideaventionsacademy.org/blog/teaching-tuesdays-english-with-a-purpose |
Tim Train: Yes. Our "Clockwork Spider" unit was created to be in the "Da Vinci style." With all of our new units and buildings we go through a thorough concept sketch approval process. We begin with a variety of sketches showing different possibilities for the unit, and then we settle on one direction that we like the most. Sometimes it's the head from this sketch on the body from that sketch with the weapon from this other sketch. Then we proceed to a final sketch, which we approve as the basis for modeling the 3D unit or building.

GameSpy: How would you describe the game's art style? Did you draw inspiration for the design of the world from any place specific?
Ideas for particular units can come from several different directions. Sometimes we need a unit with a certain function for balance reasons (for example, a Vinci anti-air unit), other times we need a unit because of our storyline (we might need some Vinci units which look evil so that one of our bad guys can make an army of them), and then sometimes our artists think something looks so cool it just deserves to be in the game -- like our tech-spider. In the best situations it's a little bit of each.
Dave Inscore: In constructing Aio, we looked to some of the most exotic regions on Earth to both inspire and inform. A dramatic example can be found in our landscape design for Pirata, home of the Vinci hero Lenora and her fleet of airships. To protect our band of secretive air pirates we wanted to construct an extremely remote and virtually inaccessible locale. As we thumbed through our reference library we stumbled upon the Zhangjiajie National Forest Park, located in China. The park is known for its massive quartz-sandstone rock formations, buffered by green trees and often nestled within cloud cover. We borrowed heavily from the rock formations and shifted the terrain to be slightly more jungle in design. The result is a world like no other, but familiar in its construct of various parts.

GameSpy: How does magic work in this world, what is it based on, and what kinds of things can it do?
Doug Kaufman: Magic in Aio, as it is currently understood, involves the summoning and binding of spiritual forces. For the Alim, the major practitioners of magic, this specifically means the summoning and binding of genies and other spirits who occupy planes of existence other than our own. Two planes are well known, the Sand and the Fire. A third has been fused from a conjunction of these two -- Dark Glass. It's unclear whether a plane of Dark Glass exists independently of human actions, or whether it was brought into being by magi and beings such as Sawu, the Marid, a glass genie.
In either case, most of the Alim magic brings forth souls that manifest in the world. At the simplest level (some say at the most complex), living beings spring into existence seemingly out of nowhere. These can be what appear to be humans, or Salamanders (fire lizards that dwell in the burning regions of the world), or different types of genies such as Marids or Afreets (fire genies). Other versions of magic spells involve the use of elemental forces. For example, the Wind Deflection spell summons forth swirling winds that protect the targets from missile attacks; these winds are actually a manifestation of elemental forces of Air.
How it is that genies and other elemental forces do magic is not entirely understood, but these are beings whose form of existence is very different from our own. They appear to be tapped more directly into the primal forces of nature, and are able to perceive and manipulate unseen energies the way we can perceive and manipulate matter. | http://pc.gamespy.com/pc/rise-of-nations-2/668096p3.html |
Temperate forests are the most common biome in eastern North America, Western Europe, Eastern Asia, Chile, and New Zealand (see the figure below). This biome is found throughout mid-latitude regions. Temperatures range between -30 °C and 30 °C (-22 °F to 86 °F) and drop to below freezing on an annual basis. These temperatures mean that temperate forests have defined growing seasons during the spring, summer, and early fall. Precipitation is relatively constant throughout the year and ranges between 75 cm and 150 cm (29.5–59 in).
Because of the moderate annual rainfall and temperatures, deciduous trees are the dominant plant in this biome (see the figure below). Deciduous trees lose their leaves each fall and remain leafless in the winter. Thus, no photosynthesis occurs in the deciduous trees during the dormant winter period. Each spring, new leaves appear as the temperature increases. Because of the dormant period, the net primary productivity of temperate forests is less than that of tropical wet forests. In addition, temperate forests show less diversity of tree species than tropical wet forest biomes.
The trees of the temperate forests leaf out and shade much of the ground; however, this biome is more open than tropical wet forests because trees in the temperate forests do not grow as tall as the trees in tropical wet forests. The soils of the temperate forests are rich in inorganic and organic nutrients. This is due to the thick layer of leaf litter on forest floors. As this leaf litter decays, nutrients are returned to the soil. The leaf litter also protects soil from erosion, insulates the ground, and provides habitats for invertebrates (such as the pill bug or roly-poly, Armadillidium vulgare) and their predators, such as the red-backed salamander (Plethodon cinereus). | https://nigerianscholars.com/tutorials/ecology-biosphere/temperate-forests/ |
Abstract: Computational Electromagnetics is a discipline that deals with the processing and modeling of multi-physics and electromagnetic problems. Thanks to the advent of computers and numerical methods, engineers today can develop algorithms and software to solve Maxwell’s equations numerically. The electromagnetic scattering problem leads to a very large system of equations with millions or even billions of unknowns; traditional data analysis methods are oftentimes not efficient enough to handle the problem due to data volume. The field of Big Data has emerged from the need to process a massive amount of data and is a research area that facilitates the complex work of extremely large data sizes. Fast algorithms can be developed to efficiently manage the Big Data approach to support areas of science and engineering. In this paper, we explore an application of Big Data and algorithms in computational electromagnetics scattering problems. | https://thinkmind.org/index.php?view=article&articleid=dbkda_2021_1_10_50002 |
Past Members, Projects and Theses

Here are some pointers to some of our areas of work in the past:
- Combining filters and maps into pipelines
- Coffee Break: An alternative to map and filter
- List Comprehensions with Scala: Simple list comprehensions
- Coffee Break Explained: Simple list comprehensions
- A Generalisation of a list comprehension
- Comprehension over lists of objects
- Coffee Break: Working with lists of objects
- Coffee Break Explained: Working with lists of objects
- Comprehension over lists that have lists
- Comprehensions on lists of lists are flatMaps
- Relating filter, map and flatMap to comprehensions
- Working with multiple lists
- Coffee Break: Working with multiple lists
- Coffee Break Explained: Working with multiple lists
- List Comprehensions or filter, map and flatMap?
- Summary
- Our journey of learning to program with immutability
- Boolean: A really simple immutable data structure
- Coffee Break: Find the size of a tree
- Coffee Break Explained: Finding the size of a tree
- Data structures that are combinations of elements
- Programming with immutable data structures
- Thinking in Data Structures is thinking in Types
- Coffee Break: More types for the Turtle
- Coffee Break Explained: A look at concise code with higher order functions
- Concise code tends to be declarative code
- What is a higher order function?
- A higher order function receives functions as inputs
- Coffee Break: Designing with Functions and Data
- Coffee Break Explained: Designing with Functions and Data
- Summary

THE SEMANTIC ANALYSIS OF ADVANCED PROGRAMMING LANGUAGES by Harley D. Eades III. A thesis submitted in partial fulfillment of the requirements for the Doctor of Philosophy degree in Computer Science in the Graduate College of The University of Iowa, August. Thesis Supervisor: Associate Professor Aaron Stump.
An Introduction to Functional Programming in Java 8 (Part 3): Streams Streams are an important functional approach that can impact performance via parallelism, augment and convert data structures.
For this, we are going to use Clojure, which is a dynamic functional language based on Lambda Calculus.

Church Encoding
A mathematician called Alonzo Church was able to encode data and operators in Lambda Calculus. Church encoding is the name of this way of building primitives.

Church-Turing Thesis
Programming With Nothing by Tom Stuart.

ABSTRACT OF THE THESIS: Functional Verification and Programming Model of WiNC2R for e Mobile WiMAX Protocol, by Guruguhanathan Venkataramanan. Thesis Director: Prof. Predrag Spasojevic. The WiNLAB Network Centric Cognitive Radio (WiNC2R) is a task-based, programmable, multi-processor system-on-a-chip architecture for radio processing.
The earliest statement of Church’s Thesis, from Church () p is: We now define the notion, already discussed, of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers (or of a lambda-definable function of positive integers).
Propositions as Types, Philip Wadler, University of Edinburgh. It lays foundations of functional programming, explaining features including functions, records, variants, parametric polymorphism, and data abstraction. Church stated what we now know as Church’s Thesis, and demonstrated that there was a problem whose solution was not lambda-definable. | https://xivorujewigoceh.regardbouddhiste.com/churchs-thesis-and-functional-programming-19269nx.html |
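The Church encoding mentioned above can be sketched directly with Python lambdas (an illustration of the idea, not code from any of the quoted texts). A Church numeral n is the function "apply f n times to x":

```python
# Church numerals encoded as plain lambdas
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: n(m(f))

def to_int(n):
    """Decode a Church numeral by counting function applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5
print(to_int(mul(two)(three)))   # 6
```

Numbers, booleans, pairs and even control flow can all be built this way from nothing but functions, which is the point of the "Programming With Nothing" talk cited above.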
# Varig Flight 820
Varig Flight 820 was a flight of the Brazilian airline Varig that departed from Galeão International Airport in Rio de Janeiro, Brazil, on July 11, 1973, for Orly Airport, in Paris, France. The plane, a Boeing 707, registration PP-VJZ, made an emergency landing on onion fields about four kilometers from Orly Airport, due to smoke in the cabin from a fire in a lavatory. The fire caused 123 deaths; there were only 11 survivors (ten crew members and one passenger).
## Aircraft and crew
The Boeing 707-320C registration PP-VJZ, serial number 19841, was manufactured in February 1968 and had flown 21,470 hours. The aircraft was originally meant to be sold to Seaboard World Airlines, but was bought by Varig prior to this taking place. Varig briefly leased it to Seaboard World Airlines but otherwise owned and operated the aircraft for the entirety of its life. The aircraft had seating capacity for 124 passengers and was operating close to full on the fateful flight.
The crew aboard the flight consisted of four flight crew, four relief flight crew, and nine cabin crew. The primary flight crew consisted of Captain Gilberto Araújo da Silva, 49, First Officer Alvio Basso, 46, Flight Engineer Claunor Bello, 38, and Navigator Zilmar Gomes da Cunha, 43. Captain Araújo da Silva was highly experienced and had flown 17,959 hours, of which 4,642 hours were on the 707. First Officer Basso was also very experienced, with 12,613 flying hours, of which 5,055 hours were on the 707. Both Bello and Gomes da Cunha were also highly experienced airmen with 9,655 hours and 14,140 hours in total respectively; between them they had 8,113 hours on the 707.
The relief flight crew consisted of Captain Antonio Fuzimoto, 45, First Officer Ronald Utermoehl, 23, Flight Engineer Carlos Nato Diefenthaler, 38 and Navigator Salvador Ramos Heleno, 45. Relief Captain Fuzimoto was also very experienced, with 17,788 flying hours total, of which 3,221 were on the 707. Relief First Officer Utermoehl was much less experienced, with only 1,540 hours in total, of which only 788 were on the 707. Relief Flight Engineer Diefenthaler and Relief Navigator Heleno were both very experienced airmen, with 16,672 and 15,157 flying hours, respectively, and 17,859 total hours between them on the 707.
The cabin crew consisted of Chief Purser João Egidio Galleti, 33, and attendants Edemar Gonçalves Mascarenas, 31, Carmelino Pires de Oliveira Jr, 31, Sergio Carvalho Balbino, 28, Luiz Edmundo Coelho Brandão, 26, Alain Henri Tersis, 26, Hanelore Danzberg, 34, Andrea Piha, 24 and Elvira Strauss, 24.
## Incident
Flight 820's problems began when a fire started in a rear lavatory. Crew members moved to the front of the airplane toward the emergency exit, as many passengers in the rear of the plane inhaled smoke. Prior to the forced landing, many of the passengers had already died of carbon monoxide poisoning and smoke inhalation. The aircraft landed in a field 5 km short of the runway, in a full-flap and gear-down configuration.
Of the 134 passengers and crew aboard the flight, ten crew and one passenger, 21-year-old Ricardo Trajano, survived. Of the crew, Captain Araújo da Silva, First Officer Basso, Flight Engineer Bello, Navigator Gomes da Cunha, Relief Captain Fuzimoto, Chief Purser Galleti and Attendants Pires de Oliveira and Piha were in the cockpit and evacuated from there, whilst Tersis and Brandão escaped out of the forward galley. Trajano was found unconscious with Relief Navigator Heleno, Attendant Balbino and another passenger; Balbino and the passenger died at the scene, whilst Heleno died in a hospital soon after.
A possible cause of the fire was that the lavatory waste bin contents caught fire after a lit cigarette was thrown into it. Consequently, the FAA issued AD 74-08-09 requiring "installation of placards prohibiting smoking in the lavatory and disposal of cigarettes in the lavatory waste receptacles; establishment of a procedure to announce to airplane occupants that smoking is prohibited in the lavatories; installation of ashtrays at certain locations; and repetitive inspections to ensure that lavatory waste receptacle doors operate correctly".
## Passengers
Most of the passengers on the aircraft were Brazilian. The only survivors were in the cockpit and the first several rows of seats. Of the 11 survivors, 10 were members of the crew; the sole surviving passenger disobeyed instructions to remain in his seat.
Notable passengers who died included:
- Jörg Bruder, Olympic sailor
- Filinto Müller, President of the Senate of Brazil
- Agostinho dos Santos, singer
- Júlio Delamare, sports journalist
Introduction
At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. To lead in this new era of technology and solve some of the world's most challenging problems.
Your Role and Responsibilities
The security compliance leader's role is to ensure the secure operation of all computer systems, servers, and network connections in accordance with our policies, procedures, and compliance requirements. A security compliance leader in our team will participate in some or all of the following:
- Providing subject matter expertise in the creation, implementation, and maintenance of appropriate enterprise programs, policies, and procedures to be compliant with all applicable regulations including ISO, SOC, HIPAA, PCI, FedRAMP/FISMA
- Having the ability to utilize a working knowledge of information security best practices such as: NIST 800 series, ISO 27000 series, GDPR, etc
- Interpreting standards, requirements, and their application to the enterprise Cloud environment in the most reasonable and cost-effective manner
- Developing, implementing, maintaining, and overseeing enforcement of security policies
- Collaborating with security architects and technical security teams to define and implement security processes and procedures based on industry-standard best practices and compliance requirements. Defining the requirements and validating the procedures and audit testing methodology
- Conducting regularly scheduled audits on systems and hosting third-party audits as required in order to maintain certifications and compliance certificates.
- Working with the DevOps teams to prepare ongoing client reporting, information for prospective clients, and marketing materials
- Providing training to teams as needed
- Assisting team members and internal clients in addressing highly complex security issues applicable to the enterprise environment
Required Technical and Professional Expertise
- Ability to utilize a working knowledge of information security best practices such as: NIST 800 series, ISO 27000 series, GDPR, etc
- Experience with US Federal Compliance programs such as FedRAMP/ FISMA
- Excellent skills in risk assessment processes, policy development, proposals, work statements, product evaluations, and delivery of technology
- Ability to understand enterprise business computing operations/requirements, and in particular, Cloud
- Ability to stand firm on issues yet be flexible and creative when working with customers to find effective solutions
- Ability to understand and interpret laws and regulatory requirements related to information protection, and develop and implement appropriate processes to achieve and maintain compliance and reduce risk
Preferred Technical and Professional Expertise
- Working in a change controlled production environment.
- Administering systems that are internet facing.
- Diagnosing the root cause of problems and proposing solutions; examples include failed patches, tooling issues, false positives on system tests, and authentication problems
- Expertise in system configuration, especially privilege control (for example sudoer configuration), and system level firewall (iptables)
- An understanding of basic networking concepts: ipsec tunnels, firewalls, routers, public and private addressing.
- Computer science BSc or equivalent
- Security/privacy specific training such as CIPT, CRISC, CISSP
About Business Unit
Digitization is accelerating the ongoing evolution of business, and clouds - public, private, and hybrid - enable companies to extend their existing infrastructure and integrate across systems. IBM Cloud provides the security, control, and visibility that our clients have come to expect. We are working to provide the right tools and environment to combine all of our client's data, no matter where it resides, to respond to changing market dynamics.
Your Life @ IBM
What matters to you when you're looking for your next career challenge?
Maybe you want to get involved in work that really changes the world. What about somewhere with incredible and diverse career and development opportunities - where you can truly discover your passion? Are you looking for a culture of openness, collaboration and trust - where everyone has a voice? What about all of these? If so, then IBM could be your next career challenge. Join us, not to do something better, but to attempt things you never thought possible.
Impact. Inclusion. Infinite Experiences. Do your best work ever.
About IBM
IBM's greatest invention is the IBMer. We believe that progress is made through progressive thinking, progressive leadership, progressive policy and progressive action. IBMers believe that the application of intelligence, reason and science can improve business, society and the human condition. Restlessly reinventing since 1911, we are the largest technology and consulting employer in the world, with more than 380,000 IBMers serving clients in 170 countries.
Location Statement
For additional information about location requirements, please discuss with the recruiter following submission of your application.
Being You @ IBM
IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
The Best Ergonomic Principles for Improved Work Performance
In the field of ergonomics, workplace environment and job requirements are matched to the capabilities of the working population (interaction between the operator and the job demands).
The ideas and specifications of ergonomics guide the design of tools, machinery, labor practices, and environments. To maximize a machine's efficiency, a worker must be able to control it effectively and precisely. Badly designed workplaces are not an effective mode of production; workers should be able to operate machinery in the most stress-free manner possible.
Over time, musculoskeletal problems might develop as a result of workplace ergonomic flaws, even if the symptoms aren't obvious at first. Rather than looking for issues, ergonomics should be viewed as providing solutions and can be used in any business.
According to the University of Cape Town's faculty of health and family medicine, task completion occurs when the employee and the machine work together in the same workplace in a particular setting. A workspace is defined in detail by its dimensions and its equipment/machinery configurations. These parameters affect worker posture and reach distances, which have a direct impact on worker comfort and productivity. Temperature, illumination, sound, and vibration all play a role in describing the surrounding environment.
So, here are the ergonomic principles to assist you in identifying ergonomic risk factors and maintaining your impeccable safety record.
Retain a Balanced Body Position
Positions in which the body is perfectly aligned and balanced while sitting or standing are known as neutral postures.
To maximize control and force output, neutral postures decrease the stress imposed on muscles, tendons, nerves, and bones.
An "awkward posture" is the polar opposite of a neutral posture. Postures that deviate from the natural range of motion are awkward; they increase stress on the worker's body, raise the risk of musculoskeletal disorders (MSDs), and should be avoided wherever possible.
Wrist, elbow, shoulder, and back postures that are both neutral and awkward are shown in the images below. When you put on your ergo eyes (lens of fundamental ergonomic principles) you'll be able to tell right away if workers are hunched over their desks or standing straight.
(Images: neutral and awkward wrist, elbow, shoulder, and back postures.)
Do What Is Comfortable
Keeping a neutral posture is related to this notion, but it's worth elaborating on.
Lifting is most effective in the area between mid-thigh and mid-chest height, which is near to the body.
The arms and back are able to raise the most weight with the least effort in this area.
The "comfort zone" or "handshake zone" is another name for this area.
As long as you can "shake hands" with your work, you're keeping your body in an upright position and minimizing awkward reaches.
MSD risk factors can be reduced by working from the power / comfort / handshake zone, which guarantees that you are working at the right heights and reaches.
Now you'll be able to tell if someone is working outside of their normal range of motion or at an unsafe height if they have extended their reach or are working at an awkward angle.
Allow for some flexibility and movement
The human body's mobility system is the musculoskeletal system, and it is meant to move.
Working in a stationary position for a lengthy period of time wears your body down. This is known as static load.
For instance:
- For the next 30 minutes, keep your hands raised above your head.
- For the following eight hours, stand in the same spot.
- For the next 60 minutes, write using a pencil.
Each of these tasks creates static load. A few seconds or minutes of discomfort may not seem like a big deal, but the cumulative effect of holding these seemingly stress-free positions over time will induce fatigue and discomfort.
When you've completed these tasks, what is the first thing you'll likely do?
You'll stretch.
You'll loosen up your shoulders and lower back. You'll likely do some squats as well as some leg stretches. And you'll spread your wrists and fingers as wide as you can.
Stretching aids in recovery from exertion, strengthens the muscles and joints, and enhances overall posture and coordination. In order to perform better and avoid injury, you should warm up before working out, just like you would before a game of sports. Warming up with a stretching routine is a terrific approach to get your body ready for the day's work ahead.
Stretch breaks are also a good way to get your blood pumping and recover your energy during the course of your work day.
Reduce Force That Is Too Great
One of the most common causes of ergonomic injury is using too much force. The human body is subjected to a great deal of stress in many jobs. Muscle fatigue and the likelihood of a musculoskeletal disorder (MSD) rise as a result of increased muscle exertion.
When a work or task necessitates an excessive amount of force, the goal is to identify it and then discover solutions to lessen that force.
Most workers will experience less fatigue and less chance of developing MSDs if excessive force requirements are eliminated. Using mechanical aids, counter balance systems, adjustable height lift tables and workstations, powered equipment, and ergonomic gadgets will lessen the amount of effort and physical exertion required to complete a job.
Reduce the amount of movement you make
Repetitive motion is one of the most common ergonomic risk factors, and it poses dangers of its own. Many jobs repeat the same tasks daily, weekly, or even hourly, often because of production goals and processes. Combined with other risk factors such as excessive force or uncomfortable postures, high task repetition can contribute to MSD development. A cycle time of less than 30 seconds indicates that the work is highly repetitive.
If at all possible, needless or excessive motions should be avoided. In circumstances where this is not achievable, excessive force requirements and unnatural postures must be eliminated.
Expansion of the work, job rotation, and counteractive stretch breaks are some options to explore.
Reducing the Intensity of Direct Physical Contact
OSHA says that contact stress is caused by repeated rubbing or contact between hard or sharp objects/surfaces and soft body tissue, such as the fingers, palms, thighs, and feet, among other places. An area of the body is subjected to localized pressure due to this contact, which might affect blood and nerve flow or the movement of tendons and muscles in the vicinity.
Contact stress can be caused by resting the wrists on a sharp edge of a desk or workstation while conducting activities, forcing tool handles into the palms, especially when they can't be set down, tasks that involve hand hammering, and sitting without appropriate knee space.
Reduce Excessive Vibration
More than a few studies have demonstrated that long-term health impacts from vibration exposure are most likely to occur when a worker is regularly exposed to a vibrating tool or process as part of his or her job duties.
Hand-arm vibration syndrome (HAVS) can induce a variety of disorders, including white finger or Raynaud's syndrome, carpal tunnel syndrome, and tendinitis, among others. Fingers are particularly vulnerable to the damaging effects of vibration, which impairs blood flow and nerve function. Warning signs include numbness, discomfort, and blanching (turning pale and ashen).
Make Sure There Is Enough Light
Poor lighting is a prevalent issue in the workplace, and it can have a negative impact on the productivity and well-being of employees. To make matters worse, if you can't see properly, you can't do your job!
Workers are more likely to suffer from eye tiredness and headaches if their workplaces are not sufficiently illuminated.
An easy answer to lighting issues is to give employees access to dimmer-adjustable task lighting. Computer workstations should be set up such that monitors aren't situated in front of windows or bright backgrounds, so that they don't glare.
An office redesign doesn't have to be as difficult or complex as brain surgery. Most of the ergonomic principles discussed in this article are self-evident; however, putting them into action on a daily basis can be a difficult task for many businesses. As an ergonomics expert, you can help your company identify risk factors that often go unnoticed, measure the risk with an objective ergonomic evaluation, and implement control measures to reduce or eliminate ergonomic risk factors.
The Agra Fort is a UNESCO World Heritage Site situated in Agra, Uttar Pradesh, India, around 2.5 km northwest of its more celebrated sister monument, the Taj Mahal. The fort can be more accurately described as a walled city. It was built in an era marked by invasions and fortifications, when power was symbolized by grand palaces and grander fortresses. Fortification has always been, and still is, the prerogative of the powerful: the dividing line between the ruler and the ruled.
The Agra Fort, also known as the "Lal Qila", "Fort Rouge" or "Qila-i-Akbari", is the highlight of the city of Agra, then capital of the Mughal Sultanate.
A symbol of power, strength and resilience, it stands today in full grandeur. With the Taj Mahal overshadowing it, one can easily forget that Agra has one of the finest Mughal forts in India. Construction of the massive red-sandstone fort, on the bank of the Yamuna River, was begun by Emperor Akbar in 1565.
History books mention a fort named Badalgarh at the site during the eleventh century; it was a brick fort. Sikandar Lodhi, the ruler of the Delhi Sultanate, moved to Agra, made it his second capital, and took up residence in the fort. After his death in 1517, his son Ibrahim Lodhi took over the fort. The Mughals defeated Ibrahim Lodhi at Panipat in 1526 and captured the fort, and with it an immense treasure that included the famed Koh-i-Noor diamond. The Mughal emperor Babur built a baoli, a step well, in the fort. When Babur died in 1530, Humayun was crowned emperor at the fort.
Sher Shah Suri defeated Humayun in 1540 and held the fort for the next five years. Sher Shah Suri died in 1545, and in 1555 Humayun wrested his kingdom back from Suri's successors, clearing the way for a Mughal empire that would last the next three hundred years. Of those three hundred years, the great period ended with Aurangzeb's death in 1707, and the following one hundred fifty years saw the decline of the empire. Humayun died in 1556 and a young Akbar became emperor at the age of thirteen. Akbar made Agra his capital in 1558 and ordered the rebuilding of the fort. An enormous construction effort was undertaken, and the fort was rebuilt with brick as an inner core and red sandstone as the outer layer. The work took eight years, from 1565 to 1573. Later, Akbar's grandson Shah Jahan, during his reign as emperor (1627-58), made major alterations and additions to the fort.
Jahangiri Mahal is the first monument that comes into view once one enters the fort through the Amar Singh Gate. In front of Jahangiri Mahal is a bowl-shaped bath tub, known as Jahangir's Hauz; it is around five feet high and eight feet across, and Jahangir had it made in 1610. Jahangiri Mahal itself was built by Akbar during 1565-69 and is easily the most conspicuous building in the fort. It does not look like a four-hundred-year-old palace, perhaps because of superb upkeep, and it is an extraordinary piece of craftsmanship, full of patterns and designs. Another artistic monument that catches the eye from a distance is the Musamman Burj, a monument of a different class at the Agra Fort. It is an octagonal tower, open on five sides, forming a phenomenal balcony with a view of the riverside and the Taj Mahal. It was originally made of red sandstone and used by emperors Akbar and Jahangir; Shah Jahan had it converted to white marble according to his taste.
Shah Jahan was deposed by his son Aurangzeb in 1658. Thereafter, he spent the last eight years of his life under house arrest at the Musamman Burj, attended by his daughter Jahanara Begum. Diwan-i-Am is the Hall of Public Audiences, the place where the emperor would meet the public and hear their grievances. It was built by Shah Jahan during the years 1628-35.
Pam Cloud, MPT, is a licensed physical therapist who graduated with her Master’s degree in Physical Therapy from CSU, Northridge in 1998 and BS in Exercise Physiology from CSU, Fresno in 1995. She is the Assistant Director at CORE Orthopaedic and manages the PT aides and volunteers. Pam specializes in sports rehab with an emphasis in manual and functional exercise techniques to obtain the goals of her patients. Her philosophy is an eclectic approach to patient care including interventions such as mobilizations/high velocity thrusts, Therapeutic Taping, ASTYM and postural correction through biomechanical analysis. Pam also enjoys treating TMJ dysfunction, addressing headache and postural dysfunction that can contribute to this condition. Pam leads our aquatics rehab program at the Boys and Girls Club in Solana Beach to assist conditioning in a reduced weight bearing environment. Her career interests include health, wellness and injury prevention. In her free time, she enjoys spending time with family, camping, hiking and outdoor sports.
Susie Croke, MPT, CSCS, earned her Master’s degree of Physical Therapy from Mount St. Mary’s College in 1993. She received her undergraduate degree in Exercise Science/Physical Education from Arizona State University in 1991 and became a Certified Strength and Conditioning Specialist in 2002. Susie incorporates manual therapy techniques and her knowledge of strength and conditioning to treat her orthopedic patients. Susie also has her SFMA (Selective Functional Movement Assessment) Level I certification which is a movement based diagnostic system to help find the cause of pain by breaking down dysfunctional movement patterns. She is also SFMA Level 2 trained which helps guide the use of mobility techniques and motor control interventions based on the SFMA system. She is a certified provider in the ASTYM system which is used to help promote regeneration of soft tissue, reduce scar tissue, and restore mobility.
Winston Purkiss, MPT, CSCS, is a licensed physical therapist who graduated from CSU Long Beach with a B.S. in Physiology, where he was a NCAA Division I track athlete. Winston received his Master’s degree in Physical Therapy from Children’s Hospital Los Angeles/ Chapman University in 1992. Winston has worked for over 19 years as a sports/orthopaedic physical therapist in San Diego, California and Sun Valley, Idaho utilizing his manual therapy skills and other specific orthopaedic techniques, including Active Isolated stretching, in order to maximize the benefits to his patients. Winston specializes in rehabilitating golf injuries and is TPI certified. His hobbies include golf, skiing, surfing and strength and conditioning. Winston is currently providing care at our Carlsbad location.
Audra Jahn, DPT, graduated from St. Ambrose University in Davenport, Iowa with her Doctorate Degree in Physical Therapy in 2013. She obtained her undergraduate degree from the same university, majoring in Exercise Science with a minor in Biology. Her graduate school education was fostered by a small class size, which allowed faculty members to provide personalized, one-on-one attention to all students. She just recently moved to California from Illinois and enjoys hiking, rock climbing, running, and traveling. Audra’s treatment strategy includes a balance of therapeutic exercises to strengthen, stretching, and manual techniques. She has also found a new passion for yoga and likes to incorporate her knowledge of yoga into practice where appropriate.
Alexa Lewis, PTA earned her Bachelor’s degree in Kinesiology from Occidental College, Los Angeles, CA in 2004 and her Physical Therapy Assistant degree from San Diego Mesa College in 2013. She is also a Licensed Massage therapist and a certified Yoga Teacher. She has worked in the outpatient orthopedic setting for several years and she is grateful to be able to help patients decrease their pain and restore function. Alexa utilizes manual therapy techniques, corrective exercise, aquatic exercise, and therapeutic taping in treating acute and chronic conditions. She is also ASTYM certified and finds this technique very effective in treating soft tissue disorders. Her yoga background has given her an appreciation for correct alignment and she is passionate about neuro-re-education and assisting patients in improving their kinesthetic awareness, posture and movement patterns. She loves educating and teaching patients tools to meet their goals, gain independence, and prevent future injury. In her free time, she enjoys spending time at the beach, yoga, hiking, and anything outside with her sweet pup Kolohe.
Stephanie Bloom, OTL, CHT, is a licensed occupational and hand therapist who graduated from Tufts University in Boston with her Bachelor's degree in Occupational Therapy in 1984, has been practicing hand therapy since 1987 and received her Certified Hand Therapist certification in 1990. Stephanie has extensive experience in complicated splinting and fracture bracing, along with joint mobilization and neural mobilization. She has expertise in trauma, reconstructive surgery, pediatrics and soft tissue problems.
Robin Dushkin, OTR-L, CHT, is a licensed occupational and hand therapist who graduated from the University of Illinois with her Bachelor of Science degree in Occupational Therapy in 1977 and from Loma Linda University with Hand Rehabilitation Advanced Clinical Training in 1988. Robin has over 20 years of experience as an occupational therapist and received her Certified Hand Therapist certification in 1991. Her areas of expertise include hand therapy (treatment and evaluation) and splinting.
Ryan grew up in the Philadelphia area where sports and exercise was a driving force in his life. Ryan was a multi-sport athlete in soccer, wrestling, and volleyball in high school.
Ryan attended Slippery Rock University and earned his Bachelor's and Master's degrees in Exercise Science. While in undergrad, Ryan was a 3x regional All-American in soccer and team captain in his junior and senior years. While completing his Master's degree, Ryan was the assistant coach for both the men's and women's soccer programs. After college, Ryan was a member of the US National Beach Soccer Team from 2004-2006.
Ryan’s professional career has spanned over 20 years in health and wellness and helping people and organizations achieve their goals. His experience includes corporate wellness and fitness/recreation facility management for non-profit organizations and universities. Ryan is currently an Adjunct Professor for the Exercise Science Dept. at UNCW and Ryan previously managed and designed award-winning wellness programs for PPD’s global headquarters here in Wilmington.
Certifications/Training:
LesMills BodyCombat
Reebok Indoor Cycling
Total Resistance Exercise (TRX) Qualified Suspension Trainer and Rip Trainer
Wellness Council of America (WELCOA) Faculty member – 2014
Office/Lab Ergonomic Training from GSK Occupational Health and Safety
Presentations:
Lower Cape Fear HR Assn. – Tools to Create an Effective Wellness Program – 2015
Southeast Area Health Education Ctr. – The Role of the Employer in Employee Wellness – 2015
Philly-Fit Magazine Festival, awarded Most Athletic Workout – 2011, 2012
Pennsylvania Public Health Association Annual Conference in 1997
BRAD HOLLINGSWORTH, CSCS, TSAC-F, USAW, TRX, FMSC
Director of Strength and Conditioning
Brad is originally from Philadelphia, PA. He attended LaSalle College High School and was involved in multiple sports, including ice hockey, tennis, and crew. He then attended Norwich University on an ROTC scholarship but after two semesters decided to leave school to enlist in the Marine Corps. He spent 4.5 years stationed at Camp Lejeune with 2nd Battalion 2nd Marines.
As an athlete for almost all of his life, Brad has always had a passion for health and exercise, but it was during his time in the Marines that he learned its true value. After exiting the Marines in 2014, Brad decided to pursue a career in the health and fitness field and returned to school at Wake Tech Community College in Raleigh. In August 2016 Brad moved to Wilmington and transferred to UNCW, and in 2018 he received his Bachelor's degree in Exercise Science from UNCW.
Brad’s goal is to provide the most comprehensive and results-based programs to the tactical community, including firefighters, law enforcement officers, military and all other first responders. He plans to train active duty military personnel to extend their operational life and overall performance, to take the best and make them even better.
Certifications:
National Strength & Conditioning Association (NSCA) Certified Strength & Conditioning Specialist (CSCS)
National Strength & Conditioning Association (NSCA) Tactical Strength & Conditioning Facilitator (TSAC-F)
USA Weightlifting (USAW) Level 1 Sports Performance Coach
Functional Movement Screening (FMS) Level 1
Total Resistance Exercise (TRX) Suspension Trainer Qualified, Rip Trainer Qualified
Berkley Hall, CSCS, CPT
Assistant Strength and Conditioning Coach
Berkley grew up in Ashland, VA playing soccer and basketball. She earned all conference accolades for her participation in high school soccer, as well as earning many team awards as the starting goalkeeper for Patrick Henry High School. Throughout high school she worked as a coach for Own The Goal, a local company geared towards goalkeeper development.
She attended the University of North Carolina Wilmington, playing D1 soccer as a goalkeeper. After completing her playing career, she developed a passion for weightlifting, fitness, and sports performance. Her Bachelor's degree in Exercise Science furthered her passion for weightlifting and performance, giving her an avenue to create a career out of the things she loves. While in undergrad Berkley completed an internship at PEAK under the supervision of Ryan Gillespie and learned many valuable lessons regarding exercise, fitness, athlete development, and coaching as a whole.
Berkley participates in powerlifting, but loves to complete workouts of all kinds to continue improving as a person as well as a coach. No matter what your fitness goals are, Berkley will help you achieve them.
Certifications:
National Strength & Conditioning Association (NSCA) Certified Strength & Conditioning Specialist (CSCS)
National Strength & Conditioning Association (NSCA) Certified Personal Trainer (CPT)
Jake Jones, CSCS, CPT
Assistant Strength and Conditioning Coach
Jake grew up in Rocky Mount, North Carolina and has always been active, even through his childhood. He played several sports, but mainly stuck to baseball and soccer. He played baseball on travel teams, showcase teams, and his high school team at Nash Central High School.
After graduation, Jake went straight to college at the University of North Carolina at Wilmington, to pursue his Bachelor’s Degree in Exercise Science. Jake worked as a personal trainer after his freshman year, and got his ACE Personal Training certification, and became one of the head trainers. Soon after, he was involved in research with a fellow trainer and other supervising professors, and showed how passionate he was about health and fitness.
Jake was previously an intern at Peak Athletics over the Spring of 2018 and was brought onto our team after he graduated. He shares our passion for health and fitness, and continues to work to make himself and others better. Soon after graduation he passed his NSCA strength and conditioning certification and plans to work with all athletes, improve their form, and become well-rounded athletes. He also plans on attending Grad School for Athletic Training, to work towards his passion, and continue helping people reach their goals as best as he can!
Certifications: | https://www.peakathleticsnc.com/the-staff/ |
GUARDIANSHIP LAW ATTORNEY IN PUNTA GORDA, FLORIDA
Guardianship is a specialized area of the law dealing with the capabilities of individuals to exercise their legal rights. The infirm and the elderly are often alleged to be either incapable of exercising their own rights or subject to undue influence by others due to their circumstances. Guardianship laws were enacted to protect these vulnerable citizens.
With extensive experience in this field of law, I can represent individuals alleged to be incapacitated or vulnerable, their family members or other interested parties in these proceedings.
Guardianship proceedings usually move from commencement to completion in a very short period of time. It is extremely important that the client have competent and experienced representation, especially when dealing with sensitive issues such as cognitive changes that have a profound effect on everyday behavior. I have the knowledge and experience to address these difficult issues and reach a resolution. The guardian is responsible for looking out for the best interests of the ward and for assuring that their assets are put to the best use throughout their lifetime. I can continue to represent the guardian after the establishment of the guardianship.
Options and Probabilities – Part 4
In the last three articles in this series, Part 1, Part 2 and Part 3, I’ve been discussing probabilities as applied to option trading. Last time I mentioned there are software tools that do the heavy lifting of these calculations for us. Below is an example of one of these, the Volatility Cone.
First, look at this price chart for GLD, the exchange-traded fund that tracks the price of gold. As of March 13 that chart looked like this:
The current price was in a demand zone in the $110-113 area. This zone had originated in early 2010, not shown on the chart. There was a supply zone overhead in the $123-125 area.
In this example I’m ignoring the nearby supply area around $116, which of course we couldn’t do in real life. The point here is to illustrate the tools available to us.
Let’s say that wearing our rose-gold-tinted glasses, we believed that gold had a good chance to make a stand here and climb up to the $123 area. Now we have to decide which options to use.
First we need to decide on which expiration date to select. There are many dates to choose from, from one week to almost two years. Using options that are too far out will cost a lot of money – the farther out the expiration the more expensive the option. Therefore, going too far out will be a poor allocation of capital. Using options that expire too soon won’t work either. If our options expire before GLD makes its move we will lose.
So, we need to estimate how long this move up to the $123 target could take. We note that the last time that trip was made, between October 2014 and January 2015, it took about twelve weeks. This could be our educated estimate of how long that rally could take this time.
Now we need to cross-check against probability. The question now is this: given the recent volatility of GLD, is it reasonable to expect a movement up to $123 in twelve weeks? And if not, how long should we allow?
To answer this we can use a tool called the Volatility Cone, or Vol Cone. This tool is provided in slightly different forms by several software vendors. The one we’re using here is part of the OptionStation Pro package from TradeStation.
To start, we do what is called creating a theoretical option position by selecting an option from the chain. For this purpose we’re just going to use an option whose strike price is close to the current stock price and whose expiration is later than our educated estimate of twelve weeks. Although our reasonability check is for our twelve-week estimate, we want to add an extra two months to that since we would plan to sell any options that we might purchase when they still have at least two months of life left. That way we avoid owning options in the death-spiral portion of their lives, when time value declines rapidly.
Here is the collapsed option chain for GLD as of March 13, 2015:
Note the highlighted row for the options that expire in 189 days, on September 18, 2015. This is the next available expiration date that is farther out than twelve-weeks-plus-two-months-extra.
Expanding the September option chain, we see:
To create a theoretical position we select an option to be used. In this case I selected the call option at the $109 strike. This results in the green-highlighted position at the bottom of the OptionStation Pro display above.
Next, we display a two-dimensional graph of the position by selecting the 2D Graph tab at the bottom of the option chain. This is the result:
The top half of the above chart is the familiar option payoff graph which has been discussed in several past articles.
The bottom half is the Volatility Cone. It shows the price range that falls within one standard deviation of the current price. This is the price range that, given current volatility, has about a 68% chance of containing the actual price at a given future date. It takes into account GLD's Historical Volatility, given as 19.35% at the top of the chart, and then calculates the price that is one standard deviation away as of each date in the future. The vertical axis of the vol cone is "Days Ahead": the more days ahead, the wider the probable range. At the top of the vol cone is the 68%-probable range as of the options' expiration date, 189 days in the future.
The blue labels show that as of the expiration date of 9/18/15, the boundaries of that 68%-probability range are $96.42 on the down side and $127.34 on the up side.
Interesting, but not exactly what we want to know. What we want to know is: is our $123 target probable, i.e. within one standard deviation for a twelve-week time frame?
To determine this we can read down the gold line of the vol cone until we get to the $123 target price, and then read over from this point to the left axis and read off the number of days. This is a little hard with the fairly compressed scale on the left side of the chart, but it crosses on June 25, about 20 days further out than we estimated. We'll need to allow a little more time for our trade to work.
That is all the space we have for now. Next time we’ll continue to describe how to use this ingenious tool.
For comments or questions on this article, contact us at help@tradingacademy.com. | https://www.tradingacademy.com/lessons/article/options-and-probabilites-part-4/ |
Voter sentiment is changing from not wanting much change to wanting significant change. Some want revolution.
The perceived personality of a politician has become more important than policies and actions, to an increasing number of voters and also to journalists who are increasingly involved in making the narrative rather than reporting it.
But the hope of compassionate revolution is not (yet) being realised.
“We have moved into a political era where talk of empathy and compassion rates more highly than taking action, and the extent to which Jacinda Ardern can continue to rewrite the narrative this way will determine the outcome of the next election”
“The Prime Minister’s challenge is to entrench empathy and compassion as the basis of contemporary government, before evidence and achievement reassert themselves.”
Peter Dunne (Newsroom): Government by worthy sentiment
For the older voters, the broad consensus from 1999 to 2017 was a welcome relief to the upheavals of the 1980s and early 1990s that had led them to opt for MMP in 1993, to place a greater restraint on governments. But for 1999 first time voters, most of whom would have been too young to recall directly the experiences and hardships of the restructurings of the 1980s and early 1990s, the same broad consensus was actually a straightjacket.
No matter the complexion of the government, the policy outcomes had still been broadly the same. While the country was being transformed, quietly and significantly, in those years, to those voters nothing much actually seemed to change.
So it really did not matter to them which of the major parties was in power – they were all broadly the same anyway, and the succession of leaders each major party put up while in Opposition tended to confirm that.
If anything, National under Simon Bridges' leadership is becoming more old-school conservative. His recent "What the Kiwi way of life means to me" hints more than a little at 'the good old days' that we have evolved significantly away from. There are more Kiwi ways of life than there ever were.
What these voters were yearning for, and did not see in contemporary political leaders, were “people like them” becoming more prominent in politics. People who would speak their language, and share their concerns and frustrations.
Bridges is failing at speaking anyone’s language well if at all.
The fortuitous arrival of Jacinda Ardern as leader of the Labour Party in quite dramatic circumstances weeks before the 2017 election was the tonic many of them were seeking to vote for, in the expectation of a real break from the status quo they had known all their voting lives. She was, after all, one of them, fitting their demographic near perfectly, and completely untainted by ever having held any previous significant or substantial political office. So, for her, no problem was insoluble, no challenge insurmountable, and no existing solution sufficient.
Her appeal was (and remains) that she is a break from the past in so many ways.
The contrast between Ardern and the four Labour leaders who preceded her was huge. She made an immediate impact when she stepped up. The media became unusually excited and gave her an enormous amount of favourable coverage, but people, voters, could see for themselves that she was different, she spoke a different language that resonated.
That of itself provides those voters with a confidence that she understands their plight, because she is living it too. Forget the fact that she has changed very few of the policies that Labour took to the 2011 and 2014 elections where they were trashed; or that those they have tried to implement now (like Kiwibuild) are becoming embarrassing failures.
Forget too that her Government now admits it does not even know how to measure whether or not its policies are working, or the deteriorating relationship with our major trading partner.
It just seems not to matter because the sustaining feature of this Government is not anything it has done or stands for, but rather the effervescent personality of the Prime Minister, that fits the current mood of the group of voters around the median population age.
Indeed, it is highly doubtful whether many of them could articulate beyond the vaguest of platitudes what she actually stands for.
Your NZ commenters probably don’t represent average voters, but as an exercise I asked What does Jacinda Ardern stand for?
We are now in an almost post ‘politics as usual’ phase, where the previous emphasis on policy and delivery has given way to feeling and identifying with the issues of the day, although it is far from clear to where that is leading, or what the new norms will be.
The emerging reality is that, despite some of the rhetoric, we are moving into an era where commitment to aspiration (prioritising empathy and compassion) rates more highly than action (prioritising evidence and achievement).
The Prime Minister’s challenge is to entrench empathy and compassion as the basis of contemporary government, before evidence and achievement reassert themselves.
The extent to which she can rewrite the political narrative this way, and paint National as cold and heartless in the process, and therefore part of the past, rather than anything her Government manages to do, let alone what the opinion polls may say, will determine the outcome of the 2020 election.
I think many on the left would love for Judith Collins to take over the National leadership so they could build on the "cold and heartless" contrast with Ardern. As things stand, Bridges is playing into National's opponents' hands with his opposition to a compassionate approach to drug law and his opposition to a compassionate legalising of euthanasia.
Ardern’s compassion and empathy and wellbeing and fairness – at a superficial level at least – is going to be hard to beat, unless Government failures to match rhetoric with action become too apparent (they are really struggling with housing and health in particular, with poorly performing Ministers Phil Twyford and David Clark).
National have indicated they plan to roll out policies this year, trying to offer substance over nice but empty words. But will voters listen, whether Bridges or Collins is leading?
Labour are helped in the compassionate politics stakes by the Greens, but Winston Peters and NZ First are a sharply contrasting blast from the past. This may not matter if NZ First fail to make the threshold next election.
It may be that Ardern successfully manages to fool the masses with more feel good than actual good. | https://yournz.org/2019/02/25/more-feel-good-but-still-waiting-for-actual-good/ |
On a topographical map, sometimes called a topo map, contour lines are often used to join points of equal elevation above sea level and color gradations may depict elevation ranges between the contour lines. Topographical maps are often used to determine areas and routes where the terrain is fairly level or where steep slopes exist. This page includes both static and interactive topographical maps of North Carolina.
This section features a topographical map of North Carolina as well as a map legend that specifies elevation ranges and indicates their corresponding map colors.
North Carolina's highest mountain is Mount Mitchell, whose peak is 6,684 feet above sea level. North Carolina's lowest elevation is sea level, at the Atlantic Ocean. North Carolina features a coastal plain in the east, a central piedmont region, and the Blue Ridge Mountains in the west. Major rivers in North Carolina include the Roanoke, Yadkin, and Pee Dee.
This section features a topographical map that can be zoomed and panned to show the entire state of North Carolina or a small portion of the state. To zoom in or out on the map, use the plus (+) button or the minus (-) button, respectively. To pan the map in any direction, simply swipe it or drag it in that direction. At high zoom levels, the contour lines on this North Carolina map can help outdoorsmen, land developers, and others to plan their routes and activities more efficiently. | https://www.north-carolina-map.org/topo-map.htm |
Brighthouse Financial Inc. (NASDAQ: BHF) opened trading on October 28, 2020, with great promise, jumping 1.59% to $30.10. During the day, the stock rose to $30.535 and sank to $28.74 before settling at $29.63 at the close. Taking a longer-term view, BHF has posted a 52-week range of $12.05-$48.25.
The financial-sector company's annual sales growth over the last 5-year period was -7.10%, while its annual earnings-per-share growth over the same period was -22.00%. The stock's earnings per share (EPS) growth this year is -193.70%. This publicly traded company's shares outstanding now amount to 94.70 million, alongside a float of 92.65 million. The organization's market capitalization sits at $2.75 billion. At the time of writing, the stock's 50-day moving average stood at $29.86, while its 200-day moving average is $30.28.
As for efficiency, the company employs 1,330 people and generated $4,927,820 in revenue per worker during the last fiscal year. On profitability, the stock's operating margin was -13.14% and its pretax margin -16.05%.
Brighthouse Financial Inc. (BHF) Ownership Facts and Figures
Sometimes it helps to make up our minds if we keep tabs on how bigger investors are treating this Insurance - Life industry stock. Brighthouse Financial Inc.'s current insider ownership accounts for 0.10%, compared with 88.70% institutional ownership. In the most recent insider trade, which took place on Mar 12, a director of the organization bought 5,000 shares at $22.83, a transaction worth $114,150 in total. Preceding that transaction, on Mar 12, a director bought 5,000 shares at $20.14, for a total value of $100,700. This insider now holds 9,694 shares in total.
Brighthouse Financial Inc. (BHF) Earnings and Revenue Records
So, what does the company's last quarterly earnings report, made public on 6/29/2020, suggest? It posted $0.41 earnings per share (EPS), missing the analysts' consensus (set at $0.88) by $0.47. The company recorded a net margin of -11.29% and a return on equity of -4.84%. Wall Street analysts anticipate earnings of $2.59 per share for the current fiscal year.
Brighthouse Financial Inc.'s EPS change for the current 12-month fiscal period is -193.70%, and EPS is forecast to reach 11.34 in the upcoming year. Over the longer run, market analysts predict the company's EPS will grow by 40.01% annually through the next 5 years, compared against the -22.00% growth it recorded over the previous five years on the market.
Let's look at the current performance indicators for Brighthouse Financial Inc. (BHF). The stock has an average true range (ATR) of 1.41. Another indicator worth pondering is the company's price-to-sales ratio for the trailing twelve months, currently 0.24. Similarly, its price to free cash flow for the trailing twelve months is now 1.85.
In the same vein, BHF's diluted EPS (earnings per share) for the trailing twelve months is recorded at 22.04, a figure expected to reach 2.50 in the next quarter; analysts predict it will be 11.34 at the market close one year from today.
Technical Analysis of Brighthouse Financial Inc. (BHF)
Recent stats for Brighthouse Financial Inc. (BHF) showed that its average volume over the last 5 days was lower than the volume it posted in the year-ago period. Over the previous 9 days, the stock's Stochastic %D was recorded at 28.98%, while its average true range was 1.47.
The raw stochastic average of Brighthouse Financial Inc. (BHF) over the previous 100 days is 39.89%, a marked rise compared with 30.91% over the last 2 weeks. As for the stock's volatility metrics, its historic volatility over the past 14 days was 48.24%, lower than the 53.43% volatility it exhibited over the past 100-day period.
Playwright: Sarah Kane
Producer: Shotgun Players [Website]
Description: A middle-aged man and younger woman meet at a hotel room. As public and private violations collide, their world shatters around them. Acknowledged as the most provocative and influential British playwright of her generation, Sarah Kane weaves an unforgettable story of destruction, chaos, and, ultimately, redemption and love.
When Blasted debuted in 1995, the production sparked outrage. Sarah Kane created characters living in a state of siege with a war mentality stripping away their humanity layer by layer. Ben Brantley of the NY Times writes: “For all their degradation and cruelty, her characters are complex, ambivalent and specifically, identifiably human. In this sense, Blasted is an extraordinary act of empathy, an imagining of how people could reach the point where they behave like participants in war crimes. Ms. Kane’s chilling contention is that it’s not such a stretch for any of us." | http://awards.theatrebayarea.org/listings/event/4405 |
Q:
How do I generate random numbers with a mean in C++?
How do I generate 100 random numbers in between 1,000 and 20,000, with a total mean of 9,000 in C++? I am looking into the C++ 11 Libraries but am not finding a method that allows me to include a mean AND a range.
A:
Since you don't care about the distribution as long as it satisfies your constraints, by far the easiest thing to do is to simply produce 9000 all the time. And the simplest non-constant distributions are things like producing 1000 with probability p and 20000 with probability 1-p, where you've solved for the value of p that gives the correct mean.
I strongly suspect you should figure out the math/statistics of what you're trying to do before you start thinking about programming anything.
A:
Since you are flexible on distribution, an easy solution that still gives reasonable results, without having to do rejection logic, is a triangular distribution. I.e. you set the lower end of the triangle at 1,000, the upper end of the triangle at 20,000, and the tip of the triangle such that you get your desired mean of 9,000.
The wikipedia link above indicates that the mean of a triangular distribution is:
(a + b + c) / 3
where a and b are your lower and upper limits respectively, and c is the tip of your triangle. For your inputs, simple algebra indicates that c = 6,000 will give your desired mean of 9,000.
There is a distribution in C++'s <random> header called std::piecewise_linear_distribution that is ideal for setting up a triangular distribution. This needs only two straight lines. One easy way to construct such a triangular distribution is:
std::piecewise_linear_distribution<> dist({1000., 6000., 20000.},
[](double x)
{
return x == 6000 ? 1. : 0.;
});
Now you simply have to plug a URNG into this distribution and crank out the results. For sanity's sake it is also helpful to collect some statistics that are important according to your problem statement, such as minimum, maximum, and mean.
Here is a complete program that does this:
#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>
int
main()
{
std::mt19937_64 eng;
std::piecewise_linear_distribution<> dist({1000., 6000., 20000.},
[](double x)
{
return x == 6000 ? 1. : 0.;
});
std::vector<double> results;
for (int i = 0; i < 100; ++i)
results.push_back(dist(eng));
auto avg = std::accumulate(results.begin(), results.end(), 0.) / results.size();
auto minmax = std::minmax_element(results.begin(), results.end());
std::cout << "size = " << results.size() << '\n';
std::cout << "min = " << *minmax.first << '\n';
std::cout << "avg = " << avg << '\n';
std::cout << "max = " << *minmax.second << '\n';
}
This should portably output:
size = 100
min = 2353.05
avg = 8972.1
max = 18162.5
If you crank up the number of sampled values high enough, you will see convergence on your parameters:
size = 10000000
min = 1003.08
avg = 8998.91
max = 19995.5
Seed as desired.
| |
Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/44328
Title: Distributed energy storage systems for power networks
Authors: Wang, Junbo.
Keywords: DRNTU::Engineering::Electrical and electronic engineering::Electric power::Production, transmission and distribution
Issue Date: 2011
Abstract: With the rapid development of the economy and society, energy consumption continues to increase. However, the earth's fossil fuel resources are finite, so as they are depleted we need new, sustainable resources to substitute for fossil fuels. Due to their clean, renewable and environmentally friendly characteristics, renewable energy sources can be actively considered. However, uncontrollable weather dictates the amount of generation that can be harnessed from these sources, so the production of electricity from renewable sources cannot match the load demand exactly at all times. The consequence would be unacceptable network voltage/frequency variations, compromising the security and reliability of the network. Methods should be found to increase the amount of renewable energy without causing degradation in network performance. This project intends to extend the use of hybrid storage systems to cater for energy management, power quality and bridging power purposes. The hybrid energy storage system could include different types of energy storage media; the suitability and capacity of each storage medium would depend on the power level and the full charge/discharge cycle time expected for each application.
URI: http://hdl.handle.net/10356/44328
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated. | https://dr.ntu.edu.sg/handle/10356/44328 |
Over his career, Paul Chibe has led marketing for some of the largest brands in the world ranging from Quaker Oats and Wrigley to Anheuser-Busch where he served as Chief Marketing Officer in the United States. Today, he serves as the CEO of Ferrero North America, maker of iconic brands such as Nutella, Tic Tac, Kinder, and Butterfinger. We sat down to talk about his move from CMO to CEO, and what marketers should focus on during their career.
Dave Knox: In your career, you started in finance, moved to marketing and eventually became the Chief Marketing Officer at Anheuser-Busch. It's rare to see a finance leader move into marketing and even rarer to see a CMO move into the CEO job like you have at Ferrero USA. How did that broad background prepare you for today?
Paul Chibe: The very first positions I held in finance were commercial roles, so I supported sales and I supported marketing. And I think that financial context really helped me be grounded in terms of working with numbers, but also in understanding that ultimately the role of a business is to generate profitability. That context is always underneath all the things that I did in my career. Moving from a marketing role to a general management role, the difficulty sometimes is that there is an opinion that marketers don't have the functional context to move and make that transition. And what I mean by functional context is that they don't have the understanding around supply chain, operations, and the industrial processes related to managing a business. This is where it's important for someone in their career journey to have that openness, to make sure that they are getting that exposure through curiosity, through respect for the other functions. You build that understanding that permits you to make that transition. The criticism always of a marketer is they worry about marketing but are not as worried about issues directly related to management of the business. And for those marketers that do think that way, it is a difficult transition to run the "mundane" parts of the business. But if a marketer has the right attitude as they come along in their career, they can be quite well prepared to become the CEO.
The big part of that transition is that your responsibility set expands exponentially. As I moved from general management to my experience at AB InBev, what made that attractive was just the scale of the Anheuser-Busch business in the United States. It was a $15 billion business. We had a billion dollar marketing budget. I had 400 people working for me in a large organization. Those resources enabled you to unleash your creativity in ways that you could not in many other roles. But that role also helped me become much better at the operational understanding in terms of how to run businesses. That experience, the discipline and rigor in which the operation was managed, was quite formative in developing my capability to become a CEO.
As you are moving through your career, make sure that you are getting the appropriate exposure and contextual understanding of the functions as a marketer. There are plenty of opportunities as you work on projects to help you become an effective leader of a business. For example, when you work with new products, you often work with product design. A lot of what product design involves is shipping the product and you understanding and learning shipping in the context of your new product's role is something that will serve you later on as a general manager. The thing I push on marketers that have worked for me is to make sure that they respect the functions and they are curious about the functions as they move through their career journey.
Knox: What else should marketers consider if their goal is to become a CEO?
Chibe: First, you have to have a true curiosity. You have to have a curiosity on how all the functions of a company run. What do they do? What are the metrics that these functions operate against? How do they do their job? And that curiosity has to be grounded for respect for the other functions and for your colleague. Think about your vocation. You love marketing, you went to marketing because that you felt was a fun, exciting role. That's just as much true for the person who's going into engineering or going into supply chain. Being curious about what your colleagues are doing and having a true respect and appreciation for their role and their choice of being in that role, sets up the kind of the mutuality that needs to be there for you to have that open dialogue of learning that is required.
Finally, it is absolutely critical to have an entrepreneurial view. An entrepreneur figures out how to get things done. They overcome barriers and obstacles. They work with issues. A marketer should have an entrepreneurial view because that aspect of trying to overcome problems and launching a product or building a brand, gives you the context of engaging the organization. If you have that attitude that makes you someone that then when the opportunity comes for you to move into general management, you may not be an expert, but at least you have some understanding and you are able to ask good questions as you are moving into these roles.
Knox: What is the largest opportunity for marketers today in the constantly changing business landscape?
Chibe: I think the largest opportunity for marketers is to focus on building the connection of their brands with consumers. To focus on developing the emotional capital. It may be tougher in some categories, but the younger consumer has a different relationship with brands than the older consumers have. A digital native is much more used to going online and shopping and their loyalty to brand is much less. Their loyalty might be to the platform they are buying from and less to the brand. This is something that marketers are going to have to observe and pay attention to over time.
Let's say I’m a loyal shopper on Amazon. I have my Prime membership. I am able to have the stuff delivered to my house. And if I'm looking for an item like dish detergent, I am going to buy the dish detergent that is on Amazon at a good price, not necessarily the big brand like Dawn. And for me, I have been brand loyal to Dawn my whole life, but you can see where for a younger consumer, it's more about what is easy for them to purchase with the app than necessarily brand loyalty. It is a new world. It's a new world for how brands establish a relationship with consumers.
Knox: With that changing brand loyalty of the next generation of consumers, what do big brands need to keep in mind?
Chibe: It comes down to authenticity and while that is an overplayed word, it is at the heart of the issue. If you think about the brands that are sourcing their volume from the big brands, there are usually parameters. They are local, there is an origin story of a founder who developed this business with some sort of love for that particular category. And there's a product advantage.
One of the things that has been an issue for big CPG is that for the last few decades, the focus was on cost savings, not necessarily on continuing to create a distinct product performance advantage and quality advantage with their products. Take a brand like Annie's Mac and Cheese that comes along with a local story, an emphasis on product quality, and speaks to the needs and concerns of the mom. It sources a ton of volume from Kraft Mac and Cheese. That's the classic story of a startup brand taking a lot of share from a big brand. And that is being played out in almost every category out there.
Big companies need to keep this in mind, whether they are buying a new brand or launching a new product. You have to make sure you have that authenticity and hit on these key components. There has to be an authenticity to why you are entering that category, and truth to it.
Knox: How has COVID-19 changed the approach that brands need to talk in the impulse and convenience categories as it relates to eCommerce and last mile delivery?
Chibe: For us, COVID did not change our strategy, because we were significantly interested in the space. What we have observed is that COVID-19 accelerated the future, particularly for people who might not have tried at-home delivery without it. Let's say you're the 65-year-old who has a kind of routine, and they really enjoy going to the grocery store in that routine. Now they were forced to buy online and they had a very enjoyable experience buying online. And so what I think you're going to see in the post-COVID world is that people who were forced to try at-home delivery had a satisfactory, very enjoyable experience. You are going to see much more of a mix in behavior going into the future. You are going to see people completely switch to online, and their behaviors will change. The complexity comes from the point that the cost to serve for our retailers and for our partners in this new environment is extremely expensive. And you read all these articles about the retailers not making money on these services. What we will see in the future is some sort of reconciliation of the model in terms of the costing. The big retailers can't lose $10 on every eCommerce order going into the future for a long time. Maybe that was okay when online was 2% of their volume and they were figuring it out, but when it gets to double digits, the reconciliation has to happen. They have to make money on this transaction, and there will be some adjustments in terms of pricing or how the charges are structured to have your stuff delivered to your house.
I have been recognized throughout the industry as an innovator who bridges the worlds between brand marketing, digital and entrepreneurship. I blend classical brand marketing acumen with entrepreneurial instinct to navigate a changing corporate landscape. As a consultant, speaker and coach in the areas of innovation, marketing and digital transformation, I have been asked to share my expertise by some of the world’s largest companies and most innovative startups. As the former Chief Marketing Officer of Rockfish, I helped the company become one of the fastest-growing agencies in the country, going from $8 million in 2010 to $70 million in revenue in 2016. Simultaneously, I cofounded The Brandery, one of the top 10 startup accelerators in the US. Combining these two worlds, I am also the author of the bestselling Predicting the Turn: The High Stakes Game of Business Between Startups and Blue Chips.
Check out these 8 global corporate offenders who contribute the most to plastic packaging production and pollution
The Changing Markets Foundation has released ‘Talking Trash: The Corporate Playbook of False Solutions’, a report on the world’s worst plastic waste offenders. The report states that the companies involved are actively taking steps to obstruct and undermine legislative solutions aimed at tackling an unprecedented global plastic waste crisis. The Foundation is on a mission to expose irresponsible corporate practices and drive change towards a more sustainable economy. The report focused on companies who actually took the step of announcing the amount of plastic they produce. Let’s take a look at the world’s worst offenders for plastic pollution, who are producing metric tonnes of plastic packaging annually as of 2020.
There were many important take-aways for employers in the aviation industry from the CAPA Asia-Pacific Aviation Summit that we recently attended. A couple are worth highlighting, as they also apply more broadly to Australian employers managing a rapidly changing competitive landscape within the current industrial relations framework.
1. Global competition and the need to benchmark against international competitors
Australian organisations face increasing pressure from companies operating from foreign locations, including from China and South East Asia. While the Chinese and Indian economies have slowed slightly, they are still expanding and will be the drivers of significant growth in the future. The development of significant domestic productive capacity in those locations continues apace.
In the Australian aviation industry this is evidenced by a sharp increase in capacity (the number of airline seats available) being put into the international market by Asian and Gulf carriers, and by the entrance of low cost carriers to the market. This is no different to the entrance of international competitors in other industries, such as manufacturing.
Given foreign competitors do not face the same rules and regulations (particularly in relation to industrial relations or employment law) as Australian firms, the cost and operating benchmark for Australia needs to be the global benchmark. It is no longer enough for a company to measure its operations against only its competitors based in Australia.
It is critical from a labour perspective that employees and unions appreciate the realities of the situation, particularly in enterprise bargaining claims.
2. Innovation and the need to maximise flexibility in deploying assets (including labour)
The pace of technological change in the airline industry, and most other industries, continues unabated. The pursuit of innovative offerings is important, especially as expanded capacity and automation become increasingly prominent features. Companies who do not innovate and adapt are quickly left behind.
To paraphrase and expand on comments from one of the speakers at the Summit: innovation is going to happen; it’s just a question of who is going to lead the way and how well you are positioned to adapt quickly.
This is especially important for cross-sector businesses. For example, some areas of the economy are slowing (mining) while others are picking up (tourism). To survive and prosper, employers need the ability to react quickly to those changes by redeploying assets. Ensuring you have flexibility, particularly in relation to labour, is critical to being able to harness the benefits of rapid technical change and meet the effects of globalisation.
This means that businesses should work to minimise impediments to the speedy redeployment of labour wherever possible such as:
- restrictive enterprise agreement conditions
- onerous consultation obligations in relation to change
- limits on managerial discretion
- a “money for nothin’” approach to enterprise bargaining – that is, guaranteed standard wage increases plus more for buying productivity gains.
3. Talent retention and employee engagement is critical
Despite technological changes, people still play a significant role in how a company is perceived by the market and how services are delivered. Retaining and engaging with your workforce remains critical.
Successful organisations are now adopting a more sophisticated approach, which involves ensuring that employees are not just another ‘input’ to the business, but are aligned with the business’s strategic direction. This involves a number of elements; one of the most important is to move beyond the narrow focus of traditional ‘enterprise bargaining’ (which can often involve costly and time-consuming adversarial confrontation) and think more broadly about the ‘whole relationship’ with employees. This means considering how to manage and incentivise employees outside the frame of enterprise bargaining.
4. Productivity and the importance of planning
In terms of productivity measures (for example, labour costs as a percentage of revenue), the Summit highlighted that the most productive airlines were low cost carriers.
It is clear that legacy airlines and many other participants in the industry face barriers to rapid change. This is much the same for manufacturers, service providers and a range of other employers who have a long history of enterprise bargaining enshrining terms and conditions out of line with current market conditions and imposing restrictions on flexibility.
The Productivity Commission’s recent draft report has proposed that the current system, which is squarely aimed at promoting collective bargaining, should give way to greater opportunities for individual bargaining arrangements. Related to this is a hybrid proposal which enables an employer to make individual agreements across entire classes of employees amounting to a “collective individual flexibility arrangement”. As our colleague Chris Gardner stated in his opinion piece in Fairfax recently, these changes alone will be profound.
For the moment, the current system remains. While change can be achieved, the current system makes it all the more important for businesses to plan carefully for bargaining in the future.
This unit examines current developments in international and regional instruments and institutions that promote and protect the human rights of indigenous peoples. Comparative perspectives on the rights of indigenous peoples in common law jurisdictions such as Australia, Canada and New Zealand will be studied. Areas of focus include the definitions of indigenous peoples, the concept of self-determination, collective and individual rights, land and resource rights, civil and political participation, and economic and cultural rights.
Please note that in 2009 this unit will involve the opportunity to participate in international videoconferencing seminars with universities in North America and New Zealand.
WHEN JOURNALIST ALEX NEWMAN didn’t have time to attend a Medford City Council meeting last February, he was able to view an official recording and learn about plans for a controversial school fundraiser. The school would receive some of the proceeds of sales from a food truck company that several city councilors accused of inflating prices and collecting customers’ personal data for marketing purposes.
It’s the type of local story that prior to 2020 might have gone unreported given shrinking community newsrooms and fewer reporters covering municipal meetings. A series of executive orders and state legislative action in response to the COVID-19 pandemic, though, temporarily required most government bodies to provide remote public access and recordings to their proceedings — an assist, however imperfect, to local journalists and other interested citizens.
“I’ve covered more meetings since COVID than I ever have before,” said Newman, a local reporter for Patch who single-handedly covers Medford, Malden, Somerville, Arlington, and Reading. The municipalities encompass more than 32 total square miles and a quarter-million residents. “I don’t have to drive,” he said. “I can watch the meetings the next day.”
Unless action is taken by April 1 when the Open Meeting Law changes expire, journalists like Newman may again find themselves constrained by the limits of in-person meetings. If so, it’ll be a burden shared by everyone in the Commonwealth.
Large regional newsrooms such as the Boston Globe, online news media, public broadcasting stations, and neighboring publications help fill the void. Still, local journalism here is threatened by many of the same industry challenges battering newsrooms in other states. That threat led legislators to establish a commission in 2021 to research local journalism in Massachusetts and make recommendations on how the industry can be strengthened.
The commission is a well-intended endeavor. The benefits of robust local journalism are plentiful: less government corruption and municipal mismanagement, increased civic engagement and less partisan polarization, more economic investment and better public health information, among many others. The remote meeting capabilities we now have in place are a means to these ends. The technology helps journalists, particularly those covering multiple towns with little newsroom support, expand their coverage and report about government more efficiently.
Without the ability to attend meetings remotely or view recordings, Newman said, “it would be exceedingly difficult to cover issues with the depth and consistency I’ve been able to during the last couple years.”
For all its open government benefits, however, remote meeting technology can also be used as a shield against scrutiny and public engagement. A survey conducted last year by the Associated Press found that while more public bodies are live-streaming their meetings, it’s becoming more difficult for people to actually speak with their elected officials. Citizens are often prohibited from testifying at remote hearings or they are limited to submitting written testimony.
Fortunately, a solution exists. A plan to make remote access to government meetings permanent and improve the current changes to the Open Meeting Law is now pending in the Legislature. An Act to Modernize Participation in Public Meetings (H.3152/S.2082) will require both remote and in-person access to government proceedings.
The legislation is sponsored by Rep. Denise Garlick and Sen. Jason Lewis. It’s endorsed by the New England First Amendment Coalition along with other government transparency advocates such as the ACLU of Massachusetts, the Massachusetts Newspaper Publishers Association, and Common Cause.
Unlike other bills being proposed, this legislation requires a hybrid format that allows citizens to not only participate in meetings remotely but also attend them in person. This dual access is especially important for journalists who not only need the ability to monitor many meetings from afar, but must also often follow up with public officials after those meetings.
While local journalists and their audiences will benefit from this hybrid access, so will other groups of citizens such as those with disabilities, with family or work obligations, with limited transportation, or other circumstances making it difficult to attend government meetings in person. From a press perspective, this legislation is about government transparency and accountability. But it’s also about equity and providing access to all citizens.
During the last two years, Newman has noticed more citizen interest in government meetings. Some public bodies have responded by posting agendas and meeting materials online faster than usual, he said. All of this has provided more news for Newman to cover, a task made easier through remote access and recordings. “This amount of coverage,” he said, “just wouldn’t be possible if I could only be at one meeting.”
Justin Silverman is executive director of the New England First Amendment Coalition, a Massachusetts-based non-profit organization that advocates for press freedoms and open government.
Threshing is a key part of agriculture that involves removing the seeds or grain from plants (for example rice or wheat) from the plant stalk. In the case of small farms, threshing is done by beating or crushing the grain by hand or foot, and requires a large amount of hard physical labour. A simple thresher with a crank can be used to make this work much easier for the farmer. In most cases it takes two people to work these: one person to turn the crank and the other to feed the grain through the machine. These threshers can be built using simple materials and can improve the efficiency of grain threshing. They can also be built with pedals, or be attached to a bicycle, so that the person operating it can simply pedal to reduce the work even more and make threshing faster.
Threshers can be made in a number of ways using simple tools, and can be used in the harvesting of maize/corn, rice, wheat, sorghum, pearl millet, and any other grain or seed that must be separated from a stalk. The attachment of a thresher to a pedal-system can be built with basic materials. Two versions are the pedal-powered thresher, which is built as one piece, and the attachment to a bicycle for a regular thresher with a crank. Pedal-powered threshers have been suggested or made available to farming communities by governmental or non-governmental organizations. It should be remembered that there are some disadvantages to these threshers, and their impact in a specific region should be researched before they are suggested.
There are many different designs for threshers, and they can be made from wood or metal. The shape of the thresher can vary, but it must include some main parts:
An addition that can be built to make a thresher more efficient is to make it pedal-powered. This adds two more parts:
The pedal-powered thresher developed by the Maya Pedal Project provides a good example of a built-in pedal system to a thresher/mill.
An attachment to a regular bicycle can be built to allow the bike to be used as the seat, pedals, chain and sprocket of the thresher. The bicycle must be on a stand so that the back wheel is raised off the ground. Plans have been developed to build the attachment and the wheel-stand out of pieces of metal, including a large wheel that can be screwed to the crank section of the thresher (see External links). A drill will be required to make this as well.
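As a rough sketch of how the pedal drive relates to drum speed, the ordinary chain-and-sprocket ratio applies. The cadence and tooth counts below are illustrative assumptions for demonstration, not figures from any published thresher plan:

```python
# Rough estimate of thresher drum speed from pedal cadence, assuming
# the drum is driven directly from the rear hub via the bicycle chain.
# All numbers are illustrative, not measurements from a real design.

def drum_rpm(pedal_rpm: float, chainring_teeth: int, sprocket_teeth: int) -> float:
    """Drum speed given pedal cadence and the chainring/sprocket ratio."""
    return pedal_rpm * chainring_teeth / sprocket_teeth

# A relaxed cadence of 60 rpm with a typical 44-tooth chainring and an
# 18-tooth rear sprocket gives the drum roughly 147 rpm:
print(round(drum_rpm(60, 44, 18)))  # 147
```

Swapping the rear sprocket is the easiest way to tune drum speed for a given crop without changing the rider's cadence.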
Advantages of the thresher include less physical labour and more efficiency (amount of grain threshed per amount of time). Less seed breakage is also a benefit of using a thresher as opposed to stomping or beating grains. However, more breakage can occur if it is not used properly.
Cultural differences must be considered. Introduction of machinery to the threshing process, and the way that the pedal-powered thresher is used have conflicted with cultural beliefs or practices in some cases. The preferences of the region must be taken into consideration.
There are physical dangers involved in introducing machinery into a farming process; one of these is injury to hands and arms when feeding the stalks into the thresher. When building the thresher, creating a higher hood/chute cover helps stop the operator’s hands from contacting the machine, but does not entirely eliminate the danger.
Seeds can be broken and ruined as they go through the thresher, and seed breakage can happen more often with threshers that are the wrong size or design for the type of seed. The wire loops or spikes may have to be adjusted if seeds appear to be broken (please see suggestion for spacing). Seed breakage also happens with stomping and beating; however, if the thresher is not built in an appropriate way for the specific grain, more breakage may occur. If the thresher is well-suited for the size of the grain and stalks, it should have fewer broken seeds than beating or stomping. The most common seed breakage with threshers is with corn/maize, when there is too much moisture in the kernels. This can be reduced by drying kernels more thoroughly before threshing.
The size and weight of the thresher can be problematic. The thresher may need to be carried, and therefore must be light enough for one person. The suggested weight is 35 kg. On hillside farms it may be difficult to transport the thresher or to set it up properly.
Statement of Policy
Washington University in St. Louis (WashU) is committed to conducting all university activities in compliance with all applicable laws, regulations and university policies. WashU has adopted this policy to outline the security measures required to protect electronic information systems and related equipment from unauthorized use.
Objective
Identify common classifications of information that WashU uses, stores and/or transmits.
Policy
Information that is used, stored or transmitted will be classified as public, confidential or protected. The data owner will assist with data classification and labeling.
Workforce members will assess the information prior to sharing with a third party to ensure sharing the information will not cause damage or distress. If there is potential for damage or distress, extra controls will be needed before the information is transferred.
Public
Public information consists of information that is acceptable to share openly and has no requirements from federal or local regulations on its control and use.
Confidential
Information that is not freely available to use, store and transmit, but is not subject to any regulatory compliance requirements, is classified as confidential. This may include data provided to WashU by external individuals or entities for use or storage by the university.
Intellectual property of a department, school or research group, employee salaries, unlisted phone numbers, email address lists for studies or volunteers, human resource files and legal documents would fall into this category.
This information is for limited distribution and requires basic information security controls.
Protected
Information identified by federal, state and local regulations is classified as protected. This information is regulated and requires information security controls in accordance with the mandates of those regulatory bodies.
Regulations including but not limited to:
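The three tiers described above can be sketched as a simple lookup. The control labels below are illustrative assumptions for demonstration only, not an official WashU control list:

```python
# Hypothetical sketch of the public/confidential/protected tiers.
# Control labels are illustrative, not taken from any official standard.

CLASSIFICATION_CONTROLS = {
    "public": [],                                    # freely shareable
    "confidential": ["basic security controls"],     # limited distribution
    "protected": ["regulatory security controls"],   # per regulatory mandate
}

def required_controls(classification: str) -> list:
    """Return the controls required for a given classification label."""
    key = classification.strip().lower()
    if key not in CLASSIFICATION_CONTROLS:
        raise ValueError(f"unknown classification: {classification}")
    return CLASSIFICATION_CONTROLS[key]

print(required_controls("Protected"))  # ['regulatory security controls']
```

Rejecting unknown labels outright, rather than defaulting to "public", mirrors the policy's requirement that information be assessed before it is shared.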
The Hubble Space Telescope, operated by NASA and ESA, is fantastic for detecting objects that reside in the far reaches of space. Black holes, which are actually impossible to see, reveal their position thanks to the galaxies that often surround them, but a new survey has revealed a black hole with a disk of material that, according to what we think we know about black holes, should not even be there.
The black hole lies at the heart of the galaxy NGC 3147, a spiral galaxy that is 130 million light years from Earth. Because of the state of the galaxy, researchers would have guessed that the black hole was essentially starved, but the presence of a material disc throws that assumption into question.
Active galaxies that feed supermassive black holes at their centers often produce a ring of debris that surrounds the black hole. When the material gets too close, it is swallowed, but in less active galaxies, the black holes at their cores do not have the gravitational power to continuously pull material from the surrounding galaxy.
NGC 3147 should be one of those galaxies, and scientists assumed its black hole was starved of matter until they discovered that a disk of material circles the center at more than 10 percent of the speed of light. That's the kind of thing scientists expect to see surrounding a black hole that is feeding on matter in the heart of a much more active galaxy.
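For a sense of scale, the quoted "10 percent of the speed of light" can be put into everyday units with a one-line calculation (the 10 percent figure is the article's; the speed of light is the standard value):

```python
# What "more than 10 percent of the speed of light" means in km/s.
C_KM_S = 299_792.458  # speed of light in km/s (standard value)

disk_speed_km_s = 0.10 * C_KM_S
print(f"{disk_speed_km_s:,.0f} km/s")  # roughly 30,000 km/s
```

That is fast enough to cross the continental United States in well under a second, which is why such speeds are normally seen only around actively feeding black holes.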
"The type of disk we see is a reduced quasar that we did not expect to exist," said Stefano Bianchi, first author of a new paper on the black hole in Monthly Notices of the Royal Astronomical Society, in a statement. "It's the same type of disk we see in objects that are 1,000 or even 100,000 times brighter. The predictions of current models for very faint active galaxies clearly failed."
In the future, the team plans to look at similar galaxies to determine whether this observation is representative of a trend or just a strange anomaly.
Socio-cultural beliefs about an ideal body size and implications for risk of excess weight gain after immigration: a study of Australian residents of sub-Saharan African ancestry.
Though several studies have focused on risk factors associated with excess weight gain, little is known about the extent to which socio-cultural beliefs about body sizes may contribute to risk of excess weight gain, especially in non-Western migrant communities. Drawing on socio-cultural and attribution theories, this study mainly explored socio-cultural beliefs about an ideal body size among Australian residents who were born in sub-Saharan Africa (SSA). Implications of body size beliefs for risk of excess weight gain after immigration have also been discussed. Employing a qualitative design, 24 in-depth interviews were conducted with Australian residents who were born in SSA. Thematic content analysis was undertaken to ensure that participants' experiences and views were clearly captured. According to the participants, a moderately large body size is idealised in the SSA community and post-migration weight gain is commonly regarded as evidence of well-being. While desirability of a moderately large body size was noted by some participants, others were concerned about health risks (e.g. high blood pressure) associated with excess weight gain. Moreover, body size ideals seemed to be different for men and women in the SSA community and these ideals were mainly promoted by family and friends. Participants reported that women with very slim (skinny) body sizes are often regarded as persons suffering from health problems, whereas those with 'plumpy' body types are often considered beautiful. Participants also noted that men are expected to look well-built and muscular while those with big bellies are often seen as financially rich. Participants' interpretation of post-migration weight gain as evidence of well-being calls for urgent intervention as the risk of excess weight gain appears to be high in this immigrant group.
This blog contains a general introduction to designing and creating a form with the form generator package. For a better understanding, the individual steps are gradually explained using a simple example.
Overview
<a name="chapter-1"></a>
1. Designing the Form
The basis of every form is an instance of the
> Coding the form directly in PHP or using an XML file? In principle, (almost) all functions of the package are available when defining a form using an XML file. Which of the two methods is used depends partly on the preferences of the developer and partly on whether the form to be created always has exactly the same structure or not. If, for example, due to user rights or other externally changeable factors, different elements of the form are to be shown / hidden or provided with write protection, this can only be implemented by direct coding!
For the Quick Start we first choose direct coding in PHP. The complete source can be found in the example directory of the package (QuickStart.php). The XML version is also in that directory (QuickStartXML.php and xml/QuickStart.xml).
<a name="chapter-1-1"></a>
1.1. The first Steps
Now we are ready to define our form.
<a name="chapter-1-2"></a>
1.2. Basic structure of the Form
Let's take a look at our example form and then break it down into its basic structure:
<img src="QuickStartStructure.png" alt="QuickStartStructure"/>
...and now let us build this structure with the appropriate objects:
<a name="chapter-1-3"></a>
1.3. Creating the input Elements
Now that we have created the basic structure of the form, the input elements follow next.
<a name="chapter-1-4"></a>
1.4. Bringing the Form to the Web
...and embed these parts in a (very) simple HTML page:
In the
Behind the stylesheet we insert the dynamically generated styles from the Formgenerator in a
For all JS functionality (initialization, validation, ...) we only need to include the
In the example, the HTML body simply consists of a DIV so that the form is highlighted against a dark background. In real use, in most cases, this is a DIV that is located somewhere within the layout of the website.
Using the MSO-Theme stylesheet the result of this quick start looks as follows:
<img src="QuickStartMSO.png" alt="QuickStartMSO"/>
By only changing the stylesheet (...and some values in the configuration), the result changes to:
<img src="QuickStartSKien.png" alt="QuickStartSKien"/>
<a name="chapter-2"></a>
2. Connecting the form with Data
The form must (in most cases) be linked to a data provider. The data provider is not only used to transfer the data to be processed to the form; it can also be used to specify select option lists (for select elements or radio groups) if these are not included in the form definition.
The Dataprovider must implement the
If no existing data have to be edited and there are no select options to specify, a
The simplest - and in many cases applicable - data provider used in the example is the
<a name="chapter-2-1"></a>
2.1. The data to be processed by the Form
A common use case is that the data to be processed results from a database query. This can easily be implemented e.g. with
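Whatever the concrete provider class is, the underlying pattern is the same: run a query and hand the resulting record to the form as an associative structure (column name → value). A language-neutral sketch of that pattern (shown here in Python with sqlite3; the table and column names are made up for illustration):

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Jane Doe', 'jane@example.com')")

# Fetch one record as an associative mapping (column name -> value),
# which is the shape a form data provider typically serves.
conn.row_factory = sqlite3.Row
row = conn.execute("SELECT * FROM customer WHERE id = ?", (1,)).fetchone()
form_data = dict(row)

print(form_data)  # {'id': 1, 'name': 'Jane Doe', 'email': 'jane@example.com'}
```

The same idea carries over to PHP: the provider wraps the query result and the form reads its values by field name.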
<a name="chapter-2-2"></a>
2.2. Specifying select options
Select options can either be set directly in the form definition for a contained Select element or RadioGroup or be transferred via the data provider.
In both cases, an option list has to be an associative array with any number of
Pass it directly to the element:
Pass it to the data provider: the selection lists for any number of elements are passed in a nested array:
The best practice for specifying available select options depends on the particular case:
- If the options are fixed values (which may be language dependent), it is better to specify them in the form definition (in PHP or XML).
- If the options are flexible values from any data source, the definition via the data provider is probably the better choice.
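The two shapes described above can be sketched as plain data (illustrated here in Python; the element names `gender` and `country` are hypothetical, and the exact key/value orientation of the option list depends on the package):

```python
# Shape 1: a single option list for one element -
# an associative array mapping display text to value.
gender_options = {
    "please select...": "",
    "female": "f",
    "male": "m",
}

# Shape 2: option lists for several elements handed to the
# data provider as one nested associative array, keyed by
# the name of the element each list belongs to.
select_options = {
    "gender": gender_options,
    "country": {"Germany": "DE", "Austria": "AT"},
}

print(select_options["gender"]["female"])  # f
```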
A Lecture of the Centre for Research in Modern European Philosophy’s 20th Anniversary Public Lecture Series, in association with the London Graduate School.
Patrick Leman – Inside Children’s Peer Interactions
Children often regard adults as infallible sources of knowledge. However, in discussions with their peers children can be freer to question, discuss and explore the world for themselves. Psychologists have often regarded peer interaction as one of the fundamental building blocks of development. This lecture describes research examining how children interact with their peers, and how these interactions are affected by social identities such as gender and ethnicity. It will also consider why children can often learn better through peer interactions than from adult instruction across a range of topics from morality to understanding of science.
BAE Systems Moves Compass Call Electronic Warfare System to Modern Business Jet
By Associated Press
2018/07/09 21:02
NASHUA, N.H.--(BUSINESS WIRE)--Jul 9, 2018--BAE Systems has begun work to transition its advanced Compass Call electronic warfare (EW) system from aging EC-130H aircraft to a modern, more capable platform that will significantly improve mission effectiveness. This Cross Deck initiative, as it is commonly called, will enable the U.S. Air Force to continue disrupting enemy command and control capabilities in denied environments well into the future.
BAE Systems will transition its advanced Compass Call electronic warfare (EW) system to special-mission Gulfstream EC-37Bs to significantly improve mission effectiveness for the U.S. Air Force. (Photo: BAE Systems)
As the mission system integrator for the program, BAE Systems is working with L3 Technologies to transition the Compass Call capabilities onto an EC-37B aircraft, a special-mission Gulfstream G550 that meets Air Force requirements. This new platform will provide combatant commanders with improved stand-off jamming capability and flexibility to counter sophisticated communications and radar threats.
“The Compass Call mission electronics are world-class EW systems that are in high demand from operational commanders because of their electronic attack capabilities and their ability to protect critical missions,” said Pamela Potter, director of Electronic Attack Solutions at BAE Systems. “The cross-decking program enables the Air Force to maintain existing, unmatched EW mission capabilities in an economical business jet that can fly faster, higher, and farther than its predecessor, improving mission effectiveness and survivability.”
In 2017, BAE Systems and its partners completed the initial design review of the Compass Call weapon system, and the final design review is planned for this fall. Initial modifications of the first G550 are underway, with the first two aircraft to be fielded in 2023. A total of 10 new aircraft are planned.
BAE Systems will continue to sustain the electronics for the fleet of EC-130H Compass Call aircraft while it develops, procures, manufactures, and integrates electronics for the new fleet.
A new report based on feedback from residents with disabilities calls on the Austin Police Department to improve its training and practices regarding interactions with residents with mental health issues, hearing loss or other disabilities.
The report comes from the Office of Police Oversight, which reviews police activities and acts as an advocate for residents to file complaints about the department. In six findings and other follow-up recommendations, the report highlights the issues faced by residents with disabilities in recent years and pushes the department to improve engagement with these communities, including students of color with disabilities, who have had disproportionately poor experiences with the police.
The report’s findings come from a May community forum that included 42 community members working with city staff and officials. Forum participants examined why interactions with police officers tend to make residents with disabilities feel unsafe and explored ways in which APD can modify its practices to improve these interactions.
Among the findings was the view that current police practices are not accommodating to people with disabilities and that those with physical or psychological disabilities are perceived as dangerous, which may increase the risk of hostility from the police.
The report says the department needs to devote more training resources to improving communication and community engagement with residents with disabilities, to increase officers' understanding of the lived experiences of people with physical and mental health issues.
This awareness was also seen to have a strong impact on the outcome of an officer's interaction with a person with a disability: officers showing empathy and understanding had better outcomes than those who were disrespectful and reluctant to accommodate a disability.
Feedback from the session, which is to be held regularly in the future, also showed that consideration should be given to the impact of a person's disability, race and socioeconomic status on their quality of life and their impression of the police. This was especially true for disabled students of color, who may experience racism and other trauma at higher rates than able-bodied white students.
The final conclusion of the report was that training on mental health issues is a particular area of need for the department, with a high level of negative impacts for residents resulting from a lack of understanding and of support resources for those who suffer from poor mental health.
The report will soon be presented to city council and will serve as the basis for another public meeting to be held next year.
The Office of Police Oversight chose to schedule the town hall after making a presentation last summer to the Mayor's Committee for People with Disabilities. That presentation focused largely on findings from data on shootings involving police officers in recent years, with some attention paid to the role of mental health issues in these incidents.
Committee members responded by criticizing office staff for not having detailed data on police interactions with residents with disabilities, and urged staff to become more involved in advocating for people with disabilities.
"You are one of the many city departments that came to the Disability Committee without really touching on or highlighting disability issues. Without raising this issue and discussing it when it is the purpose of this committee, again, it appears that equity does not include people with disabilities," said committee member Deborah Trejo at the meeting last July. "You spoke exclusively about what I heard about mental health as an area of concern to you, but I didn't hear you expressing concern about people with disabilities."
The report for the local disabled community is the latest release from the Office of Police Oversight seeking to improve APD, following last week's announcement that it has asked the department to review more than 200 complaints linked to last summer's widespread protests against police violence in Austin.
Photo by Edward Kimmel from Takoma Park, MD, CC BY-SA 2.0, via Wikimedia Commons.
The Austin Monitor's work is made possible by donations from the community. While our reports cover donors from time to time, we make sure to separate commercial and editorial efforts while maintaining transparency. A full list of donors is available here, and our code of ethics is explained here.
This invention relates to the preparation of gels of certain polysaccharide materials. In particular, it relates to the preparation of gels suitable for a variety of uses from a biologically produced beta- 1, 3-glucan-type polysaccharide.
In U.S. Pat. No. 3,822,250, there is disclosed a method of preparing a beta-1,3-glucan-type polysaccharide material by cultivation of certain microorganisms. The microorganisms of interest in this connection are:
a. Agrobacterium radiobacter - ATCC-6466: This strain is available from the American Type Culture Collection under the accession number ATCC-6466;
b. Agrobacterium radiobacter - Strain U-19: This strain is a mutant derived from the parent strain ATCC-6466 by irradiation with ultraviolet rays in a conventional manner and has the unique property that it produces substantially no other polysaccharide. A subculture of this strain has been deposited with the Institute for Fermentation, Osaka, Japan, under the accession number "IFO-13126" and with the ATCC under accession number ATCC-21679.
c. Alcaligenes faecalis var. myogenes, Strain NTK-u: This strain is obtained by treating Alcaligenes faecalis var. myogenes, Strain K, with N-methyl-N'-nitro-N-nitrosoguanidine. This strain is available from the ATCC under accession number ATCC-21680. Inasmuch as these microorganisms are known entities, further description of them is not deemed necessary here. For a more detailed description, reference can be had to the aforesaid U.S. Pat. No. 3,822,250.
The polysaccharide prepared by cultivation of the specified microorganisms is, as stated above, of the beta-1,3-glucan type. Hereinafter, reference to beta-1,3-glucan or to the polysaccharide can be taken to mean such a compound prepared by the action of these microorganisms.
The polysaccharide is substantially insoluble in neutral water at temperatures below about 50° C. although it is swellable. In water at acid pH levels, it forms gels and at pH levels above about 10.5 it is soluble.
A highly interesting property of this polysaccharide is its capacity to form gels possessing excellent water-holding and flavor-binding abilities. The polysaccharide is also non-toxic and pharmacologically and nutritionally inert. Gels prepared therefrom can be taken into the human body safely, affording such gels a variety of applications in the food industry.
The above-referenced U.S. Pat. No. 3,822,250 discusses at great length the formation of gels from the polysaccharides contemplated by this invention and the utilization of such gels in foodstuffs. The technique taught in that reference for gelling the polysaccharide is by heating. The reference teaches that if the polysaccharide is heated to a temperature between about 50° and 200° C., a gel is formed very readily which has excellent gel strength and freeze-thaw stability, is thermally irreversible and retains such favorable properties over a wide pH range from about 1 to 11.5.
British patent No. 1,379,406 teaches the preparation of gels from beta-1,3-glucan by a procedure which involves dissolving the polysaccharide in basic aqueous medium and then removing the base by diffusion, e.g., dialyzing, or by neutralization with an acid. Gels can be prepared by this technique in the form of films, thin-walled tubes, filaments or globules. In the gelling process, the basic polysaccharide solution is brought into contact with the acid, whereupon neutralization and gelling take place substantially immediately.
Both of the techniques taught by the prior art are subject to certain objections. There are many instances when heating to effect gelling is an impractical nuisance which it is desirable to avoid if possible. The acid gelling technique is subject to the objection that, except for very thin configurations, it is not useful for forming continuous bodies of gel of any significant size.
It is an object of this invention to provide a technique for gelling the beta-1,3-glucan polysaccharide which overcomes some of the objections just cited. It is a further object to prepare gels having the same favorable combination of properties as those taught by the prior art as well as other properties which are improvements over those possessed by prior art gels.
In accordance with this invention it has now been found that a gel is formed when a solution of a beta-1,3-glucan polysaccharide in basic medium having a pH greater than about 10.5 is subjected to an atmosphere of a gaseous acid anhydride. Stated specifically, the invention is a method of preparing a gel of a beta-1,3-glucan polysaccharide which comprises preparing a solution of said beta-1,3-glucan in an aqueous medium having a pH of at least about 10.5 and subjecting said solution under quiescent conditions to an atmosphere of a gaseous acid anhydride under conditions of time and gas pressure sufficient to cause said anhydride to diffuse through said solution and effect gelling.
The most common gaseous acid anhydrides are carbon dioxide, the oxides of nitrogen such as NO₂ or N₂O₄ and the like, and the oxides of sulfur such as SO₂ and SO₃. Since the beta-1,3-glucans which are gelled by the process of the invention are extensively employed in foodstuffs, the preferred gas is carbon dioxide. Use of the other gases is limited to applications where the possible presence of the acids of nitrogen and sulfur is not objectionable.
The beta-1,3-glucan must be subjected to the atmosphere of gaseous acid anhydride under quiescent conditions, i.e., there must be no agitation of the polysaccharide while it is exposed to the gas. The gas must be caused or permitted to get into the solution via diffusion rather than by mixing it in. If the solution is not quiescent during the incorporation of the gas, gelling will take place, but the resultant product will not exhibit a firm, continuous gel structure and will have substantially no measurable gel strength. Rather, it will form a plurality of discrete unconnected gel particles, having the consistency of, e.g., apple sauce, rather than the desired firm gel structure.
The pressure employed in the treatment with gaseous acid anhydride can be varied depending upon the concentration of the beta-1,3-glucan in the solution to be gelled and upon the configuration sought in the finished gel. Lower concentrations of the beta-1,3-glucan in solution can be gelled in less time and with a lower pressure of the gas than can higher concentrations. Likewise, a thin body of the solution can be gelled more quickly and at a lower pressure than can a thicker body. In fact, a very thin film of the beta-1,3-glucan can gel simply from the effect of the CO₂ found in the atmosphere if it is exposed to the atmosphere for any significant time. Also, the less basic the solution, the easier it forms a gel, thus lowering the time and pressure requirements to effect gelling.
The efficacy of the gaseous acid anhydride is affected by the pressure of the gas, by the temperature of the gas and the solution, by the concentration of the polysaccharide and by the basicity of the solution. Generally, it is preferred, for the sake of convenience, to operate at the lowest temperature and pressure reasonably possible, i.e., at atmospheric pressure and room temperature. It is necessary to operate at a temperature level at which the solution does not boil, since the turbulence associated with boiling would not permit a continuous gel to form. Generally, the gas pressure on the system will be no greater than about 100 p.s.i.g.
Due to the ease with which the acid anhydride gas can diffuse into thin layers of the polysaccharide solution, the method of the invention is particularly adapted to the preparation of thin films. Such thin films can find use as packaging films, edible coatings and oxygen barrier films.
The solutions to be gelled can contain about 0.1 to 10% of the beta-1,3-glucan in solution. Preferably, they will contain 0.2 to 5% beta-1,3-glucan. Within these ranges sufficient polysaccharide is present to form a gel of whatever gel strength is required without forming a solution which is too viscous to permit diffusion of the gaseous acid anhydride to effect gelling. Gel strength, as with most gel processes, increases as the concentration of the polysaccharide is increased.
As stated, the invention proceeds from a solution of the beta-1,3-glucan in an alkaline medium. The beta-1,3-glucan is relatively insoluble in cold aqueous systems of less than about 10 pH. Thus, in order to form a gel, the pH is raised to about 10.5 or higher, at which point solution occurs quickly and substantially completely. The raising of the pH can be accomplished by any reagent capable of creating the appropriate degree of alkalinity such as, e.g., ammonium hydroxide, trisodium phosphate, sodium carbonate, sodium hydroxide, potassium hydroxide, lithium hydroxide, tripotassium phosphate, potassium carbonate or calcium hydroxide. For food applications, sodium or potassium phosphate are preferred materials for creating the required alkaline environment.
The invention is exemplified in the following examples. Parts and percentages are by weight unless otherwise specified. As stated hereinabove, reference to "polysaccharide" in these examples means the beta-1,3-glucan type polysaccharide produced by cultivation of the microorganisms specified at the beginning of these specifications.
EXAMPLE 1
A 5% solution of polysaccharide in 0.2% NaOH was cast on a Teflon substrate to form a film of 50 mil wet thickness. The film was then exposed at room temperature to CO₂ at a pressure of one atmosphere for about 15 minutes. At the end of this time, the film had gelled and was no longer fluid. Further drying in air at ambient temperature gave a hard film which, when stripped from the substrate, was 2 mils in thickness. It was of sparkling clarity and had good flexibility at relative humidities as low as about 30%.
EXAMPLE 2
The procedure of Example 1 was repeated with a solution of polysaccharide containing 20% glycerol, based on the polymer, as a plasticizer. The resulting film retained flexibility at relative humidities as low as 10-15%.
EXAMPLE 3
The procedure of Example 2 was repeated except that the plasticizer was propylene glycol in the amount of 10% based on the polymer. The resulting film retained good flexiblity at low relative humidities.
EXAMPLE 4
The procedure of Example 2 was repeated, except that the plasticizer was sorbitol at 5% concentration, based on the polymer. The resulting film retained good flexibility at low relative humidities.
EXAMPLE 5
A 5% solution of polysaccharide in 0.2% NaOH was prepared and had a pH of about 12. The solution was poured to a depth of several centimeters into a cylindrical container and then exposed to an atmosphere of CO₂ for several hours. The resulting gel was firm. In contrast to gels made in the conventional manner by heating neutral suspensions of polysaccharide, the gel of this example showed no syneresis and was of improved clarity. The pH, measured at the top of the gel, was 6.9 and at the bottom was 7.4, indicating substantially complete neutralization of the NaOH by CO₂.
EXAMPLE 6
A 1% solution of polysaccharide in 0.1% NaOH and a similar solution made up in 0.05% NaOH were poured to a depth of several centimeters into cylindrical containers. The clear solutions were then exposed to CO₂ at a pressure of one atmosphere and the rates of gel formation were observed by following the development of the turbid gel layer through the solution. Complete gelation of the solution in 0.1% NaOH required about 50% more time than gelation of the solution in 0.05% NaOH.
EXAMPLE 7
A gelatin-type dessert formulation containing FD&C Red No. 2 dye, raspberry flavor and sugar was placed in a can and sealed in such a manner that the contents were at a pressure of about 60 p.s.i.g. of CO₂. When the can was opened several hours later, the contents were removed as a molded gel of pleasing appearance and aroma, having excellent texture and mouth feel.
EXAMPLE 8
An air freshener formulation containing a green dye and pine oil as an odorant was made up in a 3% solution of polysaccharide in 0.2% NaOH. The formulation was poured into a rectangular plastic container and gelled by exposure to CO₂ at one atmosphere pressure. The container was then fitted with a perforated plastic plate to protect the surface of the gel, and finally with an impervious snap-on lid. The impervious lid was removed from the container whenever it was needed to serve as an air freshener, effectively masking cooking and tobacco odors. Over many hours of service, the gel gradually shrank in volume and when exhausted had assumed the state of a dark, horny, innocuous mass which could easily be disposed of.
EXAMPLES 9-12
The procedure of Examples 1-4 was followed, substituting SO₂ in place of CO₂. The resulting films had substantially the same properties as those described earlier.
EXAMPLE 13
A test panel of cold rolled steel was used as a substrate on which a 5% solution of polysaccharide in 0.2% NaOH was cast to a wet thickness of 50 mils. The solution was gelled by exposure to CO₂ and the panel was allowed to dry in air. The resulting protective coating was tough and adherent. On spraying with water for several minutes, the film swelled and softened, and could easily be removed with a stiff brush.
EXAMPLE 14
Pharmaceutical tablets were coated with polysaccharide in a regulation pan coater, in which the agitated mass of pellets was sprayed with a 5% solution of the polymer in dilute NaOH. Simultaneously with the spraying, the tablets were subjected to a stream of heated CO₂-enriched air to gel the polymer. The stream of heated air was continued for some time after the spraying to dry the polymer completely. The tablets had a hard, shiny polymer coating which served as an effective barrier against moisture and air.
A particular advantage of this invention is that clear strong gels, exhibiting little or no syneresis, can be formed in systems containing large amounts of water-soluble hydroxylated organic compounds such as glycols, polyglycols, sorbitol, mannitol or mono- or disaccharides. Ordinarily, if 0.5 g. of polysaccharide is suspended in a 65% aqueous sucrose solution and then heated to about 80°, in the manner commonly employed for making thermal gels, no gel results. Even with higher amounts of polysaccharide, if the amount of hydroxylated organic compound is in excess of about 25%, either no gel will be formed or the gel will be weak and exhibit excessive syneresis. However, in the present invention, if the polysaccharide is first dissolved in the alkaline aqueous phase, the sugar or polyol can be added in large amount without interfering with the subsequent gelation by means of a gaseous acid anhydride.
This feature is of particular importance in the formulation of desserts and confectionery items such as jellies, jams, gumdrops and the like. In such items, the sugar concentration is commonly 50-65% sugar or even higher. The term "sugar" is used generically and is meant to include carbohydrates or their derivatives commonly used as sweetening agents, e.g., monosaccharides, such as glucose, or corn syrup; disaccharides, such as cane or beet sugar; or mixtures of the two such as invert sugar, a mixture of glucose and fructose.
An example of such usage is demonstrated in Example 15.
EXAMPLE 15
A 1.25 g. portion of polysaccharide was dissolved in a solution of 1.25 g. of Na₃PO₄ in 85.5 ml. of water. To the solution was added 0.03 g. of fruit flavor, 0.01 g. of a certified food dye, and 162 g. of cane sugar. The mixture was then warmed gently with stirring until the sugar dissolved. The approximate composition of the resulting solution was 0.5% Na₃PO₄, 0.5% polysaccharide and 65% sugar, with the remainder being water, flavoring agent and food dye. This mixture was then placed in an atmosphere of CO₂ until it was completely gelled. The resulting clear firm jelly was of pleasing appearance, easily spreadable on bread and had a good taste and mouth feel.
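The approximate composition stated in Example 15 can be checked directly from the listed masses (assuming the 85.5 ml of water weighs 85.5 g):

```python
# Masses from Example 15, in grams.
ingredients = {
    "polysaccharide": 1.25,
    "Na3PO4": 1.25,
    "water": 85.5,        # 85.5 ml, assumed ~1 g/ml
    "fruit flavor": 0.03,
    "food dye": 0.01,
    "cane sugar": 162.0,
}

total = sum(ingredients.values())  # ~250 g

for name in ("polysaccharide", "Na3PO4", "cane sugar"):
    pct = 100 * ingredients[name] / total
    print(f"{name}: {pct:.1f}%")
# polysaccharide: 0.5%
# Na3PO4: 0.5%
# cane sugar: 64.8%  (~65%, as stated in the example)
```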
PROBLEM TO BE SOLVED: To provide a nonaqueous electrolyte secondary battery which is enhanced in battery performance in comparison to conventional ones.
SOLUTION: A nonaqueous electrolyte secondary battery comprises: a positive electrode; a negative electrode; and a nonaqueous electrolyte. The nonaqueous electrolyte contains a P-oxalate compound expressed by the general formula A⁺[PX₆₋₂ₙ(C₂O₄)ₙ]⁻. Supposing that the total amount of moisture included in the battery per one Ah of battery capacity is W1, and the amount of the P-oxalate compound in the nonaqueous electrolyte is W2, the following relation holds: 2.25 ≤ W2/W1, where 4.15 (mg/Ah) ≤ W1 and 20 (mg/Ah) ≤ W2.
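The claimed relation between moisture content and P-oxalate compound content can be expressed as a small predicate (a sketch only; W1 and W2 are in mg of material per Ah of battery capacity, as defined above):

```python
def satisfies_claim(w1: float, w2: float) -> bool:
    """Check the claimed window: W2/W1 >= 2.25, with W1 >= 4.15 mg/Ah and W2 >= 20 mg/Ah."""
    return w1 >= 4.15 and w2 >= 20.0 and w2 / w1 >= 2.25

print(satisfies_claim(4.15, 20.0))   # True  (20/4.15 ≈ 4.82 >= 2.25)
print(satisfies_claim(10.0, 20.0))   # False (20/10 = 2.0 < 2.25)
```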
COPYRIGHT: (C)2015,JPO&INPIT | |
We are “young at heart”
Our work is our play. – Creation is our gift. – Nature is our palette.
We approach the land with the same respect and kindness our clients enjoy. We are curious, sometimes funny and always very hard working.
Yes, we are a bit obsessed with plants, bugs and being good stewards of the land.
We like to collaborate with other professionals who share our vision and philosophy. If needed, we will invite other professionals to join us, as their unique knowledge and special expertise could be valuable for the situation.
Ana Hajduk, Owner
My life is a tapestry in its quality and range of experiences. I've been weaving together a life passionate about plants, wildlife and the environment. Please join me on my journey.
1978 – Graduated from the School of Architecture and Urban Design, University of La Plata, Argentina.
1979-1984 – Worked as an Architect in Argentina.
1985-1995 – My shift to gardening. Created and operated a dried flower farm for 17 years in Patagonia. I worked at several prestigious hotels designing flower beds & herb gardens. Also, as a contractor, I designed and executed special event floral decorations at the Llao Llao Hotel and Resort, Bariloche, Argentina.
1995 – The 5th Summit of American-Hispanic Presidents at the Llao Llao Hotel and Resort in Bariloche, Argentina. I was in charge of all the gardens and the floral decorations for this 5-day event.
1995 – Moved to USA. Worked at private residences and estates as a Master Gardener, for Landscape Architects and Designers.
2004 – Started Singing Brook Gardens, designing, installing and caring for gardens in Westchester and Dutchess Counties in NY, and the Litchfield Hills in CT.
Courses and Credentials :
- Ecological Landscape Design at the Cary Institute for Ecosystem Studies in Millbrook, NY (3-year course)
- Connecticut Accredited Nursery Professional
- NOFA Accredited Organic Land Care Professional
- Member of ELA (Ecological Landscape Alliance)
- Mad Gardeners member
The health of your family and of our planet is as important to us as it is to you.
Al Harees is a traditional UAE dish made with a combination of meat or chicken, wheat, rice and simple Arabic spices. It is a spectacular food invention that people love to eat daily in Ramadan and at other festivals, such as Eid, and at weddings in Arab countries. This simple and elegantly delectable meal never fails to satisfy your taste buds. It is simple to cook but requires several hours of cooking. Al Harees is also the most famous dish in Qatar, and variations of this Arabic dish are consumed in various other countries. You can also make variations in the recipe of Al Harees according to your desire and taste.
Here is a recipe for Al Harees with a super delicious, finger-licking taste. You can enjoy its simple salty flavor and the rich, amalgamated savors of wheat and meat. Have a look at the key ingredients of this Arabic cuisine and learn how to cook it.
Soak the whole wheat overnight or for at least 8 hours in plenty of water.
In a large pot, place the drained whole wheat, add 1¾ litres of water, and boil on high heat until the wheat begins to turn fluffy and soft.
Boil the chicken and rice separately in plenty of lightly salted water until tender.
Then take a large pot, place the boiled wheat, chicken and rice with a little salt and pepper and enough water to cover the mixture of wheat, rice, and chicken by about 5 cm.
Bring to a boil, then turn down the heat and cook for 3½ hours. Let it simmer until it forms a slightly thick mixture, stirring continuously so that all the ingredients combine well.
Once the wheat is too soft and has lost its shape, the water has been absorbed, then remove it from the heat and allow to cool it.
If all the water has been absorbed and more water is needed, add ¾ cup of boiling water, and crush the chicken if any larger pieces remain.
Then start blending the wheat and chicken until they form a homogeneous mass: slightly elastic, soft and thick. Use a large wooden spoon to mix them thoroughly. You can use an immersion blender, but it is better to mix with a wooden spoon to get the authentic taste of harees.
When properly cooked, the harees will have a thick, porridge-like consistency. Transfer it to a serving dish.
Fry the onions separately in a frying pan until brown. Pour the samen (clarified butter) over the harees, then top it with the browned onions, ground cinnamon, and roasted ground cumin seeds.
Use beef or mutton instead of chicken to make a variation of this dish.
Instead of samen or melted butter, you can also use ghee.
Serve Harees with plenty of cinnamon and sugar to change the taste of this traditional dish.
Garnish with coriander leaves, green chillies and serve it with roti or naan. It will definitely excite your taste buds! So, enjoy this super delicious and lip-smacking Harees.
Harees is a whipped wheat dish that is traditionally eaten during Ramadan.
A hand blender, on the other hand, is a very useful piece of kitchen equipment, used for blending liquids and semi-liquids, chopping, and mixing. | http://blog.papaorder.com/2016/02/22/do-you-love-al-harees-learn-how-to-cook-it/
A meta-analysis of use of Prostate Imaging Reporting and Data System Version 2 (PI-RADS V2) with multiparametric MR imaging for the detection of prostate cancer.
This meta-analysis was undertaken to review the diagnostic accuracy of PI-RADS V2 for prostate cancer (PCa) detection with multiparametric MR (mp-MR). A comprehensive literature search of electronic databases was performed by two observers independently. Inclusion criteria were original research using the PI-RADS V2 system in reporting prostate MRI. Methodological quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Data necessary to complete 2 × 2 contingency tables were obtained from the included studies. Thirteen studies (2,049 patients) were analysed. This is an initial meta-analysis of PI-RADS V2; the overall diagnostic accuracy in diagnosing PCa was as follows: pooled sensitivity, 0.85 (0.78-0.91); pooled specificity, 0.71 (0.60-0.80); pooled positive likelihood ratio (LR+), 2.92 (2.09-4.09); pooled negative likelihood ratio (LR-), 0.21 (0.14-0.31); and pooled diagnostic odds ratio (DOR), 14.08 (7.93-25.01). Positive predictive values ranged from 0.54 to 0.97 and negative predictive values ranged from 0.26 to 0.92. Currently available evidence indicates that PI-RADS V2 appears to have good diagnostic accuracy in patients with PCa lesions, with high sensitivity and moderate specificity. However, no recommendation regarding the best threshold can be provided because of heterogeneity. • PI-RADS V2 shows good diagnostic accuracy for PCa detection. • Initially pooled specificity of PI-RADS V2 remains moderate. • PCa detection is increased by experienced radiologists. • There is currently high heterogeneity in prostate diagnostics with MRI.
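The pooled ratios reported above can be cross-checked directly from the pooled sensitivity and specificity, since LR+ = sensitivity / (1 − specificity), LR− = (1 − sensitivity) / specificity, and DOR = LR+ / LR−. A short illustrative sketch (not part of the original analysis):

```python
# Cross-check of the pooled summary statistics reported in the abstract.
# LR+ = sens / (1 - spec); LR- = (1 - sens) / spec; DOR = LR+ / LR-.

def likelihood_ratios(sensitivity: float, specificity: float):
    """Return (LR+, LR-, DOR) computed from sensitivity and specificity."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    dor = lr_pos / lr_neg
    return lr_pos, lr_neg, dor

lr_pos, lr_neg, dor = likelihood_ratios(0.85, 0.71)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}, DOR = {dor:.2f}")
# LR+ comes out near 2.93 and LR- near 0.21, matching the pooled 2.92 and 0.21;
# the pooled DOR (14.08) differs slightly because each statistic is pooled
# separately across studies rather than derived from the pooled sens/spec.
```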
The purpose of the bill is to promote the attraction of direct investment in new industries through the mechanism of industrial parks, turning them into a driving force of economic and industrial development. To this end it introduces a number of investment incentives that correspond to current global trends and will increase the competitiveness of domestic industrial parks.
According to the law, an industrial (manufacturing) park is a territory, defined by the initiator of the creation of an industrial park in accordance with the urban planning documentation, equipped with the appropriate infrastructure, within which the participants of such a park can carry out economic activities in the processing industry, processing of industrial and/or domestic waste (except landfills), as well as scientific and technical activities, information and telecommunications activities on the terms defined by this law.
The bill also provides for full or partial compensation, from the budget, of the interest rate on loans taken out by park participants to develop and conduct business activities in industrial parks. It further provides non-repayable financing for the development of industrial parks, including the construction of adjacent infrastructure — highways, communication lines, and other utilities necessary for the operation of such parks. In addition, the bill provides for compensation, from the state and local budgets, of the costs of connecting to engineering and transport networks.
The version of the bill prepared for its second reading also provides that the cabinet shall allocate at least $74 mln in the 2022 state budget to implement the bill's provisions. | https://good-time-invest.com/blog/draft-law-on-incentives-for-industrial-parks/
Regional arts have the power to change lives. They make our regional communities better places to live, contribute to economic growth and give people a sense of belonging.
Established in 1994, Regional Arts WA is the State’s only multi-arts organisation with a purely regional focus.
We are an independent, membership-based, not-for-profit organisation, highly regarded within WA and nationally as an innovative, high-value leader in the arts and in regional and community development. We represent the regional arts sector: every artist, arts worker, arts and cultural organisation, and any organisation or group with regional programs in WA.
Regional Arts WA understands that each region has its own diverse communities, imperatives, cultural requirements and challenges. We honour these distinctive communities by coordinating a series of investment, advisory, partnership and presenting offerings which are flexible, innovative and relevant.
Our service delivery is diverse with a suite of programs including funding for arts projects large and small, development support for key regional arts organisations and artists, opportunities for the state peak organisations to develop regional programs, youth and First Nations’ specific projects, and an extensive professional performing arts touring program.
Recognised as a lead organisation of the arts, Regional Arts WA is the peak Western Australian body for regional arts and culture, and a member of Regional Arts Australia – a national network with deductible gift recipient (DGR) status.
Our Purpose
To celebrate and strengthen a powerful regional arts sector
Our Vision
Connected and creative regional communities making Western Australia a better place to live
Our Values
- Trusted: We are approachable, listen with respect and respond reliably and honestly. We strive for continuous improvement and accountability.
- Brave: We expect change and are a catalyst for positive adventure. We challenge disadvantage to make courageous, regions-first choices.
- Curious: We question everything we do for relevance, actively seeking diversity and innovation. We value unique experiences.
- Involved: Relationships are fundamental. We are open with information, consult where possible, pursue partnerships and collaborate when relevant.
Our 5 Year Strategic Plan (2019-2023)
Drawing on feedback collected from major surveys and key stakeholders, including representatives from the regional arts sector and communities, we have created our new strategic plan for the next 5 years, which will build upon the strong foundations of regional WA culture. The five key strategies in the plan include stimulating activity and investment, advancing regional arts value, coordinating networks and relationships, building skills and wellbeing, and championing diversity and inclusiveness. | https://www.regionalartswa.org.au/about/
This paper introduces a formal concept of ideology and ideological system. The formalization takes ideologies and ideological systems to be situated in agent societies. An ideological system is defined as a system of operations able to create, maintain, and extinguish the ideologies adopted by the social groups of agent societies. The concepts of group ideology, ideological contradiction, ideological dominance, and dominant ideology of an agent society, are defined. An ideology-based concept of social group is introduced. Relations between the proposed formal concept of ideology and the classical concepts of ideology elaborated in social sciences are examined. A computational notation is presented, to support the realization of ideological systems in computationally implemented agent societies. The adequacy of the approach for the formal modeling and analysis of ideological issues is illustrated through three case studies.
Keywords: Ideologies; Ideological systems; Agent societies; Computational notation for situated ideological systems; Formal modeling and analysis of ideological issues
Notes
Acknowledgments
The author thanks Helder Coelho, Agemir Bavaresco and the anonymous referees for their comments and suggestions. CAPES, FAPERGS and CNPq (Grant No. 310423/2014-7) contributed partial financial support. | https://link.springer.com/article/10.1007%2Fs10516-016-9293-3 |
Although the COVID-19 pandemic is still ongoing and the isolation measures implemented by governments around the world will continue for a few more months, many companies are trying to plan the return to normality. Google and Facebook recently decided that most employees will work from home until the end of 2020.
In late April, Sundar Pichai, Google's CEO, sent an internal memo indicating that, at best, company employees could return to the offices as of June 1. Now, the executive has decided to open the doors only to employees whose functions cannot be performed at home, according to the newspaper The Information. All others will continue telecommuting for at least another 7 months.
In the case of Facebook, the company is planning to open its office doors on July 6 to some employees. However, a Facebook spokeswoman clarified to the international press that all employees who can do their work remotely will be able to do so until the end of the year. The company led by Mark Zuckerberg is still deciding which employees will be able to return early.
It should be remembered that, as early as the beginning of March, Google announced that all US employees should work from home if the function they perform permits it. According to a spokesman for Google, the measure also applies to European employees.
Facebook was also one of the first North American tech companies to announce the adoption of telecommuting, after learning in early March that several Seattle office employees had contracted COVID-19. | https://dnetc.net/google-and-facebook-employees-will-work-from-home-by-the-end-of-the-year-due-to-covid-19/
Hachi Grated Ginger Paste, 42 g
This is a handy tube of oroshi shoga, or grated ginger paste. Highlight the delicate tastes and textures of Japan's lightest dishes with the zesty bite of this grated ginger topping. A tangy, flavoursome garnish that can be combined with spring onion and bonito flakes on top of noodles, hiyakko tofu, tempura and more, this shredded condiment adds refreshing crunch and sharpness to subtle flavours in your cooking.
Ingredients and Allergens:
Allergens are displayed in bold below. | https://shop.ichibalondon.com/products/hachi-grated-ginger-paste-42-g |
Invented by Johannes Gutenberg.
Viol (viola da gamba) and Cello (late 15th and 16th century, Italy)
Pocket watch (1510, Germany)
Invented by Peter Henlein.
Violin (Early 16th century, Italy)
Thermometer (1593-1714)
- 1593 : Invented by Galileo Galilei (Italy)
- 1714 : Mercury thermometer invented by Daniel Gabriel Fahrenheit (Poland/Netherlands)
Microscope (1595, Netherlands)
Invented by Zacharias Janssen.
Telescope
- late 11th century : astronomical lenses (Sweden)
- 13th century : experimental telescopes built by Francis Bacon (UK)
- 1595/1608 : refracting telescope (Netherlands)
- 1609 : improved by Galileo (Italy)
Newspaper (1605, Belgium/France/Germany)
The world’s first printed newspapers were the Relation aller fürnemmen und gedenckwürdigen Historien published in Strasbourg (Germany at the time, now France), and the Nieuwe Tijdingen, published the same year in Antwerp (part of the Spanish Netherlands at the time, now Belgium).
Calculator (1623-1954)
- 1623 : automatic calculator invented by Wilhelm Schickard (Germany)
- 1642 : adding machine invented by Blaise Pascal (France)
- 1954 : electronic calculator invented by IBM (USA)
Barometer (1643, Italy)
Invented by Evangelista Torricelli.
Daily newspaper (1645, Germany)
The Einkommende Zeitungen in Leipzig.
Pendulum clock (1657, Netherlands)
Invented by Christiaan Huygens.
Pressure cooker (1679, France)
Invented by Denis Papin.
Postage stamps (1680, England)
Adhesive stamp invented in 1840 in Britain
Clarinet (1690, Germany)
Invented by Johann Christoph Denner.
Steam engine (1698, UK)
Invented by Thomas Savery in 1698, and improved by James Watt in 1769.
Piano (early 1700’s, Italy)
Invented by Bartolomeo Cristofori in Florence.
Fire extinguisher (England, 1723)
Patented in England in 1723 by Ambrose Godfrey.
Magazine (England, 1731)
The Gentleman’s Magazine was the world’s first general-interest magazine.
Refrigerator (1748-1856, Scotland/USA)
- 1748 : first known method of artificial refrigeration was demonstrated by William Cullen (Scotland).
- 1805 : first refrigerator invented by Oliver Evans (USA).
- 1834 : first patent for a vapor-compression refrigeration system granted to Jacob Perkins (USA).
- 1842 : first system for refrigerating water to produce ice developed by John Gorrie (Scotland-USA).
- 1848 : first commercial vapor-compression refrigerator developed by Alexander Twining (USA). It was commercialised in 1856.
Hot air balloon (France, 1782-83)
Invented by the brothers Josef and Etienne Montgolfier.
Parachute (France/Germany/Russia)
- 1785 : first modern parachute invented by Jean Pierre Blanchard (France).
- 1890’s : Hermann Lattemann and his wife Käthe Paulus jump with bagged parachutes (Germany).
- 1911 : knapsack parachute invented by Gleb Kotelnikov (Russia).
Steam boat (1786, USA)
First built by John Fitch.
Engine (1791-1939)
- 1791 : Gas turbine patented by John Barber (England).
- 1826 : Reciprocating internal combustion engine patented by Samuel Morey (USA)
- 1867 : Petrol engine developed by Nikolaus Otto (Germany)
- 1892 : Diesel engine invented by Rudolph Diesel (Germany)
- 1924-57 : Rotary engine developed by Felix Wankel (Germany)
- 1936-39 : Jet engine developed simultaneously by Frank Whittle (England) and Hans von Ohain (Germany).
Submarine (1800, USA/France)
Invented by the American Robert Fulton, commissioned by Napoleon. First launched in France.
Ambulance service (early 1800’s, France)
Modern method of army surgery, field hospitals and the system of army ambulance corps invented by Dominique Jean Larrey, surgeon-in-chief of the Napoleonic armies.
Locomotive (1804, UK)
Invented by Richard Trevithick. First Steam Locomotive invented by George Stephenson in 1814.
Railway (1820, UK)
The idea of the railway dates back to Roman times, 2000 years ago, when horse-drawn vehicles were set on cut-stone tracks. In 1802, the first modern horse-drawn train appeared in England, and the first steam powered train was however launched in 1820, also in England.
Comic strips (1820’s, Switzerland)
Swiss Rodolphe Toepffer was probably the first modern cartoonist.
Photography (1825-1861)
- 1825 : First photograph (France)
- 1840 : Silver photo (France)
- 1840 : Negative (UK)
- 1861 : Colour photography invented by James Clerk Maxwell (Scotland)
Gas stove/cooker (1826, England)
First patented and manufactured by James Sharp.
Lawn mower (1827, England)
Invented by Edwin Beard Budding.
Tramway (1828-1880)
- 1828 : first horse-drawn carriage on rail in Baltimore (USA).
- 1868 : first cable-car in New York (USA).
- 1873 : first steam-powered tram.
- 1880 : first electric tram in St. Petersburg (Russia) and the next year in Berlin (Germany).
Light bulb (1835, UK/Germany)
- 1835 : first Incandescent light bulb invented by James Bowman Lindsay (UK).
- 1854 : first practical light bulb invented by Heinrich Goebel (Germany).
Saxophone (1840’s, Belgium)
Invented by Adolphe Sax.
Telegraph (1844, USA)
Invented by Samuel Morse.
Telephone (1849, Italy)
The invention of the telephone has long been credited to the Scot Alexander Graham Bell in 1876. However, the Italian Antonio Meucci is now recognised to have invented the device as early as 1849.
Dishwasher (1850-1886, USA)
Steam-powered airship (1852, France)
Invented by Henri Giffard.
Helicopter (1861, France)
The earliest flying toys resembling the principle of a helicopter first appeared in China around 400 BCE. More advanced models were developed in Russia and France in the second half of the 18th century. The first small steam-powered model was invented by Gustave de Ponton d’Amécourt, who also coined the word “helicopter”. New models were developed mainly in France, notably by Emmanuel Dieuaide in 1877, Dandrieux in 1878, Jacques and Louis Breguet in 1906, or Paul Cornu in 1907. The first turbine-powered helicopter in the world was not built until 1951, in the USA.
Metro/Subway (1863, Britain)
The London Underground was the first rapid transit network in the world.
Vacuum cleaner (1865, USA)
Dynamite (1866, Sweden)
Invented by Alfred Nobel.
Wrist watch (1868, Switzerland => Patek Philippe & Co.)
Radio (1874-96)
- 1874 : Radio waves identified by by James Clerk Maxwell (Scotland).
- 1875 : Thomas Edison patents electrostatic coupling system (USA).
- 1895 : first radio receiver developed by Alexander Stepanovich Popov (Russia).
- 1895 : first successful radio transmission acheived by Guglielmo Marconi (Italy/UK). Commercial radio patented the next year.
Loudspeaker (1876, Scotland)
Invented by Alexander Graham Bell.
Phonograph (1877, USA)
Invented by Thomas Alva Edison, although based on France-born Leon Scott’s 1857 phonautograph.
Microphone (1877, Germany)
Invented by Emil Berliner.
Cash register (1879, USA)
Invented by James Ritty.
Television (1884-1927)
- First TV => 1884, Germany
- TV tube => 1907, Russia
- Electronic TV & Broadcast => 1927, USA
Motorcycle (1885, Germany)
First designed and built by Gottlieb Daimler and Wilhelm Maybach
Car/Automobile (1886, Germany)
Developed independently and simultaneously by Carl Benz in Mannheim, and Gottlieb Daimler and Wilhelm Maybach in Stuttgart.
Zipper (1891, USA)
Invented by Whitcomb L. Judson.
Animation (1892, France)
First animated film created by Emile Reynaud.
Tractor (1892, USA)
- first practical gasoline-powered tractor built by John Froelich in 1892.
- first practical caterpillar tracks for use in tractors developed by Benjamin Holt in 1904.
Cinema (1894, France)
Cinematograph invented by the Lumiere brothers.
Electric stove/cooker (1896, USA)
First patented by William S. Hadaway.
Remote control (1898, Austria-Hungary)
First demonstrated in 1898 by Nikola Tesla.
Air Conditioner (1902, USA)
Invented by Willis Carrier.
Traffic lights (1914, USA)
Parking meter (1935, USA)
Helicopter (1939, Russia)
Developed by Igor Sikorsky.
Microwave oven (1947, USA)
Invented by Percy Spencer.
Atomic clock (1949, USA)
Charge/credit card (1950, USA => Diner’s Club)
Video Games (1951-58, USA/UK)
Invention disputed between 3 people, 2 Americans and a Briton.
Laserdisk (1958, USA; commercialised by MCA and Philips in 1972)
Photocopier (1959, USA => Xerox)
Soft contact lenses (1961, Czechoslovakia)
Invented by Otto Wichterle.
Cassette tape (1967, Netherlands => Philips)
LCD screen (1968, Germany)
Quartz watch (1969, Japan => Seiko)
Video tape (1972, Netherlands – Philips, later replaced by JVC’s VHS)
Walkman (1977, Germany => commercialised by the Japanese Sony from 1979)
Compact Disk (1982, Netherlands/Germany – Philips)
CD-ROM (1985, Netherlands/Japan => Philips/Sony)
Minidisk (1991, Japan => Sony). | https://barb-nowak.com/2017/06/white-race-pioneers-inventors/ |
Heather Wiggins and Trey Robertson are federal employees who handle all things copyright. They will discuss what copyright is and how it relates to electronic/online resources. More information on these speakers coming soon!
Christine Platt is a passionate advocate for social justice and policy reform. She holds a B.A. in Africana Studies from the University of South Florida, M.A. in African and African American Studies from The Ohio State University, and J.D. from Stetson University College of Law. A believer in the power of storytelling as a tool for social change, Christine’s literature centers on teaching race, equity, diversity and inclusion to people of all ages. She currently serves as the Managing Director of the Antiracist Research & Policy Center at American University.
Christine’s debut novel, The Truth About Awiti, won the 2016 Independent Publisher Book Awards Gold Medal for Multicultural Fiction and is currently used in high schools, colleges and universities to teach the history of the transatlantic slave trade and its lasting implications. In 2018, Christine decided to teach history to a new audience–children. She has since written over two dozen children’s books including the beloved Ana & Andrew series, which teaches young readers about African-American history and culture. She is also the author of several biographies for advanced readers and her first middle-grade work, Trailblazers: Martin Luther King Jr., was published by Penguin Random House in December 2020. Additionally, Christine recently signed with Candlewick Press/Walker Book on an original series entitled Frankie at Five which will introduce advanced/early middle-grade readers to the importance of journalism. Her next long anticipated adult work, The Afrominimalist’s Guide to Living with Less, will be published on June 15, 2021 by Simon & Schuster/Tiller Press.
Christine is a member of the Association of Black Women Historians, the Association for the Study of African American Life and History, and serves as an Ambassador for Smithsonian’s National Museum of African American History and Culture. She is also a member of the Society of Children’s Book Writers and Illustrators. Christine regularly partners with organizations on educational initiatives including Teaching for Change, Turning the Page, An Open Book Foundation, First Book, Eaton Workshop, PEN/Faulkner Foundation, and Writers and Artists Across the Country. Christine currently serves on the Board of Directors for Lee Montessori Public Charter School in Washington, DC. | https://nahsl.libguides.com/c.php?g=774714&p=8138223
I have been teaching music on rotary – with my own music room – for 12 years. Unfortunately, just this week I was told that not only does my room need to be used to spread out classes better (😭), I will have to teach music by going room to room with a cart. (😭) With NO INSTRUMENTS (😰😭😭)
Well… not exactly no instruments. It turns out it is not the instruments themselves that are the issue – it is the sharing of instruments between students/classrooms.
Once I was (mostly) finished feeling sorry for myself, I started to get creative. If you, like me, are forced into changing the way you teach music due to COVID, here are some tips.
- Plan unit-based instrument use. If you have enough instruments for one class to use (whether individually or shared, depending on your restrictions) then assign them to a single classroom for a set period of time (6 weeks, 2 months). Use them only within that classroom until the unit is over, sanitize them and/or let them sit for a week, and then assign them to the next classroom.
- Get a LOT of mallets. If students have their own mallets, then they can share percussion instruments without risking shared germs. I’ve discovered that it’s actually pretty easy to make your own mallets using dowel, floral tape, and yarn.
- DIY some instruments. You probably can’t afford to make properly tuned copper xylophones – but there are tons of other ways to make instruments – and I’m NOT talking about rice in an easter egg or jingle bells on a pipe cleaner.
- Plastic cups. Yes I know you don’t want to hear the cup song again, but cups ARE a fun, engaging, and super-cheap way to teach rhythms.
- Teach them to conduct. Conducting covers many curriculum expectations – including many of those that you would typically play an instrument to demonstrate – without any instrument. Plus, it’s actually really fun!
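The unit-based rotation in tip 1 is easy to plan out programmatically too. Here’s a small sketch (the room names, block length, and sanitizing gap week are made-up examples, not from this post) that assigns the shared instrument set to one classroom at a time:

```python
from datetime import date, timedelta

def rotation_schedule(classrooms, start, unit_weeks=6, gap_weeks=1):
    """Assign the shared instrument set to one classroom per unit,
    leaving a gap week for sanitizing between assignments."""
    schedule = []
    week_start = start
    for room in classrooms:
        week_end = week_start + timedelta(weeks=unit_weeks) - timedelta(days=1)
        schedule.append((room, week_start, week_end))
        # Next room starts after the unit ends plus the sanitizing gap.
        week_start = week_end + timedelta(days=1) + timedelta(weeks=gap_weeks)
    return schedule

for room, begin, end in rotation_schedule(["Room 101", "Room 102", "Room 103"],
                                          date(2020, 9, 14)):
    print(f"{room}: {begin} to {end}")
```

Each classroom gets a full six-week block to itself, and the instruments sit untouched for a week between rooms.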
I hope these ideas will inspire you to keep some instruments in your classroom – and music in your music class! – even in this time of social distancing and contact tracing. | https://stageworthybywidy.com/2020/09/11/strategies-for-teaching-music-on-rotary-in-socially-distanced-classrooms/
Appropriate use of self-talk and cognitive strategy has been shown to improve individual performance on a variety of tasks by increasing confidence, focus, and awareness. Likewise, depending on its use, self-talk and cognitive strategy have also been shown to be detrimental to performance. Little research has been undertaken to explore the relationship between self-talk and cognitive strategy and the factors that precipitate changes between the two. To fill this gap, this study examined long distance runners to determine the factors that cause change in self-talk and cognitive strategy as well as the relationship between the two if change occurs. Additionally, this study examined the self-talk preferences athletes make throughout the course of a competition. This study determined that multiple factors influence self-talk and cognitive strategy, such as athlete fatigue, performance-to-goal discrepancies, spectators and coaches, and other competitors. The only possible cause and effect relationship between self-talk and cognitive strategy was between motivational self-talk and both associative and dissociative cognitive strategies in the latter portion of a competition. It was determined that runners preferred motivational self-talk over the majority of the competition except in the middle, when instructional self-talk was utilized the most. By identifying factors that cause self-talk change, individuals can attempt to mitigate those factors in order to maintain the preferred type of self-talk that is most beneficial.
Advisor: Hazel, Michael
Committee:
School: Gonzaga University
Department: Communication and Leadership
School Location: United States -- Washington
Source: MAI 47/05M, Masters Abstracts International
Source Type: DISSERTATION
Subjects: Communication, Cognitive psychology
Keywords: Communication, Intrapersonal, Self-talk
Publication Number: 1463760
ISBN: 9781109109771
| https://pqdtopen.proquest.com/doc/304353825.html?FMT=ABS
FAQ: What typical duty cycles are achieved when arc welding?
The term duty cycle is used to describe the amount of time spent depositing weld metal (the arcing period) as a percentage of the total time taken to complete a weld. In the USA, the duty cycle is called the Operator Factor.
With the MMA process, frequent interruptions are required to allow for slag removal, inter-run dressing and changing the electrode. Consequently, the duty cycle can be quite low.
At the other extreme, a high duty cycle is possible from a programmed robot because it may be able to weld continuously for long periods with only short interruptions to allow for the work-piece to be manipulated.
While the duty cycle for each welding process will vary according to factors such as type of work, access to joints and the working practices of a particular organisation, it is possible to allocate some typical values such as those shown in the table below.
It is important to note that the term duty cycle is also used to rate welding power sources, referring to the maximum welding current at which they can be operated for a given proportion of time. | https://www.twi-global.com/technical-knowledge/faqs/faq-what-typical-duty-cycles-are-achieved-when-arc-welding
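As an illustration of the definition above (the figures are invented for the example, not typical values from TWI): a welder who spends 12 minutes actually depositing weld metal during a 40-minute job achieves a duty cycle, or operator factor, of 30%.

```python
def duty_cycle(arcing_minutes: float, total_minutes: float) -> float:
    """Duty cycle (operator factor): arcing time as a percentage of total time."""
    return 100.0 * arcing_minutes / total_minutes

print(duty_cycle(12, 40))  # 30.0
```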
There are a number of causes of hypertension, but in most cases lifestyle choices are the main risk factors. Sometimes hypertension can be attributed to a lack of proper care of the body; a number of factors increase your risk, such as a diet poor in healthy nutrients.
The main obstacle when it comes to living with hypertension is that there is no optimal medical treatment that can be used to improve the condition. Once high blood pressure has begun affecting the body organs and arteries the only way to treat the condition is to rejuvenate the body.
Hypertension in its early stages is quite hard to note since there are hardly any symptoms that are manifested. The condition is usually notable when it deteriorates. In its severe stages it affects the body organs such as the heart, kidneys and possibly the brain.
There are a number of mild symptoms that can be noted, although these can be due to a number of other illnesses. Some of the symptoms that are associated with the condition include nosebleeds, headaches and dizziness. The symptoms may manifest themselves in the early and late stages.
When one is trying to rejuvenate the body and treat the organs that have been damaged by hypertension, the most effective and preferred way is to plan meals and stick to a diet that encourages rejuvenation of the body.
Add some Omega 3 to your diet by regularly eating fish or through a good Omega 3 supplement.
Include more fruits and vegetables into your diet. Eat healthy foods such as fish, whole grains and nuts and seeds and avoid junk food, fried foods, red meat and sugary foods. Once you have visited the doctor and gotten your pressures checked then medical and natural treatment options that could help you with the management of the problem can be prescribed.
Hypertension can occur in three stages during which different methods for dealing with the condition have to be used. The first stage of the condition is pre-hypertension. It involves drifting from the normal blood pressure that is usually around 120/80 mmHg.
In pre-hypertension the systolic pressure ranges between 120 and 139 while the diastolic pressure, the lower value in the readings, is between 80 and 89. Stage one of the condition is when the blood pressure is between 140/90 and 159/99 mmHg, while stage two is characterized by blood pressure above 160/100 mmHg. The sooner you start a treatment, the better.
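The three stages described above amount to a simple classification rule: a reading falls into the highest stage reached by either the systolic or the diastolic value. A sketch of that rule, using the thresholds from the paragraph above (an illustration only, not medical advice):

```python
def classify_bp(systolic: int, diastolic: int) -> str:
    """Classify a blood pressure reading (mmHg) per the stages in the text."""
    if systolic >= 160 or diastolic >= 100:
        return "stage 2 hypertension"
    if systolic >= 140 or diastolic >= 90:
        return "stage 1 hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "pre-hypertension"
    return "normal"

print(classify_bp(118, 75))   # normal
print(classify_bp(130, 85))   # pre-hypertension
print(classify_bp(150, 95))   # stage 1 hypertension
print(classify_bp(170, 105))  # stage 2 hypertension
```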
~*~*~*~*~*~
Warren Tattersall has been a full time nutritional consultant for over a decade and works with people all over the world to help them improve their health, increase their personal energy levels and to use supplements to assist with diet related health issues.
| https://www.thehealthsuccesssite.com/what-you-should-know-about-hypertension.html
Geochronological, geophysical, and experimental data provide the framework of new conceptual models that describe magma transport and emplacement throughout the crust. With numerical simulations of magma emplacement and heat transfer, those conceptual models can be put to the test. The integration of field data, experiments, and simulations suggests that the transfer and emplacement of magmas within the crust is discontinuous. Much like the cycles of activity that characterize volcanic eruptions, intrusive activity appears to be cyclic. Heat transfer computation shows that only during periods of the highest magma fluxes can magmas accumulate in the upper crust and form vast shallow magma chambers that are able to feed the largest eruptions. Because of higher temperatures, the deeper levels of the crust are more favorable to the accumulation of melts and are believed to be where most differentiation occurs.
Determining magma accumulation and cooling rates has shaped our understanding of continental crust differentiation, but it is not of academic interest only; it also has societal implications, as it affects our models of ore deposit formation as well as our interpretation of the possible precursors of volcanic eruptions.
The Michigan Managed Pollinator Protection Plan is a part of a federal effort to reduce pesticide exposure for managed pollinators. In 2012, the United States Department of Agriculture issued its Report on the National Stakeholders Conference on Honey Bee Health in response to unsustainable levels of honey bee colony losses. In 2013, the United States Environmental Protection Agency (EPA) developed a harmonized risk assessment framework for quantifying the risk of pesticide exposure to bees and made it a part of its pesticide registration and registration review processes. Also in 2013, EPA developed new label language for certain neonicotinoid pesticides that were identified as particularly hazardous to managed bees. In 2014, a Presidential Memorandum established a Pollinator Health Task Force. In 2015, this task force developed the “National Strategy to Promote the Health of Honey Bees and Other Pollinators.” The EPA had two main roles in the strategy, one regulatory and one non-regulatory. As a regulatory measure, the EPA has added label restrictions to pesticide products carrying one or more of 71 active ingredients that are known to be acutely toxic to pollinators. As a non-regulatory measure, the EPA is working with state and tribal agencies to develop and implement local Pollinator Protection Plans for situations that are not covered by the label restrictions. The State FIFRA Issues Research and Evaluation Group (SFIREG), with input from the EPA, developed a guidance document to aid in the development and implementation of these state plans.
Development of the Michigan Plan
In February 2016, Michigan began developing its Managed Pollinator Protection Plan (MP3) by bringing together a diverse group of commodity partners and stakeholders. From that meeting, a steering committee was developed that had members from MDARD (Jeffrey Zimmer, Mike Hansen); MSU (Meghan Milbrath, Rufus Isaacs, Walter Pett); Michigan Farm Bureau (Kevin Robson); and the Michigan Commercial Beekeepers’ Association (Jamie Ostrowski). This committee met multiple times over the following 18 months. In August-October 2016, the steering committee hosted seven regional listening sessions across the state for beekeepers, beekeeping organizations, growers, private and commercial pesticide applicators, pesticide registrants, Michigan State University (including Extension), United States Department of Agriculture, and others. Attendees at these meetings heard about and discussed pesticides and risks to managed pollinators, communication between beekeepers and applicators, pollinator habitat, education, regulation and management, recommendations to include in the MP3, and priority areas for research. Listening session attendees were asked to provide their input on development of Michigan’s MP3 strategy. In addition to the seven listening sessions, targeted regional meetings, online communication/social media, and newsletters provided a variety of opportunities for stakeholders to provide input.
MDARD provided funding for MSU to hire a writer and organizer to host the listening sessions and to drive the writing of our plan. In November 2017, the steering committee released Michigan’s plan: Communication Strategies for Reducing Pesticide Risk for Managed Pollinators in Michigan.
More information and a copy of the plan are available at the MSU website below:
https://pollinators.msu.edu/programs/protection-plan/
Communication Strategies for Reducing Pesticide Risk for Managed Pollinators in Michigan establishes a framework for open communication and coordination between pesticide applicators and beekeepers who have colonies in the areas that could be impacted by applications and supports the need for crop protection and best management practices. The key goals of the plan are to:
- Mitigate potential exposure of honey bees and other managed pollinators to pesticides.
- Foster positive relationships between beekeepers, growers, and applicators.
- Allow for crop and honey production.
- Refine public understanding of pollinator health issues, factors affecting pollinators, and means of mitigating negative outcomes on pollinator populations.
- Clarify pathways to minimize risk to pollinators that citizens, businesses, agencies, and Michigan residents can follow.
The plan also lists 10 action items that are the next steps for implementation of the plan.
In 2019 and 2020, Jeffrey Zimmer at MDARD has secured partial funding for an employee at MSU’s Michigan Pollinator Initiative (MPI) to take the lead on ensuring that we are working towards completing the 10 action items.
Partnerships and Outreach
MPI has extended its capacity by partnering with a variety of statewide organizations and national initiatives. These include collaborations with organizations such as the Michigan Agribusiness Association (MABA), various conservation districts, the Michigan Department of Natural Resources, United States Fish and Wildlife Service, the Bee and Butterfly Habitat fund, Project Wingspan, and others. The role of MPI is to provide knowledge support to programs, share information through Michigan State University Extension outreach channels, and provide a comprehensive website that lists up-to-date statewide resources. Through its website, www.pollinators.msu.edu, MPI provides information about pollinator health, planting for pollinators, and how to reduce pesticide use. Since the beginning of this year, MPI’s page about minimizing pesticide exposure to pollinators has received over 600 views and MPI’s page about pollinator gardens has received over 5,000 views.
Communication with Beekeepers
MPI has strengthened ties with Michigan beekeepers. Information on the Managed Pollinator Protection Plan has been presented at the annual Michigan Beekeepers’ Association meeting, the Michigan Commercial Beekeepers Association annual meeting, and in the newsletter (over 4,000 subscribers for Michigan Beekeepers). MPI works closely with beekeepers who manage colonies for pollination to understand their concerns and experiences with pesticide risk.
Current and Upcoming Projects in 2020
- Develop crop-specific pollinator stewardship guides. The blueberry pollinator stewardship guide was presented to blueberry growers on Friday, April 3rd, and the development of plans for apples, cherries, and squash are in progress.
- Lead a national working group of people working on state managed pollinator protection plans. This work is supported by the USDA National Institute of Food and Agriculture, Crop Protection and Pest Management Program through the North Central IPM Center (2018-70006-28883). The group’s first project is to develop a presentation about pollinators that anyone can use for pesticide applicator recertification credit clinics.
- Transition pollinator content in the Michigan Private and Commercial Applicator Core Manual from an appendix to a chapter.
- Update action items based on current work and solicit feedback from stakeholders.
New issues for 2020
Some new issues have already been identified, including the need for a communication strategy between the Department of Health and Human Services and beekeepers in the event of pesticide spraying during public health emergencies. The recent emergency spraying highlighted the need for communication between state agencies and the beekeeping industry. We also had a honey bee Extension specialist retire, and have reduced capacity to carry on our work with beekeepers.
Current status of the action items outlined in the Michigan Managed Pollinator Protection Plan
Action Item #1: Incorporate pollinator protection language in state pesticide certification study manuals and certification exams. Because these exams are required for all initially certified pesticide applicators, this would help ensure that each applicator has at least a minimum of knowledge regarding pesticide risk to pollinators.
Update: In Spring 2019, MPI wrote an appendix for the Michigan Private and Commercial Applicator Core Manual so that people preparing for pesticide applicator certification can learn about pollinators, pollinator health, and ways to reduce pesticide exposure. MPI shared the content with the national Managed Pollinator Protection Working Group and the Apiary Inspectors of America so that other states can use the content (with proper acknowledgment). This work was led by Ana Heck, working under the funding from MDARD.
Action Item #2: Incorporate pollinator protection education into training programs offered to pesticide applicators.
Update: MSU pollinator specialists provided over 10 training programs at pesticide recertification credit clinics throughout the state in 2019, reaching hundreds of pesticide applicators. They worked with the Michigan Nursery and Landscape Association (MNLA) and the program coordinator for pesticide recertification to provide pollinator related education to hundreds of certified pesticide applicators seeking recertification. Ana Heck will give virtual presentations about pollinators to pesticide applicators in the fall and winter of 2020. This work has been funded by MNLA and MSU Extension.
Action Item #3: Incorporate information related to pesticide toxicity, pollinator protection, and pollinator habitat into crop production manuals and industry training activities.
Update: Many Michigan farms already provide large areas for pollinator forage and habitat, and MPI recognizes growers as key allies to promote pollinator health. MPI is working collaboratively to develop crop-specific pollinator stewardship guides for reducing pesticide exposure to pollinators. MPI works with MSU Extension experts, growers, and crop industry groups to develop best management practices that are practical and feasible for growers to implement. Since registered pesticides, pest pressure, and farming technology can change from year to year, MPI will create dynamic recommendations that are updated based on pest management challenges and pesticide risk to pollinators. In 2019, MPI spoke to over 300 growers about pollinator health and the Michigan Managed Pollinator Protection Plan. Ana Heck spoke to fruit and vegetable growers at the Southwest Michigan Horticultural Days in February 2020, and she presented pollinator stewardship guide recommendations for blueberry growers at the Michigan Blueberry Kick Off Meeting in April 2020. MPI will post pollinator stewardship guides on its website. Ultimately, our goal is to foster helpful Extension relationships with growers and understand their pest management constraints.
Action Item #4: Develop presentations and webinars on pesticides and pollinators that can be applied toward applicator credits.
Update: MPI is leading a national working group to develop a presentation about pollinators that can be given online and shared with anyone to use at a pesticide credit recertification credit clinic. The presentation will be used in Michigan and throughout the U.S.
Action Item #5: Create outreach material and newsletters to be distributed through social media to educate on proper use of pesticides and management options.
Update: No progress to report.
Action Item #6: Collaborate with Master Gardeners for pesticide use trainings.
Update: No progress to report.
Action Item #7: Develop a certification program for pollinator educators.
Update: Using funding from the MSU Agriculture and Agribusiness Institute, MPI developed Pollinator Champions, a free online course designed to educate the public about pollinators in Michigan and what they can do to help. After course completion, individuals can pay to become Certified Pollinator Champions, gaining access to presentation materials so they can give talks about pollinators at garden clubs, libraries, and schools. As of September 2020, the free course has had over 2,100 participants, 325 of whom became Certified Pollinator Champions. Pollinator Champions is the fastest growing online course in terms of enrollment within MSU Extension and has a high completion rate. The Pollinator Champions course received MSU Extension’s 2019 Innovative Technology Award. The course can be found here: https://pollinators.msu.edu/programs/pollinator-champions/.
Action Item #8: Increase usage of educational materials on MP3 related websites.
Update: No progress to report.
Action Item #9: Work on outreach through the Michigan Farm News, Fruit Grower News, and Vegetable Grower News by developing articles that speak to this topic and, at the end of each article, giving resources to contact, e.g., trainers, MDARD reps, etc.
Update: An article was submitted to Fruit Quarterly Update in April 2020.
Action Item #10: Develop a trifold brochure on Pesticide Risk to Bees to be positioned at areas where crop protection materials are purchased.
You know the feeling: it’s after lunch and you suddenly want a piece of chocolate. You can’t stop thinking about the chocolate until you eat it. So, why does this happen? This article examines what your food cravings really mean and explains 3 common cravings. Let’s dig in!
3 Causes of Cravings
Cravings are complex. They may be caused by a combination of environmental, social, emotional, and habitual cues. To decode your cravings, consider the following potential causes.
1- What you restrict, persists.
The number one cause of cravings that I observe among my clients is food restriction. This can be explained by the “Forbidden Fruit Effect”: anything which seems to be unavailable is, as a result, more desirable (1).
Think about this in your own life. When you tell yourself that you can’t have chocolate or keep it in the house, what happens when you’re eventually exposed to it? Chances are that you’re more likely to binge eat the chocolate.
Another classic example is among people who do Whole 30. The Whole 30 diet heavily restricts carbohydrate intake. I’ve heard numerous accounts of people craving bagels, pasta, bread (all carb foods) after the challenge is over. What you restrict, persists.
Restriction does not solve your food craving — it fuels it.
2- Sleep and stress cycles.
The body is more likely to produce the stress-response hormone, cortisol, in response to chronic stress and sleep deprivation. Cortisol is correlated with increased appetite, specifically for high-fat and sugary foods (2).
In other words, you’re not crazy for craving a donut the morning after getting little to no sleep. It’s your body’s physiological response to high cortisol levels.
Regarding sleep, the recommendation is for adults to get 6-9 hours of sleep per night.
3 – Inadequate nutrient intake.
Adequate food intake is key to meeting nutrient needs. This means eating a variety of foods from the various food groups: fruits, vegetables, grains, protein, and dairy (or dairy alternatives).
When nutrient needs are met, you should feel energized and satisfied. When nutrient needs go unmet, you might feel sluggish and it’s not uncommon to experience cravings in response.
Some of my clients tell me that they find themselves grazing after a meal. To this, I ask: what did you have for your meal? Did your meal include a combination of carbs, protein, and fat? Did it include a variety of textures and flavors that you actually wanted? Your cravings could be telling you that you didn’t get enough to eat or your meal was lacking variety.
Common Cravings Explained
When You Crave Chocolate.
The chocolate craving is most common among women, and it may have to do with your menstrual cycle. Chocolate is high in magnesium, which is a mineral lost during menstruation. Other magnesium-rich foods include avocados, nuts, legumes, and whole grains.
Chocolate may also help balance low levels of neurotransmitters including serotonin (the happy chemical) and dopamine (the feel-good hormone), which are involved in the regulation of mood (3).
Between its nutrient profile and sensory characteristics, you see why chocolate cravings are real.
When You Crave Crunchy Foods.
Craving crunchy food could be related to emotions. There’s something satisfying about cracking food in your mouth when you feel frustrated. Right? The act of chewing and crunching can release anger.
One of the emotional eating lessons that I teach my clients is to come up with multiple ways of coping with emotions. In addition to snacking on crunchy food for anger, you could: 1) take a boxing class 2) get some exercise 3) enjoy a comedy to release some humor.
When You Crave Salty Foods.
What if you’re craving salty foods like chips, popcorn, or pickles? This could indicate an electrolyte imbalance, which might happen after a workout. Additionally, cravings for salty food could be related to high cortisol levels as mentioned earlier in relation to high stress or sleep deprivation.
Self Check-In.
Don’t try to “control” your cravings. Instead, get curious about the following:
- Release unnecessary food rules. Can you practice giving yourself unconditional permission to eat what you want when you want it?
- Get quality zzz’s and relax. How can you prioritize better sleep hygiene and relaxation techniques?
- Practice gentle nutrition. Can you focus on getting a variety of foods from the main food groups for adequate energy intake?
Raki Phillips, an award-winning hospitality veteran, has been named as the new CEO of Ras Al Khaimah Tourism Development Authority (RAKTDA), with the authority saying that his appointment marks the commencement of a new phase of tourism development for the Emirate. Phillips, who has worked for some of the world’s most renowned global brands, including Ritz-Carlton Hotels, Fairmont Hotels & Resorts and Universal Studios Orlando and who was named one of the ‘Top 20 Most Powerful Hoteliers,’ will be responsible for delivering the authority’s recently announced Destination Strategy 2019-2021, which aims to attract 1.5 million visitors to the Emirate by 2021 and 3 million by 2025.
Phillips takes over the reins from Haitham Mattar, whose four-year tenure as CEO of RAKTDA is credited with repositioning Ras Al Khaimah to become one of the fastest growing destinations in the world and exceeding the target of one million visitors in the first three years. His Highness Sheikh Saud bin Saqr Al Qasimi, Ruler of Ras Al Khaimah, thanked Mattar for his commitment to the role and for his efforts in positioning the Emirate as one of the world’s fastest-growing tourism destinations.
RAKTDA’s first tourism strategy was launched in January 2016 and was responsible for accelerating visitor growth, increasing visitor satisfaction and raising the contribution of the tourist sector to the Emirate’s GDP. The strategy introduced new tourism demand generators, including the world’s longest zip line and the Middle East’s first via ferrata, both of which helped position Ras Al Khaimah as the region’s nature-based adventure hub. As the new CEO, Raki Phillips will supervise a number of key projects, with notable developments on Jebel Jais including multiple zip lines, adventure park hiking trails, the Bear Grylls Survival Academy and a luxury camp.
Obesity is a known risk factor for the development and progression of chronic venous disorders (CVD). It is currently unknown if treatment outcomes, after an intervention for CVD, are affected by obesity. The purpose of this investigation was to assess the effectiveness of various CVD treatments in obese patients and determine what level of obesity is associated with poor outcomes.
Methods
Data were prospectively collected in the Center for Vein Restoration’s electronic medical record system (NexGen Healthcare Information System, Irvine, California) and retrospectively analyzed. Patients and limbs were categorized by the following BMI categories: < 25, 26–30, 31–35, 36–40, 41–45, > 46. Percent change in the revised venous clinical severity score (rVCSS) and the CIVIQ 20 quality of life survey were utilized to determine CVD treatment effectiveness in patients who underwent endovenous thermal ablations (TA), phlebectomy, and ultrasound guided foam sclerotherapy (USGFS).
Results
From January 2015 to December 2017, 65,329 patients (77% female, 23% male) had a venous procedure performed. Of these patients, 25,592 (39,919 limbs) had an ablation alone, an ablation with a phlebectomy or an ablation with a phlebectomy and ultrasound guided foam sclerotherapy procedure. The number of procedures performed was as follows: TAs (37,781), USGFS (22,964) and phlebectomy (17,467). The degree of improvement six months post-procedure progressively decreased with increasing BMI in patients who underwent ablation alone, and decreased more significantly in patients with BMIs greater than 35 (p ≤ 0.001). Outcomes improved approximately 12% with the addition of phlebectomy to ablation. Patients who had ablations, phlebectomies and USGFS demonstrated no additional improvement. Significantly inferior outcomes were noted in patients with BMIs ≥ 35, with the poorest outcomes observed in patients with a BMI ≥ 46 (p ≤ 0.001). The average number of ablations per patient increased with increasing BMI and was significantly different compared to BMIs less than 30 (p ≤ 0.001). All pre and post CIVIQ 20 quality of life scores within a BMI category at six months were significantly different (p ≤ 0.01). No difference in the degree of improvement was observed in patients with a BMI ≥ 31. Finally, a multivariate logistic regression analysis indicated that when controlling for BMI, diabetes, a history of cancer, female gender and black and Hispanic race were independently associated with poorer outcomes.
Conclusions
Progressive increases in BMI negatively impact CVD related treatment outcomes as measured by rVCSS and CIVIQ 20. Outcomes progressively worsen with BMIs greater than 35 for patients undergoing CVD treatments. Treatment outcomes in patients with a BMI ≥ 46 are so poor that weight loss management should be considered before offering CVD treatments.
What Is an Offset Bend?
One of the more common bends made in electrical conduit is the offset bend: a technique used to move a run of conduit a set distance to one side, up or down. It is very rare that conduit can be placed in a straight line along the entire distance needed. There will usually be small projections in the way, other equipment to navigate around, or other reasons to move the conduit over some distance.
While bending conduit, one of the more important things to consider is the total number of degrees of bend between pull boxes. The NEC (National Electrical Code) limits this number to 360º, and some job specifications limit it even further. Fewer degrees of bend also result in an easier pull when it comes time to pull wire into the conduit—always a good thing. While bending an offset may be inevitable and necessary, the degree of the bend is variable depending on circumstances and the electrician doing the work.
The first bend should change the direction the conduit is going.
The second should reverse that direction change. The result is a rather "Z" shaped piece of conduit, as shown in the pictures below.
The most common bend used is a 30º bend, followed by another of the same, resulting in a total bend of 60º, but this is not necessary in most cases. Bends of 10º, 22º, and occasionally 45º or even 60º are marked on all hand benders and should be used when appropriate. The difference is in the multiplier, as discussed below.
The multiplier is the number by which the measured offset distance is multiplied to obtain the distance between the two bends.
You should memorize this number for the common bends of 10, 22, 30, and 45 degrees. Many benders have the multiplier permanently stamped on the reverse side of the bender—a useful option for the beginning electrician. These numbers are also shown in the chart below.
Once the offset distance is measured, multiply that measurement by the appropriate multiplier from the chart. These figures are all in decimals, though most people will use a tape measure marked in fractions of an inch. The decimals must be converted to fractions to be useful. Few electricians will try to mark and bend conduit in increments of less than 1/8" (the bending process just isn't all that accurate), so the number needs to be converted to just such a fraction. I've listed the decimal equivalents of multiples of 1/8" in the next table. You probably already recognize half of them and the other half is easily memorized. Don't be afraid to round off your numbers—1/1000 of an inch just isn't enough to worry about!
For example, let's suppose that the distance needed is 3½", and that we want to use a 22º bend. The multiplier for 22º is 2.6, and 3½" is 3.5" in decimal notation. Using a calculator, we find that 2.6 times 3.5 equals 9.1". Now 9.1" is very close to 9.125" (the difference is only .025") which we can see from the chart is 9 1/8". The difference between 9.1 and 9.125 is less than 1/32". That's probably less than half the width of the sharpie line you will draw on the conduit! Don't worry about it. Just use the 9 1/8" figure.
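The arithmetic in the example above is easy to script. A small sketch in Python, using the common hand-bender multipliers and rounding the result to the nearest 1/8" for marking (the 10º and 45º values shown are the usual rounded field figures):

```python
from fractions import Fraction

# Common hand-bender multipliers; the 10- and 45-degree values are the
# usual rounded field figures.
MULTIPLIERS = {10: 6.0, 22: 2.6, 30: 2.0, 45: 1.4}

def distance_between_bends(offset_in: float, angle_deg: int) -> Fraction:
    """Offset distance times the multiplier, rounded to the nearest 1/8 inch."""
    exact = offset_in * MULTIPLIERS[angle_deg]
    return Fraction(round(exact * 8), 8)

# The 3 1/2" offset from the example, bent at 22 degrees:
d = distance_between_bends(3.5, 22)
print(float(d))  # 9.125, i.e. 9 1/8"
```

The `Fraction(round(exact * 8), 8)` step is just the rounding-to-1/8" rule from the text expressed in code.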
From the multiplier chart below, we can see that the multiplier for a 30º bend is exactly 2. That's why many electricians will bend nothing except 30º offsets; the math needed is simple and easy. This practice also results in unnecessarily sharp bends, harder wire pulls, and often additional junction boxes. It can also add time and money to the job and cause additional work during wire pulls. This practice is easy to avoid. Nearly everyone carries a cell phone with a calculator in it nowadays, and even if you don't you can still multiply two numbers. Do it right: Use a bend appropriate to the task. A large offset of 3 feet will probably need 45º bends, while a small one of a few inches can usually get by with 22º or even 10º bends. It is true that 10º bends can be difficult to get perfect, and that the math for 22º or 45º offsets takes a moment of effort, but neither is an excuse for shoddy workmanship.
A last word on multipliers: When bending large conduit, an angle finder is generally used to measure the precise angle being bent as the angle marks used on a hand bender are not stamped onto benders for large conduit. This raises an interesting possibility: Any angle desired may be used if you know the correct multiplier. My article about the math behind bending conduit includes a description of how to find any multiplier and explains where these numbers come from.
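For reference, the multiplier for any angle is the cosecant of that angle (1 divided by the sine of the angle); the figures stamped on benders are simply this value rounded for field use. A quick sketch:

```python
import math

def multiplier(angle_deg: float) -> float:
    """Offset multiplier for a bend angle: csc(angle) = 1 / sin(angle)."""
    return 1.0 / math.sin(math.radians(angle_deg))

# 30 degrees gives exactly 2, matching the chart; the other values are
# rounded on benders (e.g. 22 degrees computes to about 2.67, stamped 2.6).
for a in (10, 22, 30, 45, 60):
    print(f"{a:2d} degrees -> multiplier {multiplier(a):.2f}")
```

This is how the multiplier for any non-standard angle, measured with an angle finder on large conduit, can be found.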
The actual bending process begins with measurements. The distance that the conduit run needs to move must be measured as closely as possible.
1. A good way to do this is by temporarily laying a conduit where the run will end, but projecting out and next to the existing run. In the photo below, the conduit coming from the right should continue, but the obstruction prevents it from doing so. The upper conduit is where the run will end up, and is laid there only to take a measurement.
Measuring straight between the original (bottom) and temporary (top) conduit, the distance is exactly 3 1/8".
2. Measure directly from one conduit to the other at right angles. Do not attempt to take a measurement along the path the offset will take. Take the measurement straight from the bottom conduit to the top one, making sure that if the measurement starts from the bottom of one conduit it finishes at the bottom of the other. Center-to-center or top-to-top measurements are just as acceptable as long as the same point on each conduit is used.
3. In the photo, the measurement is 3 1/8". This means that the example will use a 22º bend so the calculated distance between bends is 3.125" times 2.6, which equals 8.125", or 8 1/8". The photo shows the original, bottom conduit against the obstruction. It is actually back 36" with a temporary extender added to take measurements. Mark the new conduit at 36", then again back 8 1/8" from the first mark at 27 7/8".
Most offset bends are made "in the air." This means the bender is used upside down with the handle on the ground and the bending foot in the air.
Insert the conduit into the bender with the 36" mark positioned at the arrow normally used to bend a 90. (You can use any mark on the bender, as long as you use it for both bends. The very toe of the bender is often more convenient for offsets intended to begin close to the end of the conduit).
The bender handle is likely to kick out when bending in this manner. Use a foot or foot and leg to keep it in one place on the floor (see photo of the bending process).
Slowly bend the conduit, keeping pressure as close to the bender as possible. Although more leverage makes it easier, be careful! If pressure is applied several feet back from the bender, it will result in an unsatisfactory bend.
Bend the conduit until it lines up with the desired mark (22º in this case) stamped onto the bender.
Bending the second offset bend. Note the foot and leg position, holding the bender handle firmly in place.
Rotate the conduit 180º and sight down it to make sure it is exactly 180º. Slide it forward in the bender until the second mark lines up at the same point on the bender used for the first bend, and repeat the conduit bending process.
Check that the conduit is still flat. Lay it on the floor and make sure that both ends lie flat. A slight dogleg (caused by improper rotation between bends) can sometimes be worked out, but often the pipe will need to be discarded and a new one bent.
At this point, the finished offset ends a short distance from the obstruction. This is due to the shrinkage of the pipe and is inevitable when bending offsets. If this is not acceptable, make the first mark a few inches too far out, make the second mark (in the example) the same 8.125" back from the first, then test fit the completed bend, mark it and cut it off to fit. While it is possible to calculate the shrinkage (see the page on the math behind bending conduit), it is seldom worth the effort to do so.
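If you do want to estimate the shrink rather than test-fit, here is a rough sketch. The per-inch shrink constants are the usual trade approximations, not exact geometry, and the function is illustrative rather than taken from the linked math page.

```python
# Approximate conduit "shrink" per inch of offset rise for common bend
# angles -- standard rule-of-thumb values used by electricians.
SHRINK_PER_INCH = {10: 1/16, 22.5: 3/16, 30: 1/4, 45: 3/8, 60: 1/2}

def offset_shrink(offset_in, angle_deg):
    """Estimate how much shorter the overall run becomes after the offset."""
    return offset_in * SHRINK_PER_INCH[angle_deg]

# The 3 1/8" offset at 22.5 degrees from the example:
print(round(offset_shrink(3.125, 22.5), 3))  # 0.586 -> a bit over 9/16"
```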
Completed offset. The left bending mark is barely visible on the top of the conduit, while the right mark was made clear around the conduit to make lining it up on the bender easier.
In the example above, the offset was built to angle the conduit run straight up, but what if we needed to not only go up, but to one side as well? It is possible to build two complete offset bends, one after the other, but this also results in a greater number of degrees—not a good idea unless absolutely necessary. Instead, you can build a "rolling" offset, taking the conduit run both up and to the side in one bend. Learn to think three dimensionally and make your bends accomplish more than one task at a time whenever possible. A "kick 90" is another example of this, and is described on the page on bending a 90.
The measurement and procedure for bending a rolling offset are identical to the method listed above, but taking the actual measurement may need a little more description.
When measuring for a rolling offset, place the tape measure from one conduit to the other at right angles to the conduit. Take a measurement from a point on the first conduit to the equivalent point on the second. This usually means measuring from one side instead of from the top or bottom. Make sure you don't measure from, say, the left side on one conduit to the right side on the other; this will result in an offset that is too long or too short.
Measuring for a rolling offset.
As an example, consider the same offset used above, except that the top conduit is moved some 6" to one side as well as 3 1/8" up. The photo above shows taking the correct measurement between the original conduit and the temporary one, and shows that the total measurement is 7¼". Note how the tape measure is hooked over the side of the bottom conduit and placed in a straight line to the second conduit regardless of the angle it might be at. With this distance, the total offset when using 22º bends is a little long, so let's make this one using 30º bends. The multiplier for 30º is 2, so we need a distance of 14½" between marks.
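Geometrically, the rolling-offset measurement is the hypotenuse of the rise and the sideways roll, which is why measuring straight between equivalent points works. A quick Python cross-check (the tape measurement always governs on the job; an exact 6" roll comes out slightly under the 7¼" measured here, since the roll was only "some 6""):

```python
import math

def rolling_offset(rise_in, roll_in):
    """True (diagonal) offset distance for a rolling offset."""
    return math.hypot(rise_in, roll_in)

diagonal = rolling_offset(3.125, 6.0)
print(round(diagonal, 2))   # 6.77 -- close to the 7 1/4" actually measured

# Marks for 30-degree bends (multiplier 2) using the measured 7 1/4":
print(7.25 * 2)             # 14.5 -> the 14 1/2" between marks
```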
The bending procedure is also the same as in the above example and results in the pictured conduit with a long enough offset that it can be "rolled" to one side. It just fits into the corner of the second obstruction while keeping the same 3 1/8" vertical distance change. This offset contains a total of 60º of bend; compared to two 22º offsets, it saves 28º of bend and will be much easier to pull wire through.
Completed rolling offset, moving both up and to the right.
I've posted several more articles making up a conduit bending guide for electricians. This title page has a brief description of each, as well as a handful of other pages that a professional electrician may find of value.
This set of articles is still a work in progress; pages will be added as they are written. If you don't find what you want, leave a comment and I will consider adding it to the set.
How much will a rigid conduit with a 30 degree offset shrink?
It depends on how large an offset is being made. The shrinkage does not depend on either the type of conduit or its size, however.
What is the maximum number of 90 degree bends permitted between junction boxes in a run of EMT?
As with any other bend, the total degrees of bend cannot exceed 360. If all the bends are 90 degrees, that means the maximum number of bends between junction boxes cannot exceed 4.
How do you bend small offsets like 1 inch or 1 1/2?
In a smaller pipe, using a hand bender, even a 1" offset is possible using 10° bends. The marks will be 6" apart and that is quite doable. Larger pipe, like 4" conduit, becomes a guessing game though.
thanks a lot , great explanations, thank you again.
You're welcome, Dean. Glad you found it of value.
THANKS A MILLÓN !!! Iove the trade very much, just trying to master my bending skills as a new electrician !!! You've helped.alot !!
Thanks bro bra! My bender was all marked up and I forgot the multipliers. Good site dooood.
Thanks, jesse, and glad you found it useful.
You are certainly welcome, and I'm glad you found it useful.
29 N.Y.2d 559 (1971)
In the Matter of Horn Construction Co., Inc., Respondent,
v.
Arnold G. Fraiman, as Commissioner of Investigation of the City of New York, Appellant.
Court of Appeals of the State of New York.
Argued May 11, 1971.
Decided July 6, 1971.
J. Lee Rankin, Corporation Counsel (Eric J. Byrne and Stanley Buchsbaum of counsel), for appellant.
Mario Matthew Cuomo for respondent.
Concur: Judges BURKE, SCILEPPI, BERGAN and GIBSON. Judge JASEN dissents and votes to reverse in the following opinion in which Chief Judge FULD and Judge BREITEL concur.
Order affirmed, with costs, on the opinion at the Appellate Division.
JASEN, J. (dissenting).
In March, 1968, the Commissioner of Investigation of the City of New York (Commissioner) received a complaint that a named inspector employed by the Department of Marine and Aviation had attempted to extort a sum of money from a subcontractor of the respondent, Horn Construction Co., Inc. (Horn).
A preliminary investigation by the Commissioner revealed that Horn entered into a contract with the city's Department of Marine and Aviation for the demolition of certain piers. According *561 to the complaint, payment was demanded from one of Horn's subcontractors to avoid harassment by city inspectors during the course of the demolition. When payment was refused, there were so many harassing inspections that the subcontractor was forced to cease all work. The successor subcontractor, it was alleged, was subjected to substantially fewer inspections than the first.
In September, 1968, a subpoena duces tecum served on Horn by the Commissioner called for the production of certain corporate books and records for the period of July 1, 1967 to June 30, 1968. Though Horn partially complied with the subpoena and produced various job records for inspection, relating to the demolition contract in question, the respondent declined to submit other business records of the company for the period stated in the subpoena on the ground that the records sought were not relevant to the investigation.
Horn instituted the present proceeding to quash the subpoena duces tecum and the Commissioner cross-moved for an order compelling compliance with the subpoena. Special Term granted the cross motion and denied the application to quash. The Appellate Division, one Justice dissenting, reversed and granted the application to quash on the ground that the subpoena was unduly broad and that there had been a failure to show that the desired records were relevant or material to the inquiry.
The purpose of the investigation instituted by the Commissioner was to determine whether a bribe had been paid in connection with the demolition of piers. Such an investigation, involving an alleged bribe to a city employee in conjunction with a city contract, is clearly within the scope of the Commissioner's authority under section 803 (subd. 2) of the New York City Charter.
The rule of law applicable to subpoenas of the Commissioner of Investigation is well established. It has been repeatedly held that a subpoena of the Commissioner may not be vacated unless the person subpoenaed can demonstrate that it calls for exhibits which are utterly irrelevant to any proper inquiry. (Matter of Edge Ho Holding Corp., 256 N.Y. 374; Matter of Blitzer v. Bromberger, 295 N.Y. 596; Matter of Hirshfield v. Craig, 239 N.Y. 98.)
The question, then, is whether it is obvious that the records of Horn, which the Commissioner seeks to examine, cannot possibly *562 furnish the Commissioner with any information relevant or material to his investigation of the alleged bribery. It should be apparent that the records sought may furnish such information. Horn's principal records are posted on large National Cash Register (NCR) voucher register sheets, which contain, in chronological sequence every cash disbursement made by Horn in conjunction with its entire business. Every disbursement made for a particular job is identified therein by a job number which had previously been assigned by Horn. Although Horn permitted an examination of all entries in the NCR voucher register sheets bearing the job number assigned to the contract in question, it placed blinders over all entries opposite other job numbers and refused the Commissioner access to these records.
I consider this limited examination inadequate for the purpose of the investigation since a bribe payment could have been disguised, not only by allocation to another job number, but also by allocation to entries to which job numbers had not been assigned. Such unnumbered entries relate to personal expenses of officers, cash disbursements made to individuals on a loan basis, miscellaneous expenses, and other types of expenses not attributed to the cost of any particular work project.
In order to track down the bribe payment, it is necessary to check transactions other than those directly related to the contract. It should be perfectly obvious that, if a bribe was paid by Horn, an attempt would be made to conceal the entry in the records. To limit investigations of bribes relating to contracts to entries identified by the alleged payor as applicable to such contracts, would certainly scuttle any investigation. In order for the investigation to be meaningful, the Commissioner should be permitted to view all of the NCR voucher register sheets for the period in question. Once it is concluded, as it is here, that the circumstances warrant investigation, it seems clear that records in which the bribe payment may have been hidden are relevant. Inasmuch as experience has shown that wrongdoing is often artfully disguised or hidden, the Commissioner should be permitted to subpoena the records requested so long as they bear a "reasonable relation to the subject-matter under investigation". (Carlisle v. Bennett, 268 N.Y. 212, 217.) "Of course a governmental investigation into corporate matters may be of such a sweeping nature and so unrelated to the matter properly *563 under inquiry as to exceed the investigatory power * * * But it is sufficient if the inquiry is within the authority of the agency, the demand is not too indefinite and the information sought is reasonably relevant." (United States v. Morton Salt Co., 338 U. S. 632, 652.)
I would reverse the order of the Appellate Division, deny the motion to quash and compel compliance with the subpoena.
Order affirmed, etc.
Femicide, defined as the killing of females by males because they are females, is becoming recognized worldwide as an important ongoing manifestation of gender inequality. Despite its high prevalence, only a few countries have specific registries on this issue. This study aims to assemble expert opinion regarding the strategies which might feasibly be employed to promote, develop and implement an integrated and differentiated femicide data collection system in Europe at both the national and international levels. Concept mapping methodology was followed, involving 28 experts from 16 countries in generating strategies, then sorting and rating them with respect to relevance and feasibility. The experts involved were all members of the EU COST Action on femicide, a scientific network of experts on femicide and violence against women across Europe. As a result, a conceptual map emerged, consisting of 69 strategies organized in 10 clusters, which fit into two domains: “Political action” and “Technical steps”. There was consensus among participants regarding the high relevance of strategies to institutionalize national databases and raise public awareness through different stakeholders, while strategies to promote media involvement were identified as the most feasible. Differences in perceived priorities according to the human development index levels of the experts' countries were also observed.
Citation: Vives-Cases C, Goicolea I, Hernández A, Sanz-Barbero B, Gill AK, Baldry AC, et al. (2016) Expert Opinions on Improving Femicide Data Collection across Europe: A Concept Mapping Study. PLoS ONE 11(2): e0148364. https://doi.org/10.1371/journal.pone.0148364
Editor: Elizabeth W. Triche, St Francis Hospital, UNITED STATES
Received: May 5, 2015; Accepted: January 19, 2016; Published: February 9, 2016
Copyright: © 2016 Vives-Cases et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This study received funding from Umeå Center for Global Health Research, funded by FORTE, the Swedish Council for Working Life and Social Research (grant no. 2006-1512). With this funding it was possible to do the data collection and analyses with the appropriate software. The authors didn’t receive any other funding for the study design and preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
“Femicide” is becoming recognized worldwide as the ultimate manifestation of violence against women and girls. Diana Russell proposed the term for the first time at the International Tribunal on Crimes Against Women in 1976 in order to name the intentional “killings of females by males because they are females”, and it is now broadly used at the UN level as well. In relation to this definition of femicide, different forms of women's killing are recognized, such as intimate partner-related killings, so-called “honour” and dowry-related murders, forced suicide, female infanticide and gender-based sex-selective foeticide, genital mutilation-related death cases, and the targeted killing of women at war and in the context of organized crime, among others. A broader definition of the term also refers to the responsibility of States in women's death cases perpetrated through misogynous attitudes and/or socially discriminatory practices against women. Included in this broader definition are, for example, deaths associated with lack of access to healthcare for women and girls, gender-based selective malnutrition, and trafficking of women for prostitution or drugs. As it is difficult to decide in all cases of killings of women and girls whether they were killed because of their gender, researchers often include all killings of women in a first step and then differentiate between forms that are more or less relevant with regard to gendered backgrounds and motives. This is also a reason why consensus-based research on the prevalence, nature and characteristics of victims and offenders and their relationship is needed.
Approximately 66,000 women every year from 2004 to 2009 were victims of killings, representing almost one-fifth of total homicide victims (396,000 deaths) in an average year worldwide. It is also known that 38–70% of female homicides are perpetrated by the victim's current or former intimate partner, whereas only 4–8% of male homicides are [4, 14]. Despite this widespread prevalence, such figures rest on incomplete data collection systems in roughly half of the countries in the world [4, 12]. There is an evident need to identify a systematic method to gather data on the incidence of femicide that will allow comparisons across countries. Without reliable information, policy makers and programmers at all levels (national, regional or local) are unable to allocate resources so as to achieve the greatest impact in preventing these killings and reducing the harm they do to the victims and their relatives. Policy makers and programmers need information specific to their areas of concern.
In Europe, in 2011, the female death rate due to assault showed great geographic variability, ranging from 0.03 deaths/100,000 inhabitants in Latvia to 2.85 deaths/100,000 inhabitants in Lithuania. The heterogeneity of the surveillance systems makes it difficult to estimate the implications of these differences. Despite the fact that many countries have sex-disaggregated data on homicides, and a few (e.g. Spain and France) collect intimate partner-related homicide data in particular, the availability of specific data on all forms of gender-based homicide against women is far less developed. Some of the challenges to accurate femicide surveillance across Europe include misunderstanding of the gender basis of these crimes and the limited information available in existing homicide data registers about the relationship between victims and perpetrators, the factors surrounding the crimes, the motives of the perpetrators and the modus operandi. At the time this study was conducted, only a few national monitoring systems offered an example of how to overcome some of these challenges. The Finnish monitoring system, for example, is a registry of preliminary police investigations of intentional murders, manslaughters, killings, infanticides and negligent homicides that includes compulsory information on nearly 90 variables regarding victims' and offenders' characteristics, the circumstances surrounding the crimes, perpetrators' behavior after the killing, and the spatial and temporal distribution of cases.
Femicide is both a sociopolitical and a public health problem; it has damaging effects on the lives of all women and their social environment, and holds negative implications for the whole of society. In order for governments to begin to take action, there must be common ground for understanding what femicide is, and the social costs, not only the individual ones, associated with it. If governments were to understand this, not only would it help save lives, but it would also reduce the annual costs borne by the justice, welfare and social systems. Implementation of national surveillance systems on femicide, and their harmonization within Europe, can facilitate a deeper understanding of this social and public health problem and provide evidence for policy development, monitoring and prevention.
In 2013, the first professional and research network focused on the issue of femicide was created within the platform of COST (European Cooperation in Science and Technology). The COST Action entitled “Femicide Action IS1206” brings together top-level experts on femicide and other forms of violence against women from 27 countries. This is the first pan-European coalition on femicide involving researchers already studying the phenomenon nationally, with the purpose of advancing research clarity, agreeing on definitions, improving the efficacy of policies for femicide prevention, and publishing guidelines for the use of national policy-makers. The COST Action is organized in four working groups (WGs), one of which (WG2) focuses on ‘Reporting’. The WG2 members organize annual meetings where data collection systems across Europe are analyzed, and data on femicide as well as on other related topics are compared and discussed. Discussions in the initial meeting of WG2 in early 2014 confirmed that most European countries collect data on homicides disaggregated by sex, which provides an overall picture of the magnitude of homicide against women across Europe. Furthermore, several countries collect data on the relationship between perpetrators and victims of homicide, and some of them collect data specifically on intimate partner homicide. However, it became clear that the collection of differentiated data on femicide was underdeveloped in most European countries.
The aim of this study, conducted within the COST Action on Femicide, was to assemble expert opinions regarding strategies that might feasibly be employed to promote, develop and implement an integrated and differentiated femicide data collection system in Europe at both the national and international levels. The intended outcome of this process was to identify the actions needed to improve the availability, collection and comparability of data on femicide, taking into account different perceived needs across countries.
Material and Methods
Concept mapping, an integrated mixed methods approach, was used to examine the diverse views of European experts on the strategic actions needed to improve and systematize femicide data collection systems in Europe. The integration of qualitative and quantitative data occurs through sequential steps, beginning with the generation of ideas (brainstorming), followed by the structuring of ideas through sorting and rating, the representation of this structure in maps based on multivariate statistical methods, and finally the collective interpretation of the maps. This methodology was selected to meet the study objective based on its demonstrated usefulness in integrating the input of broad expert panels to guide development and planning, and its capacity to enable groups of actors to visualize their ideas around an issue of mutual interest and develop common frameworks [25, 26]. The participatory, structured nature of the concept mapping process was well suited to the complexity of the task of integrating the views of femicide, violence against women (VAW) and data registration system researchers from different disciplines and European countries to develop policy recommendations for strategic action in the region. The fact that concept mapping combines qualitative input with multivariate analysis to produce a visual display of how a group views a particular topic was also considered in the selection of this method. Unlike purely qualitative techniques, concept mapping provides a structured approach that allows participants to co-produce the content in focus and to interpret visual representations of their group perceptions.
The study was carried out with the approval of the members of the coordination board of the COST Action, and written informed consent was obtained from the participants. In addition, ethical approval for this study was granted by the Ethics Committee of the University of Alicante (Spain).
Concept mapping activities were carried out from December 2014 until February 2015 in three phases: 1) brainstorming, 2) sorting and rating, and 3) representation in maps and interpretation.
Brainstorming
Based on discussions with the coordinators of the working group on ‘Reporting’, the researchers developed the following focus question to orient the brainstorming activity: In what aspects shall we improve our own country’s data collection systems to collect accurate data on femicide?
This question was sent via email, together with clear instructions for the entire concept mapping process, to all 70 members of the COST Action. Participants were asked to write down as many strategies as possible in response to the question, with each strategy containing only one idea. The brainstorming phase was carried out from the 1st of December, 2014 until the 13th of January, 2015. Twenty-five members from 16 countries provided strategies, which the researchers checked to eliminate duplicates and to divide complex strategies into simpler ones (Table 1). We were not able to recruit participants from Bosnia and Herzegovina, Denmark, Estonia, Finland, France, Iceland, Latvia, or Sweden.
The researchers sent the refined list of 69 strategies to all participants to give them the opportunity to review it and determine whether their ideas were accurately reflected. In the next step, the strategies were entered into the Concept System software, which was used to facilitate the following steps of the process.
Sorting and rating
The sorting and rating phase was accomplished in January 2015 at the annual working group meeting in Rome. During the meeting, the first two authors presented the final list of strategies to the whole working group and explained the sorting and rating activities. Sorting activities consisted of experts organizing the strategies into meaningful groups, or thematic clusters, and giving them a title. Rating activities consisted of giving each strategy two ratings: 1) its relevance to the goal of strengthening data collection systems and 2) the feasibility of its being implemented. Each strategy was given a value from 1 to 6 for relevance and for feasibility, where 1 meant very low relevance or feasibility and 6 meant very high. Experts who did not finish the sorting and rating in Rome or who were not present were able to complete the sorting and rating online. Of the 45 experts invited to participate in the sorting and rating, 28, from 14 countries, provided answers (Table 1).
Representation in maps and interpretation
In the representation and interpretation step, the gathered data was analyzed using concept mapping techniques that facilitate visualization of thematic clusters and identification of areas of consensus for action. The sorting data was analyzed using multidimensional scaling to generate point maps, where strategies are plotted in a two-dimensional graph based on a similarity matrix that captures the number of times experts grouped them together. Strategies that were more frequently sorted together are positioned closer to each other. The degree of fit between the point map and the data in the similarity matrix is reflected by the stress index, where a lower value indicates a better fit. The stress index of the final point map, which was used as the base for identifying clusters, was 0.207. This score was in line with the results of other concept mapping studies, as reflected by a meta-analysis of concept mapping projects, which estimated an average stress value of 0.285 with a standard deviation of 0.04.
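To make the first analytic step concrete, here is a minimal pure-Python sketch (not the Concept System software used in the study) of how the similarity matrix behind the point maps can be assembled from experts' sorting piles; the function name and toy data are hypothetical:

```python
# Build the co-occurrence similarity matrix from sorting data: entry
# sim[i][j] counts how many experts placed strategies i and j in the
# same pile. Multidimensional scaling is then run on distances derived
# from this matrix (e.g. distance = n_experts - similarity).
def similarity_matrix(sorts, n_items):
    sim = [[0] * n_items for _ in range(n_items)]
    for piles in sorts:            # one expert's sort = a list of piles
        for pile in piles:
            for i in pile:
                for j in pile:
                    if i != j:
                        sim[i][j] += 1
    return sim

# Two toy experts sorting four strategies (ids 0-3):
sorts = [[[0, 1], [2, 3]],
         [[0, 1, 2], [3]]]
sim = similarity_matrix(sorts, 4)
print(sim[0][1], sim[2][3], sim[0][3])  # 2 1 0
```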
Hierarchical cluster analysis was used to show options for aggregating strategies into clusters based on their proximity to each other in the point maps. Cluster maps ranging from seven to twelve clusters were evaluated through discussion among the researchers by reviewing the strategies grouped together at successive levels of clustering based on their conceptual coherence and the value of the precision offered at each level. For example, in moving from the level of nine to ten clusters, the clusters “Public Awareness” and “Media coverage” were separated, and the researchers decided this division was valuable in distinguishing the strategies. Ten clusters were identified as the final solution, and names were assigned by the researchers through consideration of the content of the clusters and the group names suggested by experts. This level of division was similar to the average number of thematic clusters identified in the sorting activity, where participants created an average of nine clusters (average = 8.8 clusters, standard deviation = 4.2, minimum = 4, maximum = 27).
Rating data was analyzed to identify priorities for action based on the views of the group as a whole, as well as dynamics in the perception of priorities across regional sub-groups, using role-stratified averages. Given the variety of countries in the sample, we wanted to explore whether the level of general development of a country could influence the perceived relevance of the strategies proposed. The rationale behind this assumption was that in countries with better public systems, registers in general, and those related to femicide in particular, would be better, so priorities might differ from those of countries with less developed registers and public institutions. We chose the Human Development Index (HDI) as a proxy for this. The HDI measures a country's average achievements in three dimensions of human development: health, measured by life expectancy at birth; education, measured by the adult literacy rate and the combined primary, secondary and tertiary gross enrollment ratio; and standard of living, measured by Gross Domestic Product per capita at purchasing power parity. The scores for the three HDI dimensions are then aggregated into a composite index using a geometric mean. This index score was used to classify countries into two groups, given its association with different causes of mortality [31–33], disease distribution [34, 35] and health behaviors [36, 37]. Seven countries with higher HDI (Germany, Israel, Belgium, Slovenia, Spain, Italy and the UK) were included in one group, and seven countries with lower HDI (Malta, Poland, Lithuania, Portugal, Croatia, Romania and Macedonia) in the other.
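The geometric-mean aggregation described above can be written out explicitly; the dimension index values below are invented purely for illustration:

```python
# HDI-style aggregation: geometric mean of the three dimension indices
# (health, education, standard of living), each already scaled to 0-1.
def hdi(health_idx, education_idx, income_idx):
    return (health_idx * education_idx * income_idx) ** (1 / 3)

# Hypothetical dimension scores for an illustrative country:
print(round(hdi(0.9, 0.8, 0.85), 3))  # 0.849
```

Unlike an arithmetic mean, the geometric mean penalizes uneven development: a very low score in one dimension cannot be fully offset by high scores in the others.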
After creating the maps, the initial results of these analyses were discussed during the second part of the working group meeting in Rome to facilitate their interpretation. During the meeting, the participants worked in small groups to evaluate the appropriateness of the clusters generated by the analysis, and were asked to determine if the strategies within each cluster were coherent, if clusters could be joined or divided, and if the titles were appropriate. Experts also reflected on their own experiences of improving data collection systems and discussed connections among the clusters of actions. Notes from these discussions, as well as ongoing dialogue among the core research team, provided the basis for finalizing the cluster map and identifying domains of thematically related clusters.
Results
Actions to promote improved data collection systems
Of the 69 strategies that were generated by the participants, the final clustering solution identified 10 thematic areas of action: “Putting femicide on the public agenda”, “Media coverage”, “Awareness raising on the importance of data collection”, “Definition”, “Quality of data collectors”, “Institutionalizing national databases”, “Data collection structure”, “Variables to be collected”, “Triangulation”, and “Qualitative follow-up” (Fig 1).
Cluster map based on experts’ thematic grouping of action strategies.
Based on the relationships among these clusters, the cluster map was divided into two domains: “Political action” and “Technical steps”. The six clusters in the domain “Technical steps” include straightforward actions that would directly improve the content, structure, quality and continuity of national data collection systems, as well as measures to improve detection and follow-up investigation. The four clusters in the domain “Political action” include actions directed towards enhancing public awareness of the existence and scale of femicide, as well as sensitizing key groups to their role in data collection on femicide, and clarifying a common understanding of what constitutes femicide. Table 2 presents an overview of the content of the clusters and selected strategies (for a complete list of all strategies, see S1 File).
The varying size of the clusters depicted in the map reflects the tightness of the conceptual coherence of the strategies they contain, while the proximity of clusters reflects the perceived relationship between the strategies they contain. The relatively larger size of the clusters “Putting femicide on the public agenda” and “Raising awareness on the importance of data collection” reflects the diversity in the nature of the actions needed to generate public consciousness and shift political will, while the concentration of many strategies in the smaller, closely proximal clusters “Variables to be collected”, “Quality of data collectors”, “Data collection structure” and “Institutionalizing national databases” reflects that the technical steps required to strengthen data collection systems are numerous and closely interrelated.
Identifying priority actions
Analysis of the overall average ratings of the items that make up the clusters indicated that the cluster “Institutionalizing national databases” was the most relevant (5.28), followed by “Variables to be collected” (5.20) and “Quality of data collectors” (5.10). The clusters “Qualitative follow-up” (4.56) and “Data collection structure” (4.66) were rated lowest in relevance. It is also noteworthy that items in the cluster “Definition” received a relatively low overall rating (4.69). Feasibility ratings were overall lower than relevance ratings. The clusters rated highest on feasibility were “Media coverage” (4.44), “Awareness raising on data collection” (4.40) and “Putting femicide on the public agenda” (4.29), while the lowest rated were “Quality of data collectors” (3.74), “Data collection structure” (3.90), and “Definition” (4.04) (Table 2 and S1 File).
Strategies with the highest ratings for both relevance and feasibility (items 51, 55, 32, 33, 28) referred to ensuring that specific types of information are collected in a standardized and institutionalized way that provides a basis for identifying and monitoring femicide cases at a regional level. The strategies with the lowest ratings for relevance and feasibility referred to conducting in-depth analysis of suspected cases of femicide (items 49, 64, 69, 63, 48) (Table 3).
Analyzing differences in priorities across countries
The comparison of average ratings of the relevance of action clusters showed differences between the priorities of countries with lower and higher HDI rankings. Countries with lower HDI rated clusters in the domain of “Political awareness” most highly, whilst countries with higher HDI gave the highest ratings to clusters in the domain of “Technical steps”. The greatest differences in perceived relevance across groups were found in the clusters “Putting femicide on the public agenda”, “Awareness raising on data collection”, and “Media coverage”; t-tests showed that these differences were significant, with t-values, degrees of freedom and significance levels of (-3.01, 14, p<0.01), (-3.59, 14, p<0.05), and (-5.18, 8, p<0.001), respectively. Both groups agreed on the lower relevance of the clusters “Qualitative follow-up”, “Triangulation”, “Data collection structure” and “Definition”. Both groups also agreed on the very high relevance of “Institutionalizing national databases”. Actions belonging to the “Media coverage” cluster were rated least relevant by experts from countries with higher HDI, and among the most relevant by those from countries with lower HDI. Despite these variations in perceived relevance, the sub-groups’ assessments of feasibility were very similar (Fig 2).
The clusters with significant difference in the domain of “Political awareness” are highlighted in bold.
Discussion
The strategies generated through the brainstorming step provided a broad range of recommendations that may be applied within and across different countries to improve the availability, collection and monitoring of femicide data. These span political will, specific technical requirements, and the involvement of different actors: governments, mass media, police bodies, courts and the professionals in charge of identifying, registering and monitoring cases. Priority clusters of actions were also identified within this range of strategies; according to the experts’ assessment, “Institutionalizing national databases” was found to be the most relevant, while “Media coverage” was rated the most feasible. Variation in ratings across countries indicated differing perceptions of priorities based on countries’ current situations.
As has been observed for other health information system issues [38, 39], the experts’ responses reflected that promoting and implementing concrete changes requires not only technical steps but also socio-political processes. Strategies related to the latter emerged despite the explicit emphasis on data collection on femicide in the initial focus question. The connection between technical change and socio-political processes is also reflected in the interrelationship among actions related to institutionalizing national databases, defining the variables to be collected, and the quality of data collectors. In order for registers to function continuously, they have to be institutionalized, which depends on political will backed by adequate state funding. Once registers are institutionalized, indicators can be defined, and priority can be given to those indicators that differentiate the several forms of femicide among women’s homicides. The definition of variables and indicators, as has been shown in the implementation of health information systems and in the harmonization of European registers, is strongly connected to the ability to collect valid and reliable data describing the extent of femicide, its context, and the background of risk factors, so that action lines for its prevention can be established.
Feasibility ratings were lower overall than relevance ratings. Experts in this study may have clear ideas about the actions needed to ensure improved data collection on femicide (relevance), but they also have experience of the difficulties of achieving these changes (feasibility). Challenges include existing structures of policies and data collection systems, in which those responsible may not see the importance of collecting this kind of data, or may lack the necessary training, budget or statistical data collection systems to do so. The difficulty of getting different agencies to cooperate to produce these data is another important challenge, reflected in the low feasibility scores given to “Definition”. Despite the relatively high perceived relevance of actions to reach consensus on a definition of femicide, experts are aware that it will be very difficult to harmonize definitions, and also to obtain enough information on each case of female homicide to allow data collectors to classify it as femicide.
In countries with lower HDI rankings, “Political awareness” was considered the most relevant and necessary first step; in countries with higher HDI rankings, the “Technical steps” were more relevant. This result could perhaps be explained by the observation that in countries with lower HDI, violence against women and femicide are not prioritized in policies or public discourse. In Portugal, for example, as well as in most East-European countries, women’s civil society groups addressing VAW formed more recently than those in, for example, the UK, where women’s groups and public awareness of this problem date back to the early 1970s. Based on this pattern, it is expected that experts from the former group of countries would perceive the relevance of actions to heighten public awareness and strengthen the political will for data collection activities more strongly than experts from the latter.
The results of this study should be interpreted in light of several limitations. The final samples of experts in both the brainstorming and the sorting/rating steps do not fully represent all countries of the European region. Although we asked 70 professionals from all countries involved in the COST Action “Femicide across Europe”, only some of them accepted the invitation, so we probably ended up engaging the participants most interested in the topic. Unfortunately, it was not possible to recruit a representative from Finland or France, where examples of femicide-related intimate partner violence and domestic violence data registries have been further developed [17, 19]; it would be important to gain their perspectives in future research. We conducted this concept mapping study with members of the COST Action because of their professional experience with the topic of femicide, but this expertise was not always focused on data collection systems. However, this profile may also be considered a strength, given their understanding both of issues surrounding the development of femicide data collection systems (such as those related to political action) and of technical aspects. The ecological perspective of this study limits the transferability of our results to the specific situation of each country; future research should be conducted within each country’s specific context.
Among the strengths, the use of the concept mapping method must be highlighted, as it allowed us to structure and rate relevant aspects of a complex topic in a very short time and to integrate expert opinions from different countries. It thus contributed directly to clear and manageable scientific results that may provide an important basis for improved data collection systems. The timing of the study is also a strength. Until the establishment of the COST Action IS1206 “Femicide across Europe” in mid-2013, European agencies had never recognized the lethal act of femicide as a separate topic, although they had funded initiatives on gender and violence in which femicide was a side topic. Now, with COST Action IS1206 having operated for the past two years, the phenomenon of femicide in Europe is entering the public agenda and intermeshing with that of global institutions such as ACUNS (Academic Council on the United Nations System) and EIGE (European Institute for Gender Equality).
In conclusion, the results of this study provide a concrete plan for the next (political and technical) steps to be taken to improve data collection and monitoring on femicide in and across European countries: institutionalizing a national database on femicide, agreeing on a minimum set of variables to be collected in each case, and investing in the training of the professionals in charge of collecting the data. Furthermore, the expert assessments revealed that implementing and sustaining femicide data collection systems entails not only technical data collection but also firm political commitment. Once in place, the evidence produced can contribute to increased public awareness and demand for a public health sector response, as has been done with IPV, as well as providing concrete information on risk factors and risk groups to guide police, legal, educational, and political forces in developing prevention strategies and services.
Supporting Information
S1 File. Complete list of strategies with individual ratings, organized by cluster.
https://doi.org/10.1371/journal.pone.0148364.s001
(DOCX)
Acknowledgments
To the members of the COST Action “Femicide across Europe” who participated in this study. It would not have been possible without their generous contributions.
Author Contributions
Conceived and designed the experiments: CVC IG. Performed the experiments: CVC IG AH BS. Analyzed the data: CVC IG AH BS. Contributed reagents/materials/analysis tools: CVC IG AH BS. Wrote the paper: CVC IG AH BS AG AB MS HS. Made substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work: AND. Drafted the work or revised it critically for important intellectual content: AND. Final approval of the version to be published: AND. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved: CVC IG AH BS AG AB MS HS.
References
- 1. Laurent C, Platzer M, Idomir M. Femicide: A Global Issue that Demands Action. Academic Council on the United Nations System (ACUNS) Vienna Liaison Office; 2013. Available: http://www.genevadeclaration.org/fileadmin/docs/Co-publications/Femicide_A%20Gobal%20Issue%20that%20demands%20Action.pdf.
- 2. Russell DE. Defining femicide and related concepts. Femicide in global perspective. 2001:12–28.
- 3. Academic Council on the United Nations System. New 2nd Edition of Femicide: A Global Issue that Demands Action. Vienna Liaison Office; 2014. Available: http://acuns.org/new-2nd-edition-of-femicide-a-global-issue-that-demands-action/.
- 4. Stockl H, Devries K, Rotstein A, Abrahams N, Campbell J, Watts C, et al. The global prevalence of intimate partner homicide: a systematic review. Lancet. 2013;382(9895):859–65. pmid:23791474.
- 5. Nasrullah M, Haqqi S, Cummings KJ. The epidemiological patterns of honour killing of women in Pakistan. Eur J Public Health. 2009;19(2):193–7. pmid:19286837.
- 6. Rastogi M, Therly P. Dowry and its link to violence against women in India: feminist psychological perspectives. Trauma Violence Abuse. 2006;7(1):66–77. pmid:16332983.
- 7. Porter T, Gavin H. Infanticide and neonaticide: a review of 40 years of research literature on incidence and causes. Trauma Violence Abuse. 2010;11(3):99–112. pmid:20554502.
- 8. George SM. Millions of missing girls: from fetal sexing to high technology sex selection in India. Prenat Diagn. 2006;26(7):604–9. pmid:16856224.
- 9. Banks E, Meirik O, Farley T, Akande O, Bathija H, Ali M, et al. Female genital mutilation and obstetric outcome: WHO collaborative prospective study in six African countries. Lancet. 2006;367(9525):1835–41. pmid:16753486.
- 10. Kubai A, Ahlberg BM. Making and unmaking ethnicities in the Rwandan context: implication for gender-based violence, health, and wellbeing of women. Ethn Health. 2013;18(5):469–82. pmid:23998330.
- 11. World Health Organization and Pan American Health Organization. Understanding and addressing violence against women. 2012.
- 12. Radford J, Russell DE. Femicide: The politics of woman killing. Twayne Pub; 1992.
- 13. Geneva Declaration Secretariat. Global burden of armed violence 2011. Cambridge Books. 2011.
- 14. Krug EG, Mercy JA, Dahlberg LL, Zwi AB. The world report on violence and health. Lancet. 2002;360(9339):1083–8. pmid:12384003.
- 15. Holder Y, World Health Organization. Injury surveillance guidelines. Geneva: World Health Organization; 2001. 80 p.
- 16. Eurostat. Death due to homicide, assault, by sex—2011. Available: http://epp.eurostat.ec.europa.eu/tgm/graph.do?tab=graph&plugin=1&language=en&pcode=tps00146&toolbox=type.
- 17. Corradi C, Stöckl H. Intimate partner homicide in 10 European countries: Statistical data and policy development in a cross-national perspective. European Journal of Criminology. 2014;11(5):601–18.
- 18. Stöckl H, Vives-Cases C, Sanz-Barbero B, Balica E, Baldry AC, Schröttle M. Issues in measuring and comparing the incidence of intimate partner homicide within and across European countries (Forthcoming).
- 19. Manjoo R. Femicide and feminicide in Europe. Gender-motivated killings of women as a result of intimate partner violence. New York: Alliance of NGOs on Crime Prevention & Criminal Justice, 2011.
- 20. Dobash RP, Dobash RE. Who died? The murder of collaterals related to intimate partner conflict. Violence Against Women. 2012;18(6):662–71. pmid:22831847.
- 21. Walby S. The Cost of Domestic Violence: Up-date 2009. Lancaster University, UK; 2009. Project of the UNESCO Chair in Gender Research, Lancaster University. Available: http://www.lancaster.ac.uk/fass/doc_library/sociology/Cost_of_domestic_violence_update.doc.
- 22. COST European Cooperation in Science and Technology. About COST. Available: http://www.cost.eu/about_cost.
- 23. COST European Cooperation in Science and Technology. Femicide Across Europe. COST Action IS-1206. Available: http://www.femicide.net/.
- 24. Kane M, Trochim WM. Concept mapping for planning and evaluation. Thousand Oaks, CA: Sage Publications; 2007.
- 25. Waltz TJ, Powell BJ, Matthieu MM, Chinman MJ, Smith JL, Proctor EK, et al. Innovative methods for using expert panels in identifying implementation strategies and obtaining recommendations for their use. Implementation Science. 2015;10(Suppl 1):A44. PubMed Central PMCID: PMCPMC4551718.
- 26. Trochim WM, Milstein B, Wood BJ, Jackson S, Pressler V. Setting objectives for community and systems change: an application of concept mapping for planning a statewide health improvement initiative. Health promotion practice. 2004;5(1):8–19; discussion 0. pmid:14965431.
- 27. Burke JG, O'Campo P, Peak GL, Gielen AC, McDonnell KA, Trochim WM. An introduction to concept mapping as a participatory public health research method. Qualitative health research. 2005;15(10):1392–410. pmid:16263919.
- 28. Concept Systems Incorporated. Concept Systems Software. Ithaca, NY: Concept Systems, Incorporated; 2013. Available: http://www.conceptsystems.com/.
- 29. Witkin BR, Trochim WW. Toward a synthesis of listening constructs: A concept map analysis. International Journal of Listening. 1997;11(1):69–87.
- 30. United Nations Development Programme. Human Development Reports. Human Development Index (HDI). Available: http://hdr.undp.org/en/content/human-development-index-hdi.
- 31. Hu Q-D, Zhang Q, Chen W, Bai X-L, Liang T-B. Human development index is associated with mortality-to-incidence ratios of gastrointestinal cancers. World journal of gastroenterology: WJG. 2013;19(32):5261. pmid:23983428
- 32. Rodriguez-Morales AJ, Castaneda-Hernandez DM. Relationships between morbidity and mortality from tuberculosis and the human development index (HDI) in Venezuela, 1998–2008. Int J Infect Dis. 2012;16(9):e704–5. pmid:22721701.
- 33. Shah A. A replication of the relationship between elderly suicide rates and the human development index in a cross-national study. Int Psychogeriatr. 2010;22(5):727–32. pmid:20497623.
- 34. Rodrigues-Junior AL, Ruffino-Netto A, de Castilho EA. Spatial distribution of the human development index, HIV infection and AIDS-Tuberculosis comorbidity: Brazil, 1982–2007. Rev Bras Epidemiol. 2014;17 Suppl 2:204–15. pmid:25409649.
- 35. Roy A, Roe MT, Neely ML, Cyr DD, Zamoryakhin D, Fox KA, et al. Impact of Human Development Index on the profile and outcomes of patients with acute coronary syndrome. Heart. 2015;101(4):279–86. pmid:25538134; PubMed Central PMCID: PMC4345920.
- 36. Dumith SC, Hallal PC, Reis RS, Kohl HW 3rd. Worldwide prevalence of physical inactivity and its association with human development index in 76 countries. Prev Med. 2011;53(1–2):24–8. pmid:21371494.
- 37. Oliveira MG, Lira PI, Batista Filho M, Lima Mde C. [Factors associated with breastfeeding in two municipalities with low human development index in Northeast Brazil]. Rev Bras Epidemiol. 2013;16(1):178–89. pmid:23681334.
- 38. World Health Organization. Country health information systems: a review of the current situation and trends. World Health Organization; 2011.
- 39. Westra BL, Latimer GE, Matney SA, Park JI, Sensmeier J, Simpson RL, et al. A national action plan for sharable and comparable nursing data to support practice and translational research for transforming health care. J Am Med Inform Assoc. 2015. pmid:25670754.
- 40. Castillo-Salgado C. Módulos de principios de epidemiología para el control de enfermedades. 2a edición. OPS Washington. 2002:52–62.
- 41. World Health Organization. Monitoring, evaluation and review of national health strategies: a country-led platform for information and accountability. World Health Organization, 2011.
- 42. Dawson M, Gartner R. Differences in the characteristics of intimate femicides the role of relationship state and relationship status. Homicide studies. 1998;2(4):378–99.
- 43. Doiron D, Burton P, Marcon Y, Gaye A, Wolffenbuttel BH, Perola M, et al. Data harmonization and federated analysis of population-based studies: the BioSHaRE project. Emerg Themes Epidemiol. 2013;10(1):12. pmid:24257327; PubMed Central PMCID: PMC4175511. | https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0148364 |
As a Test Engineer within the Vehicle Systems department, you will be heavily involved in Research & Development, along with customer support for new and existing Haldex products.
Responsibilities
- Work in accordance with Haldex Health & Safety procedures.
- Carry out the installation, maintenance and test of prototype and production systems at MIRA and, when required, at customer premises in the UK and abroad.
- Installation, calibration and operation of Haldex instrumentation.
- Drive commercial vehicles for the purposes of R&D, endurance testing and system monitoring.
- Work as part of a team responsible for carrying out scheduled safety inspections of a varied fleet of complex test vehicles with integrated electronic systems and performance-monitoring computers.
Required skills & experience
- A recognised engineering qualification, or relevant experience
- Degree in automotive / mechanical engineering
- Fully qualified LGV technician
- Driving licence:
  - Full LGV (essential)
  - Clean licence (desirable)
  - Driver CPC (desirable)
- Awareness of legislation relating to the use of an LGV on public roads
- Capable of carrying out routine inspections of trucks & trailers in accordance with established company procedure and VOSA/manufacturer requirements
- Commercial vehicle workshop experience
- Understanding of:
  - Brake system fundamentals
  - Brake system diagnosis and repair
  - ABS & EBS (anti-lock braking / electronic braking systems)
  - ASR (traction control systems)
  - ESP (stability control)
- Able to diagnose faults by inspecting/testing the vehicle, and by the use of laptop diagnostic software, pressure gauges, rolling roads, multimeters, etc.
- Understanding/awareness of truck & trailer diagnostic systems
- Computer literacy:
  - A good understanding of Microsoft Office
  - An understanding of vehicle-based data logging systems
- Fork lift truck licence (preferred but not considered essential)
For further information, please contact: | http://corporate.haldex.com/en/work-at-haldex/job-opportunities/test-engineer-vehicle-systems |
Last Updated on October 17, 2022
You may be wondering… is Warframe down right now?
Warframe is a free-to-play co-op action game developed by Digital Extremes. This shooter is available on PC, Xbox One, PlayStation 4, and Nintendo Switch. There are hundreds of thousands of players worldwide across global servers.
If you want to know if Warframe is down, there are several ways to check its server status.
Is Warframe Down? How To Check Warframe Server Status
This article covers five different ways to monitor the game’s server status.
How To Check Warframe Server Status
These are some ways to check the current status of the Warframe servers. Most are websites that track the status of the game’s servers, usually by gathering reports from players who have experienced the same issue.
1) Check ServicesDown.com
ServicesDown is a website that displays the status of several online services. It can show you whether Warframe’s servers have been down over the past 24 hours. This Warframe server status checker accepts reports for servers hosted in the United States, United Kingdom, Mexico, India, Australia, Spain, and Colombia.
It updates in real-time whenever someone submits a report, and you can also read the latest comments from people who have checked the servers.
Check ServicesDown to see Warframe’s status.
2) Check Outage.report
Outage.report helps you find out whether your favourite game is down or experiencing service interruptions.
It features a Warframe Outage Map and a list of areas where there may be server issues. There’s also an Outage History checker to see the dates on which Warframe players have reported the most outages.
Visit Outage.report to check Warframe’s status.
3) Check IsItDownRightNow
Isitdownrightnow.com is an online platform that offers a fast and easy way to check the status of game servers. Instead of relying on reports submitted by players, it has a built-in web tool that attempts to open a connection to the server and, from there, determines whether the Warframe servers are available.
Among the websites on this list, IsItDownRightNow is one of the more reliable for checking Warframe’s status history. Unlike the aforementioned tools, it also displays the response time when accessing the server, helping you gauge its speed as well.
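Under the hood, a checker like this does something quite simple: it opens a connection to the site and times the response. Here is a minimal sketch in Python of that kind of check (the URL and timeout are illustrative, and a failed connection only tells you the site is unreachable *for you*, not that the game servers are down for everyone):

```python
import time
import urllib.error
import urllib.request

def check_status(url: str, timeout: float = 5.0):
    """Try to open `url`; return (reachable, elapsed_seconds).

    Any network error (DNS failure, refused connection, timeout,
    HTTP error) is treated as 'down for us', which is roughly how
    the status-checker sites report it.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True, time.monotonic() - start
    except (urllib.error.URLError, OSError):
        return False, time.monotonic() - start

# Example (requires internet access):
# up, seconds = check_status("https://www.warframe.com")
# print(f"Reachable: {up}, response time: {seconds:.2f}s")
```

If the function returns `False`, try another network (or one of the sites above) before concluding there is a real outage.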
Visit Isitdownrightnow.com to determine if Warframe can be played.
4) Check Latest Twitter Updates
You can also check whether the Warframe servers are down by heading to Twitter and sorting through the #Warframe hashtag. It’s often seen as a more personal way of receiving notifications about outages, since the updates come directly from players, although it is not always the most reliable.
With over 400 million users using the site, and nearly 500,000 players following Warframe’s page, you can get real-time messages from other fans in a fast and easy manner.
Visit Warframe’s official Twitter page @PlayWarframe for more direct updates regarding their server availability.
5) Check Warframe Forums
You can also raise your concern in the various communities and forums centred on Warframe. The game’s community is active in these forums, so you can get updates as soon as they become available.
Some active forums that you can visit are:
- Warframe Forums
- /r/Warframe subreddit
- Warframe Steam Community
From there, sort the latest threads and posts by “New” and see whether there’s chatter about the game’s servers. If other community members are voicing the same complaints, you have your answer: the servers are down.
British-Trinidadian artist Zak Ové’s work Invisible Man has been selected as part of Christie’s October exhibition and private sale Bold, Black, British in partnership with curator Aindrea Emelife.
The exhibition, Bold, Black, British, is a gathering place for artists from many disciplines and generations, demonstrating the importance of Black Britons in shaping the country’s artistic scene. It brings together pioneers of the renowned British Black Arts Movement of the 1980s with the next generation of exceptional Black British artists. Spanning history and legacy, the show allows visitors to immerse themselves in the contributions of artists who are frequently overlooked, while highlighting the depth and verve of Black art.
Among Zak Ové’s most recognizable works is The Invisible Man, an installation originally made for Somerset House in London comprising a phalanx of 40 six-foot-tall graphite figures (based on a small wooden sculpture from Kenya) arranged in a military-style formation.
The work is a rebuke to The Masque of Blackness, a Jacobean-era masque written in 1605 by Ben Jonson for Queen Anne of Denmark, in which the masquers, appearing in blackface makeup, were disguised as Africans and were to be “cleansed” of their blackness by King James. Ové spoke about the installation in situ when it was exhibited in the B. Gerald Cantor Sculpture Garden at LACMA, Los Angeles, CA, alongside works by Auguste Rodin.
British-Trinidadian artist Zak Ové was born in 1966 in London. He earned a BA in Film as Fine Art from St. Martin’s School of Art (1984–1987). His multi-disciplinary practice across sculpture, film and photography responds to his heritage and to notions of identity through the lens of multiculturalism and dialogues between past and future. His work is informed in part by the history and lore carried through the African diaspora to the Caribbean, Britain and beyond, with a particular focus on traditions of masking and masquerade as tools of self-emancipation.
Great, easy-to-understand book. I learnt the alphabet in the first half hour, and I had never learnt sign before.
-
Really clear and concise. I’m at the end of my BSL 101 course with Signature and got this book to improve my knowledge within my workplace.
-
excellent
-
Great help when learning BSL
-
This was a great book, clearly showing the signs and grouping them into easy-to-find categories, although there is an index too. A great book to add to my collection.
-
Handy for my job. I use BSL but forget it if not used for a while, so this is a nice pocket-sized reminder.
The historical confusion (a word that appears much too often) arises from the fact that these two parts of the brain must, in many ways, be functionally redundant. There is a seminal paper* by Shiffrin & Schneider (1977) which demonstrates the true nature of this redundancy. S&S conclude that the conscious mind processes information serially, and is limited in the number of chunks it can process at any one time, while the unconscious mind processes information concurrently (in parallel) and suffers no such limit on the number of processes.
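The signature behavioural result can be caricatured in a few lines of code: in controlled (conscious, serial) search, response time grows with the number of comparisons to be made, whereas automatic (unconscious, parallel) search stays roughly flat as load increases. The parameters below are purely illustrative and are not fitted to S&S's data:

```python
def controlled_search_ms(memory_set: int, display_size: int,
                         per_comparison_ms: float = 40.0) -> float:
    """Serial, capacity-limited search: each item held in memory is
    compared against each item on the display, one comparison at a time,
    so time scales with the product of the two set sizes."""
    return per_comparison_ms * memory_set * display_size

def automatic_search_ms(memory_set: int, display_size: int,
                        base_ms: float = 400.0) -> float:
    """Parallel, well-practised search: load has (almost) no effect."""
    return base_ms

for m, d in [(1, 1), (2, 2), (4, 4)]:
    print(f"load {m}x{d}: controlled {controlled_search_ms(m, d):.0f} ms, "
          f"automatic {automatic_search_ms(m, d):.0f} ms")
```

The crossover this toy model produces (controlled search becoming slower than automatic search as load rises) is the qualitative pattern the serial/parallel distinction is meant to capture.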
Everyone has experienced the 'autopilot' phenomenon, for example when driving to work. You have driven the route so many times, usually listening to the radio for news or music, that you hardly ever remember the individual events along the way. Your sensori-motor (behavioural) apparatus has been 'hi-jacked' by your cerebellum, a 'robotic', essentially emotionless** (and therefore unconscious) biocomputer. Consequently, you remember little or nothing, because emotional arousal is an essential pre-requisite of most permanent memory formation.
However, when you encounter novel situations, or situations whose relevance level must remain at significant levels, your (conscious) cerebrum is the one that is engaged. Thus, your long-term memory is kept open to the possible formation of permanent template patterns which may later prove useful or vital, by virtue of your mind being kept in this mode.
Consider the 'Stroop' test (as featured in a brain imaging experiment**), in which words naming colors are printed on slides in colored type. For example, the word 'GREEN' will be printed in a RED typeface and displayed for only a short time. The required answer from the subject is "red", not "green". However, the subject's automatic, reflexive response is to say "green", i.e. simply to read the word out as is.
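As a concrete illustration, here is a toy generator for Stroop trials (the color set is my own choice, not taken from any particular experiment). The point is that the correct response is always the ink color, which on incongruent trials conflicts with the word itself:

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def stroop_trial(congruent: bool, rng: random.Random):
    """Return (word, ink_color, correct_response) for one trial.

    On an incongruent trial the word names one color but is printed
    in another; the required response is the ink color, so the
    automatic tendency to read the word must be suppressed.
    """
    word = rng.choice(COLORS)
    ink = word if congruent else rng.choice([c for c in COLORS if c != word])
    return word.upper(), ink, ink

rng = random.Random(1)
print(stroop_trial(True, rng))   # congruent: word and ink always match
print(stroop_trial(False, rng))  # incongruent: word and ink always differ
```

In TDE terms, the incongruent trials are exactly the cases where the cerebrum must override the cerebellum's well-learned reading reflex.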
In TDE theory (my theory of mind, named after its basic computational type), the conscious brain (cerebrum) must make a voluntary judgement to SUPPRESS the automated, well-learned reflexive response produced by the unconscious brain (cerebellum). In Stocco et al.'s paper (co-authored by LeBiere and Anderson, the inventors of ACT-R), there is no such clear idea of a functional split between cerebrum and cerebellum; in fact, the word 'cerebellum' does not occur in its text. Yet the paper makes no large error of data interpretation: it (correctly) states that "a specific network of regions (including the left rostral prefrontal cortex, the caudate nucleus, and the bilateral posterior parietal cortices) is responsible for translating rules into executable form". Since the phenomenon dealt with is linguistic, it is the left cerebrum which has been analysed. However, like many if not most recent experiments of this type, it simply fails to include the cerebellum in its purview.
The most fascinating part of the cerebro-cerebellar model is what it says about our relationship to the rest of the animal kingdom. We primates only have so much 'buffer' or 'working' memory available. It is true that we primates are able to make it work in a more efficient way than other animals - we have two working memories in each cerebral hemisphere, the parietal lobe (spatial buffer) and the frontal lobe (temporal(1)*** buffer), and the patterns we detect in one form are almost instantly available to the other. This means memorized spatial patterns (eg objects) are also available as memorized temporal patterns (eg processes), making analogical thinking easier.
But in as many ways as we are like our primate cousins, we are unlike them. We build mega-structures like termites, we swarm like ants, we build complex ritualistic nests like bower birds, and we can guide each other to remote locations with signs like bees. None of these abilities is exhibited by the other primates. They occur only in 'lower' animals which rely heavily on automatised, instinctual memory.
Consider our cerebellum (and its ancillary machinery, like the basal ganglia and the ascending and descending Reticular Activating Systems, or ADRAS) as our equivalent of an ancient instinctual framework. No other primate has a similarly sized or interconnected cerebellum. While our cerebrum has a severely limited 'chunk' capacity (see Miller's magic number 7), the memory capacity of a suitably trained cerebellum is effectively infinite. There are many people who have memorised entire books, and these people are not genetic freaks, just individuals with some extra time on their hands.
Far from lower animals being emotionless 'automata', as Descartes thought, it is in reality we humans who are closest in nature to robots. We alone amongst the animals have a symbolic computer**** nestled underneath our forebrains, capable of infinite machine-like precision.
* Shiffrin, R., and Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychol. Rev. 84: 127-190. | https://tde-r.webs.com/cerebrocerebellarform.htm |
Riders wait at the LATS downtown transfer center Thursday before boarding buses. LATS already has implemented safety measures to keep its buses safe for riders and drivers, but may be adopting other strategies as ridership increases.
Safety measures allowing LATS to continue operations
Adjustments have allowed Lawton’s mass transit system to continue operating in the age of COVID-19.
LATS operates six fixed routes in Lawton, meaning bus routes that pass the same fixed points once every hour, before returning to the downtown transfer center north of Central Mall. But the system has set new regulations in place designed to keep bus drivers and their passengers safe while providing much needed transportation to sites across the city and onto Fort Sill.
The COVID-19 pandemic has presented challenges and influenced passenger levels, said District Manager Ryan Landers.
“It’s definitely affected it (ridership) overall,” Landers said, estimating LATS has lost about 50 percent of its riders on its fixed routes.
The loss is even larger on the paratransit system, created to provide curb-to-curb service for qualified passengers (typically those who are elderly or who have medical conditions that make traveling on fixed route buses difficult) to specific destinations.
“It was a huge drop off in ridership for that,” Landers said, estimating a loss of almost 80 percent of riders in the early days of COVID-19.
There also was a loss for a specialty Fort Sill route that LATS created to replace a route that used to bring riders to and from the post. The system is similar to the paratransit service, where riders call and make arrangements for drop off at specific sites on post. Many of the riders for the shuttle service were employees who worked on Fort Sill and without a need to go to work, ridership on the post shuttle declined.
“People were not really going places, staying at home, which was good in the beginning,” Landers said, explaining LATS coped with that loss by reducing its fixed routes to one bus per hour (there were hourly clockwise and counterclockwise buses on all but the Yellow Routes; now, only the counterclockwise buses run).
That adjustment continues today, but LATS made other changes to ensure its bus operators are safe while working and passengers are safe when riding.
Adhering to social distancing guidelines, LATS limits its buses to a maximum of 10-12 people at a time. When ridership on a bus approaches eight to nine people (which may happen during peak hours between 8 a.m. and 3 p.m.), LATS will send out a second bus to accommodate the overflow.
“That’s how we dealt with it,” Landers said, adding it may be a long-term strategy. “We’re fine with doing it. I think social distancing probably will continue for a good long time.”
Limiting passenger loads will help protect drivers, he said.
“We’re taking a risk each time we’re out there,” he said.
In addition, bus operators wear masks and gloves, and are separated from passengers by a shower curtain being used as a “sneeze guard.” Buses are sprayed with disinfectant hourly and are deep cleaned every night.
“We’re taking precautions with things,” Landers said. “It has been a difficult couple of months.”
The first weeks were difficult, but ridership improved a little in mid-April when LATS began offering free rides. The fee structure resumed in June when city businesses began reopening, but ridership has been slow to increase.
“There still is hesitancy in riding transit, because it is a confined area,” Landers said.
The system has made some long-term plans, including a Phase 2 that will allow LATS to add some peak service to routes.
“We were hoping to start that in mid-July, but had to put that on hold until we were able to hire more operators,” Landers said, explaining the system lost eight employees during the pandemic. “We were able to absorb some of those issues, but it has been a struggle to get to Phase 2. Eventually, we’ll get back to full service. But, it’s difficult as long as social distancing is in place.”
That also has prompted the system to look at other ideas for expanding operations while keeping its drivers and passengers safe. One idea has been adopted by sister systems in Texas and Kansas: requiring passengers to wear masks. LATS hasn’t made that decision but is looking at it as ridership begins to increase.
“In the very beginning it was easier to operate, to a certain degree. Everyone was staying home,” he said, adding that as more residents need transportation, LATS will need to adapt. “We’re trying to take as many precautions as we can, but it is difficult.” | |
Photo: JIS Photographer. Minister of Finance and the Public Service, Hon. Audley Shaw.
Mr. Speaker, today I want to share with you the significant progress we have made together in our journey towards growth and prosperity and to outline the path forward.
Through great sacrifice, we have achieved levels of fiscal stability not seen in Jamaica for many decades. However, we must remember why this matters.
It is about improving lives; and stability on its own is not enough to do that. We have many more rivers to cross to achieve prosperity for all Jamaicans.
Attaining and sustaining fiscal stability, economic growth, and prosperity must become the shared goal and responsibility of all!
I am pleased to report that, as at the end of December 2017, the Government of Jamaica has successfully met all the quantitative performance criteria and indicative targets agreed with the International Monetary Fund (IMF) under the current Precautionary Stand-By Arrangement.
Passing these tests is important and necessary, but it is also insufficient. It is our focus and priority to ensure that we also pass the even more important test set by the people of Jamaica – I call it the People’s Test.
Mr. Speaker, I define the People’s Test as the test for a more prosperous society that the people can see, feel, and touch which includes a growing economy with high levels of investment, safer communities, higher standards of living, more and better job opportunities, improved roads and infrastructure – particularly in our rural areas, a lower tax burden, a stable exchange rate, increased opportunities for personal and professional advancement, and meaningful access to affordable housing as well as high-quality education and healthcare.
This is our focus and priority for the future – to build on the foundation of fiscal stability to achieve greater economic growth and prosperity, so as to ensure that we meet and exceed the targets set under the People’s Test!
Mr. Speaker, with this brief introduction, I want to turn your attention to the structure of my presentation today:
I. Review of the Economy for Fiscal Year 2017/18
II. Policy Priorities and Targets for Fiscal Year 2018/19
III. Highlights of the Expenditure Budget for Fiscal Year 2018/19
IV. Highlights of the Financing of the Budget for Fiscal Year 2018/19
V. Conclusion | |
I am my parents’ son
Every now and then I will say something in a certain way or act in a certain fashion that makes me stop and wonder when one of my parents took over my body. More often than not I find these instances to be pretty humorous. They also are powerful reminders of the profound influence that our parents can have upon us.
My parents are fairly private people – they don’t generally like it when a lot of attention is placed upon them. As a couple married for 57 years this year, they enjoy one another’s company and stay pretty close to home. At this point in their lives, their greatest pleasures revolve around home and family. Although they shy away from the limelight, I am mindful that many people have actually met my parents and have come to know them – in an interesting and unusual way.
My mom is a retired registered nurse and nursing educator. She had a very successful career in both fields prior to making the time to raise her family. She resumed her career as I entered my junior year in high school, and eventually retired as a well-respected professional educator. By her nature, my mom tends to be very sensitive to the emotions of others, and she has a very keen ability to read situations based on body language and a host of other non-verbal cues. She also is a very gifted teacher, able to take very complex concepts and make them easily accessible for people of all ages. Mom also is an accomplished musician and artist.
As is often the case with very strong couples, my dad’s nature and skills are very different from my mom’s, and yet Dad’s abilities are very complementary to Mom’s. Dad spent his professional career as an electrical and mechanical engineer. He has a very keen sense of appreciation for how things work. My sense is that Dad can fix or repair pretty much anything as long as he has some baling twine and chewing gum. As an engineer, Dad’s mind is highly trained in math and he can be very analytical in nature. He has an innate ability to evaluate situations and understand why things are the way they are. Dad also has a deep appreciation for history and is fascinated by how the items that are a part of our everyday lives have changed and developed through the centuries.
Although many people have not met my parents, they have met them through me. As I enter into mid-life, I have a growing and deepening appreciation for the many ways my parents have profoundly influenced my life. Who I am and the ways that I look at the world around me are a wonderful hybrid of so many aspects of my parents. I love to teach, and I can be very analytical. I deeply value people and emotions, but I am also fascinated by gadgets. I have my mother’s love for art and music, and my dad’s love for history and science. In becoming more aware of my parents’ influence upon my life, I also have the continued opportunity to honor my mother and father. I happily embrace these many aspects of my own life and I can now see how they are rooted in both who and how my parents are in the world. I also know that my parents share a profound love for God and deep faith. This, too, is a gift they have given me, and my choice to be a person of faith is also a way that I honor my mother and father.
It seems to me that how we honor our parents has to do with loving and respecting them as well as loving, respecting and appreciating the many ways they have influenced each of us to become who we are. With Mother’s Day and Father’s Day just around the corner, perhaps we have a new opportunity to thank God for the influence that our parents have upon us and how we, in turn, can honor them in how we live each day. And so, with thanks to God, Mom and Dad, our journey in FAITH continues. | https://faithmag.com/i-am-my-parents-son |
Smoked Haddock Omelette
This great tasting (and very popular) Smoked Haddock Omelette Recipe comes served with two classic sauces, a béchamel and a hollandaise. The only things you have to worry about with this dish are having enough little saucepans to make both sauces and getting your timings right when you want to bring everything together. Having made it, you will realise that all the fiddly little sauce stages and extra washing up are well worth the finished dish – it really is a recipe to impress. This is right up there with one of the best omelettes you can make, served in some of the best restaurants and gastro pubs in the South-East of England.
Béchamel Sauce: Also known as white sauce, this is one of the main sauces of French cuisine; it has travelled the world and is now used in many classic recipes, for example lasagne.
Hollandaise Sauce: It is so named because it was believed to have mimicked a Dutch sauce for the state visit to France of the King of the Netherlands. Hollandaise sauce goes fantastically well with egg dishes and is well known as a key ingredient of Eggs Benedict.
Smoked Haddock Omelette Recipe
Serves 4: this is a great starter served on its own, but served with fresh breads and a seasonal salad (with dressing) it can easily become a main meal.
Recipe Ingredients:
For the omelette filling
- 500g smoked haddock fillet
- 10 eggs
- 1 tsp sea salt
- 1 tsp freshly ground black pepper
- 50g butter
- 200g parmesan cheese, freshly grated
- 4 egg yolks
For the Bechamel Sauce
- 600ml milk
- 2 whole cloves
- 2 bay leaves
- handful of fresh parsley (including stalks)
- 1 onion, roughly chopped
- 30g butter
- 30g plain flour
For The Hollandaise Sauce
- 100ml white wine
- 200ml white wine vinegar
- 4 black peppercorns
- handful of fresh tarragon (including stalks)
- 3 shallots, finely chopped
- 2 egg yolks
- 250g butter – melted
Recipe Method:
Start the bechamel: Put the milk, cloves, bay leaves, parsley and onion into a saucepan and bring it up to the boil, simmer for two minutes, then turn off the heat. Leave to infuse for 30 minutes.
Start the hollandaise: Put the white wine, white wine vinegar, black peppercorns, tarragon and finely chopped shallots into a saucepan. Bring this up to the boil and then turn the heat down to a simmer to reduce the liquid right down to one-third. Once reduced, leave to cool.
Intermediate stages:
Put the beginnings of the bechamel sauce back on to a medium heat after it has had 30 minutes to infuse. Once the milk is at a simmer place the smoked haddock fillets into the milk, and heat through for one minute. After a minute turn the heat off, put the lid on the saucepan, and let the fish fillets cook through in the milk as it cools. Once cooked through and cooled remove the haddock from the milk (bechamel starter) and flake the fish into a bowl and reserve for later.
Finish the bechamel: Pour the infused milk (bechamel starter) through a sieve into a small saucepan to remove the herbs, spices and onion. Heat this cleared milk to a simmer. In another small saucepan melt the 30g of butter and stir in the 30g of flour. Stir and cook the flour out until the mixture becomes straw coloured and makes a ‘roux’ (approx 4 minutes) – take the pan off the heat and slowly stir in the infused milk, combining with the roux so that there are no lumps. Put the pan back on to a high heat and stir continuously to thicken the bechamel sauce. Once thickened, remove from the heat and reserve.
Finish the hollandaise: In a small pan melt the butter and reserve for later. In another, larger saucepan, add a few inches of water and put it on to a medium heat. Over the saucepan place a heat-proof mixing bowl so that it sits comfortably in the saucepan but above the water, so it is heated only by the steam. Beat the 2 egg yolks in the bowl, then whisk in the shallot and white wine reduction. Into this slowly pour the melted butter, a little at a time. Add more melted butter when the previous amount has been well incorporated and the mixture has thickened. Once thickened, remove from the heat and reserve.
Make The Omelette:
Preheat the oven grill and warm 4 serving plates.
In a mixing bowl beat together the 10 eggs and season with the salt and pepper.
In another mixing bowl beat in the 4 egg yolks and mix in equal amounts of bechamel sauce and hollandaise sauce, 8 tablespoons each. Mix this combined sauce together.
Note: We need to make 4 omelettes and keep them warm until all 4 are made and can be served together at the same time – so we need to work quickly and carefully.
In a small frying pan melt a knob of butter and pour in a quarter of the seasoned beaten egg mixture. Cook the omelette to about two-thirds of the way through, keeping the centre quite runny.
Once at this stage spoon over the omelette a quarter of the flaked smoked haddock and sprinkle over a quarter of the freshly grated parmesan cheese. Spoon over the haddock and parmesan cheese a good tablespoon or two of the combined egg yolk, bechamel and hollandaise sauce. Leave the omelette open faced (without folding in half) and place onto a warmed serving plate. Repeat this to make the other three omelettes.
At stages (while making the other omelettes) glaze each of the previous omelettes under the preheated grill until they are golden and bubbling (do not let them burn). When all 4 are done, serve the Smoked Haddock Omelette straight away.
Using Cacao to Catalyze Development: Productivity drivers and technology adoption amongst smallholder farmers in Montes de María, Colombia
Author
Williams, Kalob
Abstract
Smallholder farmers produce a large portion of the world’s total food supply, but are often times limited by economic, social or demographic factors that larger farmers find easier to overcome. The body of literature surrounding smallholder farmer crop production is large and addresses a wide range of topics, from gender equality to agronomic considerations. This thesis expands this body of literature by adding a two-step approach that examines what makes some smallholder farmers more productive than others, focusing on the case of cacao. Step one determines which production technologies have the strongest relationship with yields amongst a certain group of farmers, and step two determines which socioeconomic and demographic factors have the largest impact on the adoption of the technologies identified in step one. We use cross-sectional survey data from 277 smallholder cacao producers in the Montes de María region of northern Colombia to carry out this process. Based on the findings, we make recommendations that are useful to association leaders and government technicians in the area, who are interested in promoting cacao as an engine for regional economic development. We find that harvest intensity and fertilizer use have strong positive relationships with yields, whereas herbicide use exhibits a strong negative relationship with yields. Our results suggest that association membership status and the number of buyers a farmer sells to have causal relationships with cacao yields, which are mediated positively through an increase in harvest intensity. Finally, we find that formal and informal training are highly associated with the adoption of production technologies, but that formal training seems to be more strongly related to adoption of pruning, grafting, herbicide use and pesticide use, while informal training is more strongly related to increases in fertilizer use and harvest intensity.
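The two-step approach the abstract describes can be sketched numerically: step one regresses yields on production technologies, and step two regresses adoption of a technology that mattered in step one on farmer characteristics. The sketch below uses synthetic data with hypothetical variable names and coefficients (chosen to mirror the reported signs, not the thesis's actual estimates, dataset, or structural equation models).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 277  # matches the survey's sample size

# Step 1: which production technologies relate to yields?
# Synthetic "true" effects mirror the reported signs: harvest intensity
# and fertilizer positive, herbicide negative.
harvest_intensity = rng.uniform(0, 1, n)
fertilizer = rng.uniform(0, 1, n)
herbicide = rng.uniform(0, 1, n)
yields = (2.0 * harvest_intensity + 1.0 * fertilizer
          - 1.5 * herbicide + rng.normal(0, 0.1, n))

X1 = np.column_stack([np.ones(n), harvest_intensity, fertilizer, herbicide])
beta1, *_ = np.linalg.lstsq(X1, yields, rcond=None)
# beta1[1] > 0, beta1[2] > 0, beta1[3] < 0 recovers the assumed signs

# Step 2: which farmer characteristics predict adoption of a step-1
# technology (here: fertilizer use)? Informal training is given the
# larger synthetic effect, echoing the abstract's finding.
formal_training = rng.integers(0, 2, n)
informal_training = rng.integers(0, 2, n)
fertilizer_use = (0.3 * formal_training + 0.5 * informal_training
                  + rng.normal(0, 0.1, n))

X2 = np.column_stack([np.ones(n), formal_training, informal_training])
beta2, *_ = np.linalg.lstsq(X2, fertilizer_use, rcond=None)
print(np.round(beta1, 2), np.round(beta2, 2))
```

The thesis's actual analysis uses structural equation modeling to handle mediation (e.g. association membership acting on yields through harvest intensity); plain OLS is used here only to make the two-step logic concrete.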
Date Issued: 2019-05-30
Subject
Technology Adoption; Agriculture; Agriculture economics; Agricultural Development; Agricultural Production; Cacao; Smallholder Farmers; Structural Equation Modeling
Committee Chair
Gomez, Miguel I.
Committee Member
Lee, David R. | https://ecommons.cornell.edu/handle/1813/67340 |
The Dark Side of Every Personality Type. Everybody’s personality is different, so we all bring our own distinctive strengths and weaknesses.
Some of us work better with other people, while some of us prefer to work alone.
Some people like being challenged, while others feel better when they can settle into a routine. It all depends on the person, their preferences, and how they feel about the different situations they are confronted with.
These differences in personality traits are often categorized by the 16 Myers-Briggs personality types. Evaluating someone’s Myers-Briggs personality type requires examining them on four factors: Introverted vs. Extroverted, Sensing vs. Intuition, Thinking vs. Feeling, and Judging vs. Perceiving. All of these traits come together to paint a clearer picture of one’s personality.
Often, you are in a situation you have issues handling. It can be a challenging coworker or supervisor, a relationship companion who doesn’t look since suitable since you may have actually when thought, or a friend or friend this is certainlyn’t getting as supportive because you can have-been wanting they might feel.
No real matter what the problem are, it may force you to unleash their “dark area,” which could reveal itself in a variety of steps. Perhaps you have furious and throw facts on walls. Maybe you begin weeping and need their alone times. Or maybe the silent procedures is literally your method of option. Either way, when someone try powered their breaking aim, something has to be accomplished.
There are plenty of situations that could reveal the worst in visitors. Even more important, how exactly does your own actions modification while you are faced with these unpleasant conditions, and just what outcomes could these variations have actually on your existence?
Keep reading to discover more regarding the worst identity characteristics for each and every Meyers-Briggs individuality sort and what forms of effects they were able to create.
ISTJs are very organized, but this can almost be to a fault, especially if you mess with their system. Don’t be surprised if you find them spending countless hours cleaning their whole house or reorganizing their entire filing system (even if they had more important things to get done).
ISFJs are all about making people happy and keeping the peace. Sometimes, this can get in the way of actualizing their own happiness or reaching their own goals. If they’re too focused on keeping harmony with others, they may end up compromising themselves in the process.
INFJs are always looking for meaning in life. They’re very innovative and creative, and they often apply these skills to shape their worldview. This can be tricky when it comes time to be practical about real-world problems and creative solutions aren’t going to help.
INTJs can be suspicious, especially if you’ve given them a reason not to trust you. They already hold the people in their lives to high standards, so if you don’t meet their expectations, it may be difficult to gain or regain their trust.
ISTPs have a very logical way of approaching problems, so they have trouble dealing with situations that don’t seem to add up. If they can’t find the logic in something, it wouldn’t be a surprise to see them getting frustrated or confused.
ISFPs love to live in the moment, so they aren’t naturally skilled at planning for the future or reflecting on the past. This can spell disaster if they’re unable to learn from their mistakes or if they can’t take the necessary action steps to reach their goals.
INFPs have a very strong sense of personal values, and they’ll get extremely upset and defensive if these are challenged. Feeling threatened by someone may just send an INFP into a tailspin.
For INTPs, social interaction isn’t really one of their strong suits. Instead, they’re known to constantly analyze ideas and think critically about their world. Though this can be effective in certain situations, it could be a problem when they need to relate to people.
ESTPs are spontaneous, which can easily get them into trouble. If they act on a whim, they may be putting themselves in danger. This is especially problematic if they’re in a new place or with unknown people, since there are more factors to consider when ESTPs put themselves at risk.
ESFPs thrive when they’re able to collaborate with others, so having to work alone may pose a challenge for them. It’s important for ESFPs to develop their skills independently if they want to handle a problem on their own.
ENFPs are supportive of those they care about, and they expect the same in return. If someone isn’t giving them the credit they feel they deserve, they may feel slighted and get upset.
ENTPs hate being stuck in a routine; they’re always looking for new experiences, people, and places. It’s very likely that this feeling manifests itself as a bad attitude toward work, especially when work feels tedious.
ESTJs go into almost every situation with a plan, and they sometimes use force to get people to help execute their plan. They’ll run into trouble if their plan gets thrown off course, either by someone they’re working with or an external event that changes the circumstances.
ESFJs feel better when they have strong relationships with others; the ideal person in their life would be someone they could collaborate with, rely on, and feel supported by in tough situations. If they don’t have this kind of relationship, or if someone they are close to wrongs them, this can create discord in their lives.
ENFJs love to lead others, but they may not always be able to do so, especially if someone tries to remove or threaten their power. Someone who attempts to usurp their authority will quickly get on their bad side.
ENTJs usually think for the long term, so they may have trouble handling the day-to-day operations needed in many facets of life. This would be most challenging if they find themselves in a highly deadline-driven situation.
Browsing University of Alaska Fairbanks by Subject "Tangle Lakes National Register Archaeological District"
Prehistoric toolstone procurement and land use in the Tangle Lakes Region, central Alaska
This project explores prehistoric human mobility and landscape use in the Tangle Lakes region, central Alaska through analyses of toolstone procurement and manufacture conditioned by site function. Early Holocene Denali and middle Holocene Northern Archaic traditions are hypothesized to have different tool typologies, subsistence economies, and land use strategies. However, few large, systematic studies of toolstone procurement and use have been conducted. At a methodological level, archaeologists have struggled to quantitatively source non-igneous cryptocrystalline toolstone, which often makes up the largest proportion of archaeological lithic assemblages. These problems were addressed by developing rigorous chemical methods for statistically assigning lithics from Tangle Lakes assemblages to (a) two known local toolstone quarries, (b) materials within the Tangle Lakes region, and (c) non-local materials. Lithic technological and geospatial analyses were used to evaluate toolstone procurement, manufacture, and use within sites. Lithic samples from four archaeological components located at different distances from their nearest known quarry sources were used to address the research problems. The archaeological samples were derived from a Denali complex hunting site (Whitmore Ridge Component 1) and three Northern Archaic assemblages: a residential site (XMH-35), a tool production site (Landmark Gap Trail) and a hunting camp (Whitmore Ridge Component 2). Chemical results indicate that cryptocrystalline material in Tangle Lakes assemblages can be statistically assigned to primary source locations, and visual sourcing of this material is entirely unreliable.
Lithic analytical results indicate that despite slight changes in mobility strategies for Denali and Northern Archaic populations, site function is the strongest conditioning factor for material selection and procurement strategies local to the Tangle Lakes region. Thus, this research provides (a) best practice methods for sourcing abundant cryptocrystalline material that has been precluded from most lithic sourcing studies, and (b) the data necessary to incorporate technological organization strategies of Tangle Lakes populations into the broader context of Denali and Northern Archaic behavioral patterns in Alaska.
Indian art museums and galleries are the treasure houses that preserve the rich heritage of Indian art and culture.
The Indian art museums and galleries are probably the most popular ones among tourists from all parts of the world. They contain all the important aspects of India's rich aesthetic heritage whilst displaying the wonderful facets of Indian art. In the Indian art museums and galleries, visitors can see and learn about Indian art and culture from ancient times, the medieval period and the present day.
The Indian art museums and galleries are spread across all the 28 states of India and are mainly situated in the major cities of the states. Some of the museums and galleries concentrate exclusively on the artistic heritage of their respective states. However, there are a large number of museums that deal with the heritage of the entire country and try to cover as many artistic areas as possible. Some of the old and premier Indian art museums and galleries also conduct research work to uncover new archaeological material from the ancient period. Sometimes, they organise workshops and seminars to discuss important aspects of Indian art and culture as well.
Among the numerous Indian art museums and galleries, there are a few that have established themselves as the most popular and prominent ones. These museums provide maximum information about the rich artistic heritage of India. Some of these Indian art museums and galleries include the Indian Museum, Kolkata, West Bengal; AP State Archaeology Museum, Andhra Pradesh; Academy of Fine Arts, Kolkata, West Bengal; Asutosh Museum of Indian Art, West Bengal; Baroda Museum & Picture Gallery, Gujarat; Bharat Bhavan, Bhopal; Calico Museum of Textiles, Gujarat; Jehangir Art Gallery; Government Museum and Art Gallery, Chandigarh; Museum and Picture Gallery, Gujarat; Kuthira Malika, Thiruvananthapuram; National Museum, New Delhi; The Arts Trust - Institute of Contemporary Indian Art, Mumbai; National Gallery of Modern Art, New Delhi; K. Sreenivasan Art Gallery & Textile Museum, Tamil Nadu; Art Gallery and Krishna Menon Museum, Calicut, Kerala; Karnataka Government Museum and Venkatappa Art Gallery, Karnataka; Jagdish & Kamla Mittal Museum Of Indian Art, Hyderabad; Museum of College of Arts and Crafts, Lucknow etc.
(Last updated: 28 January 2014)
A film, by Milton Moses Ginsberg, of Marshall painting a sacred monkey from start to finish.
A personal response by Marshall about America’s violent gun culture and the mass shooting at the Jason Aldean concert in Las Vegas, Nevada, on 10/2/17.
“No one does darkness quite like Marshall Arisman, an artist whose work explores the macabre in a way that is as emotionally arresting as it is technically impressive.”
Read the interview here.
“A ‘medium’ grandmother taught him to spot the midpoint between good and evil. In a New York gallery, a selection of Marshall Arisman’s timeless works depict our macabre reality.”
Watch the interview here.
An Artist’s Journey from Dark to Light
A 10-minute overview of my exhibition for friends who are not able to come to the gallery and see the show.
A chronological look at Marshall’s paintings and sculptures, 1968 to 2017.
Marshall’s next exhibition, An Artist’s Journey from Dark to Light, will be held this August at the SVA Chelsea Gallery.
A 13-minute video compilation of subway posters, illustrations, and books for my upcoming exhibition at the SVA Chelsea Gallery in the summer of 2017.
Marshall’s creative life was highly influenced by his grandmother—a noted and gifted medium who lived in a Spiritualist community called Lily Dale in upstate New York. A Spiritualist minister who was herself a talented artist, Louise Arisman presented her young grandson with the possibility that life everlasting was not simply a religious hope, but something she found proof of every day by communicating with those who had “passed over,” bringing messages back to the living. The fascination with that possibility has shaped his work as an artist.
This 55-minute documentary follows Marshall as he returns to Lily Dale, his grandmother’s home for more than 50 years and the site of the church she founded. Using archival footage and outtakes from a documentary on his work and process (Facing the Audience: The Art of Marshall Arisman, 2003), as well as photographs, interviews with current Lily Dale psychics, and new footage, the film explores the link between Marshall’s creative output and his early immersion in psychic phenomena.
Along the way, we discover some of the history of Spiritualism, and how it influenced the work of writers and artists in the early 20th century: William Butler Yeats, Aldous Huxley, and such prominent artists as Kupka and Kandinsky. We learn of Louise Arisman’s psychic reading for Lucille Ball, who was born in Jamestown, NY, not far from Lily Dale. Ball’s museum and library are located there, and once a year during Lucy Week she is honored for her celebrated career—one predicted by Louise.
You can read more about the film and the making of it here.
During my first week at Holderness School, in the fall of my sophomore year of high school, Jim Page gave me a training log. Jim told me that before opening the log I should list my goals for my ski career and for the year. Jim also told me that he and I should review my list of goals ASAP.
Articulating my goals to present to someone else for review for the first time was the greatest challenge anyone had ever presented to me. While I had unspoken aspirations to make the US Ski Team and to become an Olympian, I really had no idea what it would take to reach those goals. Further, at the moment of Jim’s assignment, I realized it was much easier to maintain those aspirations (fantasies?) if I could harbor them secretly/unspoken… but I trusted and respected Jim as my dorm master and coach… and I wanted him to respect me; so I knew that I could not blow the assignment off.
I gave a lot of thought to the long and short term goals I listed for Jim’s review… making them reasonable/doable for me. When I met with Jim a few days later to submit and speak about my goals, we had a lot of fun discussing how conservative my goals were and consequently how they limited my vision… and how I was limiting my vision of myself. This was the first time anyone had spoken with me in this way. While I found it very complimentary that Jim told me that he thought that I was capable of achieving much more than I had written down, at the same time I found it very challenging that he actually accused me of lacking courage and confidence in myself as demonstrated by my “reasonable” goals. Jim told me that while I had no responsibility to him to set more lofty goals, he was disappointed that I was not aiming higher.
I didn’t realize it at the time but Jim’s challenge broke the ice which in turn made me think about myself and my aspirations completely differently than I had… my fantasies became my goals… my goals became consequences of my willingness and ability to confront/commit to address actionable items… my commitment to address actionable items became my day-to-day tasks… my day-to-day tasks became actions to record and monitor… my records became data to evaluate progress toward my goals… my progress toward my goals became my motivation to set new/better goals… my new goals became consequences of my willingness/ability to commit to address actionable items… What a great mentor and great lesson!
Holderness ski team in the early 1970s; Malmquist back left, Page in center row on right.
Jim Page, contemplative before coaching a soccer game.
Film Society's Marian Masone sat down with director Alex Pitstra and chatted about Tunisian lifestyle and Pitstra's influences.
About the Film:
In his adventurous and smart debut feature, director Alex Pitstra announces himself as a neo-Tarantino; or, to be more precise, the director's stand-in in the film (played by affable newcomer Abdelhamid Naouara) is a Tarantinoesque, headstrong twentysomething working in a video store. But the parallel with the more senior filmmaker does not end with character homage: Pitstra also employs a full arsenal of well-honed cinematic techniques and references to explore a life that he imagines he could have lived. Born of a Dutch mother and an absent Tunisian father, the director sets his story in a post-Jasmine Revolution Tunisia. He follows the young Abdallah as he dreams of leaving behind the video store and his conservative surroundings for the alluring promise of Europe. Based loosely on his own father's story of coming to Holland with a Dutch woman (Pitstra even casts his father as the father in the film), this semi-autobiographical voyage of discovery is set against the backdrop of a contemporary, yet still very traditional, Tunisia, which is itself trying to find a way forward in the world.
New York City, NY
In 1911, Emily Johnston de Forest gave her collection of pottery from Mexico to the Metropolitan Museum. Calling it "Mexican maiolica," she highlighted its importance as a North American artistic achievement. De Forest was the daughter of the Museum's first president and, with her husband, Robert, a founder of The American Wing.
The De Forests envisioned building a collection of Mexican art, and, even though their ambitions were frustrated at the time, the foundational gift of more than one hundred pieces of pottery anchors the Met's holdings. Today, more than a century later, their vision resonates as the Museum commits to collecting and exhibiting not just the arts of Mexico, but all of Latin America. This exhibition highlights the early contributions of the De Forests and others, and presents recent additions to the collection for the first time.
Modernism and Figurative Art
Critique of Modernism, Pen and ink, 1982, collection of the artist.
This drawing, from 1982, ironically suggests that in Modernism invention is more important than any connection with life and its beauty, the beauty that stands right before the artist in the form of a live model: the kind of reality that has inspired artists for centuries.
The philosophy of René Descartes initiated a dualism, a split between mind and body, soul and body, in 1619 that has only recently been revised, with the “Theology of the Body” of Pope John Paul II, by a philosophy and theology that restores the body to its rightful importance. The attack on the body has had a profound influence on art in Modernism, leading to abstract art and to the disembodied art that is Conceptual Art. Descartes’s “Cogito, ergo sum” (“I think, therefore I am”) was his way of finding the certitude that he searched for in 1619. The only knowledge he trusted was the fact that he thought. He rejected sense knowledge, as well as knowledge from politics, religion, and opinion, as unreliable. With that view Modernism was born. Its characteristics have been: 1. subjectivism (everyone is an artist); 2. rationalism (science is prized over artistic knowledge); 3. dualism (privilege of mind over body, separation of mind and body); 4. anti-traditionalism (let’s tear it down and start over).
Professor Michael Waldstein, in the introduction to his translation of Pope John Paul II’s The Theology of the Body, points out that a “…truly bottomless pit is opened only by the Cartesian universe with its complete indifference to meaning. Matter is ‘mere matter’, sheer externality. It is value-free. The reason for this indifference of matter to meaning lies in the rigorous reconstruction of knowledge under the guidance of the ambition for power over nature.”
Professor Waldstein goes on to say, “The scientific rationalism spearheaded by Descartes is above all an attack on the body. Its first principle is that the human body, together with all matter, shall be seen as an object of power. Form and final cause must therefore be eliminated from it. The response to such a violent scientific-technological attack on the body must be a defense of the body in its natural intrinsic meaning. The spousal mystery is the primary place at which this defense must take place, because the highest meaning of the body is found there.”
Waldstein quotes the philosopher pope countering Descartes’s dualism. John Paul II: “The philosopher who formulated the principle of ‘cogito, ergo sum’ (I think, therefore I am) also gave the modern concept of man its distinctive dualistic character. It is typical of rationalism to make a radical contrast in man between spirit and body, between body and spirit. But man is a person in the unity of his body and spirit. The body can never be reduced to mere matter; it is a spiritualized body, just as man’s spirit is so closely united to the body that it can be described as an embodied spirit.” 1.
Father Robert Barron said in his lecture at the 2012 Napa Institute Conference that “Postmodernism has broken the log jam of Modernism.” He says that postmodernism gives us space, and he mentioned the postmodern, post-liberal theologian Hans Urs von Balthasar, who said that “beauty changes us.” He went on to explain how the “Theology of the Body” of John Paul II is a reversal of Descartes’s attack on the body.
1. John Paul II, Man and Woman He Created Them: A Theology of the Body, trans. Michael Waldstein, 2006.
On a personal note, the drawing above was an acknowledgement of my realization that I lived in a time when traditional art was despised. There were no commissions for altarpieces, so I decided to master my craft by making studies from life, like the many portraits of my family. That body of work has become an important part of my art.
Rachel Reading, pencil and red lead, 1981.
See also:
On Sacred Art
Caravaggio’s Raising of Lazarus, Symbols and Stories (The Theology that started Abstract Art), Italian Insider Newspaper, Rome, July 16, 2012.
Caravaggio and the Aesthetics of Meaning, September 27, 2011.
As of the publishing of this episode, there are exactly 42 days until Election Day. That’s 6 weeks. A month and a half.
To make sure every one of those days to count, we’ve launched our 2020 Election Action Guide, which we’re calling “Voting Is Not Enough.” Because…it’s just not. All of the segments and information can be accessed from the “Voting is Not Enough” banner at BestoftheLeft.com, or directly at BestoftheLeft.com/2020action.
We want to start today by acknowledging the devastating loss of Supreme Court Justice Ruth Bader Ginsburg. May her memory be a blessing, and a revolution. Justice Ginsburg, using the law as her tool, dedicated her life to making our society more equal and to protecting rights of all kinds, including voting rights. When her conservative colleagues gutted the Voting Rights Act in 2013, she famously wrote in her passionate dissent, “Throwing out pre-clearance when it has worked and is continuing to work to stop discriminatory changes is like throwing away your umbrella in a rainstorm because you are not getting wet.”
As we face yet another national election without the key protections of the Voting Rights Act, we have to work 100 times harder to ensure marginalized groups get access to the ballot. New, strict voter ID laws, proof-of-citizenship laws, increased purging of voter rolls, increased closing of polling places in predominantly poor, Black, Brown and Indigenous communities: this is the fallout of losing the full protections of the Voting Rights Act.
So, how do we overcome these racist and oppressive hurdles? Here are a few ways…
1) Confirm Voter Registration & Talk to Purged Voters: As we mentioned in our last segment, voter registration is key. But it’s not just about getting new people to register - though that’s important - it’s also about making sure registered voters haven’t been purged and have updated their address or name change. Having an updated registration can be the difference between a regular ballot or a provisional ballot on election day. With voter registration deadlines coming up fast, commit to helping people register and check their status. Visit Vote.org for all the links you need, or visit NationalVoterRegistrationDay.org and look under Resources for the Toolkit for Individuals.
This year, Grassroots Democrats HQ and Field Team 6 have made it possible to volunteer to phone or text bank actual purged voters in key states and help them get re-registered! Many people never know they have been purged until it’s too late, so help alert these voters to their status and get them registered again. Go to FieldTeam6.org and check their Calendar of Events for opportunities.
2) Help People Get Necessary Voter IDs: Voter ID laws aren’t going away any time soon, so Vote Riders has begun providing voter ID assistance to help every American cast a ballot. VoteRiders will help you identify the documents you need to get an ID, request and pay for the documents, pay the DMV fees, and even drive you to the DMV for free. Call or text their help line at 844-338-8743 or go to VoteRiders.org/freehelp to submit an online form and get started. If you don’t need an ID, you can become a volunteer to help make sure voters know the information they need and/or donate to support their sadly necessary work.
3) Increase Black Voter Turnout: The NAACP’s Black Voices Change Lives is using indirect relational voter turnout to mobilize Black voters this fall. This means engaged Black voters call unengaged Black voters in specific battleground states where the data shows that the Black vote is the determining factor in the outcome. If you don’t identify as Black, you are still welcome to volunteer. Go to BlackVoicesChangeLives.org for more.
4) Become a Poll Worker: As we’ve previously mentioned, becoming a poll worker is one of the most effective things you can do to help fight the closing of polling places and reduce long lines. Go to WorkElections.com to find out how to sign up in your state, or go to MoreThanAVote.org which is specifically recruiting poll workers in heavily Black districts across the country.
We know the Supreme Court nomination and Senate races are at the top of everyone’s mind right now, but fighting voter suppression is essential to making sure we have a shot at saving our democracy come November. We’ll be focusing on the Senate races next time, but we’ve included links in the show notes today to get you started. Use the time saved to ponder how we came to have a system where the passing of one 87-year-old woman caused tens of millions of people to be gripped with justifiable fear and existential dread.
So, if making sure disenfranchised voters have a voice this November is important to you, be sure to spread the word about Working to Overcome Racist Voter Suppression in Yet Another Election Without the Full Voting Rights Act via social media - or, uh, maybe call a few friends instead? - so that others in your network can spread the word too.
TAKE ACTION!
Fight Systemic Voter Suppression:
1) Confirm Voter Registration & Talk to Purged Voters:
National Voter Registration Day Toolkit
Call purged voters: Field Team 6 Actions (with Grassroots Democrats HQ)
2) Help People Get Necessary IDs to Vote:
3) Increase Black Voter Turnout:
NAACP Black Voices Change Lives
4) Become a Poll Worker:
**React to Ginsburg's Passing with Action**
Volunteer with SwingLeft to Flip the Senate & Target Super States
Donate to Targeted Super States via SwingLeft
Special Elections (Winner sworn-in in November!):
AZ: Unseat McSally. Give to Mark Kelly. (leading by 9% points) (CPR Rating: Leans D)
GA: Unseat Loeffler. Give to Rev. Raphael Warnock. (down by 4% points) (CPR Rating: Lean R)
Tightening Polls Toss Ups:
MT: Unseat Daines. Give to Steve Bullock. (down by 1% point)
GA: Unseat Perdue. Give to Jon Ossoff. (down by 2% points)
IA: Unseat Ernst: Give to Theresa Greenfield. (leading by 3% points)
ME: Unseat Collins. Give to Sara Gideon. (leading by 5-7% points)
NC: Unseat Tillis. Give to Cal Cunningham. (leading by 6% points)
CO: Unseat Gardener. Give to John Hickenlooper. (leading by 7% points)
Close Polls Lean R & Likely R:
2020. United States. 108 min. English.
Director: Chloé Zhao
After losing her husband and the economic collapse of her small mining town in rural Nevada, Fern, a stoic 61-year-old woman, becomes a modern-day nomad, packing up her van and setting off on the road to explore life outside of conventional society. What unfolds is a stunning portrait of a journey to self-reclamation and redefinition of community, all set against the backdrop of the American West. Based on Jessica Bruder’s 2017 nonfiction book about wandering older Americans, Nomadland offers an authentic, compassionate narrative of those living on the American margins.
Presented by Wells Fargo Bank N.A.
The Murrow Symposium is a series of events designed for high-level discussion, leading innovation and strategy development for thought leaders who are passionate and influential in the world of communication.
Our audience includes those looking to gain insight and inspiration into the advertising, public relations, science communication, communication technology, and journalism and media production industries. The Murrow Symposium is crucial for communication students and professionals who exemplify a commitment to excellence and integrity emblematic of Murrow’s career and legacy.
The Murrow Symposium celebrates new thinking in communication – innovative ways to address our challenges, concepts that build our industry and opportunities for collaboration among industry leaders.
We invite thought leaders from a variety of disciplines to share their work at Murrow Symposium, providing a dynamic force for progress in communication and media. We seek to inspire leadership, collaboration, and innovation.
Murrow Symposium brings together experts in various fields of communication to teach and interact, which serves to elevate Murrow College, Washington State University and the broader communication industry.
Violence in America: Is TV to Blame?
The Murrow Symposium is an annual program that acknowledges exceptional achievement in communication, celebrates scholarship, and connects students to industry professionals. The symposium began more than 30 years ago as a panel discussion and lecture series designed to honor the legacy of Murrow, regarded as a broadcasting icon because of the news reporting and ethical standards he brought to his craft.
Since the 1990s, the Murrow School has honored the achievements of top leaders in the communication industry at the symposium. Each year’s honoree presents a public address on campus to highlight the symposium.
It was not too many years ago that humans’ basic survival depended in whole or in part on the availability of biomass as a source of foodstuffs for human and animal consumption, of building materials, and of energy for heating and cooking. Not much has changed in this regard in the Third World countries since preindustrial times. But industrial societies have modified and added to this list of necessities, particularly in the energy category. Biomass is now a minor source of energy and fuels in industrialized countries. It has been replaced by coal, petroleum crude oil, and natural gas, which have become the raw materials of choice for the manufacture and production of a host of derived products and of energy as heat, steam, and electric power, as well as solid, liquid, and gaseous fuels. The fossil fuel era has indeed had a large impact on civilization and industrial development. But since the reserves of fossil fuels are depleted as they are consumed, and since environmental issues, mainly those concerned with air quality, are perceived by many scientists to be directly related to fossil fuel consumption, biomass is expected to see increasing usage as an energy resource and as a feedstock for the production of organic fuels and commodity chemicals. Biomass is one of the few renewable, indigenous, widely dispersed natural resources that can be utilized to reduce both the amount of fossil fuels burned and several greenhouse gases emitted by or formed during fossil fuel combustion processes. Carbon dioxide, for example, is one of the primary products of fossil fuel combustion and is a greenhouse gas that is widely believed to be associated with global warming. It is removed from the atmosphere via photosynthesis, which fixes the carbon in biomass.
The digital asset industry rang in the new year by climbing to a $1 trillion market cap as financial institutions have become more proactive in the space. To smooth the maturity of the industry, Gibraltar will announce a ’10th Core Principle’ regulation for digital assets – specifically for digital asset exchanges – at the direction of a working group consisting of industry experts recently convened.
The working group behind the Gibraltar Market Integrity Study will be primarily responsible for setting appropriate market standards for exchanges operating in the digital asset space. Additionally, the working group shall decide if the nature of the asset/item traded (i.e. security, utility or exchange tokens) affects market integrity standards.
Pawel Kuskowski, CEO of Coinfirm, noted that “digital asset exchanges are a focal point in how public perception views cryptocurrency and blockchain systems, for better or worse. New-entry retail customers, investors and traders need to be better protected, and it is high time that exchanges take care of all stakeholders in the ecosystem. However, the 10th Core Principle regulation should not stifle innovation.”
The DLT (distributed ledger technology) framework in Gibraltar currently incorporates 9 principles applying to businesses operating under the purview of the Gibraltar Financial Services Commission (GFSC), the territory’s financial regulator.
The goal of the group is to set global market integrity and regulatory standards for crypto exchanges and other marketplace platforms in the industry, whilst acknowledging the standards recently defined by other jurisdictions. Additionally, the framework looks to provide crucial foundational concepts for the work of watchdog and regulatory bodies such as the Financial Action Task Force (FATF), the European Commission and the International Organization of Securities Commissions (IOSCO).
Joey Garcia, partner at international law firm ISOLAS LLP, board member of Xapo and IOV Labs (RSK) groups and a key member of the working group stated that “the creation of the Market Integrity working group is an important step for the jurisdiction as we continue to develop our DLT framework in line with an ever-evolving regulatory landscape, and also for the Global Blockchain Convergence. Gibraltar has long been a leader when it comes to fostering innovation and in the development of virtual asset service providers’ regulatory standards and we are confident the 10th Core Principle will aid us even further in our mission to achieve this, particularly as the integrity of these markets is such a key focus internationally. We already have some of the largest groups in the world regulated in Gibraltar and this should continue to place those groups at the forefront of standard setting in the industry.”
In 2020, the IOSCO published standards for trading platforms and the European Union published proposed comprehensive regulations for the digital asset space in Markets in Crypto-Asset Regulations (MiCAR).
However, Gibraltar’s amended legislation will be the first set of legislated principles to ensure digital exchanges/operators protect customers/market integrity by looking to ensure guidance of efficiency, transparency and an orderly market.
This comes at a time when digital assets have been criticized by the Financial Conduct Authority (FCA) in the UK, which notes that “cryptoassets may not be subject to regulation beyond anti-money laundering requirements”, meaning that there is little customer protection against exchanges’ malpractice beyond obvious criminality or gross negligence. The 10th Core Principle of Gibraltar, once established, will thus lead global regulatory guidance in this area. The territory has been forward-thinking within the crypto industry, having introduced regulatory oversight of DLT through legislation since 2018.
The other 9 core principles DLT providers must adhere to in Gibraltar include:
1. A DLT Provider must conduct its business with honesty and integrity.
2. A DLT Provider must pay due regard to the interests and needs of each and all its customers and must communicate with its customers in a way which is fair, clear and not misleading.
3. A DLT Provider must maintain adequate financial and non-financial resources.
4. A DLT Provider must manage and control its business effectively, and conduct its business with due skill, care and diligence; including having proper regard to risks to its business and customers.
5. A DLT Provider must have effective arrangements in place for the protection of client assets and money when it is responsible for them.
6. A DLT Provider must have effective corporate governance arrangements.
7. A DLT Provider must ensure that all systems and security access protocols are maintained to appropriate high standards.
8. A DLT Provider must have systems in place to prevent, detect and disclose financial crime risks such as money laundering and terrorist financing.
9. A DLT Provider must be resilient and must develop contingency plans for the orderly and solvent wind down of its business.
This follows a number of important updates to the jurisdiction’s bespoke regulatory framework governing ICOs, investments funds in digital assets, etc. The jurisdiction has set out clear licencing and regulatory legislation specifically for DLT providers since ’18 in the Financial Services (Distributed Ledger Technology Providers) Regulations 2017.
Copyright © 2006-2020 Finyear. All rights reserved. ISSN 2114-5369.
The present invention relates to medicinal compositions which are easier to administer to patients such as children, the aged or the infirm who have difficulty swallowing solid dosage forms such as tablets and capsules.
Many members of the population have difficulty in swallowing solid dosage forms. This is particularly true for the very young, the old and the infirm but can also apply to others particularly if there is not a ready supply of liquid (eg water) to wash down the solid dosage forms.
If the active medicament to be administered has a taste which is perceived by the patient to be unpleasant, then the patient will be less inclined to take the medicament. Several methods of overcoming or masking the taste of unpleasant tasting medicaments have been proposed. Many of these involve coating either the solid dosage form or smaller particles containing the medicament with a material which does not dissolve or disperse in the mouth. Coatings of this type can however slow down the absorption of the active medicament as the coating must be removed before the active medicament can be absorbed either in the stomach or in the gastrointestinal tract.
Another solution to the problem of administering medicines to those who find it difficult to swallow solid dosage forms is to use liquid or gel compositions containing the active medicament. These compositions are not, however, suitable for everyone. The amount of liquid or gel formulation can vary from dose to dose, as the patient or a carer has to dispense an appropriate amount of the composition, for example by pouring it into a measuring spoon or container. If insufficient care is taken, the patient may not be given the intended dose of the active medicament. There is also the possibility that some or all of the intended dose will be spilled before it can be administered, particularly if the patient is reluctant, uncooperative or physically unable to take the medicine.
The present invention provides a medicinal composition which avoids the problems described above with known solid, liquid and gel dosage forms while still enabling the patient to be given an accurate dose of the active medicament.
The present invention provides a medicinal composition which comprises:
a) a core comprising a medicinally effective unit dose of one or more active medicaments, and
b) said medicaments being enclosed within a film material which comprises at least 40% by weight hydroxypropylmethyl cellulose.
The film material is preferably composed of 40-100% by weight hydroxypropylmethyl cellulose (HPMC), more preferably 40-80% by weight HPMC with 20-60% by weight of one or more plasticisers. Suitable plasticisers include polyethylene glycols, diacetin, propylene glycols or glycerin. Other components such as colouring agents, flavours, fruit acids and/or sweeteners may be added to the film material. The film material may be expanded, for example by pumping gas (eg nitrogen) into a concentrated solution of the polymer and drying the resulting mixture. However, a non-foamed (i.e. non-expanded) film material is preferred.
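As a rough illustration of the composition constraints above, the sketch below checks a candidate film recipe against the more preferred ranges of 40-80% by weight HPMC and 20-60% by weight plasticiser. The function name and the exact-sum tolerance are illustrative assumptions, not part of the specification:

```python
def check_film_formulation(pct_hpmc, pct_plasticiser, pct_other=0.0):
    """Return True if the composition sums to 100% by weight and falls
    within the more preferred ranges stated in the specification:
    40-80% HPMC and 20-60% plasticiser (remainder: colours, flavours etc.)."""
    sums_to_100 = abs(pct_hpmc + pct_plasticiser + pct_other - 100.0) < 1e-9
    return sums_to_100 and 40 <= pct_hpmc <= 80 and 20 <= pct_plasticiser <= 60

print(check_film_formulation(70, 28, 2))   # within the preferred ranges -> True
print(check_film_formulation(90, 10))      # 90% HPMC exceeds the 80% upper bound -> False
```

A recipe such as 70% HPMC, 28% plasticiser and 2% flavouring passes, whereas a 90/10 blend fails on the HPMC upper bound even though it sums to 100%.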
In a preferred embodiment, the core is a fondant core, and the core and the encapsulating film preferably provide a synergistic effect in that the encapsulating film contains and protects the core and the core supports the film. In this way, a relatively thin film may be used to encapsulate the core, which has the advantage of dissolving rapidly in the mouth.
By the term "fondant core", it is meant a fine crystalline sugar dispersed within a low melting point solid organic carrier.
The solid organic carrier preferably has a melting point in the range 22 to 60°C, preferably 25 to 40°C, more preferably 32 to 34°C. Examples of suitable solid organic carriers include hydrogenated coconut oil; polyethylene glycol, for example selected from the PEG 1000, PEG 2000 and PEG 3000 ranges of polyethylene glycols; povidone; and Gelucire.
The sugar component of the fondant core preferably has a weight average particle size in the range 1-150µm, more preferably 10-100µm and most preferably 10-25 µm. The sugar is preferably selected from sucrose, fructose, glucose, trehalose and lactose, although any suitable sugar could be used. Sugar derivatives may also be used either in addition to the sugar or as an alternative to it, provided that the sugar or sugar derivative has the required particle size properties and is pharmaceutically acceptable.
The fondant core preferably has certain physical characteristics which enable it to provide the HPMC film with a desired degree of support. In particular, the fondant core preferably has a viscosity of at least 10 Pa.s at a shear stress of 1 Pa measured at 36°C. Desirably, the fondant core viscosity is at least 50 Pa.s, more preferably at least 100 Pa.s and most preferably at least 1000 Pa.s when measured at 36°C at a shear stress of 1.0 Pa. The viscosity may be measured using an AR 2000 Rheometer with a 20 mm cross-hatched steel plate.
In addition, the fondant core should preferably exhibit a peak normal force of at least 0.1N, more preferably at least 1N, most preferably at least 5N during a squeeze flow test conducted at 36°C over 500 seconds at a compression rate of 10µm/sec using a sample disc 4-8mm in diameter and up to 500µm thick. The squeeze flow test measures the biaxial extension (squeeze and subsequent rate of movement) of the sample when compressed. The test may be carried out using an AR 2000 Rheometer fitted with 8mm steel plates.
The preferred fondant core dissolves or disperses rapidly in the mouth of a consumer. This preferably occurs within 10 to 90 seconds after exposure of the core to saliva, preferably 20 to 80 seconds, more preferably 30 to 60 seconds. However, in certain circumstances, e.g. where the capsule is intended to treat a sore throat, it may be desired to have a longer dissolution/dispersion time, e.g. up to 300 seconds, to provide a soothing sensation over a longer period of time.
The preferred fondant core has the advantage of being a solid or semi-solid when encapsulated within the HPMC-containing film at 20-25°C (i.e. room temperature). This provides the film with the desired degree of support and results in a robust medicinal product. However, once exposed to saliva, the core dissolves and/or disperses to provide the consumer with a desirable "melt in the mouth" feeling. This assists in soothing, for example, sore throats and irritating coughs, without the formulation and production problems associated with providing a liquid-containing medicament.
Thus, the benefits of a liquid-containing medicament may be obtained without having to use, for example, a relatively thick and slow-dissolving encapsulating film in order to provide the end product with sufficient robustness and strength for it to be commercially acceptable.
The active medicament may be an analgesic or anti-inflammatory, decongestant, cough suppressant, expectorant, mucolytic, antihistamine, antiallergy agent, agent for treating the gastrointestinal tract (for example antacid, antireflux agent, antiulcer agent, antidiarrhoeal agent, laxative or antiemetic), agent to counter motion sickness, antiviral agent, antifungal agent, antibacterial agent, diuretic agent, antiasthmatic agent, antimigraine agent, antianxiety agent, tranquilising agent, sleep promoting agent, vitamin and/or mineral, or natural product or extract thereof (for example herbs or naturally occurring oils).
Suitable analgesics include aspirin, paracetamol (acetaminophen) and non-steroidal anti-inflammatory/analgesics such as diclofenac, indomethacin, mefenamic acid, nabumetone, tolmetin, piroxicam, felbinac, diflunisal, ibuprofen, flurbiprofen, naproxen and ketoprofen, active isomers thereof or medicinally acceptable salts thereof (for example the sodium or lysine salts), or narcotic analgesics such as codeine and medicinally acceptable salts thereof (for example codeine phosphate or sulphate). Caffeine may be present in analgesic products to enhance the analgesic effect.
The amount of aspirin in a unit dose may be in the range 75 to 800 mg, preferably 200-600 mg, most preferably 75, 150, 300, 400 or 600 mg. The amount of paracetamol in a unit dose may be 50 to 2000 mg, preferably 120 to 1000 mg, most preferably 120, 250, 500 or 1000 mg. The amount of diclofenac in a unit dose may be 10 to 100 mg, preferably 20 to 80 mg, most preferably 25 or 50 mg. The amount of indomethacin in a unit dose may be in the range 25-75 mg, preferably 25 mg, 50 mg or 75 mg. The amount of mefenamic acid in a unit dose may be in the range 250-500 mg, preferably 250 mg or 500 mg. The amount of nabumetone in a unit dose may be in the range 500-1000 mg. The amount of piroxicam in a unit dose may be in the range 10-40 mg, preferably 10, 20 or 40 mg. The amount of diflunisal in a unit dose may be in the range 250-500 mg, preferably 250 mg or 500 mg. The amount of ibuprofen in a unit dose may be in the range 50 to 800 mg, preferably 100 to 400 mg, most preferably 100, 200 or 400 mg. The amount of flurbiprofen in a unit dose may be 5 to 200 mg, preferably 5 to 150 mg, most preferably 50 or 100 mg. The amount of naproxen in a unit dose may be 100 to 800 mg, preferably 200 to 600 mg, most preferably 250, 375 or 500 mg. The amount of ketoprofen in a unit dose may be 25 to 250 mg, preferably 50 to 150 mg, most preferably 50 or 100 mg. The amount of codeine in a unit dose may be 2 to 50 mg, preferably 5 to 30 mg, most preferably 8, 12.5, 16 or 25 mg. If medicinally effective salts of the above compounds are used then the amount of salt should be increased to give a dose of the free medicament corresponding to the figures given above. The amount of caffeine in a unit dose may be 5 to 200 mg, preferably 10 to 100 mg, most preferably 30, 45, 60 or 100 mg.
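The rule stated above for salts — increasing the salt amount so that it delivers the stated dose of the free medicament — is a simple molecular-weight ratio, assuming one mole of salt yields one mole of free base. A minimal sketch; the molecular weights in the example are approximate illustrative values, not figures taken from the specification:

```python
def salt_dose_mg(free_base_dose_mg, mw_free_base, mw_salt):
    """Scale a free-base dose up to the equivalent dose of its salt,
    assuming a 1:1 molar ratio of free base to salt."""
    return free_base_dose_mg * mw_salt / mw_free_base

# Illustrative example (approximate molecular weights, for demonstration only):
# codeine free base ~299.4 g/mol, anhydrous codeine phosphate ~397.4 g/mol.
# A 30 mg codeine dose therefore requires roughly 39.8 mg of the phosphate.
print(round(salt_dose_mg(30, 299.4, 397.4), 1))
```

The same scaling applies to the other salts mentioned in this section (e.g. sodium or lysine salts of the NSAIDs), with the appropriate molecular weights substituted.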
Suitable decongestants include ephedrine, levomenthol, pseudoephedrine preferably as its hydrochloride, phenylpropanolamine preferably as its hydrochloride and phenylephrine.
The amount of ephedrine in a unit dose may be in the range 15-60 mg. The amount of levomenthol in a unit dose may be in the range 0.5-100 mg, preferably 0.5-25 mg, most preferably 1, 2, 5, 10 or 25 mg. The amount of pseudoephedrine, preferably as its hydrochloride, in a unit dose may be in the range 30-120 mg, preferably 30, 60 or 120 mg. The amount of phenylpropanolamine, preferably as its hydrochloride, in a unit dose may be in the range 5-50 mg, preferably 5-20 mg. The amount of phenylephrine in a unit dose may be in the range 5-25 mg, preferably 5, 10 or 25 mg.
Suitable cough suppressants include bibenzonium preferably as its bromide, caramiphen, carbetapentane preferably as its citrate, codeine, dextromethorphan preferably as its hydrobromide or an adsorbate thereof, noscapine and pholcodine.
The amount of bibenzonium bromide in a unit dose may be in the range 20-30 mg. The amount of caramiphen in a unit dose may be in the range 5-20 mg, preferably 5 or 20 mg. The amount of carbetapentane citrate in a unit dose may be in the range 15-30 mg. The amount of codeine in a unit dose may be in the range 2-50 mg, preferably 5-30 mg, most preferably 10 mg. In the present invention medicinally acceptable salts of codeine may also be used (for example codeine phosphate or sulphate). The amount of dextromethorphan hydrobromide in a unit dose may be in the range 5-60 mg, preferably 15 or 30 mg. The amount of noscapine in a unit dose may be in the range 15-30 mg. The amount of pholcodine in a unit dose may be in the range 2-25 mg, preferably 5 to 20 mg, more preferably 10 to 15 mg.
Suitable expectorants include ammonium bicarbonate, ammonium chloride, bromhexine hydrochloride, cocillana creosote, guaifenesin, ipecacuanha, potassium and medicinally acceptable salts thereof (for example potassium citrate or iodide), potassium guaiacolsulfonate, squill and terpin hydrate.
The amount of ammonium bicarbonate in a unit dose may be in the range 300-600 mg. The amount of ammonium chloride in a unit dose may be in the range 0.3-2 g (300-2000 mg). The amount of bromhexine hydrochloride in a unit dose may be in the range 24-64 mg. The amount of cocillana creosote in a unit dose may be in the range 0.12-0.6 ml. The amount of guaifenesin in a unit dose may be in the range 100-200 mg, preferably 100 mg. The amount of potassium iodide in a unit dose may be in the range 150-300 mg, preferably 100 mg. The amount of potassium citrate in a unit dose may be in the range 150-300 mg, preferably 100 mg. The amount of potassium guaiacolsulfonate in a unit dose may be 80 mg. The amount of ipecacuanha in a unit dose may be in the range 25-100 mg. The amount of squill in a unit dose may be in the range 60-200 mg. The amount of terpin hydrate in a unit dose may be in the range 125-600 mg, preferably 300 mg.
Suitable mucolytic agents include ambroxol, acetylcysteine and carbocisteine.
The amount of carbocisteine in a unit dose may be in the range 100 mg to 1000 mg, preferably 200 to 500 mg.
Suitable antihistamines include azatadine or a salt thereof such as the maleate, bromodiphenhydramine or a salt thereof such as the hydrochloride, brompheniramine or a salt thereof such as the maleate, carbinoxamine or a salt thereof such as the maleate, chlorpheniramine or a salt thereof such as the maleate, cyproheptadine or a salt thereof such as the hydrochloride, dexbrompheniramine or a salt thereof such as the maleate, dexchlorpheniramine or a salt thereof such as the maleate, diphenhydramine or a salt thereof such as the hydrochloride, doxylamine or a salt thereof such as the succinate, phenidamine or a salt thereof such as the tartrate, promethazine or a salt thereof such as the hydrochloride, pyrilamine or a salt thereof such as the maleate, pyrilamine or a salt thereof such as the tannate, tripelennamine or a salt thereof such as the hydrochloride, triprolidine or a salt thereof such as the hydrochloride, cetirizine or a salt thereof such as the hydrochloride, cinnarizine, mequitazine and acrivastine.
The amount of azatadine in the form of maleate in a unit dose may be in the range 1-2 mg, preferably 1 mg. The amount of bromodiphenhydramine in the form of hydrochloride in a unit dose may be 3.75 mg. The amount of brompheniramine in the form of maleate in a unit dose may be in the range 4-12 mg, preferably 4, 8 or 12 mg. The amount of carbinoxamine in the form of maleate in a unit dose may be 4 mg. The amount of chlorpheniramine in the form of maleate in a unit dose may be in the range 2-12 mg, preferably 4, 8 or 12 mg. The amount of dexbrompheniramine in the form of maleate in a unit dose may be 6 mg. The amount of dexchlorpheniramine in the form of maleate in a unit dose may be in the range of 2-6 mg, preferably 2, 4 or 6 mg. The amount of diphenhydramine in the form of hydrochloride in a unit dose may be in the range of 12.5 to 200 mg, preferably 12.5-50 mg, more preferably 12.5, 25 or 50 mg. The amount of doxylamine in the form of succinate in a unit dose may be in the range 7.5-10 mg, preferably 7.5 or 10 mg. The amount of phenidamine in the form of tartrate in a unit dose may be in the range 5-10 mg, preferably 5 or 10 mg. The amount of promethazine in the form of hydrochloride in a unit dose may be in the range 1.5-6 mg. The amount of pyrilamine in the form of maleate in a unit dose may be 12.5 mg. The amount of pyrilamine in the form of tannate in a unit dose may be 12.5 mg. The amount of tripelennamine in the form of hydrochloride in a unit dose may be in the range 25-50 mg, preferably 25, 37.5 or 50 mg. The amount of triprolidine in the form of hydrochloride in a unit dose may be in the range 1-2.5 mg, preferably 1.25-2.5 mg, most preferably 1.25 mg. The amount of cetirizine in a unit dose may be in the range 5-10 mg, preferably 5 mg or 10 mg. The amount of cinnarizine in a unit dose may be in the range of 15-75 mg, preferably 15 mg or 75 mg. The amount of mequitazine in a unit dose may be in the range 5-10 mg, preferably 5 mg or 10 mg. 
The amount of acrivastine in a unit dose may be 3-20 mg, preferably 5-10 mg, most preferably around 8 mg.
Suitable antiallergy agents include astemizole, clemastine or a salt thereof such as the hydrogen fumarate, loratadine and terfenadine.
The amount of astemizole in a unit dose may be in the range 0.5-200 mg, preferably 1-100 mg, most preferably 2, 5, 10, 20 or 40 mg. The amount of clemastine in the form of its hydrogen fumarate in a unit dose may be in the range 0.01-200 mg, preferably 0.1-10 mg, most preferably 0.2, 0.4, 0.6, 1.2 or 2.4 mg. The amount of loratadine in a unit dose may be in the range 0.5-200 mg, preferably 1-100 mg, most preferably 2, 5, 10, 20 or 40 mg. The amount of terfenadine in a unit dose may be in the range 5-1000 mg, preferably 10-600 mg, most preferably 20, 40, 60, 100 or 200 mg.
Suitable antacids include aluminium glycinate, aluminium hydroxide gel, aluminium phosphate gel, dried aluminium phosphate gel, calcium carbonate, charcoal, hydrotalcite, light kaolin, magnesium carbonate, magnesium hydroxide, magnesium oxide, magnesium trisilicate, sodium bicarbonate.
The amount of aluminium glycinate in a unit dose may be in the range 0.1-10 g, preferably 0.1-5g, most preferably 0.2, 0.5, 1 or 2 g. The amount of aluminium hydroxide gel in a unit dose may be in the range 1-50 ml, preferably 2-30 ml, most preferably 5, 7.5, 10, 15 or 30 ml. The amount of aluminium phosphate gel in a unit dose may be in the range 0.5-100 ml, preferably 1-50 ml, most preferably 2, 5, 10, 15 or 30 ml. The amount of dried aluminium phosphate gel in a unit dose may be in the range 50-5000 mg, preferably 100-2000 mg, most preferably 200, 400, 800 or 1600 mg. The amount of calcium carbonate in a unit dose may be in the range 0.1-30 g, preferably 0.5-10 g, most preferably 0.5, 1, 2 or 5 g. The amount of charcoal in a unit dose may be in the range 1-200g, preferably 1-100 g, most preferably 2, 4, 8, 16 or 50 g. The amount of hydrotalcite in a unit dose may be in the range 0.1-10 g, preferably 0.2-5 g, most preferably 0.5, 1 or 2 g. The amount of light kaolin in a unit dose may be in the range 10 mg-100 g, preferably 100 mg-75 g, most preferably 1, 10, 15, 20, 50 or 75 g. The amount of magnesium carbonate in a unit dose may be in the range 50 mg-10 g, preferably 50 mg-5 g, most preferably 100, 200 or 500 mg. The amount of magnesium hydroxide in a unit dose may be in the range 100 mg-10 g, preferably 100 mg-5 g, most preferably 100, 250, 500 or 750 mg. The amount of magnesium oxide in a unit dose may be in the range 100 mg-10 g, preferably 100 mg-5 g, most preferably 100, 250, 500 or 750 mg. The amount of sodium bicarbonate in a unit dose may be in the range 0.1-50 g, preferably 0.5-25 g, most preferably 0.5, 1, 2, 5 or 10 g.
Suitable antireflux agents include simethicone and sodium alginate.
The amount of simethicone in a unit dose may be in the range 5-1000 mg, preferably 10-500 mg, most preferably 25, 40, 50, 60, 100 or 200 mg. The amount of sodium alginate in a unit dose may be in the range 50 mg-10 g, preferably 75 mg-5 g, most preferably 100, 250, 500 mg or 1 g.
Suitable antiulcer agents include bismuth subsalicylate, H2 receptor antagonists such as cimetidine, famotidine, ranitidine and nizatidine, and proton pump inhibitors such as omeprazole, pantoprazole and lansoprazole.
The amount of bismuth subsalicylate in a unit dose may be in the range 250-2000 mg, preferably 50-1500 mg, most preferably 75, 150, 300, 600 or 1000 mg. The amount of cimetidine in a unit dose may be in the range 10 mg-5 g, preferably 50 mg-2 g, most preferably 100, 200 or 400 mg. The amount of famotidine in a unit dose may be in the range 10-80 mg, preferably 20 or 40 mg. The amount of ranitidine in a unit dose may be in the range 100-600 mg, preferably 300-600 mg, most preferably 300 or 600 mg. The amount of nizatidine in a unit dose may be 50 to 500 mg, preferably 100 to 400 mg, more preferably 150 to 300 mg. The amount of omeprazole in a unit dose may be 5 to 50 mg, preferably 10 to 40 mg, more preferably 10, 20 or 40 mg. The amount of pantoprazole in a unit dose may be 10 to 50 mg, preferably 15 to 45 mg, more preferably 20 to 40 mg. The amount of lansoprazole in a unit dose may be 5 to 50 mg, preferably 10 to 40 mg, more preferably 15 or 30 mg.
Suitable antidiarrhoeal agents include loperamide or a salt thereof, such as the hydrochloride, methylcellulose, diphenoxylate and morphine or a salt thereof, such as the hydrochloride.
The amount of loperamide in the form of its hydrochloride in a unit dose may be in the range 0.1-50 mg, preferably 0.5-20 mg, most preferably 1, 2, 4 or 8 mg. The amount of methylcellulose in a unit dose may be in the range 20 mg-5 g, preferably 50 mg-4 g, most preferably 100, 200, 500 mg, 1 or 2 g. The amount of diphenoxylate in the form of its hydrochloride in a unit dose may be 1-10 mg, preferably 2-5 mg, more preferably 2.5 mg. The amount of morphine in the form of its hydrochloride in a unit dose may be in the range 20-4000 µg, preferably 50-2000 µg, most preferably 100, 200, 400, 800 or 1600 µg.
Suitable laxatives include agar, aloin, bisacodyl, ispaghula husk, lactulose, phenolphthalein and senna extract (including sennosides A + B).
The amount of agar in a unit dose may be in the range 1-200 mg, preferably 2-100 mg, most preferably 2.5, 5, 10, 20 or 50 mg. The amount of aloin in a unit dose may be in the range 1-200 mg, preferably 2-100 mg, most preferably 5, 10, 15 or 30 mg. The amount of bisacodyl in a unit dose may be in the range 0.1-100 mg, preferably 0.5-50 mg, most preferably 1, 2, 5, 10 or 20 mg. The amount of ispaghula husk in a unit dose may be in the range 100 mg-50 g, preferably 500 mg-25 g, most preferably 1, 2, 3, 5 or 10 g. The amount of lactulose in a unit dose may be in the range 100 mg-50 g, preferably 500 mg-30 g, most preferably 1, 2, 5, 10 or 15 g. The amount of phenolphthalein in a unit dose may be in the range 1-5000 mg, preferably 5-4000 mg, most preferably 7.5, 15, 30, 60, 100, 200 or 300 mg. The amount of senna extract (including sennosides A+B) in a unit dose may be in the range 0.5-100 mg, preferably 1-50 mg, most preferably 2.5, 5, 7.5, 10, 15 or 30 mg.
Suitable antiemetics include dimenhydrinate, metoclopramide or a salt thereof such as the hydrochloride, domperidone or a salt thereof such as the maleate, buclizine, cyclizine, prochlorperazine or a salt thereof such as the maleate, ipecacuanha and squill.
The amount of ipecacuanha in a unit dose may be in the range 25-100 mg. The amount of squill in a unit dose may be in the range 60-200 mg. The amount of domperidone may be in the range 5-50 mg, preferably 5, 10, 15, 20, 25, 30, 40 or 50 mg. The amount of buclizine in a unit dose may be in the range 2-100 mg, preferably 5-50 mg, more preferably 6.25, 13.5 or 25 mg. The amount of cyclizine in a unit dose may be in the range 1-50 mg, preferably 2-30 mg, more preferably 5, 7.5, 10, 15, 20 or 25 mg. The amount of metoclopramide in a unit dose may be in the range 2-30 mg, preferably 5, 10, 15 or 30 mg. The amount of dimenhydrinate in a unit dose may be in the range 5-50 mg, preferably 25 mg. The amount of prochlorperazine in a unit dose may be in the range 3-25 mg, preferably 3 mg or 5 mg. If medicinally effective salts of the above compounds are used then the amount of salt should be increased to give a dose of the free medicament corresponding to the figures given above.
Suitable agents to counter motion sickness include cinnarizine, dimenhydrinate, hyoscine or a salt thereof such as the hydrobromide and meclozine or a salt thereof such as the hydrochloride.
The amount of cinnarizine in a unit dose may be in the range 0.5-200 mg, preferably 1-100 mg, most preferably 5, 10, 20, 40 or 60 mg. The amount of dimenhydrinate in a unit dose may be in the range 1-500 mg, preferably 5-300 mg, most preferably 10, 20, 50, 100 or 250 mg. The amount of hyoscine hydrobromide in a unit dose may be in the range 0.01-1 mg, preferably 0.05-0.5 mg, most preferably 0.05, 0.1, 0.2, 0.3 or 0.5 mg. The amount of meclozine hydrochloride in a unit dose may be in the range 0.5-200 mg, preferably 1-100 mg, more preferably 2, 5, 10, 20 or 40 mg.
Suitable antiviral agents include aciclovir. The amount of aciclovir in a unit dose may be in the range 100 to 1000 mg, preferably 200 to 800 mg.
Suitable antifungal agents include fluconazole and terbinafine. The amount of fluconazole in a unit dose may be in the range 50-200 mg, preferably 50 mg or 200 mg. The amount of terbinafine may be in the range 250-500 mg, preferably 250 mg.
Suitable antibacterial agents include erythromycin and fusidic acid and salts thereof such as the sodium salt. The amount of erythromycin in a unit dose may be in the range 125-500 mg, preferably 125 mg, 250 mg or 500 mg. The amount of fusidic acid in a unit dose may be in the range 250-500 mg, preferably 250 mg.
Suitable diuretics include frusemide. The amount of frusemide in a unit dose may be in the range 20-80 mg, preferably 20, 40 or 80 mg.
Suitable anti-asthmatic agents include ketotifen. The amount of ketotifen in a unit dose may be in the range 1-4 mg, preferably 1 mg or 2 mg.
Suitable anti-migraine agents include the triptans such as sumatriptan. The amount of sumatriptan in a unit dose may be in the range 20-100 mg, preferably 20, 50 or 100 mg.
Suitable vitamins include A, B1, B2, B3, B5, B6, B12, C, D, E, folic acid, biotin, and K. Suitable minerals include calcium, phosphorus, iron, magnesium, zinc, iodine, copper, chloride, chromium, manganese, molybdenum, nickel, potassium, selenium, boron, tin and vanadium.
The term active medicament as used herein also embraces materials which are known and used to give relief or comfort to a patient even if they have not been shown to have any pharmacological effect. These are referred to hereinafter as "relief agents". Examples of such materials include anise oil, treacle, honey, liquorice and menthol.
Preferred actives are analgesics, antacids, decongestants, cough suppressants, expectorants, mucolytic agents and laxatives. In addition, relief agents may preferably be incorporated in the composition, either alone or in combination with other actives.
The active is preferably a solid component.
The active medicament(s) may be taste masked to further improve the taste profile of the medicinal composition. The medicament(s) may be taste masked using methods known in the art, for example by adding to the core taste masking ingredients such as ethylcellulose, hydroxypropylmethylcellulose, methylethylcellulose, hydroxypropylcellulose, polyvinyl pyrrolidone, polyvinyl alcohol, monoglycerides, diglycerides, stearic acid, palmitic acid, gelatin, hydrogenated cotton seed oil and, more generally, any food grade polymer, starch, wax or fat. The taste masking agents may be used singly or in combination. The amount of the taste masking ingredient may be in the range by weight of the medicament(s) used.
The core may optionally include other excipients. The other excipients may include taste masking agents, artificial sweeteners, flavours, inert diluents, binders, lubricants.
Suitable taste masking agents are listed above.
Suitable artificial sweeteners include acesulfame K, sodium saccharin and aspartame. The amount of sweetener may be in the range 0.001% to 2%.
Suitable flavours are commercially available and may be enhanced by the addition of an acid, for example citric acid, ascorbic acid, tartaric acid.
Suitable inert diluents include calcium phosphate (anhydrous and dihydrate), calcium sulphate, carboxymethylcellulose calcium, cellulose acetate, dextrates, dextrin, dextrose, fructose, glyceryl palmitostearate, hydrogenated vegetable oil, kaolin, lactitol, lactose, magnesium carbonate, magnesium oxide, maltitol, maltodextrin, maltose, microcrystalline cellulose, polymethacrylates, powdered cellulose, pregelatinised starch, silicified microcrystalline cellulose, sodium chloride, starch, sucrose, sugar, talc and xylitol. One or more diluents may be used. The amount of diluent may be in the range 10-98% w/w.
Suitable binders include acacia, alginic acid, carboxymethylcellulose, cellulose, dextrin, ethylcellulose, gelatin, glucose, guar gum, hydrogenated vegetable oil, hydroxyethylcellulose, hydroxypropylmethylcellulose, liquid glucose, magnesium aluminium silicate, maltodextrin, methylcellulose, polyethylene oxide, polymethacrylates, povidone, sodium alginate, starch, vegetable oil and zein. One or more binders may be used. The amount of binder may be in the range 10-95% w/w.
Suitable lubricants include calcium stearate, canola oil, glyceryl palmitostearate, hydrogenated vegetable oil, magnesium stearate, mineral oil, poloxamer, polyethylene glycol, polyvinyl alcohol, sodium benzoate, sodium lauryl sulphate, sodium stearyl fumarate, stearic acid, talc and zinc stearate.
One or more lubricants may be used. The lubricant may be in the range 0.01-10% w/w.
The core preferably contains substantially no free or unbound water. This is because the film material of the capsule shell is cold water soluble. However, bound water, e.g. present as part of a carbohydrate solution such as a syrup, is acceptable, up to levels of about 40% by weight of the core. By "substantially no free or unbound water", it is meant that the core preferably contains less than 1% by weight free or unbound water, more preferably less than 0.1% by weight, even more preferably less than 0.05% by weight and most preferably 0% by weight free or unbound water.
The film material which is used to encapsulate the core contains hydroxypropylmethyl cellulose (HPMC), preferably in the form of a non-foamed film. The film typically includes a plasticiser to give the film desired properties, such as flexibility. Examples of materials which may be used as plasticisers in the film include polyethylene glycol (PEG), monopropylene glycol, glycerol and acetates of glycerol (acetins).
The film typically has a thickness in the range 20-300 µm, preferably 30-200 µm, more preferably 40-150 µm, most preferably 50-100 µm. It is desirable to use as thin a film as possible, since the thicker the film the longer the dissolution time of the composition in the mouth.
Compositions in accordance with the present invention are intended to deliver the active medicament(s) carried in the core to the oral cavity or throat of the user. This is particularly useful if the active medicament is intended to treat coughs, sore throats, toothache, or ease respiratory blockages.
In use, the film starts to dissolve almost immediately after introduction into the mouth. The dissolution may be aided by the action of sucking or chewing performed by the user. The film material dissolves completely in the mouth after a short time and leaves no unpleasant residues. The dissolution time is dependent upon the film thickness, but is usually less than one minute, typically less than 30 seconds and possibly even quicker, e.g. only a few seconds.
Thus, the compositions of the present invention are intended to be ruptured in the mouth of the user for release of the core into the mouth. In other words, the compositions of the present invention comprise an edible delivery vehicle.
The film may include optional components, such as colourants, flavourings, texture modifiers and/or acid materials. The acid materials, such as organic acids, e.g. citric acid, provide an improved mouth feel for the consumer.
The encapsulating film may include an outer coating conventionally used in oral medicaments.
To produce the film, HPMC, typically in the form of a powder, is mixed with the plasticiser (if present) and water to produce an aqueous solution. The further components (if present) are then dissolved or dispersed in the solution. A layer of the solution is then cast onto a suitable substrate, e.g. a conveyor belt, and the water removed, e.g. by heating with hot air, to form a dried film which is removed from the substrate.
The film is then used to encapsulate a core as described above. The encapsulation process may use any conventional process, e.g. as disclosed in WO97/35537, WO00/27367 or WO01/03676.
Although the film material is cold water soluble, the resulting capsules are nevertheless found to be sufficiently robust to withstand the production and packaging processes. In addition, they may be held in the hand without the film wall dissolving or rupturing prematurely. However, it will be appreciated that prolonged contact with sweat or other skin secretions may lead to the eventual dissolution of the film wall.
The medicinal formulations according to the present invention may be prepared by forming a first sheet of hydroxypropylmethyl cellulose with a plurality of depressions (for example by vacuum forming techniques), placing the material which comprises the core into the depressions, sealing a planar second sheet of hydroxypropylmethyl cellulose on top of the first sheet to enclose the core material, for example by adhesive or heat sealing, and cutting the individual dosage forms from the sheet.
Alternatively, the medicinal compositions of the present invention may be prepared by placing the core between two sheets of the film material and sealing the sheets together around the periphery of the core. The sheets may be sealed by using an adhesive, a solvent for the material comprising the sheets, by heat or radio frequency welding. Where the core is molten, a pocket may be formed between the two sheets of material into which the molten core is placed before the open part of the pocket is sealed to enclose the molten core. After the core has been sealed between the sheets, the material may be cut either through the sealed region or around the sealed region to give the individual dosage forms which are then packed either in containers or blister packs. One example of a suitable apparatus for preparing the formulations of the present invention is described in WO-A-9735537.
The invention will now be illustrated by reference to the following examples, which are given by way of illustration only.
A film of hydroxypropylmethylcellulose was placed over a vacuum-forming mould in which indentations of the shape of the finished dosage forms were present. The film was heated and vacuum formed to give a film with a plurality of blisters depending from a planar upper surface. Each blister was filled with the appropriate amount of core material, prepared as described in Examples 2 to 48 below, and a flat film of the same hydroxypropylmethylcellulose was attached to the planar upper surface of the vacuum-formed film by applying an adhesive to both the flat film and the planar upper surface and applying pressure to ensure a good seal. The individual capsules were then separated and packed.
A filled capsule was prepared as follows:

| Component | mg |
|---|---|
| **Capsule core:** | |
| Hydrogenated coconut oil<sup>1</sup> | 1250 |
| Sucrose<sup>2</sup> | 1275 |
| Flavouring agents<sup>3</sup> | 25 |
| **Capsule shell:** | |
| Methocel K100 | 6.2 |
| Methocel E50 | 55.8 |
| Glycerine | 10.1 |
| Propylene glycol | 4.8 |
| Citric acid | 3.2 |

<sup>1</sup> RM1216
<sup>2</sup> Celebration Sucrose NCP
<sup>3</sup> Cream Flavour 514388E, Blackcurrant flavour 17.80.3606, natural menthol flavouring.
The capsule of Example 2 is a placebo capsule (i.e. it contains no active agent). It is made using a foamed capsule film, prepared by pumping nitrogen gas into a concentrated solution of the film composition prior to casting it into a film. The foamed capsule film has a thickness of approximately 150µm.
The core is prepared by melting the hydrogenated coconut oil at 60-80°C. The flavourings (if present) are then added with stirring until a homogeneous mixture is obtained. The sucrose is then added batchwise with mixing to ensure even dispersion.

The appropriate amount(s) of the active agent(s) is/are then dispersed in the product. The resulting capsule core is a solid or semi-solid fondant which must be heated to 50-60°C before being filled into the blisters of Example 1.
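As a quick consistency check, the per-capsule masses in the Example 2 core can be converted to weight percentages, which is often the more convenient form for batch scale-up (the figures below simply restate the table; nothing new is assumed):

```python
# Example 2 capsule core, mg per capsule (from the table above).
core = {"hydrogenated coconut oil": 1250, "sucrose": 1275, "flavouring agents": 25}

total = sum(core.values())
print(total)  # 2550 (mg per core)

for name, mg in core.items():
    print(f"{name}: {100 * mg / total:.1f}% w/w")
# hydrogenated coconut oil: 49.0% w/w
# sucrose: 50.0% w/w
# flavouring agents: 1.0% w/w
```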
The following cores were produced for encapsulation by the capsule film:

| Core component | EX3 (mg) | EX4 (mg) | EX5 (mg) | EX6 (mg) | EX7 (mg) |
|---|---|---|---|---|---|
| Hydrogenated coconut oil | 1246 | 1250 | 1500 | 1250 | 1139 |
| Sucrose | 1246 | 1250 | 812.5 | 866.7 | 1139 |
| Flurbiprofen | 8.75 | - | - | - | - |
| 4-Hexylresorcinol | - | 2.4 | - | - | - |
| Dextromethorphan Adsorbate (10% drug) | - | - | 150 | - | - |
| Flavouring | - | - | 37.5 | 50 | - |
| Taste Masked Guaiphenesin (60% drug) | - | - | - | 333.3 | - |
| Taste Masked Ibuprofen (90% drug) | - | - | - | - | 222 |
The flavouring in Example 5 is raspberry flavouring and in Example 6 it is cherry flavouring.
The fondant cores were prepared in accordance with the method detailed in Example 2. The resultant cores were filled into blisters of an unfoamed capsule film material having the following formulation:

| Component | % |
|---|---|
| HPMC (Methocel E50 ex Dow) | 75 |
| Anhydrous citric acid | 15 |
| Glycerin | 10 |
| Colourant | q.s. |
The capsule film was about 80µm thick.
The fondant core of Examples 3 and 4 may be used in capsules intended for the treatment of sore throats; the fondant core of Example 5 may be used in capsules for the treatment of dry coughs; the fondant core of Example 6 may be used in capsules for the treatment of chesty coughs; and the fondant core of Example 7 may be used in capsules for the treatment of headaches and other similar pains or aches.
The following cores were produced for encapsulation by the capsule film:

| Core component | EX8 (mg) | EX9 (mg) | EX10 (mg) | EX11 (mg) | EX12 (mg) |
|---|---|---|---|---|---|
| Hydrogenated coconut oil | 1005 | 1000 | 1250 | 1250 | 1100 |
| Sucrose | 1005 | 1000 | 1250 | 1250 | 1100 |
| Aluminium hydroxide | 420 | 500 | - | - | - |
| Magnesium oxide | 70 | - | - | - | - |
| Senna | - | - | 7.5 | - | - |
| Bisacodyl | - | - | - | 5 | - |
| Pseudoephedrine Hydrochloride | - | - | - | - | 300 |
The fondant cores were prepared in accordance with the method detailed in Example 2. The resultant cores were filled into blisters of an unfoamed capsule film material having the following formulation:

| Component | % |
|---|---|
| HPMC (Methocel E50 ex Dow) | 80 |
| Anhydrous citric acid | 5 |
| Propylene glycol | 7.5 |
| Glycerin | 7.5 |
| Colourant | q.s. |
The capsule film had a thickness of about 75µm.
The fondant core of Examples 8 and 9 may be used in capsules intended for the treatment of indigestion; the fondant core of Examples 10 and 11 may be used in capsules for the treatment of constipation; and the fondant core of Example 12 may be used in capsules for treating cold and flu symptoms.
Capsule cores containing two active agents were prepared as follows:

| Core component | EX13 (mg) | EX14 (mg) |
|---|---|---|
| Hydrogenated coconut oil | 1139 | 987.5 |
| Sucrose | 1079 | 987.5 |
| Taste Masked Ibuprofen (90% drug content) | 222 | - |
| Pseudoephedrine HCl | 60 | - |
| Paracetamol | - | 500 |
| Diphenhydramine HCl | - | 25 |
Examples 13 and 14 were prepared in accordance with the method detailed in Example 2. The cores were filled into blisters of an unfoamed capsule film material having the formulation given in Examples 8-12. The filled capsules may be used in the treatment of cold and flu symptoms.
Capsule cores using an alternative low melting point solid organic component were prepared as follows:

| Core component | EX15 (mg) | EX16 (mg) |
|---|---|---|
| PEG 1000 | 1250 | 1500 |
| Sucrose | 866.7 | 812.5 |
| Flavouring | 50 | 37.5 |
| Taste Masked Guaiphenesin (60% drug content) | 333.3 | - |
| Dextromethorphan Adsorbate (10% drug content) | - | 150 |
In Example 15, the flavouring was cherry flavouring and in Example 16, it was raspberry flavouring.
Examples 15 and 16 were prepared in accordance with the method detailed in Example 2. The cores were filled into blisters of an unfoamed capsule film material having the formulation given in Examples 8-12. The core of Example 15 may be used in capsules intended for the treatment of chesty coughs and the core of Example 16 may be used in capsules intended for the treatment of dry coughs.
The Examples 1-16 were repeated, except that the sucrose component of the fondant core was replaced with glucose having a mean particle size of 10-15µm.
The Examples 1-16 were repeated, except that the sucrose component of the fondant core was replaced with fructose having a mean particle size of 15-20µm.
Whether you were watching the cartoon or playing with the toys, any cool kid had a couple of these figures around the house.
Answer G.I. Joe
The Hasbro-produced classic G.I. Joe has been produced since 1963 and was one of the first toys to use the term "action figure" instead of "doll".
| https://grizly.com/question/whether-you-were-watching-the-cartoon-or-playing-with-the-toys-any-cool-kid-had-a-couple-of-these-figures-around-the-house
Last December the CRE@CTIVE Project (“Innovation for bringing creativity to activate traditional sectors in Mediterranean area”) kicked off.
It is an EU-funded project with a total budget of €3.2 million, aimed at activating traditional Mediterranean industry sectors by boosting creativity as a key to increasing economic opportunities for MSMEs. Design talents from international organizations, creative hubs, fashion schools and design centres will be linked with MSMEs, fostering a creativity and innovation process with impacts on value chains and business alliances across the Mediterranean.
Due to the COVID-19 restrictions, the kick-off event had to be held online. All partners introduced their institutions, the project team and the role they will play within the project. The lead partner introduced in detail the project with its different work packages and outputs and gave an introduction to the management procedures and financial management guidelines for all partners.
The official institutional meeting and press conference of CRE@CTIVE will be held in March. | https://south.euneighbours.eu/news/textile-footwear-and-leather-eu-funded-crective-project-boost/ |
Bullying is unwanted, aggressive behavior among school aged children that involves a real or perceived power imbalance. The behavior is repeated over time. Both kids who are bullied and who bully others may have serious, lasting problems.
In order to be considered bullying, the behavior must be aggressive and include:
- An Imbalance of Power: Kids who bully use their power—such as physical strength, access to embarrassing information, or popularity—to control or harm others. Power imbalances can change over time and in different situations, even if they involve the same people.
- Repetition: Bullying behaviors happen more than once or have the potential to happen more than once.
Bullying includes actions such as making threats, spreading rumors, attacking someone physically or verbally, and excluding someone from a group on purpose.
Types of Bullying
- Verbal bullying is saying or writing mean things. Verbal bullying includes:
- Teasing
- Name-calling
- Inappropriate sexual comments
- Taunting
- Threatening to cause harm
- Social bullying, sometimes referred to as relational bullying, involves hurting someone’s reputation or relationships. Social bullying includes:
- Leaving someone out on purpose
- Telling other children not to be friends with someone
- Spreading rumors about someone
- Embarrassing someone in public
- Physical bullying involves hurting a person’s body or possessions. Physical bullying includes:
- Hitting/kicking/pinching
- Spitting
- Tripping/pushing
- Taking or breaking someone’s things
- Making mean or rude hand gestures
If you believe your child is being bullied, please encourage them to take the following steps:
1. Look at the kid bullying you and tell him or her to stop in a calm, clear voice. You can also try to laugh it off. This works best if joking is easy for you. It could catch the kid bullying you off guard.
2. If speaking up seems too hard or not safe, walk away and stay away. Don’t fight back. Find an adult to stop the bullying on the spot.
There are things you can do to stay safe in the future, too.
- Talk to an adult you trust. Don’t keep your feelings inside. Telling someone can help you feel less alone. They can help you make a plan to stop the bullying.
- Stay away from places where bullying happens.
- Stay near adults and other kids. Most bullying happens when adults aren’t around.
Nepal maintained army strength even during the reigns of the Lichchavi kings. Later, its operations began to generate income. The way in which King Prithvi Narayan Shah and his successors diligently mastered the art of warfare and strategy resulted in the success of the Gorkhali army. After the Kot Parva, the Rana family emerged and radically changed policies. This research studies the Nepali Army's glorious history, its transformation, and mainly its public relations. The Postmodern Military Model (PMMM) is the theoretical perspective that has guided this study. For this, a qualitative method that deals with subjectivity is adopted. Secondary data such as journals, books and standard websites are used to analyze the data. The Nepali Army is not a threat to the society that it protects, as it has been trying to build its trust and credibility among the public. During its imperial era, Great Britain awarded several Gurkha soldiers in its military the Victoria Cross for their unparalleled bravery and courage in various battles. The Nepali Army has had exposure to service in the outside world for decades. Relations between civilians and the army have not been bad in Nepal for many centuries, despite some friction in modern Nepal. However, politicians, notably the sitting PM or Defense Minister, routinely try to invoke the Nepal Army and draw it into the political jurisdiction. The Nepali Army has been doing its duties honestly and unfailingly both inside and outside Nepal.
The articles rest within the authority of the Nepali Army. Only with the Nepali Army's prior permission can any article, in whole or in part, from this journal be reproduced in any form. | https://www.nepjol.info/index.php/unityj/article/view/38784
There is 1 remedial school company listed in South Africa. Narrow down your search by selecting a region below.
Find a remedial schools company in your area:
- Gauteng (1)
Cross-Over Remedial School
Cross-Over Remedial School in Randhart, Alberton, is a private school registered with the Department of Education that caters for grade R - 7 learners who have learning... | https://www.simplylinks.co.za/directory362_remedial-schools.htm |
Bierton CE Combined School demonstrates a strong culture of inclusion through the vision, values and culture of the school. Staff and governors are committed to promoting equality of opportunity across the school community and take seriously the requirements of the Public Sector Equality Duty as defined by legislation in 2010: http://www.legislation.gov.uk/ukpga/2010/15/section/149.
To promote the take-up of extra-curricular opportunities, ensuring that there are clubs that appeal to girls in order to increase their participation in sporting clubs.
To extend our pupils’ understanding of cultural diversity and tolerance of differences in culture and religious beliefs through positive experiences of different cultures, traditions and languages.
To ensure that all pupils irrespective of gender, make at least good progress year on year and to close the gap between prior higher attaining girls and boys in reading.
Testing Europe’s Unity
The Atlantic Council kicked off the second day of the conference “Toward a Europe Whole and Free” with a keynote conversation featuring José Manuel Durão Barroso, president of the European Commission. With the crisis in Ukraine unfolding, the Transatlantic Trade and Investment Partnership (TTIP) negotiations on the agenda, and the European Parliament elections quickly approaching, President Barroso outlined how these developments pose both opportunities and challenges for Europe’s unity.
President Barroso described the Ukrainian crisis as "the biggest challenge to peace in Europe since the fall of the Berlin wall, if not even since the second World War", particularly in terms of its possible implications not only for European, but for global peace and security. He emphasized the importance of Ukrainian sovereignty and its right to associate with both the European Union and Russia, a right now contested by the Kremlin. For President Barroso, the main question the Ukraine crisis has raised is whether Russia chooses to stand by European and transatlantic values and international law, or to breach them intentionally. To respond best to Russian actions in Ukraine, the European Union, jointly with its partners, particularly the United States, needs to make it clear that it is possible to have an independent, sovereign, democratic, and prosperous Ukraine that will one day stand as an example of cooperation between Europe and Russia.
Addressing the economic sanctions implemented by the European Union against key Russian individuals involved in the crisis, President Barroso argued that the sanctions are an important way of demonstrating to the Russian leadership the consequences of its actions, and thereby of reaching de-escalation of the conflict. The European Union, as Russia's largest trading partner, has already been able to have an impact on the Russian economy, as crucial investment decisions have been stopped, Russian economic growth has fallen, and its credit ratings have been downgraded. The European Union has also been successful in coordinating these sanctions with the United States, particularly through the newly established G7, turning the sanctions regime into a general transatlantic movement. However, as several member states of the European Union remain dependent on Russian energy exports, President Barroso said that the European Commission will need to continue promoting the idea of energy security. First steps towards a new energy security program have already been made, as the European Union recently agreed to diversify its sources of energy supply through a new southern energy corridor.
President Barroso also addressed the importance of continued negotiations on the TTIP agreement. He emphasized his continuing support for the agreement, arguing that, as the largest trade and investment agreement, it can have a transformative role not only for the European Union and the United States, but also for the global economic order. Regarding the upcoming European Parliament elections in May, President Barroso voiced his strong belief that the moderate, pro-European parties would prevail, viewing the anti-European sentiments in some member states as a normal phenomenon in times of financial insecurity.
The opening panel of the second day of “Toward a Europe Whole and Free” was moderated by the Atlantic Council President and CEO Frederick Kempe. The conference celebrates the historic enlargements of NATO and the European Union and considers how best to sustain Europe’s path toward peace and prosperity despite the most dangerous challenge to that vision since the Cold War. | https://www.atlanticcouncil.org/commentary/event-recap/barroso-discusses-europe-s-unity-in-light-of-ukraine-crisis-ttip-and-eu-elections/ |
**Disclosure:** We declare that we have no financial and personal relationships with other people or organizations that would inappropriately influence our work.
Introduction {#os12422-sec-0005}
============
The sternoclavicular joint is a diarthrodial saddle type synovial joint. Between the upper extremity and the axial skeleton, the sternoclavicular joint is the only bony articulation[1](#os12422-bib-0001){ref-type="ref"}, [2](#os12422-bib-0002){ref-type="ref"}, [3](#os12422-bib-0003){ref-type="ref"}. The sternoclavicular joint is inherently unstable because the medial clavicular surface articulates with less than 50% of its corresponding articular surface on the manubrium sterni[3](#os12422-bib-0003){ref-type="ref"}, [4](#os12422-bib-0004){ref-type="ref"}. Any movement of the shoulder girdle causes some motion of the sternoclavicular joint: the clavicle elevates approximately 4° for every 10° of arm forward flexion[5](#os12422-bib-0005){ref-type="ref"}, and during combined movements the clavicle can rotate up to 40° along its longitudinal axis. Because of the forces acting at the shoulder girdle, its large movements can dislocate the sternoclavicular joint anteriorly or posteriorly. Dislocation occurs more easily in patients with a short clavicle, which results in greater torque.
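The flexion-elevation coupling quoted above (approximately 4° of clavicular elevation per 10° of forward flexion) is linear, so it can be written out and applied directly; for example, raising the arm to 90° of forward flexion implies roughly 36° of clavicular elevation:

```latex
\theta_{\text{clavicle}} \approx \frac{4^{\circ}}{10^{\circ}}\,\theta_{\text{flexion}}
= 0.4\,\theta_{\text{flexion}},
\qquad
\theta_{\text{flexion}} = 90^{\circ} \;\Rightarrow\; \theta_{\text{clavicle}} \approx 36^{\circ}.
```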
The stability of the sternoclavicular joint depends mainly on the intrinsic and extrinsic ligament structures surrounding the joint, which make the sternoclavicular joint a constrained joint[3](#os12422-bib-0003){ref-type="ref"}, [4](#os12422-bib-0004){ref-type="ref"}. The costoclavicular ligament is divided into an anterior fasciculus, which resists superior rotation and lateral displacement, and a posterior fasciculus, which resists inferior rotation and medial displacement[1](#os12422-bib-0001){ref-type="ref"}. Besides the costoclavicular ligament, the other ligaments surrounding the sternoclavicular joint also aid the stability of the joint.
Due to the stable structures surrounding the sternoclavicular joint, sternoclavicular joint dislocations are infrequent, and represent only 3% of all dislocations in the shoulder girdle in the clinic[6](#os12422-bib-0006){ref-type="ref"}, [7](#os12422-bib-0007){ref-type="ref"}. Because of the different injury mechanisms, dislocation directions, and clinical manifestations, sternoclavicular joint dislocation can be divided into anterior and posterior dislocations. A lateral compressive force that affects the shoulder girdle can cause rupture of the anterior capsule and the costoclavicular ligament, resulting in anterior dislocation of the sternoclavicular joint[4](#os12422-bib-0004){ref-type="ref"}.
The incidence of anterior dislocation is almost 90% in all sternoclavicular joint dislocations[8](#os12422-bib-0008){ref-type="ref"}. Because the stable structures around the dislocated sternoclavicular joint are broken, manual reduction of the anterior sternoclavicular joint dislocation is difficult and redislocation occurs frequently after reduction.
Because some important tissues, such as the pleura, lung, mediastinum and trachea, lie just behind the sternoclavicular joint, the surgical treatment of sternoclavicular dislocation carries high risks. Surgical methods such as Kirschner wires, FiberWire, two screws and a strong suture, T‐plates, and locking compression plates have been used in the treatment of sternoclavicular dislocation, but there are still complications, including limited movement of the shoulder girdle, redislocation, vascular and nerve rupture, pleura rupture, vital organ injuries, and displacement and breakage of plates and screws[9](#os12422-bib-0009){ref-type="ref"}, [10](#os12422-bib-0010){ref-type="ref"}, [11](#os12422-bib-0011){ref-type="ref"}, [12](#os12422-bib-0012){ref-type="ref"}, [13](#os12422-bib-0013){ref-type="ref"}. These serious complications call for an internal fixation that provides stability and micromotion while using fewer screws, so that such severe consequences can be avoided. However, there is still no ideal surgical method.
To provide micromotion and stability, and to avoid the injury of important organs by screws, we used an acromioclavicular joint hook plate for the treatment of anterior sternoclavicular joint dislocation. The purpose of the present study is to evaluate the safety and efficacy of using an acromioclavicular joint hook plate for the treatment of sternoclavicular joint dislocation, and to present the functional outcomes, in a series of 10 patients, at a minimum of 10 months of follow‐up.
Methods {#os12422-sec-0006}
=======
*General Data* {#os12422-sec-0007}
--------------
From January 2015 to May 2017, 10 patients with anterior sternoclavicular joint dislocation were admitted and surgically treated with acromioclavicular joint hook plates at our department. The Committee on Research Ethics of the Union Hospital approved this study. Exclusion criteria included that: (i) the patient had a brain injury or serious underlying chronic illness and, therefore, could not suffer the risk of surgery and anesthesia; and (ii) the patient demanded the conservative treatment even if closed reduction was pointless. There were 7 male and 3 female patients, with a mean age of 43.6 years. Two patients suffered from bilateral dislocations of the sternoclavicular joints, 8 patients suffered from unilateral dislocation, and 1 of the 10 patients had an old dislocation (more than 3 weeks). Three patients had rib fractures; 1 patient had a tibia fracture; and 1 patient had a clavicle fracture on the other side. Injury mechanism: 6 patients had car accidents, 3 patients fell from a high place, and 1 patient was injured by a crashing object.
All patients underwent the standard preoperative assessment, including preoperative history, physical examination, chest X‐ray, and computed tomography (CT) scan. Closed reduction was attempted for all the patients but failed, so surgery was undertaken. The interval between injury and surgery was from 3 to 7 days for the 9 acute patients.
*Surgical Technique* {#os12422-sec-0008}
--------------------
All patients were positioned supine on the operating table, and underwent general anesthesia. An anterosuperior straight incision was extended from the medial clavicle to the midsuperior aspect of the sternal manubrium, and the sternoclavicular joint, the sternal manubrium, and the medial clavicle were exposed. The incarcerated soft tissue of the sternoclavicular joint was cleaned, and the broken sternoclavicular joint cartilage plate was replaced or cleaned. The pointed end of the acromioclavicular joint hook plate was inserted into the dorsal osteal face of the sternal manubrium, and the lever effect was used to press the medial end of the clavicle down for reduction (Fig. [1](#os12422-fig-0001){ref-type="fig"}). When the dislocation was successfully reduced by the hook plate, three or four screws were fixed in the clavicle through the plate. The broken anterior sternoclavicular ligament and the costoclavicular ligament were repaired with absorbable sutures.
![Hook plate fixation in a patient with left anterior sternoclavicular joint dislocation. (A) An anterosuperior straight incision was extended from the medial clavicle to the midsuperior aspect of the sternal manubrium. (B) The lever effect was used to press the medial end of the clavicle down for reduction by the hook plate. (C) Intraoperative fluoroscopy showed that the dislocation was reduced successfully by the hook plate.](OS-11-91-g001){#os12422-fig-0001}
*Postoperative Management* {#os12422-sec-0009}
--------------------------
In the first 4 weeks, the shoulder was immobilized in a sling, and easy exercises such as pendulum exercises in the glenohumeral joint were allowed, but not over 90° of abduction. After 4 weeks, the range of motion could be increased according to the clinical course. Sporting activities were to be avoided in the first 12 weeks. The hook plate could be removed 12 months postoperatively according to the clinical course.
*Follow up* {#os12422-sec-0010}
-----------
All the patients were followed up for a mean duration of 16.9 months (range, 10--24 months). Chest X‐rays and motion range measurements of the glenohumeral joint were taken on the first postoperative day, after 4, 8, and 12 weeks, and then once every 6 months. The American Shoulder and Elbow Society (ASES) scoring system was used to assess the preoperative and postoperative physical function and ability of patients.
Results {#os12422-sec-0011}
=======
*General Data* {#os12422-sec-0012}
--------------
In this study, 8 patients underwent unilateral operations and 2 patients underwent bilateral operations. The mean operative blood loss was 45 mL (range, 30--90 mL), and the mean operative time was 0.8 h (range, 0.4--1.5 h). There were no respiratory or circulatory issues in the operation. There was no postoperative wound infection, and there were no complications such as joint redislocation, vascular rupture, pleura rupture, or vital organ injury. The associated injuries were treated effectively. There was no plate breakage or screw breakage at final follow‐up.
*X‐rays and Motion Range Measurement* {#os12422-sec-0013}
-------------------------------------
The postoperative X‐rays showed that all the dislocated joints were reduced successfully, the location and angle of the plates were suitable, and no redislocation appeared in the follow‐up period. There was no osteolysis observed in the sternum for all the patients.
*Physical Function* {#os12422-sec-0014}
-------------------
The postoperative abduction angle of the glenohumeral joint had a mean of 164.3° (range, 153°--172°) and the angles of two glenohumeral joints were less than 160°. The posterior extension angle of the glenohumeral joint had a mean of 39.9° (range, 30°--44°). The forward elevation had a mean of 147° (range, 135°--165°). The horizontal extension had a mean of 24.5° (range, 21°--29°) (Figs [2](#os12422-fig-0002){ref-type="fig"} and [3](#os12422-fig-0003){ref-type="fig"}).
![A patient with bilateral dislocation of the sternoclavicular joints and rib fratures. (A) The postoperative X‐ray image showed the well reduction by hook plates. (B--D) Functional outcome 6 months after surgery.](OS-11-91-g002){#os12422-fig-0002}
![After 17 months, the internal fixations of the patient in Fig. [2](#os12422-fig-0002){ref-type="fig"} were all removed. (A) The postoperative X‐ray image showed that the sternoclavicular joints were still stable after the removal of internal fixation. (B--D) Functional outcome 17 months after the first surgery.](OS-11-91-g003){#os12422-fig-0003}
According to the ASES scoring system, the postoperative physical function had a mean of 94.8, which was better than the preoperative function, which had a mean of 83.5 (Table [1](#os12422-tbl-0001){ref-type="table"}).
######
General data, American Shoulder and Elbow Society scoring system before and after operation, and the postoperative movements of glenohumeral joints of all 10 patients
| No. | Sex | Age (years) | Pre ASES | Post ASES | Follow‐up (months) | Post abduction angle (°) | Post extension angle (°) | Post forward elevation (°) | Post horizontal extension (°) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | M | 49 | 69 | 89 | 18 | 160(R)/153(L) | 44(R)/38(L) | 155(R)/140(L) | 26(R)/22(L) |
| 2 | M | 28 | 75 | 92 | 22 | 172(R)/168(L) | 40(R)/43(L) | 165(R)/148(L) | 26(R)/22(L) |
| 3 | M | 64 | 81 | 95 | 10 | 170 | 41 | 158 | 25 |
| 4 | M | 30 | 86 | 98 | 19 | 163 | 40 | 142 | 25 |
| 5 | M | 53 | 83 | 96 | 24 | 169 | 44 | 149 | 29 |
| 6 | M | 40 | 86 | 96 | 14 | 164 | 36 | 141 | 21 |
| 7 | M | 37 | 88 | 98 | 11 | 168 | 43 | 144 | 26 |
| 8 | F | 38 | 82 | 96 | 15 | 160 | 41 | 143 | 27 |
| 9 | F | 42 | 86 | 96 | 17 | 168 | 39 | 144 | 23 |
| 10 | F | 55 | 79 | 92 | 19 | 157 | 30 | 135 | 22 |
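The abduction summary statistics quoted in the Results can be re-derived from the per-patient values in Table 1; a short check (bilateral patients contribute one value per side):

```python
# Postoperative glenohumeral abduction angles (degrees) from Table 1;
# patients 1 and 2 were treated bilaterally, so each contributes two values.
abduction = [160, 153,          # patient 1 (R/L)
             172, 168,          # patient 2 (R/L)
             170, 163, 169, 164, 168, 160, 168, 157]  # patients 3-10

mean = sum(abduction) / len(abduction)
print(f"n = {len(abduction)}, mean = {mean:.1f}, range = {min(abduction)}-{max(abduction)}")
# n = 12, mean = 164.3, range = 153-172

# Two joints fell short of 160 degrees of abduction, as noted in the text.
print(sum(1 for a in abduction if a < 160))  # 2
```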
Discussion {#os12422-sec-0015}
==========
As a diarthrodial saddle type synovial joint, the sternoclavicular joint is inherently unstable, and it is also the only bony articulation between the axial skeleton and the upper extremity[1](#os12422-bib-0001){ref-type="ref"}, [2](#os12422-bib-0002){ref-type="ref"}, [3](#os12422-bib-0003){ref-type="ref"}. The ligaments surrounding the sternoclavicular joint guarantee the stability of the joint[2](#os12422-bib-0002){ref-type="ref"}. A lateral compressive force acting on the shoulder girdle can cause rupture of the anterior capsule and part of the costoclavicular ligament, which results in anterior dislocation of the sternoclavicular joint[8](#os12422-bib-0008){ref-type="ref"}. Because high energy injury ruptures these ligaments, redislocation happens frequently after manual reduction, and conservative treatment shows poor efficacy in sternoclavicular joint dislocation patients, with, for instance, progressive pain limiting the movement of the shoulder girdle and decreasing quality of life[14](#os12422-bib-0014){ref-type="ref"}.
There are many important thoracic structures, such as the trachea, the esophagus, the brachiocephalic veins, the brachiocephalic artery, and the brachial plexus, located posteriorly to the sternoclavicular joint[15](#os12422-bib-0015){ref-type="ref"}, [16](#os12422-bib-0016){ref-type="ref"}. These complex surroundings mean not only that sternoclavicular joint dislocation can cause trauma (e.g. blood vessel rupture, nerve injury, mediastinal organ injury, pleura rupture, and lung rupture), but also that fewer screws should be used, especially on the sternal manubrium, when reducing the dislocation, in order to prevent accidental injury by the internal fixation[3](#os12422-bib-0003){ref-type="ref"}.
Surgical techniques including Kirschner wires, FiberWire, two screws with a strong suture, T‐plates, and locking compression plates have been reported for the treatment of sternoclavicular joint dislocation[5](#os12422-bib-0005){ref-type="ref"}, [6](#os12422-bib-0006){ref-type="ref"}, [7](#os12422-bib-0007){ref-type="ref"}, [8](#os12422-bib-0008){ref-type="ref"}, [9](#os12422-bib-0009){ref-type="ref"}. For the methods using wires, a significant risk to mediastinal structures from wire migration has been reported, including fatal perforation of the great vessels[17](#os12422-bib-0017){ref-type="ref"}, [18](#os12422-bib-0018){ref-type="ref"}. In addition, drilling and screwing into the sternal manubrium could increase the risk of mediastinal structure rupture. For the methods using plates, such as T‐plates and locking compression plates, the plate neither decreases the risk of mediastinal structure rupture from screws placed in the sternal manubrium nor benefits the movement of the shoulder girdle, because the rigidity of firm fixation limits the micromotion of the sternoclavicular joint[19](#os12422-bib-0019){ref-type="ref"}, [20](#os12422-bib-0020){ref-type="ref"}. Meanwhile, firm plate fixation could also increase the risks of internal fixation displacement and breakage, loss of reduction, and infection[21](#os12422-bib-0021){ref-type="ref"}.
In this study, an acromioclavicular joint hook plate was used for the treatment of sternoclavicular joint dislocation. This treatment has several advantages: (i) the hook placed behind the sternal manubrium reduces the risk of mediastinal structure rupture by screws; (ii) the acromioclavicular hook plate offers sufficient mechanical strength to maintain the stability of the sternoclavicular joint; and (iii) the hook plate allows micromotion within a certain range between the hook and the sternal manubrium, which could benefit the movement of the shoulder girdle and reduce the risk of displacement and breakage of the internal fixation. For the 10 patients in this study, the operations proved safe and effective, with obvious improvement in shoulder girdle movement and no complications.
However, this treatment still has some disadvantages: (i) the structure of the acromioclavicular joint hook plate does not closely match the anatomy of the sternoclavicular joint, and the plate does not align with the joint; (ii) the hook of the plate is sharp, so some risk of mediastinal structure rupture remains; (iii) the hook plate has no dedicated screw hole for sternoclavicular joint dislocation with a medial clavicle fracture; and (iv) osteolysis of the sternum caused by the hook plate needs to be observed in further research. Based on these disadvantages, we have designed a new hook plate, for which a Chinese patent has been applied. This new hook plate conforms to the anatomy of the sternoclavicular joint, has a blunt hook to avoid the risk of rupture, and has screw holes next to the hook for fixing medial clavicle fracture fragments. In addition, the acromioclavicular joint hook plate was used for anterior sternoclavicular joint dislocation in this study; whether it could also be used for posterior sternoclavicular joint dislocation should be tested in the future.
*Conclusion* {#os12422-sec-0016}
============
The acromioclavicular joint hook plate is safe and effective for the treatment of anterior sternoclavicular joint dislocation. The hook structure provides micromotion, which benefits the movement of the shoulder girdle, and avoids screwing into the sternal manubrium, thereby reducing the risks of mediastinal structure rupture and of breakage and displacement of the plate. However, this treatment still has some disadvantages, and further studies are needed to improve the design of the new hook plate.
This work was financially supported by the Development Center for Medical Science and Technology, the National Health and Family Planning Commission of the People\'s Republic of China (ZX‐01‐C2016024) and the Natural Science Foundation of Hubei Province (2016CFB424).