Earlier this year, Reddam House School in the UK made the headlines by launching its metaverse school, with numerous classes taught in virtual reality (VR) classrooms.
While some may have been surprised by this advancement, the benefits of VR in education and training are well known: it improves learning speed and quality of retention, and increases learners’ confidence to apply new knowledge by 275%.
So, it’s clear that students can get a lot out of VR – but many educators are asking themselves: is VR the future of education? And, more importantly, what steps should educators take to ensure their students don’t miss out?
How AR and VR in education are different
Both augmented reality (AR) and virtual reality (VR) are finding valuable niches in the education sector, with distinct benefits for each technology. We already addressed the use of AR in education in our previous blog, so now we’re going to focus on the application of VR in classrooms and other educational settings.
It’s important to realize the key difference between AR and VR, as this shapes the unique advantages of each.
AR uses the real world as a ‘3D canvas’, and superimposes virtual elements onto reality. Users can also experience AR with many different devices, ranging from specialized AR glasses to regular smartphones. As a result, practically every student can access AR educational experiences.
Conversely, VR is a self-contained experience that excludes the ‘real world’ and replaces it with a virtual one. VR is generally only experienced using special VR headsets or goggles, often combined with haptic feedback devices like gloves.
So, while AR may be easier to access, VR can provide a greater depth of experience for students. For this reason, some educational institutions have already started to invest in VR headsets in much the same way as other tech, like iPads or laptops. Today, VR is being used in all levels of education, from primary schools to universities.
What is the value of VR in education?
Various industries and sectors already extensively use VR for training, and many of the same features make it equally valuable for the education sector.
According to American University in Washington, DC (a research university that trains education professionals), VR has a strong contribution to make:
“VR can bring academic subjects to life, offering students new insights and refreshing perspectives. But VR can’t replace human interaction. Learning is fundamentally a social experience, so VR is best used as a supplemental learning tool.”
There are key areas where VR in education excels:
Experiential learning
Many students achieve better results when they can access knowledge via experiential learning. For some subjects, it’s the only way to learn. But access to experiential learning in the real world is limited by time and resources. With VR, students can gain equal access to hands-on experiences without limitation. Without needing to travel, time is saved, and more experiences are possible.
Inspiring imagination
By interacting with learning resources via a gamified experience, students can let their imagination run wild. They can gain access to new experiences that wouldn’t be possible otherwise, like walking on the surface of Mars, or stepping inside a living cell. This can inspire students to explore new interests.
Active engagement
VR is a very active form of learning, and users get to experience content via interaction. The result is very high engagement with learning materials, and increased retention.
Remote learning
Many schools and universities were inspired to use VR classrooms as a direct response to the COVID pandemic, to enable a high level of virtual learning. This advantage remains: students in remote locations and home-learners get significant benefit from VR learning.
Seeing the invisible
Some things don’t quite come across in a textbook, or even a video. Looking at cellular biology, for example, VR can bring processes like protein synthesis and cell division to life – so students can see something that just cannot be experienced another way.
Virtual field trips
VR can take students to locations far afield, extending their learning experiences and horizons.
Dangerous and disaster training
VR enables safe teaching of subjects that would be dangerous or impractical to practice in the real world, like preparing for emergency or disaster situations.
Deeper reflection and participation
Because VR experiences are so immersive, they’re also more memorable. Many educators report that students are more actively involved in reflecting on what they’ve learned after VR sessions, spurring more vigorous conversations in class and greater participation.
Equality of experience
With VR, everyone has the same access to learning materials, and can learn in the best ways for them. This gives students a more level playing field. There’s no being stuck at the back, where they can’t see or be involved. Everyone gets the same immersive hands-on experience.
Real-world examples of VR in education
Still not convinced? Let’s look at some concrete examples of the diverse ways VR in classrooms can deliver better outcomes for students.
Lagos Business School, Nigeria – using VR to teach empathy and compassion.
The University of Amsterdam, The Netherlands – launched a project to make limited-access archaeological sites, laboratories, museum displays, and historical sites accessible via VR.
The University of Hertfordshire, UK – using VR to help pharmaceutical science students understand the biomechanics of pharmaceuticals and the science of drug discovery.
The Mendip School, UK – using VR to help autistic students prepare for adult life with simulations of work experiences that prepare students for the real thing by giving them realistic expectations and reducing anxiety.
The Inspired Education Group, Worldwide – unveiled a new VR classroom and metaverse learning project, which they plan to extend to their 55,000 students worldwide. Students can join classes with their peers around the world in a virtual school.
Race Leys Junior School, UK – invested in VR headsets so the entire class could go on virtual school trips during the pandemic period. As the school found VR boosted student performance, they have permanently integrated VR into their learning strategy.
The Open University, UK – created a VR experience to tackle bullying, by enabling students to understand different perspectives and rethink prejudiced attitudes.
Humanitarian organization Terra Pura, Ukraine – using a VR/AR app created by Fectar to teach children about the dangers of landmines and other explosives and how to recognize them. This is a life-saving (and limb-saving) tool for millions of children living in an active warzone.
As we can see from the above examples, there are many different ways learning can be enhanced with VR – and these are just the tip of the iceberg.
Given the future growth of these technologies, it’s also worth thinking about teaching VR as a subject, giving students the knowledge they need to create 3D spaces and immersive content. This skill may prove as useful as computer literacy has been for the past few generations.
The costs of using VR in education
There’s always the cost to consider, and it comes from several areas. First, there’s the equipment, as each student needs access to their own VR headset. The cost of headsets is coming down, making them more accessible, and already compares favorably to something like an iPad.
Next there’s the learning content. This will be highly variable, depending on whether you’re using a specialized ‘VR classroom’ solution with its own content, or if you are using free resources, or learning material you create yourself.
With both free content and subscription-based content, you’re limited to what’s available and how it’s presented. By contrast, making your own learning resources gives you better control over the focus and depth of the materials, but you need to account for the time and expertise involved in creating them.
If you’re just starting out, it’s far easier to start using the free VR educational content that’s already available, unless you find a perfect match with a subscription-based provider. This can help you determine which VR lessons are most effective, and why.
Building custom content and VR classrooms
The next stage is to create your own materials based on your lesson plan. The advantage of this is that you’re not limited to pre-existing content, and you can update it yourself when needed. You can also use a basic template that enables you to add more depth for advanced learners.
Thanks to new tooling, it’s now possible for non-experts to create their own VR and AR spaces, including educational content and training material. As a result, it’s much easier for educators to build VR lessons they can use in their own classrooms, and customize them exactly as they want. Given the high potential value for students, this opportunity should not be missed.
Want to see for yourself how easy it can be? Discover the Fectar Studio, and start creating VR & AR without needing any code.
Since 2018, Colombia has been making important efforts to prevent violence affecting children and adolescents. In 2018, Colombia conducted the Violence Against Children and Youth Survey (VACS), led by the Ministry of Health and Social Protection and financed by the United States Agency for International Development (USAID), with technical advice from the US Centers for Disease Control and Prevention (CDC), financial and technical support from Together for Girls, operational support from the United Nations International Organization for Migration (IOM), and contributions from other allied partners.
The VACS showed a disturbing situation: approximately two out of five children in Colombia have been victims of violence – either physical, sexual or emotional – before the age of 18. In response to this situation, in 2019, Colombia created the “National Alliance to End Violence Against Children and Adolescents”, to develop a National Action Plan (NAP) on Violence against Children and Adolescents in Colombia 2021-2024. This process received the technical and financial support of the Global Alliance, UNICEF, USAID and Universidad de los Andes.
This report aims to identify the milestones, key actions, and lessons learned from the development of the NAP in Colombia.
Mr. Subrata Das, Minister of Education at the Embassy of the Republic of India in the Russian Federation, in an interview with the journal International Affairs, highlighted the importance of educating the younger generation through the study and promotion of the moral heritage of two great apostles of non-violence, Mahatma Gandhi and Leo Tolstoy.
Two great and truly noble souls of the modern era, Mahatma Gandhi and Leo Tolstoy, had similar approaches to life and existence.
The commonality of their thinking increased as a result of their correspondence.
Mahatma Gandhi, himself an advocate of non-violence, called Tolstoy the greatest preacher of non-violence.
Their ideas have become even more relevant today, in times of rampant consumerism, as the world seeks to address environmental and ecological issues.
As social reformists, both sought universal human progress by drawing strength from existing realities. The wisdom of putting into practice what they preached came naturally to them. In the case of Mahatma Gandhi, it led to India’s independence movement by means of non-violence and civil resistance.
Both of them tirelessly championed the rights of the underprivileged, as well as vulnerable social groups.
Scholars have shed light on the evolution of the interaction between Tolstoy and Gandhi, commencing in 1893, when Mahatma Gandhi, newly arrived in South Africa, first came under Tolstoy’s influence. In 1909, Mahatma Gandhi began a correspondence with Tolstoy and, moreover, actively studied Tolstoy’s writings. According to Mahatma Gandhi, Tolstoy’s work “The Kingdom of God Is Within You” made a lasting impression on him. Gandhi’s farm in South Africa was named in Tolstoy’s honor.
After his return to India, Gandhi continued to look to Tolstoy as a mentor, considering him a prominent figure who had a profound influence on the world of the 20th century. As advocates and practitioners of non-violence, Gandhi and Tolstoy shared a formidable spiritual partnership that continued even after Leo Tolstoy’s death.
Tolstoy’s philosophy of non-violence deeply resonated with Mahatma Gandhi’s struggle against colonialism.
While scholars continue to carry out research on the ideas of Gandhi and Tolstoy, it is crucial to convey these ideas to the younger generation so as to provide the right impetus and inspiration.
In this context, the great work done by the organization “BRICS. World of Traditions” within the framework of the International and Inter-Regional Socio-Cultural Program “BRICS People Choosing Life” is notable, as are the roles of Delhi Public School, Dwarka, and School 1409, Moscow.
It is heartening to witness the commendable work done for the sake of familiarizing students with the philosophy of two of the greatest socio-political leaders of our age.
The Embassy of the Republic of India in the Russian Federation highly appreciates the significance of this socio-cultural program, which encompasses a series of events in a number of regions of Russia and other BRICS countries, and which aims to create a common cultural and educational community bringing together like-minded people from all the BRICS nations.
Informed consent process: A step further towards making it meaningful!
The informed consent process is the cornerstone of ethics in clinical research. Obtaining informed consent from patients participating in clinical research is an important legal and ethical imperative for clinical trial researchers. Although informed consent is an important process in clinical research, its effectiveness and validity are a perennial concern. Issues related to understanding, comprehension, competence, and voluntariness of clinical trial participants may adversely affect the informed consent process. Communicating highly technical, complex, and specialized clinical trial information to participants with limited literacy, diverse sociocultural backgrounds, diminished autonomy, and debilitating diseases is a difficult task for clinical researchers. It is therefore essential to investigate and adopt innovative communication strategies to enhance understanding of clinical trial information among participants. This review article examines the challenges that affect the informed consent process and explores various innovative strategies to enhance the consent process.
Mobile outdoor learning is on-the-go learning, tied to particular places and situations, that encourages students to explore their environment and find solutions to challenging real-world issues. This challenges educators to create thorough and contextualised learning experiences. In one of our projects, teachers created mobile outdoor learning scenarios that were expected to be location-based, incorporate digital data collection tools, and integrate different subjects. The questions explored were: How do mobile outdoor learning scenarios created by teachers develop, taking into account location, subject integration, and Bloom’s taxonomy? How are student experiences related to the developed outdoor learning scenarios? Content analysis of two sets of scenarios (25 from the beginning and 20 from the end of the project) was conducted. The scenarios were categorised based on three indicators: the use of location (in context, through context, about context), the level of Bloom’s revised taxonomy (remembering, understanding, applying, analysing, evaluating, creating), and the type of integration used in the tracks based on Fogarty’s subject integration model (connected, nested, sequenced, shared, webbed, threaded, integrated, immersed). Student experiences were analysed in connection with these learning scenarios. The study demonstrates that, at the beginning of the project, teachers did not recognise the potential of mobile technologies and accompanying pedagogical models to design consistent learning experiences that emphasise higher-order thinking levels, encompass contextual information, and integrate knowledge from multiple subjects, but this improved over time. Furthermore, there is no clear connection between the type of learning scenario and students’ experience.
Published in Asian Conference of Education abstracts, 28 November 2022.
In a job market where many top jobs didn’t even exist a decade ago, how can teachers help prepare students for careers that haven’t been invented yet? While preparing students for careers is not the sole purpose of education, it’s clear that teachers and guidance counselors are working hard to help students understand their post-graduation options. How do guidance counselors help students understand the jobs that are already available? And how do students, educators and counselors find out what jobs might be options in the future?
There are many answers to these questions, and schools and education organizations are becoming quite resourceful in setting students up for 21st-century success. However, according to many teachers and students in the TED-Ed community, there’s still work to be done to bridge the knowledge gap between what happens in school and what happens in the modern workplace.
With this challenge in mind, TED-Ed set out to design an interactive, open-ended series that helps young learners find out more about careers they’re interested in … and careers they simply never knew existed.
The series is called “Click Your Fortune.” Above, check out the introduction.
Click Your Fortune was created in the style of “choose your own adventure.” Each video features four professionals (selected from among the attendees and speakers of TEDGlobal 2013) reading career-related questions submitted directly by students. Once all four questions are read, the viewer can click the paths that relate to their interests.
Students can also suggest questions, participants and careers to be featured in future videos. Yes, this series is a work in progress — because we believe it has to be. Career options change fast, and we want to ensure that the series is serving the actual, and always evolving, curiosities of young learners.
The TED-Ed team is excited to get feedback from teachers and guidance counselors regarding the usefulness of this series’ approach. We’re also extremely excited to see some brave students already suggesting content for the next batch of Click Your Fortune videos!
Edit: Thanks, readers, for your heads-up on the misleading statistic attributed to a Department of Labor report. It is not accurate to say that 65% of school-aged kids will work in jobs that are not yet invented. After reviewing the report to which the stat was attributed, it has been removed from the story.
In the dynamic and intricate world of oil and gas exploration, communication of complex processes is often a challenge. The industry involves a myriad of intricate procedures, from drilling to refining, and conveying these processes to stakeholders, investors, or the general public can be a daunting task. This is where the power of oil and gas animation comes into play.
Oil and gas animation serves as a transformative tool in simplifying complex concepts and making them accessible to a broader audience. Whether it’s illustrating the intricate dance of drill bits through layers of the Earth or depicting the refining of crude oil into valuable end-products, animation has proven to be an invaluable medium for conveying these processes with clarity and precision.
The Power of Oil and Gas Animation
Unlike static images or dry text, animation possesses a unique ability to transcend the limitations of reality. It can shrink us down to the microscopic level, revealing the fascinating dance of oil and gas molecules within a reservoir. Or, it can whisk us away on a whirlwind tour, encompassing vast landscapes and showcasing the intricate network of pipelines that deliver these resources across continents. Time becomes malleable, allowing us to witness geological processes unfolding over millions of years within seconds, providing a clear understanding of complex formations like shale plays or the dynamic process of fracking.
But the power of animation goes beyond mere visualization. It can:
1. Break Down Barriers of Comprehension
Complex scientific concepts, often laden with jargon, can be distilled into visually compelling narratives that resonate with audiences of all backgrounds. Animation can translate technical details into digestible chunks, fostering understanding and engagement.
2. Spark Curiosity and Ignite Imagination
By bringing inanimate objects to life – from towering drilling rigs to intricate subsea pipelines – animation captures attention and fuels curiosity. This fosters a desire to learn more about the intricacies of the industry and the hidden forces at play.
3. Raise Awareness of Challenges and Opportunities
Animation can effectively depict the environmental and social challenges associated with the oil and gas industry, showcasing the impact of spills, emissions, and habitat destruction. This can spark important conversations about sustainability and responsible resource extraction. It can also highlight the industry’s efforts towards innovation and cleaner technologies, fostering a sense of optimism and hope for the future.
4. Bridge the Gap Between Experts and the Public
By making complex topics visually engaging and accessible, animation can bridge the gap between industry experts and the general public. This fosters informed dialogue, promotes better decision-making, and contributes to a more sustainable future for the industry as a whole.
The oil and gas industry is not just about pipelines and rigs; it’s a dynamic world of intricate processes, cutting-edge technologies, and ongoing challenges. Animation, with its unique ability to visualize the invisible, can demystify this world, spark critical conversations, and inspire a deeper understanding of the resource that fuels our modern lives.
Breakdown of Oil and Gas Animation Process
In the dynamic realm of the oil and gas industry, effective communication of intricate processes is paramount. Bridging the gap between complexity and accessibility, oil and gas animation emerges as a critical tool in conveying these intricate details to diverse audiences. Here’s an in-depth look at the systematic breakdown of the oil and gas animation process.
1. Research and Conceptualization
In the initial phase, animators collaborate closely with subject matter experts to gain a profound understanding of the specific oil and gas processes and technologies in focus. This collaboration not only ensures the animation’s accuracy but also aligns it with industry standards, laying the foundation for a comprehensive visual narrative.
2. Scripting
With a wealth of information at their disposal, animators craft a script that serves as the backbone of the animation. This script meticulously outlines the narrative, key messages, and the sequential flow of events. Striking a delicate balance between technical precision and accessibility, the scripting process ensures the animation effectively communicates its intended message.
3. Storyboarding
Visual planning takes center stage in the storyboarding phase. Animators translate the script into a series of still images, providing a visual roadmap for the animation. Crucially, this step allows for early stakeholder feedback, ensuring the animation resonates with both technical experts and a broader audience.
4. Animation Production
The core of the animation process involves bringing visual elements to life. Beginning with the creation of 2D or 3D models, animators meticulously texture and light these models for enhanced realism. The animation itself comprises dynamic movements illustrating complex processes, accompanied by visual effects that emphasize critical details. This intricate dance of art and science captures the essence of the oil and gas industry in a visually compelling manner.
5. Review and Feedback
Stakeholder reviews are pivotal for refining the animation. Feedback from subject matter experts, project stakeholders, and potential end-users guides iterative improvements, ensuring the animation meets industry standards and effectively communicates the intended message.
6. Finalization
In the finalization phase, animators polish the animation to perfection. Fine-tuning visual elements, optimizing details, and conducting rigorous quality assurance checks ensure the animation is not only visually striking but also technically sound and free from discrepancies.
This comprehensive breakdown underscores the strategic blend of technical expertise and creative finesse, positioning oil and gas animation as an indispensable conduit for communicating the intricate processes of this dynamic industry.
Examples of Esimtech Oil and Gas Animation
Leading innovators like Esimtech are harnessing the power of oil and gas animation to make a real difference. Here are a few examples of their impactful projects:
1. Animation of Drilling and Well Control Devices
The animation displays the internal framework, operational principles, assembly, and disassembly procedures of drilling and well control devices. This allows students to acquaint themselves with the components and principles of these devices, gain proficiency in examining and commissioning the primary working systems, and develop the ability to analyze and assess the operational conditions of the devices, enabling them to promptly identify and address issues.
2. Animation of Diesel Engine Assembly and Disassembly
By utilizing an exploded view, the animation showcases the internal structure and key elements of the diesel engine and its components. The assembly, disassembly, examination, maintenance, and operational principles of diesel engines are presented through animated visuals, accompanied by subtitles and dubbing. This animation serves to acquaint students with the operational principles of diesel engines, enabling them to proficiently understand and conduct examinations and commissioning of a diesel engine’s primary working system.
3. Animation of Downhole Tools Assembly and Disassembly and Working Principle
The internal structure and components of downhole tools are revealed through an exploded view, semi-section, and translucent shell. The assembly, disassembly, and working principles of the tools are illustrated through animated visuals, complemented by subtitles and dubbing. This animation aims to empower students with a comprehensive understanding of the function, working principles, operation, and maintenance of downhole tools.
4. Land Rig Installation Animation
The land rig installation animation comprehensively depicts the entire process, starting from the baseline drawing to the installation of each of the 198 components, culminating in the rising of the derrick. This animation serves as an authentic representation of the actual installation procedure. By watching the animation, users gain a clear and comprehensive understanding of the entire land rig installation and elevation process.
In conclusion, oil and gas animation is no longer a futuristic concept; it’s a transformative tool shaping the present and future of this critical industry. By bridging the gap between technical details and public understanding, it fosters informed dialogue, responsible practices, and sustainable progress. As we delve deeper into the Earth’s hidden resources, let us harness the power of animation to illuminate the path forward, for the benefit of both industry and society as a whole.
Government of Manitoba
As a partner of Every Child Matters, the Government of Manitoba has provided the following resources for adults and teachers to support their children’s learning during the virtual event:
Critical/Courageous Conversations on Race
Parent Companion document to encourage critical/courageous conversations on Race entitled “What your child is learning at school and how you can help”
Creating Racism-Free Schools through Critical/Courageous Conversations on Race
A document encouraging school divisions, schools, teachers, parents, and students to undertake critical and courageous conversations on racism, in order to create inclusive and equitable classrooms and schools for First Nations, Métis, and Inuit students and all students.
NCTR’s spirit name – bezhig miigwan, meaning “one feather”.
Bezhig miigwan calls upon us to see each Survivor coming to the NCTR as a single eagle feather and to show those Survivors the same respect and attention an eagle feather deserves. It also teaches we are all in this together — we are all one, connected, and it is vital to work together to achieve reconciliation.
Feature Engineering: Processes, Techniques & Benefits in 2024
Data scientists spend around 40% of their time on data preparation and cleaning. It was 80% in 2016, according to a report by Forbes. There seems to be an improvement thanks to automation tools, but data preparation still constitutes a large part of data science work. This is because getting the best possible results from a machine learning model depends on data quality, and creating better features can help provide better-quality data.
In this article, we’ll explore what feature engineering is, what are its techniques, and how you can improve feature engineering efficiency.
What is feature engineering?
Feature engineering is the process of transforming raw data into useful features.
Real-world data is almost always messy. Before deploying a machine learning algorithm to work on it, the raw data must be transformed into a suitable form. This is called data preprocessing and feature engineering is a component of this process.
A feature refers to an attribute of data that is relevant to the problem that you would like to solve with the data and the machine learning model. So, the process of creating features depends on the problem, available data, and deployed machine learning algorithm. Therefore, it would not be useful to create the same features from a dataset for two different problems. In addition, different algorithms require different types of features for optimal performance.
What are feature engineering processes?
Feature engineering can involve:
- Feature construction: Constructing new features from the raw data. Feature construction requires a good knowledge of the data and the underlying problem to be solved with the data.
- Feature selection: Selecting a subset of available features that are most relevant to the problem for model training.
- Feature extraction: Creating new and more useful features by combining and reducing the number of existing features. Principal component analysis (PCA) and embedding are some methods for feature extraction.
What are some feature engineering techniques?
Some common techniques of feature engineering include:
One-hot encoding
Most ML algorithms cannot work with categorical data and require numerical values. For instance, if you have a ‘Color’ column in your tabular dataset and the observations are “Red”, “Blue” and “Green”, you may need to convert these into numerical values for the model to better process them. However, labeling “Red” = 1, “Blue” = 2, and “Green” = 3 is not enough because there is no ordered relation between colors (i.e. blue is not two times red).
Instead, one-hot encoding involves creating two columns, “Red” and “Blue”. If an observation is red, it takes 1 in the “Red” column and 0 in “Blue”. If it is green, it takes 0 in both columns, and the model deduces that it is green.
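Here is a minimal sketch of this encoding with pandas; the ‘Color’ column and its values follow the example above, and the category order is set explicitly so that dropping the first category reproduces the two-column scheme just described:

```python
import pandas as pd

df = pd.DataFrame({"Color": ["Red", "Blue", "Green", "Red"]})

# Make "Green" the first category so drop_first=True keeps only the
# "Red" and "Blue" indicator columns; a green observation is then the
# all-zeros row, exactly as described above.
df["Color"] = pd.Categorical(df["Color"], categories=["Green", "Red", "Blue"])
encoded = pd.get_dummies(df, columns=["Color"], drop_first=True)
print(encoded)  # columns: Color_Red, Color_Blue
```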
Log transformation
Log transformation is replacing each value in a column with its logarithm. It is a useful method to handle skewed data: it can transform the distribution to approximately normal and decrease the effects of outliers. Fitting a linear predictive model, for instance, would give more accurate results after transformation because the relationship between the variables becomes closer to linear.
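A minimal sketch with NumPy and pandas (the column name and values are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"income": [20_000, 35_000, 42_000, 1_500_000]})  # right-skewed

# log1p computes log(1 + x), which stays defined at zero and compresses
# the long right tail of the distribution.
df["income_log"] = np.log1p(df["income"])
print(df)
```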
Handling outliers
Outliers are observations that are distant from other observations. They can be due to errors or be genuine observations. Whatever the reason, it is important to identify them because machine learning models are sensitive to the range and distribution of values; outliers can drastically change a linear model’s fit.
The outlier handling method depends on the dataset. Suppose you work with a dataset with house prices in a region. If you know that a house’s price cannot exceed a certain amount in that region and there are observations above that value, you can (see the sketch after this list):
- remove those observations because they are probably erroneous
- replace outlier values with mean or median of the attribute
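A minimal sketch of both options with pandas (the price cap is a hypothetical value standing in for domain knowledge):

```python
import pandas as pd

df = pd.DataFrame({"house_price": [250_000, 310_000, 275_000, 9_900_000]})
cap = 1_000_000  # hypothetical regional maximum known from domain knowledge
is_outlier = df["house_price"] > cap

# Option 1: remove the probably erroneous observations.
cleaned = df[~is_outlier]

# Option 2: replace outliers with the median of the plausible values.
median_price = df.loc[~is_outlier, "house_price"].median()
capped = df.copy()
capped.loc[is_outlier, "house_price"] = median_price
```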
Binning
Binning, or discretization, is grouping observations under ‘bins’. Converting ages of individuals to age groups or grouping countries according to their continent are examples of binning. The decision for binning depends on what you are trying to obtain from the data.
Binning can prevent overfitting, which happens when a model performs well with training data but poorly with other data. On the other hand, it sacrifices granular information about data.
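A minimal sketch of age binning with pandas (the bin edges and labels are arbitrary illustrative choices):

```python
import pandas as pd

ages = pd.Series([8, 16, 25, 41, 67, 90])

# pd.cut assigns each value to a labelled interval; granular age
# information is traded for coarser, more robust groups.
age_groups = pd.cut(
    ages,
    bins=[0, 12, 19, 64, 120],
    labels=["child", "teen", "adult", "senior"],
)
print(age_groups)
```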
Handling missing values
Missing values are among the most common problems of the data preparation process. They may be due to error, unavailability of the data, or privacy reasons. A significant portion of machine learning algorithms are designed to work with complete data, so you should handle missing values in a dataset; if you don’t, the model may automatically drop those observations, which can be undesirable.
For handling missing values, also called imputation, you can (a sketch of the first two options follows this list):
- fill missing observations with mean/median of the attribute if it is numerical.
- fill with the most frequent category if the attribute is categorical.
- use ML algorithms to capture the structure of data and fill the missing values accordingly.
- predict the missing values if you have domain knowledge about the data.
- drop the missing observations.
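A minimal pandas sketch of median and most-frequent-category imputation (the column names and values are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [25, np.nan, 41, 33],
    "city": ["Paris", "Lyon", None, "Paris"],
})

# Numerical attribute: fill missing values with the median.
df["age"] = df["age"].fillna(df["age"].median())

# Categorical attribute: fill with the most frequent category (the mode).
df["city"] = df["city"].fillna(df["city"].mode()[0])
print(df)
```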
Feature scaling
Feature scaling is standardizing the range of numerical features of the data. Consider these two examples:
- Suppose that you have a weight column with some values in kilograms and others in tons. Without scaling, an algorithm can consider 2000 kilograms to be greater than 10 tons.
- Suppose you have two columns for individuals in your dataset: age and height, with values ranging between 18-80 and 152-194, respectively. Without scaling, an algorithm has no criterion for comparing these values and is likely to weight larger values more heavily and smaller values less, regardless of the unit of the values.
There are two common methods for scaling numerical data:
- Normalization (or Min-Max Normalization): Values are rescaled between 0 and 1.
- Standardization (or Z-score Normalization): Values are rescaled so that they have a distribution with mean 0 and variance equal to 1.
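Both methods are available in scikit-learn; here is a minimal sketch (the age/height values echo the example above):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[18, 152], [35, 170], [80, 194]], dtype=float)  # age, height

# Min-max normalization: each column rescaled to the [0, 1] range.
X_minmax = MinMaxScaler().fit_transform(X)

# Standardization: each column rescaled to mean 0 and unit variance.
X_standard = StandardScaler().fit_transform(X)
print(X_minmax, X_standard, sep="\n")
```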
Why is it important now?
Feature engineering is an integral part of every machine learning application because the created and selected features have a great impact on model performance. Features that are relevant to the problem and appropriate for the model increase model accuracy. Irrelevant features, on the other hand, result in a “garbage in, garbage out” situation in data analysis and machine learning.
How to increase feature engineering efficiency?
Feature engineering is a process that is time-consuming, error-prone, and demands domain knowledge. It depends on the problem, the dataset, and the model so there is not a single method that solves all feature engineering problems. However, there are some methods to automate the feature creation process:
- Open-source Python libraries for automated feature engineering such as featuretools. Featuretools uses an algorithm called deep feature synthesis to generate feature sets for structured datasets (a minimal usage sketch follows this list).
- There are also AutoML solutions that offer automated feature engineering. For more information on AutoML, check our comprehensive guide.
- There are MLOps platforms that provide automated feature engineering tools. Feel free to check our article on MLOps tools and our data-driven list of MLOps platforms.
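As an illustration of the first option, here is a minimal featuretools sketch; the API names follow featuretools 1.x and may differ in other versions, and the transaction/customer column names are hypothetical:

```python
import featuretools as ft
import pandas as pd

transactions = pd.DataFrame({
    "transaction_id": [1, 2, 3, 4],
    "customer_id": [1, 1, 2, 2],
    "amount": [25.0, 40.0, 10.0, 60.0],
})

es = ft.EntitySet(id="shop")
es = es.add_dataframe(dataframe_name="transactions",
                      dataframe=transactions, index="transaction_id")
# Derive a "customers" dataframe keyed by customer_id.
es = es.normalize_dataframe(base_dataframe_name="transactions",
                            new_dataframe_name="customers",
                            index="customer_id")

# Deep feature synthesis stacks primitives (e.g. MEAN(transactions.amount))
# to generate candidate features per customer automatically.
features, feature_defs = ft.dfs(entityset=es, target_dataframe_name="customers")
print(features.head())
```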
However, it should be noted that automated feature engineering tools use algorithms and may not be able to incorporate valuable domain knowledge that a data scientist may have.
If you have other questions about feature engineering for machine learning and automated ML solutions, don’t hesitate to contact us.
The effects of Lubigi sewage treatment plant on communities in Namugoona
Naggayi, Bridget Leticia
As populations increase rapidly, scarcity of clean water resources becomes an issue, hence the introduction of sewage treatment plants. Sewage treatment is a type of wastewater treatment which aims at removing contaminants from sewage so as to produce an effluent that is suitable for reuse in the environment. Despite the importance of sewage treatment plants, they have many effects on the surrounding households and the environment. The overall goal of the study was to contribute to the understanding of the effects of the Lubigi sewage treatment plant on the surrounding households, which will help in creating awareness and feasible measures against the negative effects in Namugoona village. Specifically, the study sought to assess how beneficial the Lubigi sewage treatment plant is to Namugoona residents, the challenges they face from the sewage plant, and the measures developed to solve those challenges. A cross-sectional research study was undertaken using a mixed approach, where simple random sampling was employed in the selection of households for interviews and purposive sampling for the key informants. Data was collected through field surveys, interviews, and direct field observations, and then analyzed using thematic content analysis and descriptive statistics. The study findings revealed that the Lubigi sewage treatment plant was highly beneficial to the residents of Namugoona; air pollution was the biggest challenge people faced from the plant, with 40% of the 50 respondents presenting air pollution as the main challenge. The study revealed that there was no significant relationship between gender and health complications from the Lubigi sewage treatment plant: based on normal approximations, the hypotheses were not significant, with p-values greater than 0.05. The study found that air pollution (40%), health complications (22%), reduced soil fertility (18%), and climate disturbances (24%) were the major negative effects of the Lubigi sewage treatment plant on the residents of Namugoona. The study therefore recommends sensitization by the NWSC and local government about how people can carry out disease prevention, and that sewage treatment plant authorities ensure ample oxygen supply in their processes so that the bad odor is controlled.
Written by Dr. Lance Kisby
The term Molar Incisor Hypomineralization (MIH) was first introduced in 2001 to describe ‘hypomineralisation of systemic origin, presenting as demarcated, qualitative defects of enamel of one to four first permanent molars (FPMs) frequently associated with affected incisors.’ In 2003, it was further defined as “a developmental, qualitative enamel defect caused by reduced mineralization and inorganic enamel components which leads to enamel discoloration and fractures of the affected teeth.”2 Initially, the condition was described as affecting the first permanent molars (FPMs) and incisors, but more recently it has been noted that these defects can affect any primary or permanent tooth.3 Weerheijm showed they can also occur on second primary molars, permanent molars, and the cusp tips of permanent canines.2
In its mildest form, the enamel can appear white-yellow; in its more severe form, the enamel can be brown-orange. The discoloration is easy to differentiate from other enamel defects in that the affected areas are asymmetric with irregular borders.4
Treating pediatric dental patients with MIH poses several challenges.5-8 Severe post-eruptive breakdown is common on stress-bearing teeth. When the hypomineralization is on the occlusal surfaces of primary and permanent molars, the result can be post-eruptive breakdown, which causes these teeth to be sensitive to cold. Consequently, there is very often an inability to achieve adequate local anesthesia, thought to be possibly related to chronic pulpal inflammation. In pediatric patients, behavior guidance problems arise from the dental fear and anxiety caused by pain experienced over multiple previous treatment appointments.
Treating these teeth is also challenging. In children, MIH incisors create esthetic concerns, which have been shown to compromise functioning, well-being, and quality of life,9 and are associated with anxiety, depression, and other mood states.10
There are many treatment options available for anterior MIH incisor teeth in young patients. The best treatment is a conservative approach, as these immature anterior teeth have large and sensitive pulps.11 Composites require removing the affected area (a situation where local anesthesia may not work) and demand long-term observation and maintenance due to discoloration, wear, and marginal fractures.12
Porcelain veneers are indicated for patients over age 18 years after the gingival margin has matured and is best used when other more conservative techniques have failed.12-13
Resin infiltration has many benefits. The refractive index of enamel is 1.62 and the refractive index of resin infiltration is 1.52. This technique improves the translucency and thus improves esthetics.14-15
This article will present a conservative treatment option of Icon Resin Infiltration of anterior permanent teeth with MIH in children. *Please note that while MIH is not an approved Icon indication, the resin infiltration technique has been shown to be effective in several cases for correcting esthetic issues of MIH lesions on anterior teeth.
Figure 1 shows a well-demarcated MIH buccal white lesion on tooth #8. The patient, an 8-year-old girl, presented with her mother, who was concerned about the white spot on #8. The patient related how friends were teasing her at school and calling her names, and the mother related that the patient has no friends.
The Icon resin infiltration system consists of an etch of 15% HCl, Icon Dry, which is 99% ethanol, and a methacrylate-based resin.
The technique used is as follows: the tooth is isolated with a rubber dam. With a carbide finishing bur, remove the thin surface layer of the lesion in order for the infiltrant resin to gain access to the lesion body. Next, Icon Etch was placed by massaging the etch over the surface for a 2-minute period. The instructions call for inspecting the tooth to get a preview of the final result by rinsing off the etch and placing Icon Dry; the whitish-opaque area should diminish. If not, the etch and preview steps can be repeated up to 2 more times. In this case, after three rounds of etching, rinsing, and previewing, there was no change in color. I decided to repeat both the etch and preview steps.
At the last preview step, there was only a very slight change in color. It was decided to stop etching for fear of removing too much enamel.
As indicated in the Icon instructions, the tooth was dried with an oil-free air syringe. Icon Dry was placed onto the lesion and allowed to set for 30 seconds. For best treatment results, it is necessary to dry the lesion again with an oil-free air syringe. Now that the lesion is completely desiccated, the tooth is ready to absorb the infiltrating resin. The Icon Infiltrant cannot be applied under direct operatory light because the material will set prematurely. After all lights in the room were shut off, an ample amount of Icon Infiltrant was placed onto the etched and dried surface by continuously turning the shaft of the syringe and massaging the resin material into the prepared lesion with the applicator to keep the surface wet. I determined this lesion to be deeper and larger than most MIH defects. The instructions indicate the esthetic result can be improved by extending the penetration of the resin for up to 6 minutes instead of the usual 3 minutes, which was done in this case. The Icon Infiltrant was light-cured for 40 seconds. The resin was applied a second time, allowed to penetrate for 1 minute, and then light-cured for another 40 seconds. The surface was gently smoothed with polishing cups. Figure 2 shows the final result.
At the one-month follow-up, the mother related that the patient is now smiling more, happier, making new friends, and is now getting invited to sleep-overs. I noticed the patient was smiling more and seemed happier than the first time I had seen her.
Figure 3 shows an MIH lesion on the buccal of #9. This patient was a 9-year-old boy. The technique was performed as in Case #1, except the lesion was etched with the Icon Etch 5 times. After each etch, the Icon Dry was used to preview the result. There was still no difference in the color of the lesion after the fifth etch. I decided not to etch further, to limit the amount of enamel loss.
Figure 4 shows the final result where the lesion color matches the enamel of the rest of the tooth.
MIH-affected enamel is characterized by a reduction in mineral quality as well as increased porosity.16 Molars with MIH require 5 to 10 times more treatment than molars with no MIH.17
The most commonly encountered problems in MIH affected anterior teeth are thermal hypersensitivity, discoloration, and enamel break down.18 Young patients frequently comment on esthetic concerns regarding anterior teeth which can lead to psychosocial issues.
From the above post treatment images, the Icon Resin Infiltration technique creates excellent and pleasing esthetic results on MIH anterior teeth. It is a conservative alternative to complete removal of the lesion on anterior teeth.
The etching instructions for Icon resin infiltration are written for etching white spots in enamel commonly seen after orthodontic bands and brackets have been removed. MIH enamel white lesions on anterior teeth are the result of a different process than white spots from caries on the same teeth. This would account for why 2-to-3 etching cycles are appropriate for enamel caries while additional etching cycles were needed to change the MIH lesion color. More research is needed on the optimal etching time for MIH anterior teeth. Less etching would preserve more tooth structure and take less time, an important consideration when doing this procedure on pediatric patients.
While MIH is not an approved indication for Icon, the resin infiltration technique has been shown to be effective in several cases for correcting esthetic issues of MIH lesions on anterior teeth without local anesthesia, an important behavior guidance consideration when dealing with pediatric patients who may have been traumatized by previous restorative attempts.
- Weerheijm K L, Jalevik B, Alaluusua S. Molar-incisor hypomineralisation. Caries Res 2001; 35: 390-391.
- Weerheijm K L, Duggal M, Mejare I et al. Judgement criteria for molar incisor hypomineralisation (MIH) in epidemiologic studies: a summary of the European meeting on MIH held in Athens, 2003. Eur J Paediatr Dent 2003; 4: 110-113.
- Steffen R, Van Waes H. Therapy of Molar Incisor Hypomineralisation under difficult circumstances. A concept for therapy. Quintessenz 2011; 62: 1613-162.
- Allazzam SM, Alaki SM, Meligy OAS. Int J Dent 2014. doi: 10.1155/2014/234508.
- Kalkani M, Balmer R C, Homer R M, Day P F, Duggal M S. Molar incisor hypomineralisation: experience and perceived challenges among dentists specialising in paediatric dentistry and a group of general dental practitioners in the UK. Eur Arch Paediatr Dent 2016; 17: 81-88.
- Ghanim A, Silva M J, Elfrink M EC et al. Molar incisor hypomineralisation (MIH) training manual for clinical field surveys and practice. Eur Arch Paediatr Dent 2017; 18: 225-242.
- AI-Batayneh O B, Jbarat RA, AI-Khateeb S N. Effect of application sequence of fluoride and CPP-ACP on remineralization of white spot lesions in primary teeth: An in-vitro study. Arch Oral Biol 2017; 83: 236-240.
- Kalkani M, Balmer RC, Homer RM, Day PF, Duggal MS. Molar incisor hypomineralisation: experience and perceived challenges among dentists specialising in paediatric dentistry and a group of general dental practitioners in the UK. Eur Arch Paediatr Dent. 2016;17:81-88.
- Almuallem Z, Busuttil-Naudi A. Molar incisor hypomineralisation (MIH) - an overview. Br Dent J. 2018;225(7):601-609.
- Settineri S, Rizzo A, Liotta M, Mento C. (2017). Clinical Psychology of Oral Health: The Link Between Teeth and Emotions. SAGE Open, 7(3). https://doi.org/10.1177/2158244017728319
- Ghanim A, Silva M J, Elfrink M E C et al. Molar incisor hypomineralisation (MIH) training manual for clinical field surveys and practice. Eur Arch Paediatr Dent 2017; 18: 225-242.
- Lygidakis NA. Treatment modalities in children with teeth affected by molar-incisor enamel hypomineralisation (MIH): A systematic review. Eur Arch Paediatr Dent 2010; 11: 65-74.
- Wray A, Welbury R. Treatment of intrinsic discoloration in permanent anterior teeth in children and adolescents. 2004. Available at https://www.rcseng.ac.uk/-/media/files/rcs/fds/publications/discolor.pdf.
- Comisi J C. Provisional materials: advances lead to extensive options for clinicians. Compend Contin Educ Dent 2015; 36: 54–59.
- Attal JP, Atlan A, Denis M, Vennat E, Tirlet G. White spots on enamel: treatment protocol by superficial or deep infiltration (part 2). Int Orthod 2014; 12: 1-31.
- Elhennawy K et al. Structural, mechanical and chemical evaluation of molar-incisor hypomineralization-affected enamel: A systematic review. Arch Oral Biol 2017; 83: 272-281.
- Jalevik B, Klingberg G. Treatment outcomes and dental anxiety in 18-year-olds with MIH, comparisons with healthy controls-a longitudinal study. Int J Paediatr Dent. 2012;22(2):85-91.
- Souza J. Aesthetic management of molar-incisor hypomineralization. Revista Sul-Brasileira de Odontologia 2014; 11: 204.
How to Improve Self-Esteem
Having a healthy level of self-esteem is vital for overall well-being and personal growth. When individuals possess self-confidence and a positive self-image, they are more likely to achieve their goals and build fulfilling relationships.
However, low self-esteem can hinder progress and lead to a host of challenges in various aspects of life. In this blog post, we will jump into the concept of self-esteem, explore the impact of low self-esteem, and provide actionable tips to improve self-esteem effectively.
What is Self-Esteem?
Self-esteem refers to the subjective evaluation of one’s worth and capabilities. It is a fundamental aspect of our psychological makeup that influences how we perceive ourselves, others, and the world around us.
Healthy self-esteem involves having a balanced view of oneself, acknowledging strengths and weaknesses while maintaining self-compassion.
Importance of Self-Esteem
Self-esteem plays a pivotal role in shaping our emotions, thoughts, and behaviors. It affects our decision-making, resilience, and ability to cope with challenges. When individuals have a positive self-image, they are more likely to take on new opportunities and embrace life with confidence.
Examples of Healthy Self-Esteem
- Acceptance of Imperfections: People with healthy self-esteem understand that nobody is perfect, and they embrace their flaws as part of their uniqueness.
- Positive Self-Talk: They engage in positive self-talk, reinforcing their abilities and qualities rather than focusing on self-criticism.
- Resilience: Individuals with healthy self-esteem bounce back from setbacks and view failures as opportunities for growth.
- Assertiveness: They can assert their needs and boundaries without feeling guilty or insecure.
Identifying Low Self-Esteem
Signs and Symptoms
Low self-esteem can manifest in various ways, and its symptoms may differ from person to person. Some common signs include:
- Constant self-criticism
- Negative self-talk
- Fear of failure and avoidance of challenges
- Seeking constant validation from others
- Social withdrawal and isolation
Causes of Low Self-Esteem
Low self-esteem can be influenced by several factors, including:
- Childhood Experiences: Negative experiences during childhood, such as harsh criticism or neglect, can significantly impact self-esteem in adulthood.
- Social Comparisons: Constantly comparing oneself to others, especially on social media, can lead to feelings of inadequacy.
- Trauma and Abuse: Individuals who have experienced trauma or abuse may develop low self-esteem as a coping mechanism.
The Impact of Low Self-Esteem
Mental Health
Low self-esteem is closely linked to mental health issues such as anxiety and depression. The constant self-doubt and negative self-perception can take a toll on one’s emotional well-being.
Relationships
Individuals with low self-esteem may struggle to maintain healthy relationships. They might settle for toxic relationships or push away people who genuinely care for them.
Career and Success
Low self-esteem can hinder career growth and professional success. It may lead to a fear of failure, preventing individuals from taking on new challenges and opportunities.
Overcoming Low Self-Esteem
Self-Reflection
Self-reflection is a powerful tool for understanding the root causes of low self-esteem. Identifying negative thought patterns and addressing them is the first step towards improvement.
Seeking Support
Seeking support from friends, family, or a therapist can be immensely beneficial in building self-esteem. Talking about one’s feelings and challenges can provide a fresh perspective.
Challenging Negative Thoughts
Learning to challenge and reframe negative thoughts can help shift the focus towards positive aspects of oneself.
Setting Realistic Goals
Setting achievable goals and celebrating small victories along the way can boost self-confidence and motivation.
Embracing Positive Affirmations
Practicing positive affirmations can help rewire the brain to focus on strengths and build a more positive self-image.
10 Tips for Improving Self-Esteem
Tip 1: Practice Self-Compassion
Self-compassion is a vital aspect of nurturing self-esteem. It involves treating oneself with kindness, understanding, and forgiveness, especially during challenging times or when facing setbacks.
Often, individuals with low self-esteem tend to be overly critical of themselves, leading to a continuous cycle of negativity. Practicing self-compassion means acknowledging that everyone makes mistakes and that it is okay to be imperfect.
By offering ourselves the same compassion we would extend to a close friend, we create a supportive and nurturing internal environment, which is essential for fostering self-esteem.
Tip 2: Take Care of Your Physical Health
The mind and body are deeply interconnected, and taking care of one’s physical health plays a crucial role in improving self-esteem. Regular exercise not only contributes to physical well-being but also releases endorphins, the “feel-good” hormones, which can significantly enhance mood and self-confidence.
Engaging in activities such as walking, jogging, yoga, or dancing can be both enjoyable and beneficial for self-esteem.
Proper nutrition is equally important. A balanced diet with a variety of nutrients supports overall health and can positively impact mental well-being.
Eating well-balanced meals and staying hydrated can lead to increased energy levels and a sense of vitality, contributing to improved self-esteem.
Sufficient rest and quality sleep are often underestimated but play a significant role in mental and emotional stability. Lack of sleep can lead to increased stress and emotional vulnerability, making it difficult to maintain a positive self-image. Prioritizing restful sleep allows the body and mind to rejuvenate, promoting a more positive outlook and greater resilience.
Tip 3: Celebrate Your Achievements
Acknowledging and celebrating accomplishments, regardless of their size, is crucial for reinforcing a sense of competence and self-value. Often, individuals with low self-esteem tend to downplay their achievements or dismiss them as insignificant.
However, taking the time to recognize and celebrate even small successes can boost self-confidence and provide a sense of accomplishment.
Creating a list of achievements, both past and present, can serve as a visual reminder of one’s capabilities and progress. Whether it’s completing a project at work, mastering a new skill, or simply getting through a challenging day, each accomplishment contributes to personal growth and should be celebrated.
Tip 4: Surround Yourself with Positive People
The people we surround ourselves with can have a significant impact on our self-esteem and overall well-being. Positive, supportive, and encouraging individuals can uplift us during difficult times, boost our confidence, and remind us of our strengths.
On the other hand, toxic relationships can be detrimental to self-esteem, as they may foster negative self-talk and feelings of inadequacy.
Seeking out positive social connections and maintaining healthy relationships can create a supportive network that reinforces self-esteem. Engaging in activities and spending time with people who share similar interests and values can foster a sense of belonging and acceptance, contributing to increased self-worth.
Tip 5: Limit Social Media Comparisons
Social media has become an integral part of modern life, but it can also significantly impact self-esteem. Constantly comparing ourselves to carefully curated and often unrealistic portrayals of others on social media can lead to feelings of inadequacy and self-doubt.
It is essential to recognize that social media is a filtered representation of people’s lives and not an accurate reflection of reality.
Limiting exposure to social media or being mindful of the emotions it evokes can help avoid unnecessary comparisons. Instead, focus on personal growth and celebrate individual accomplishments, without the need for external validation.
Tip 6: Learn to Say No
Setting boundaries and learning to say no to things that do not align with our values and priorities is crucial for self-respect and improved self-esteem.
People with low self-esteem often struggle to assert themselves and may feel obligated to please others at the expense of their well-being.
Learning to say no respectfully allows individuals to prioritize their needs and protect their emotional boundaries. It empowers them to make decisions that are aligned with their values and goals, promoting a sense of self-worth and authenticity.
Tip 7: Face Your Fears
Confronting fears and stepping out of one’s comfort zone can be intimidating but is essential for personal growth and increased self-confidence. Avoiding challenges due to fear of failure or rejection can perpetuate feelings of inadequacy.
Facing fears and embracing new experiences can lead to a sense of accomplishment and empowerment. Each successful encounter with fear reinforces the belief in one’s capabilities and resilience, contributing to improved self-esteem.
Tip 8: Engage in Activities You Love
Engaging in activities that bring joy, fulfillment, and a sense of accomplishment can significantly enhance self-esteem. Hobbies, interests, and passions provide a sense of purpose and identity outside of external validation.
By dedicating time to activities that bring genuine happiness, individuals build a strong foundation of self-worth based on personal fulfillment. These activities serve as a reminder of individual strengths and the unique contributions each person brings to the world.
Tip 9: Practice Mindfulness and Meditation
Mindfulness practices involve being present in the moment without judgment and can help reduce stress and increase self-awareness.
Engaging in mindfulness techniques, such as meditation, deep breathing, or yoga, allows individuals to cultivate a deeper understanding of their thoughts and emotions.
Mindfulness helps break free from negative thought patterns and self-critical tendencies, promoting a more compassionate and non-judgmental view of oneself. By becoming aware of thoughts and emotions without attaching judgment or importance to them, individuals can cultivate a positive and supportive inner dialogue, fostering improved self-esteem.
Tip 10: Seek Professional Help if Needed
For some individuals, low self-esteem may significantly impact daily life and require professional guidance. Seeking support from a mental health professional, such as a therapist or counselor, can be instrumental in addressing underlying issues and building healthy self-esteem.
Therapy can provide a safe and non-judgmental space for individuals to explore their thoughts and feelings, uncover root causes of low self-esteem, and develop coping strategies.
Trained professionals can offer valuable insights and evidence-based techniques to help individuals foster self-compassion, challenge negative thought patterns, and build a more positive self-image.
Improving self-esteem is a journey that requires self-awareness, patience, and consistent effort. By understanding the concept of self-esteem, identifying its impact on various aspects of life, and implementing actionable tips, individuals can gradually build a healthier sense of self-worth. Embracing self-compassion, setting realistic goals, and seeking support from loved ones can significantly contribute to personal growth and happiness.
Frequently Asked Questions (FAQs)
FAQ 1: Can low self-esteem be fixed?
Yes, low self-esteem can be improved with self-awareness, positive changes in thought patterns, and seeking support from loved ones or professionals.
FAQ 2: How long does it take to improve self-esteem?
The time required to improve self-esteem varies from person to person. It depends on individual circumstances, willingness to change, and consistency in implementing positive practices.
FAQ 3: Can self-esteem affect academic performance?
Yes, self-esteem can influence academic performance. Students with higher self-esteem tend to have better focus, motivation, and confidence, leading to improved academic outcomes.
FAQ 4: Is low self-esteem linked to body image issues?
Yes, low self-esteem is often associated with body image issues. Negative body image can contribute to feelings of inadequacy and impact overall self-esteem.
FAQ 5: Can childhood experiences impact self-esteem in adulthood?
Yes, childhood experiences, particularly those involving criticism, neglect, or abuse, can have a lasting impact on self-esteem in adulthood.
FAQ 6: What role does social media play in self-esteem?
Social media can negatively affect self-esteem by promoting comparisons, unrealistic standards, and a constant need for validation.
FAQ 7: Can self-esteem affect job satisfaction?
Yes, self-esteem can influence job satisfaction. Individuals with higher self-esteem are more likely to feel confident in their abilities and enjoy their work.
FAQ 8: Is there a connection between self-esteem and assertiveness?
Yes, self-esteem and assertiveness are closely related. Individuals with healthy self-esteem are more likely to assert their needs and boundaries.
FAQ 9: Can self-esteem impact one’s ability to handle rejection?
Yes, low self-esteem can make it challenging to handle rejection. People with healthier self-esteem are better equipped to cope with rejection constructively.
FAQ 10: How can I support a friend with low self-esteem?
Supporting a friend with low self-esteem involves active listening, showing empathy, and encouraging them to seek professional help if needed. Offer genuine compliments and be a source of encouragement. |
Promoting Gender Equality and Education to Prevent GBV
More than 1.6 million people are internally displaced across South Sudan, in addition to 786,000 people who have fled to neighbouring countries since December 2013.[52] Most displaced children have not received any formal education since December 2013, and many have been exposed to numerous forms of violence, such as recruitment by armed groups, acute physical violence and a high incidence of sexual and gender-based violence.
World Vision puts a strong emphasis on working with communities to reinforce the value of women and men, girls and boys, and the significance of their contribution to their families, communities and society in all settings, including emergencies and fragile contexts, in order to build peaceful and sustainable societies based on gender equality. To do so, child well-being, education and protection are placed at the heart of every endeavour.
In South Sudan, World Vision has been delivering a programme funded by Irish Aid which illustrates how it is possible to tackle the two main causes of GBV (gender inequality and discrimination) through protection and education for internally displaced children and their families and communities. The programme is implemented with two underlying principles: promoting gender equality and preventing GBV through education for boys, girls, women and men.
Overall, the programme believes in and promotes the idea that IDPs, particularly women and girls, are the active and effective agents of change capable of contributing towards the betterment of the community they live in. Education and Protection programmes that promote gender equality, protection and prevention of GBV at their core are vital for helping women and girls in fragile communities unlock their potential in creating sustainable and peaceful environments.
[52] International Organisation for Migration (2016) Update, 6/9/16. Available at: https://southsudan.iom.int/media-and-reports/press-release/conflict-continues-drive-displacement-south-sudan
This outlook led to the formation of a number of key initiatives in three IDP camps and their host communities in Melut County, including:
- The formation of girls’ clubs in March 2016, aimed at empowering women and girls with a greater knowledge of gender equality, GBV, protection, and the importance of education.
- The recruitment and training of both male and female teachers and volunteers from within the community, to ensure robust engagement of men and boys.
- Close partnership between the NGOs in the field, at local, regional, national and international levels, which led to a Protection Working Group focused on child protection and GBV.
- Capacity-building training for staff, volunteers, women’s groups and parents’ associations on child protection, GBV, gender equality, girls’ education and early marriage, creating a protective environment for all children that is upheld by all the different members of the community.
- The involvement of religious leaders as agents of social change. Though religious leaders were not targeted directly in this particular programme, the World Vision team recognised their role as valuable members of the community, and they were engaged through members of parent-teacher associations. They actively worked to disseminate messages on child marriage and the importance of girls’ education during Sunday church services.
“ I want my child to get better education and become a good citizen of South Sudan. I want the world to support South Sudan to get peace. Let our children go to school and get better education. Because of lack of education people are fighting for many years in South Sudan.”
“After parents saw me and other female colleagues who are working with different agencies receiving good money, they are now supporting their daughters to go to school.”
Teressa is 24 years old and has been displaced twice in her life as a result of conflict in South Sudan. When she first entered the camp in Melut County, she didn’t see many opportunities for education and observed widespread gender inequality at the camp, with most men spending their days idle while women were responsible for collecting firewood, cooking, cleaning and looking after children. In August 2015, she applied to be a teacher at the Irish Aid-supported education project and started working as a volunteer assistant teacher. After three months, she became a teacher in one of the Early Child Development centres. Since April 2016, Teressa has been working with World Vision South Sudan as a Food Monitor and is now happy to be working in the area of nutrition and supporting her son.
Robert Hooke is born / discovers plant cells by looking at a piece of cork tissue.
Began to map out how the female genitalia work; discovered ovaries. Discovered nerve function, and that the circulatory system is not necessarily for movement. Discovered that muscle volume does not increase as it contracts.
Anton Van Leeuwenhoek
(Do not rely on date, only year) He discovers bacteria and yeast cells, and explains the circulation of blood. Discovers how to make better microscope lenses.
(Year correct, not date) Investigated the human eye and discovered the water-gas shift reaction.
(Date is birthday, not discovery date) Researched protozoans and other invertebrates.
Investigated osmosis, respiration, embryology, and the effect of light on plants. Given credit for discovering cell biology.
(Correct year, not date) Discovered the cell nucleus and cytoplasmic streaming. First observation of Brownian movement.
Helped develop cell theory; discovered Schwann cells of the peripheral nervous system; discovered pepsin.
(Do not rely on date, only year) Famous botanist and microbiologist. Helped create cell theory.
First recognized leukemia cells. Also took up the theory of cell division, which he plagiarized from Robert Remak.
(Correct year, not date) Introduced the notion of mitochondria.
One of the first to discover cell membranes.
(Do not rely on date, only year) Best known for her theory on the origin of eukaryotic organelles, and her contribution to endosymbiotic theory.
Williamson County, TN FAQs
What is the history of Williamson County, TN and what are its notable landmarks?
- located in the state of Tennessee, USA
- established on October 26, 1799
- named after Hugh Williamson, a North Carolina politician who signed the U.S. Constitution. The county is located in Middle Tennessee and is part of the Nashville-Davidson–Murfreesboro–Franklin Metropolitan Statistical Area.
- Native American tribes, including the Cherokee and Chickasaw, lived in the area before the arrival of European settlers.
- The first European settlers arrived in the late 1700s and the county played an important role in the Civil War, with several battles taking place in the area.
- Carter House
- located in Franklin
- played a significant role in the Civil War, as it served as a Union headquarters during the Battle of Franklin in 1864
- now a museum and is open for tours.
- Natchez Trace Parkway
- runs through the county
- a 444-mile scenic drive that follows a historic route used by Native Americans and early settlers
- Lotz House Museum
- also located in Franklin
- served as a Confederate field hospital during the Battle of Franklin
- Leiper’s Fork Village
- a historic village with shops, galleries, and restaurants
- Harpeth River State Park
- Franklin Recreation Complex
- Bowie Nature Park
What are the recreational activities available in Williamson County, TN?
Williamson County, TN has a variety of recreational activities for visitors and residents alike. Some popular activities include:
- Hiking: The county has several hiking trails, including the Natchez Trace National Scenic Trail, the Franklin Battlefield Trail, and the Warner Parks Nature Center Trails.
- Biking: The Natchez Trace Parkway has over 40 miles of bike trails, while the county’s rural roads provide a great opportunity for road cycling.
- Golfing: Williamson County has several golf courses, including the Vanderbilt Legends Club, the Governors Club, and the Golf Club of Tennessee.
- Fishing: The Harpeth River and several lakes in the area provide opportunities for fishing.
- Horseback riding: The county has several horseback riding trails, including the Natchez Trace National Scenic Trail and the Franklin Battlefield Trail.
- Parks: Williamson County has several parks, including Crockett Park, Pinkerton Park, and Harlinsdale Farm.
- Sports: The county has several sports facilities, including the Franklin Recreation Complex and the Williamson County Soccer Complex.
- Shopping: The county has several shopping centers, including the CoolSprings Galleria and the Factory at Franklin.
- Historic Sites: Historic sites such as the Carter House and Carnton Plantation are popular attractions for visitors to the county. |
There are many different types of sand, each with its own unique set of properties that make it suitable for different construction projects. The size, color and specific physical properties of each type of sand aggregate can lend distinct advantages and drawbacks to a given project.
Having the right type of sand can be the difference between a job that goes smoothly and one that’s riddled with problems.
Why Sand is Important in Construction
Sand, one of the most important and most commonly used materials in construction, can be found in a wide variety of building products ranging from concrete to mortar to plaster.
To ensure the success of a construction project, the foundation must be solid. And that solid foundation starts with a strong base of concrete. Concrete is made up of four main ingredients: cement, water, sand and coarse aggregate such as gravel or crushed rock.
Sand is the element that provides the necessary strength and stability to the concrete mixture. Without sand, the concrete would not be able to support the weight of what is being built on top of it. Different types of sand work to create a more tailored concrete quality that best matches the needs of the project.
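As a rough, back-of-the-envelope illustration of those proportions, here is a minimal Python sketch. It assumes a common nominal 1:2:4 cement:sand:gravel mix by volume and the widely used 1.54 dry-volume allowance; actual mixes and factors vary by project and supplier, so treat it purely as an example of the arithmetic.

```python
# Estimate loose material volumes for a small concrete pour using a
# nominal 1:2:4 (cement:sand:gravel) mix by volume.
# The 1.54 "dry volume" factor is a common rule-of-thumb allowance for
# voids and compaction; real-world values vary.
def estimate_materials(length_m: float, width_m: float, thickness_m: float):
    wet_volume = length_m * width_m * thickness_m   # finished concrete volume, m^3
    dry_volume = wet_volume * 1.54                  # loose materials needed, m^3
    parts = 1 + 2 + 4                               # total mix parts
    cement = dry_volume * 1 / parts
    sand = dry_volume * 2 / parts
    gravel = dry_volume * 4 / parts
    return cement, sand, gravel

# Example: a 4 m x 3 m slab poured 10 cm thick.
c, s, g = estimate_materials(4.0, 3.0, 0.10)
print(f"cement ~{c:.2f} m^3, sand ~{s:.2f} m^3, gravel ~{g:.2f} m^3")
```

For that example slab, the estimate works out to roughly 0.26 m³ of cement, 0.53 m³ of sand and 1.06 m³ of gravel, which shows how quickly sand becomes the second-largest ingredient by volume.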
Mortar is a type of cement used to attach bricks or stones together and also contains cement, water and sand. Unlike concrete, mortar does not require any gravel or crushed rocks; the sand provides all the necessary strength and durability for mortar.
Plaster is a type of finish that is often used on walls and ceilings that gives a smooth and finished look to a surface. Plaster is made up of four main ingredients: water, lime, sand and hair or other fibers. The sand provides the bulk of the material and helps to give the plaster mixture its strength and stability.
These products demonstrate that sand is an essential ingredient beneficial to many types of construction materials. However, due to the many different types of construction projects, there are also many different types of sands available for use. Each type of sand has its own specific properties that make it ideal for certain applications. When selecting sand for your next construction project, be sure to select the right type for the job.
5 Types of Sand Used in Construction
Sand is classified by a variety of different characteristics such as size, color and even the source from which the sand is created. Each type offers a unique set of benefits that are ideally suited for specific types of projects.
Fine Sand
Fine sand is made up of small, individual particles that measure less than 2 millimeters in diameter. This type of sand is often used in construction projects, as it can be easily packed and molded to create the desired shape. Fine sand is also effective at filling small gaps and cracks, making it an ideal material for use in mortar and grout.
Additionally, fine sand has a high surface-area-to-volume ratio, which allows it to absorb and hold a large amount of water. This type of sand, however, is not ideal for use in creating concrete: concrete made with fine sand is prone to cracking and weathering, which undermines durability.
Coarse Sand
Coarse sand can be used as a base for a concrete slab, added to mortar to create a stronger bond, or used to fill in gaps between bricks or stones to create a solid foundation. Coarse sand is also very commonly used as a surface layer on driveways and walkways to provide traction in wet weather.
Beach Sand
Beach sand is a type of sand that is typically used in construction projects that require a smooth texture and a high-quality finish. Being a naturally compacted material, it is known for its superior quality and its ability to withstand weather conditions. Beach sand is also known for its ability to resist erosion, which makes it a perfect choice for construction projects that are located near the water.
Pit Sand
Pit sand is found at the bottom of quarries and pits and offers great binding properties. It is often created through the use of a rock-crushing machine, which also gives managers the freedom to choose between different sizes/makeups of the mixture. Due to the varieties available, pit sand is a great option for any number of different types of projects that may have unique needs.
Fill Sand
Fill sand is a unique type of sand that is created to be used as a filler. It is also known as backfill sand and is used in construction projects to fill in gaps and level surfaces. It is usually a little coarser and less expensive than other types of sand, making it a popular choice for budget-minded builders.
The Right Sand for the Unique Demands of Your Project
It is important to remember that not all sands are created equally; therefore, be sure to select the right type of sand for the job. If you are unsure which type is best, consult with our team of experts at 941-621-8484. |
Shipping from the State of New York to Minnesota
The Dutch were the first to settle by the Hudson River in 1624. After two years, they founded the colony of New Amsterdam on Manhattan Island. By 1664, the English had taken control of the area and changed its name to New York. A part of the original 13 colonies, New York played a vital political and strategic role in the American Revolution.
Shipping to the State of Minnesota
Minnesota was admitted to the Union as the 32nd state on May 11, 1858. Nicknamed the Land of 10,000 Lakes or the North Star State, it is the most northerly of the 48 conterminous U.S. states. Minnesota borders the Canadian provinces of Manitoba and Ontario to the north, Lake Superior and Wisconsin to the east, Iowa to the south, and South Dakota and North Dakota to the west. Minnesota is home to the Mall of America, which contains over 400 stores and draws nearly 40 million people a year. Minnesota’s standard of living index is among the highest in the country, and it is also among the best-educated and wealthiest states in the nation.
The state is part of the U.S. region dubbed the Upper Midwest and of North America’s Great Lakes Region. Covering approximately 2.25% of the United States, Minnesota is the 12th-largest state. It also contains a major concentration of transportation, business, industry, education, and government.
The state capital is St. Paul, and L’Étoile du Nord (“Star of the North”) has been adopted as the state motto.
Clarence W. Brett Park at Historic New Bridge Landing
River Rd. and Riverview Ave.
Map / Directions to Clarence W. Brett Park at Historic New Bridge Landing
The park is located next to the site of the "New Bridge," an important crossing over the Hackensack River during the Revolutionary War. The Revolutionary War significance of New Bridge Landing is explained in detail on the River Edge page.
Part of the Hackensack River Greenway in Teaneck, a 3.5-mile trail along the river, goes through the park. For more information, see the Friends of the Hackensack River Greenway in Teaneck website.
1780 Encampment Site
Teaneck Rd. and Cedar Ln. (In front of Holy Name Hospital)
Map / Directions to the 1780 Revolutionary War Encampment Site
The British occupied New York City from late 1776 until the end of the war in 1783. Because of this, General George Washington kept his army in New Jersey for much of the war, where he could keep an eye on New York City and guard against attack by British forces. Washington was trained as a surveyor as a young man, and had an appreciation and understanding of topography; therefore, he chose his encampment sites well. When in New Jersey, he usually positioned the army in locations behind the Watchung Mountains, which provided a layer of protection from British attack, notably in the Morristown and Middlebrook encampments.
The Continental (American) Army encamped in this area from August 22 to September 3, 1780, stretching through Englewood and Leonia. What made this encampment different from some earlier New Jersey encampments was its closeness to the British troops in New York City. From New York City, British troops could travel by boat across the Hudson River, disembark at the New Jersey Palisades, and then easily reach this area by roads over the generally moderate terrain. In fact, raids were regularly made this way by British and Loyalist forces on the residents of this area throughout the war.
Aware of the danger of attack by British forces, Washington wrote the following in his orders for August 23, 1780:
(To avoid confusion, note that these orders were written by Washington - he is referring to himself in the third person throughout the orders.)
"The Army being now very near the Enemy. The Genl flatters himself every Officer and Soldier will make it a point of Honor as well as duty to keep constantly in Camp and to be at the shortest notice ready to Act as circumstances may require. He is at the same time persuaded, should an opportunity be afforded us that every part of the Army will vie with each other, in the display of that conduct[,] fortitude and bravery which ought to distinguish troops fighting for their Country, for their liberty, for every thing dear to the Citizen, or to the Soldier."
On September 4, the army would move several miles inland, where they would remain until September 20. Troops encamped along Kinderkamack Ridge, from what is now Van Saun Park in Paramus to Soldier Hill in Oradell. While they were still open to attack by the British, the move placed them behind the added protection of the Hackensack River. As it turned out, no British attack came. The Battle of Springfield, which occurred several months earlier on June 23, ended up being the last major battle fought in the North. The emphasis of the fighting shifted to the southern states, culminating in the decisive Battle of Yorktown on October 19, 1781.
1. ^ For more information about raids made in Bergen County by British troops and/or Tories, see the Cresskill, Demarest, Dumont, Elmwood Park, Hackensack, Harrington Park, and Ridgefield Park pages of this website.
2. ^ General George Washington, After Orders, Headquarters, Aug 23, 1780 Te[a]neck, reprinted in:
Orderly Book of the New Jersey Brigade, July 30 to October 8, 1780, From the Original Manuscript in the New York Public Library (Bergen County Historical Society, 1922) Pages 27-28
Available to be read at Google Books.
In recent years, we’ve seen robots and computers take over a variety of tasks once reserved exclusively for humans. That includes the equipment used to assemble vehicles, as more and more robots are used these days to do everything from install components to performing paint work. This has led many to fear that one day, there will be no people present in assembly plants at all. But that doesn’t appear to be in the cards for Ford, at the very least.
“I think we’ll always need the human touch, with humans getting in the vehicle and doing certain things,” Gary Johnson, Ford’s chief manufacturing and labor affairs officer, told Ford Authority executive editor, Alex Luft, in a recent interview. “We obviously want to improve the safety aspects of the assembly process and improve quality, but we’re always going to need people.”
Ford’s assembly process is a critical part of its history, of course. Henry Ford installed the very first assembly line used for the mass production of automobiles way back in 1913. That single innovation reduced the amount of time it took the automaker to assemble vehicles from over 12 hours down to just one hour and 33 minutes. It also drastically cut Ford’s production costs, which in turn allowed it to drop the price of the Model T.
As a result of those lower production costs, Ford also famously began paying his assembly line workers an astounding (at the time) $5 a day in 1914, but he was always looking for ways to improve efficiency. That led Ford to begin building machines that could stamp parts much more quickly than humans could.
Since then, automakers continue to work on improving production efficiency, and that has led to the use of many machines and robots in assembly plants. But at least for the foreseeable future, it doesn’t look like Ford’s plants will be fully automated and completely devoid of assembly workers. |
According to the American Heart Association, ongoing research shows that the cardiovascular endurance of America’s kids is getting worse. Their research shows a drop of 6% per decade between 1970 and 2010. In fact, the cardiovascular health of children in nations around the world has declined by 5 percent each decade. We just lead the pack with 6 percent.
Some other eye-opening statistics from the research show that kids today are “roughly 15 percent less fit from a cardiovascular standpoint than their parents were as youngsters” and that they run a mile a minute and a half slower than children from 30 years ago.
All of this means it’s recommended that parents think about kids’ fitness a lot more than they do right now. And not just think about it: do something about it.
When Does Fit Mean “Fit”?
There are several ways that kids can be fit.
- Strong (like a weightlifter)
- Flexible (like a gymnast)
- Skillful (like a tennis player)
Not all of these types of “strength” relate well to “health,” according to Grant Tomkinson, Ph.D., lead author of the study behind the American Heart Association’s research. “The most important type of fitness for good health is cardiovascular fitness, which is the ability to exercise vigorously for a long time, like running laps around a track.”
The cardiovascular trend for the world’s children (and especially America’s) is reason for concern, but these trends can be changed, even for children who are part of this “degenerating generation.” The cardiovascular habits of children can be improved (with the addition of cardiovascular activities to their daily routines and lifestyle changes) so that cardio endurance improves.
It is important to become familiar with the components of childhood fitness. It is multifaceted–encompassing a number of aspects that have an impact on health and well-being.
- Flexibility pertains to the body’s range of motion. The goal of flexibility training is to have the maximum range of motion without pain or stiffness.
- Strength refers to the amount of weight the muscles can push, pull or support. However, strength training also strengthens the bones.
- Cardiovascular endurance is the heart’s ability to withstand extended periods of activity.
- Muscular endurance is the time the muscles can withstand pushing, pulling or supporting weight.
- Body composition is the amount (or percentage) of fat versus non-fat (bone, skin, muscle, etc.) in the body.
It’s also important to understand the anatomical and physiological differences between children and adults. Keep in mind that every child is different, with some stronger in one area than another.
Because children grow in spurts, they are always in the process of acclimation and may lack coordination. This makes them more vulnerable to injury, and any plan to improve child fitness should account for childhood growth patterns. Children’s core muscles (the muscles in the hips, back and abdomen) are not fully developed and are therefore weaker than the core muscles of adults.
Children often lack flexibility, which is an integral part of fitness and a preventative factor when it comes to injury. Therefore, flexibility training should be incorporated into any childhood fitness program.
There are several factors that can impact a child’s cardio endurance:
A well-balanced healthy diet can improve a child’s endurance. Having a daily diet that is full of nutritious foods can provide a child with more energy during school and after-school activities.
Parents can encourage healthy eating habits in their children by making healthy food choices themselves. Foods that increase stamina include bananas, red grapes, complex carbs and iron-rich foods.
The American Academy of Pediatrics suggests a diet that includes a mix of foods from the five food groups: fresh vegetables and fruits, whole grains, low-fat dairy, and quality lean protein sources, including lean meats, fish, nuts, seeds and eggs.
A daily routine that includes physical activity will get a child into a habit of staying active throughout their life. A daily routine that encourages fitness helps a child build up endurance.
It’s important to mix up the type of activities the child is doing. Walking or jogging, cycling, swimming and low intensity dancing are all activities that are aerobic exercises, which are low to high intensity exercises that primarily depend on aerobic energy-generating processes.
Having a child walk for an hour one day is just as useful for their cardio endurance as swimming for 30 minutes another day. Mixing up these activities keeps the child from getting bored of repeating the same activity each day.
Some children need the motivation of competition to keep them active. Getting them involved in sports and activities such as gymnastics or cheer can keep them physically fit and active, and add enough competition to hold their interest and enthusiasm. Children should still use aerobic activities to keep them performing at their best. For example, a sport such as basketball draws on the aerobic base built by jogging, since players are running up and down the court.
Gymnastics or Cheer Involvement
Supplementing cardio workouts with competitive sports such as gymnastics or cheer can improve the overall experience for a child. The benefits of cardiovascular endurance for these athletes include improved posture and health, enhanced stamina and performance, improved anaerobic ability (high-intensity floor exercises, for example, are anaerobic), reduced risk of fatigue with enhanced concentration, reduced stress levels, a boosted immune system and a reduced risk of injury.
Follow a Plan
One of the best approaches for parents is to develop a childhood fitness plan for their child, based on the components of fitness, an assessment of the child’s fitness level and knowledge of the anatomical and physiological differences between adults and children. Your child’s fitness plan should include 60 minutes of physical activity every day, incorporating 3 total hours of strength training (for muscles and bones) per week, as sketched below.
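As a purely illustrative sketch (the specific activities and strength days below are assumptions, not a prescription), here is one way a week meeting the 60-minutes-a-day and 3-hours-of-strength-work guidelines could be laid out:

```python
# Toy weekly planner: 60 minutes of activity every day, with strength work
# placed on three days so it totals the 3 recommended hours per week.
AEROBIC = ["walking", "cycling", "swimming", "dancing", "jogging"]
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
STRENGTH_DAYS = {"Mon", "Wed", "Fri"}  # 3 x 60 min = 3 hours of strength work

def weekly_plan() -> dict:
    plan = {}
    for i, day in enumerate(DAYS):
        if day in STRENGTH_DAYS:
            plan[day] = "60 min strength play (climbing, bodyweight games, gymnastics)"
        else:
            plan[day] = f"60 min {AEROBIC[i % len(AEROBIC)]}"  # rotate to avoid boredom
    return plan

for day, activity in weekly_plan().items():
    print(f"{day}: {activity}")
```

Rotating the aerobic activity day by day mirrors the advice above: the variety keeps the child engaged while the weekly totals stay on target.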
(This post was published on our previous blog on 8/25/2016.)
By Maryrose Grossman, Audiovisual Reference Archivist
The centennial of the National Park Service on August 25, 2016 marked an occasion to consider John F. Kennedy’s relationship with the National Park Service and its goals of preserving natural and cultural resources and making them available to present and future generations. The Organic Act of 1916, establishing the National Park Service, stated:
The service thus established shall promote and regulate the use of the Federal areas known as national parks, monuments, and reservations hereinafter specified by such means and measures as conform to the fundamental purpose of the said parks, monuments, and reservations, which purpose is to conserve the scenery and the natural and historic objects and the wild life therein and to provide for the enjoyment of the same in such manner and by such means as will leave them unimpaired for the enjoyment of future generations. (U.S.C., title 16, sec. 1)
John F. Kennedy had voiced his support for these ideals before he was President. He was in particular an advocate for water conservation and expressed his position on the matter during the 1960 presidential campaign. He spoke of…
…the development of our vital water resources – water to reclaim the land, to supply the power for cities and industry, to provide the opportunity for recreation and pleasure, and to meet the needs of a rapidly expanding population…
In 1956, the National Park Service issued a report entitled “Our Vanishing Shoreline” that decried the rapid loss of undeveloped and unspoiled seashore and urged the U.S. Government to acquire shoreline land so as to preserve it from private and commercial development and make it available for all. The Eisenhower Administration opposed public land acquisition, but John F. Kennedy answered the clarion call.
President Kennedy’s urgent advocacy of acquiring seashore as public land aided in the establishment of the Cape Cod National Seashore, Padre Island National Seashore, and Point Reyes National Seashore. At the signing of the legislation that established Point Reyes National Seashore in California on September 13, 1962, President Kennedy remarked:
The enactment of this legislation indicates an increased awareness of prompt action – and I emphasize that particularly with the population increase and these areas disappearing under that pressure – and the necessity for prompt actions to preserve our nation’s great natural beauty areas to insure their existence and enjoyment by the public in the decades and centuries to come.
In the White House Message on Conservation, released March 1, 1962, the President had reiterated his support for the development of additional National Park Service areas:
Last year’s Congressional approval of the Cape Cod National Seashore Area should be regarded as the path-breaker for many park land proposals pending before Congress. I urge favorable actions on legislation to create Point Reyes National Seashore in California; Great Basin National Park in Nevada; Ozark Rivers National Monument in Missouri; Sagamore Hill National Historic Site in New York; Canyonlands National Park in Utah; Sleeping Bear Dunes Lakeshore in Michigan; Prairie National Park in Kansas; Padre Island National Seashore in Texas; and a National Lakeshore area in Northern Indiana.
President Kennedy’s efforts to bring awareness and action to the issue of conservation culminated in the Conservation Tour, a 5-day 10-state trip in September 1963, which included a stop for the Whiskeytown Dam Dedication in Whiskeytown, CA. President Kennedy remarked that with the new dam, the people can…
…enjoy new opportunities for constructive recreational use, and new access to open space as a sanctuary from urban pressures. And, of great importance, the flow… can now be regulated for the benefit of the farms and cities in the lower valley.
Not quite a National Park area in 1963, Whiskeytown Dam became part of Whiskeytown National Recreation Area in 1965.
John F. Kennedy was an active advocate for the conservation of public land and resources before and during his presidency. In his remarks at the White House Conference on Conservation on May 25, 1962, he stated:
I don’t think there is anything that could occupy our attention with more distinction than trying to preserve for those who come after us this beautiful country we have inherited.
President Kennedy’s voice was silenced within two months of the Conservation Tour, but his words still echo the goals and ideals of the National Park Service and all those who have advocated for the conservation of public land for the greater good. |
Heacham lies on the north-west coast of Norfolk, fourteen miles north-east of King's Lynn.
The town sign depicts the Indian princess Pocahontas, who married John Rolfe (1585-1622). The Rolfes had been lords of the manor at Heacham. The story of Pocahontas has been the subject of many novels, children's books and, more recently, a Disney animated film.
Rolfe travelled to America with his wife on board the Sea Adventure, but when they arrived in Virginia his wife fell ill and died. Rolfe then married Pocahontas, the daughter of Powhatan, the most important Indian chief in Virginia. Pocahontas had previously saved the life of Captain John Smith in 1607 by throwing herself between him and Powhatan's braves.
In 1616, Rolfe, Pocahontas and their child returned to England, where she became a celebrity and was presented at court. She also converted to Christianity and changed her name to Rebecca. However, the English climate did not agree with her and she died in March 1617. Rolfe remarried and returned to Virginia, where he died.
Inside St Mary's Church there is an alabaster memorial to Pocahontas, which was erected in 1933.
Article Summary: Disturbed sleep contributes to significantly diminished mental health and is a gateway to an increased risk of serious mental health problems.
Depression and insomnia
Insomnia was in the past seen as a symptom of ‘something else’ or, if associated with depression, the general consensus was that the insomnia would just ‘go away’ when the depression was treated. This concept was first challenged by the finding that, in individuals who had previously experienced depression, sleep disturbance in the form of insomnia was a symptom that preceded a recurring bout of depression (Breslau, Roth, Rosenthal, & Andreski, 1996). More direct relationships between untreated insomnia and depression have since been established (Riemann & Voderholzer, 2003; Cole & Dendukuri, 2003).
Cognitive model of insomnia
The ‘wired and tired’ or hyperarousal state is a common distressing feature of insomnia. Excessive negative cognitive activity leads to increased physiological hyperarousal and selective attention (Harvey, 2002). Chronic insomnia is maintained by worry, unhelpful beliefs about sleep, use of safety behaviours, monitoring of the sleep-related threats, and inaccurate perceptions of sleep and the consequences of sleep loss.
The challenge faced by the client/patient is not in learning the relaxation and cognitive exercises, but in mustering the discipline to stick to a practice schedule and taking action to use the exercises on an as-needed basis.
Worry precipitates and/or perpetuates insomnia. Worry about the daytime consequences of not obtaining enough sleep (associated with increased absenteeism and performance anxiety) triggers the flight or fight response (Bonnet & Arand, 1997), resulting in more emotional distress and worsening sleep. Unhelpful or dysfunctional beliefs about sleep include statements such as “I need 8 hours of sleep every night to feel refreshed” and “If I don’t sleep well at night I know I cannot possibly function well the following day”, which exemplify the unrealistic beliefs maintained by worry.
As a consequence, less sleep leads to less energy and more frustration. This double whammy (negative thinking and decreased physical energy with an increase of physical health problems) affects brain chemistry and contributes to the return of clinical depression.
Safety behaviour is a paradox as the individual engages in behaviour that is more likely to make their sleep worse (e.g., napping, sleeping late, consuming large amounts of caffeine/alcohol, erratic use of medications). Monitoring of sleep-related threats further perpetuates the worry, and this attentional bias results in continual internal checking for a reason why the individual is not sleeping (e.g., too hot, too cold, stiff shoulder, breathing partner) or for an external cause for sleeplessness (e.g., dog barking, tap dripping).
An additional challenge faced by the client/patient is the temptation to self-soothe with mood-altering substances (clinically referred to as self-medicating). As social work clinicians and mental health experts, we remind our clients of common-sense information, such as how alcohol, caffeine, sugar, tobacco, etcetera throw the proverbial wrench into the fine balance of brain chemistry and prescription medications. Striving for a holistic life by eliminating negative habits and adding good habits during times of personal crisis can feel like the task of Sisyphus in the Greek myth. This is why we have a Wellness Program at Lidkea Stob & Associates.
All of these perceptions have a negative effect on daytime mood and performance. An inaccurate perception of sleep is common. Good sleepers tend to overestimate their sleep and individuals with insomnia underestimate their sleep. Interestingly, a difference of only 35 minutes of objectively measured sleep was found between good sleepers and those with insomnia (Chambers & Keller, 1993).
Summarized by Dan.
More to come from other LSA Associates each week.
The full article and references can be found by following this link. |
MLA indicates the Bible should be included in your Works Cited page. Each specific edition or version of the Bible or a named publication such as ESV Study Bible must have its own works cited entry.
MLA indicates that scripture such as Bible, New Testament, Old Testament, or a specific book of the Bible should not be in italics. However, if you list a specific published edition of the Bible, that edition should be in italics such as The NIV Study Bible.
MLA provides a list of abbreviations for specific books of the Bible. These abbreviations should be used in parenthetical citations.
How to cite the Bible in your Works Cited page depends on what Bible you are using.
Holy Bible. Today's New International Version, Zondervan, 2005.
ESV Study Bible. General Editor, Wayne Grudem, Crossway, 2008.
The first time a specific edition or publication is used, MLA indicates to state, either in the body of the paper or in your parenthetical citation, the first part of the works cited entry, which is usually the title of the specific Bible used. Then give the abbreviated name of the book followed by chapter and verse number. Here is an example of a parenthetical citation.
(ESV Study Bible, I Tim 1:3)
Additional parenthetical citations for the same publication use just the book, chapter, and verse, for example (I Tim 1:3).
When changing to a different edition or version, make sure to identify the new version in your parenthetical citation. |
In Roman mythology, they were Apollo and his sister Diana. In Babylonia, twins of indefinite gender were known as Mastabagalgal. Patriarchal revisions told of male twins, the Dioscuri, Castor and Pollux. Earlier, old myths told of androgynous twins and twin pairings of Gods and Goddesses, spouses, and mother and child. An androgynous view of the heavenly Twins is more accurate as an interpretation of the union of opposites, much like the yang and yin symbol.
The Mother of Time, also known as the Two Ladies, symbolizes the transitional gate of midwinter, looking both backward and forward. She rules the Celestial Hinge at the back of the North Wind around which the universe revolves.
The cycles of nature were renewed by the Egyptian Vulture Goddess, Nekhbet, and the Serpent Mother, Uatchet, archaic Goddesses known as the Two Mistresses. All pharaohs ruled by their authority.
Egyptian Revival Nekhbet Brooch- Silver & plique a jour enamel. Circa 1925. Image from www.veniceclayartists.com
Near Delagoa Bay, in southeast Africa, the Baronga tribe believes twins influence weather. The name Tilo is given to a woman who bears twins and her infants are called children of the sky. These women are responsible for performing a series of rituals to bring down the rain in times of drought.
The number two, embodied in myths as twins or the aspects of twin-like gods or goddesses indicates universal beliefs regarding the power of two and “two” symbolism related to duality, transition and new beginnings, cycles, and the ability to influence nature as numbers are thought to be an integral part of the harmony of the universe. |
"Half Tatami mat for waking up, one mat for sleeping."
This is a proverb used and loved by people during the late Edo period in Japan. The author is unknown. It implies that no matter how huge a mansion you live in, and no matter how gorgeous your room may be, the space you need is just half a tatami mat for sitting or standing and one mat for sleeping.
This saying is usually quoted to emphasize the importance of satisfaction and a contented life, but I understand it differently. I see this saying as a great example of human equality and of how we can see people as equal.
Needless to say, there is a tremendous variety of people and many races in this world. Then how can we respect each other? And how can we be happy together? People during the Edo period must have sought this positive understanding because people at that time were forced to live in a world of inequality.
They must have thought about equality. What was common between the samurai and the merchant? What was common between the nobles and the commoners? Thus they arrived at the viewpoint that the minimum space people need is almost the same. Namely, they were able to reach the idea of equality by realizing the size of the human being. They realized that whether people are rich or poor, samurai or farmer, merchant or priest, there is not much difference in their size or in the minimum space a human being needs for living.
While racial discrimination still goes on, we should all realize that we are not essentially different. Interestingly, this proverb later got an additional phrase: even if you become a ruler of the world, you cannot eat more than two and a half cups of rice at a time. This implies that even if you acquire an abundance of rice, the amount you can eat daily is very limited, just like ordinary people.
I like this short saying very much. However, I think the last part is not always correct, because I've eaten three cups of rice at a time by making Musubi.
“In my degree programme in Physics, I deal with quanta and quantum information. In this area, the University of Vienna focuses on photons, the fundamental light particles. Light particles can have different energies that, in certain areas, correspond to the colours that we see. However, light particles cannot only have different energies but can also differ regarding other properties. In the laboratory, we can even overlay different properties.
Imagine a box filled with black or white balls. When we take a white ball out of the box, we are used to thinking that this ball has always been white. It would be counter-intuitive to assume that this ball is white only now because I see it right now, right? This intuition does not apply to smaller objects. This means that the ball can be white and black at the same time and only appears in a certain colour once I take it out of the box to look at it. This is the so-called superposition principle. Another important principle in the world of quanta is called entanglement. It means that the properties of two objects are correlated.
If you have never heard of these principles before, you might object that this cannot possibly work. In fact, many physicists are puzzled by this theory as well; there have been many attempts to interpret quantum mechanics. The best-known interpretation goes like this: interactions between a measuring instrument and the physical object being measured cause the superpositions to collapse. So, the property of an object only becomes reality by measuring it.
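A minimal, illustrative NumPy sketch of these two principles (a toy model, not the photonic experiments described here): an equal superposition of two "colours", and a Bell state whose two measurement outcomes always agree.

```python
import numpy as np

rng = np.random.default_rng()

# A single "ball" in an equal superposition of white (|0>) and black (|1>):
# amplitudes 1/sqrt(2) each, so a measurement yields either colour with p = 1/2.
superposition = np.array([1.0, 1.0]) / np.sqrt(2)
probs = np.abs(superposition) ** 2
print("Measured colour:", rng.choice(["white", "black"], p=probs))

# Two particles in the entangled Bell state (|00> + |11>) / sqrt(2):
# outcomes are perfectly correlated -- both white or both black, never mixed.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
joint_probs = np.abs(bell) ** 2   # probabilities for |00>, |01>, |10>, |11>
print("Joint outcome:", rng.choice(["00", "01", "10", "11"], p=joint_probs))
```

Running the second sample many times never yields "01" or "10": the two particles' colours are correlated, which is exactly what entanglement means here.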
I was fascinated by quantum mechanics exactly because of this inherently elusive character. The principles of superposition and entanglement are the cornerstones on which academics around the world try to build secure communication and better computers. In future, this will open up unimaginable possibilities and, in particular, increase computing power immensely.
To contribute my share to this future, I decided to write my master’s thesis in this research area. I was able to prove in my experiments that quantum effects also lead to an increase in computing power when applied to certain machine learning algorithms. As my experiment uses photons, the University of Vienna – as the top performing university in this area – provides the perfect environment for my work.” – Beate Elisabeth Asenbeck
Beate studied Physics at the University of Vienna.
P.S.: On 11 Feb, students of the Vienna Doctoral School of Physics (VDSP) are organizing a conference highlighting the “International Day of Women and Girls in Science”. The event will feature the careers and experiences of women in science in keynote talks and scientific flash talks. For further information, please follow this link.
At home, with proper safety measures, we can melt plastic for recycling purposes. Not all types of plastic can be melted at home. The most common and least toxic plastics that can be recycled at home are polyethylene and polypropylene, which are found in food packaging.
The most common approach is to use a portable oven and melt the plastic inside it, because this should not be done in the same oven where we will later cook food.
The plastic must first be shredded, taking care that microplastics do not end up in the garbage. Once shredded, it is added to the mold in which it will be melted and left for a minimum of 20 minutes at a maximum temperature of 190 degrees.
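As a quick reference for the first step (identifying the plastic), here is an illustrative Python sketch; the resin-code table and temperature ceilings are approximate assumptions drawn from common guidance, not tested values.

```python
# Rough, illustrative guide only: resin codes of common household plastics,
# whether home melting is generally considered reasonable, and a ballpark
# oven ceiling. Temperatures are assumptions, not lab values -- always
# ventilate well and stay under the ~190 degree limit mentioned above.
RESINS = {
    1: ("PET",  False, None),  # drink bottles; fumes make it a poor candidate
    2: ("HDPE", True,  180),   # milk jugs, bottle caps
    3: ("PVC",  False, None),  # releases toxic chlorine compounds; never melt
    4: ("LDPE", True,  160),   # bags, squeeze bottles
    5: ("PP",   True,  190),   # food tubs, straws
    6: ("PS",   False, None),  # styrene fumes; avoid
}

def home_melting_advice(code: int) -> str:
    name, ok, ceiling = RESINS.get(code, ("unknown", False, None))
    if ok:
        return f"{name} (code {code}): candidate for home melting, keep under ~{ceiling} C"
    return f"{name} (code {code}): do not melt at home"

for code in sorted(RESINS):
    print(home_melting_advice(code))
```

The resin code is the small number inside the recycling triangle stamped on most packaging, so checking it before shredding is an easy habit to build.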
How to melt plastic bottles
There are many ways to melt plastic bottles. One of them is to take a metal container and add the bottle cut into small pieces. Fill the metal container and put it in the oven. If you can get a small stove and take it outside the house, this will prevent the melted plastic from sticking to another surface and keep the smell of melting plastic out of the house.
You can set the stove to about 120ºC. Let everything melt well inside the metal container. Some fumes may come out; the more the plastic burns, the more harmful these become. Leave the container baking for about 4 minutes, and then increase the heat gradually in small steps until you see that it is completely melted.
After the time has passed and the plastic is well melted, remove the metal container without burning yourself. Then prepare a mold to shape the plastic; the mold can be any shape you want. Pour the liquid material inside, helping it along with a wooden stick if needed. Finally, let it cool completely, then unmold, and you will see the finished figures.
Finally, if you have made plastic flowers and you just want to give a little shape to the petals, use a candle or a lighter and melt the edges slightly. Do not put the heat source too close; keep it at a safe distance so that the plastic does not burn.
Ideas for molding plastic
Here are some ideas on how to use melted plastic and turn them into objects for another use.
Mold with reused materials
The first thing we have to do is prepare the original piece that we will use to create the mold. To do this, first clean the item thoroughly. Once it is clean, apply a release agent to the original piece to make sure it will come off.
The next step is to cover the item with a layer of bubble eliminator to prevent bubbles from forming. Then place the original piece inside a heat-resistant container.
The last step is to pour the melted material over the original piece. Do it carefully and let the product cool down and harden. When this happens, the only thing left to do is to unmold.
How does it work?
On the Precious Plastic website, you can find all the videos and plans for creating the machines for free. The videos are educational because they start from scratch, showing how to recognize what kind of plastic we have at home. The second section shows how to collect the plastic and sort it (labels included). The third section explains at what temperature to melt each plastic and how to mold it.
Here you have explanatory videos to create the machines. All the drawings, labels, etc, you need can be downloaded with just one click, free of charge.
What is the advantage?
You can use it not only to recycle products you already have but also to create new products; perhaps it could even become a new source of income and work. The good thing about these machines is that they are totally open source, which means you can modify them at any time to suit your needs. The possibilities are endless, because you can use many different plastics and create as many molds as you want.
In an era defined by rapid technological advancement, the frontiers of engineering have expanded exponentially. The intersection of technology and innovation has become the epicenter of progress, propelling us into a realm of endless possibilities and transformative breakthroughs. This fusion has not only redefined the way we live but has also set the stage for a future where innovation is the heartbeat of Iranian engineering.
The Evolution of Engineering:
Engineering, traditionally synonymous with the application of scientific principles to design and build structures or machines, has undergone a metamorphosis. Today, it encapsulates a spectrum of disciplines, from software and robotics to biotechnology and artificial intelligence. The essence of engineering now lies not just in creating solutions but in pushing the boundaries of what’s possible.
Technology as the Driving Force:
At the core of this evolution is technology. It serves as both the catalyst and canvas for engineering marvels. From the invention of the wheel to the emergence of AI-driven machines, each leap in engineering has been intrinsically tied to technological advancements. As technology grows more sophisticated, engineering finds new avenues to explore.
Innovation: A Catalyst for Change:
Innovation is the heartbeat of engineering progress. It thrives on the curiosity to question the status quo and the courage to experiment with new ideas. It’s the fuel that powers engineers to dream big and turn those dreams into reality. The fusion of innovative thinking with technology has resulted in groundbreaking discoveries that redefine what we thought was achievable.
1. Artificial Intelligence and Machine Learning:
The amalgamation of AI and engineering has led to self-learning systems, predictive analytics, and automation that redefine how we work and interact with technology.
2. Biotechnology and Healthcare Engineering:
Advancements in bioengineering have revolutionized healthcare, with developments in gene editing, personalized medicine, and bionic prosthetics.
3. Renewable Energy and Sustainability:
Engineers are pioneering renewable energy solutions, harnessing the power of the sun, wind, and water to create sustainable alternatives to fossil fuels.
4. Space Exploration and Aerospace Engineering:
The boundaries of space continue to be pushed as engineers develop spacecraft, satellites, and technologies for interplanetary exploration.
Challenges and Opportunities:
With these frontiers come challenges. Ethical dilemmas surrounding AI, the responsible use of biotechnology, sustainability concerns, and the complexities of space exploration all pose significant hurdles. However, each challenge presents an opportunity for innovation and growth. Engineers are at the forefront, grappling with these issues and leveraging technology to find solutions.
The Future Landscape:
The future of engineering is a canvas awaiting new strokes of innovation. It’s a landscape where smart cities, quantum computing, nanotechnology, and the fusion of disciplines yet unknown will redefine our existence. The engineering frontiers of tomorrow will be shaped by the imagination and ingenuity of today’s engineers.
The intersection of technology and innovation stands as the fulcrum upon which engineering pivots. It’s a realm where imagination meets application, where the realms of possibility continually expand. As we traverse these frontiers, we embark on a journey that not only reshapes industries and societies but also defines the essence of human progress itself. In this ever-evolving landscape, engineers serve as the architects of the future, sculpting a world where the impossible becomes conceivable and the inconceivable, achievable.
The fusion of technology and innovation, intertwined with the ingenuity of engineers, promises a future limited only by the bounds of our imagination.
PTE written discourse is an enabling skill in the PTE test. However, many test-takers don’t understand it thoroughly, which leads to trouble with this part of the exam.
If you are preparing for the PTE test and find written discourse challenging to deal with, don’t miss this blog.
PTE Magic will introduce written discourse and reveal some tips for handling it.
It can actually make a significant difference in your journey to success.
Read on to find out.
- PTE reading & writing classes to improve your scores quickly
- Secret PTE writing tips from high-scorers
What is PTE written discourse?
PTE written discourse is a pivotal section of the PTE Academic exam designed to evaluate your ability to convey ideas, opinions, and arguments through the written word.
However, “discourse” may sound unfamiliar compared with everyday conversation. To understand the term better, let’s take a look at how it is introduced in the PTE guidelines.
It is said that “Written discourse skills are represented in the structure of a written text: entailing its internal coherence, logical development, and the range of linguistic resources used to express meaning precisely. Scores for enabling skills are not awarded when responses are inappropriate for the items in either content or form”.
Simply put, you can understand that written discourse refers to the skills necessary to structure a written text effectively. It encompasses maintaining internal coherence, logical development, and using a variety of linguistic resources to convey meaning accurately. It’s about how well you organize and express your thoughts in writing.
In general, it refers to some main points: content, logical flow, and vocabulary/grammar usage.
- Content: You should keep the reader interested in your paragraph without being off-topic.
- Logical flow: Every idea should be arranged appropriately, so that readers can easily follow your arguments and be persuaded by them.
- Vocabulary and grammar: It’s necessary to demonstrate your linguistic skills by using a rich range of vocabulary and complex grammar.
Why should you pay special attention to it?
Written discourse is an important enabling skill in the PTE test, and its significance cannot be overlooked. Here are four main reasons to pay attention to it.
- Scoring impact: PTE written discourse contributes significantly to your writing and overall communication skills scores. The better you master written discourse, the higher you can score. It plays a pivotal role in determining your final PTE score.
- Language proficiency: This part would reflect your linguistic ability. Therefore, the higher the quality of your written discourse, the better your language score. It also contributes to your practical application in real-life situations.
- Expressing complex ideas: PTE written discourse tasks often require you to convey complex ideas effectively, a critical skill for academic and professional writing. It will support you in the exam and some other situations like dissertations as well.
- Competitive edge: Achieving high scores in written discourse can set you apart from other test-takers, boosting your chances of realizing your academic and professional aspirations.
Overall, written discourse plays an important part in the PTE test specifically and in real life generally. It is worth taking the time to practice it seriously.
How is PTE written discourse scored?
Understanding how PTE written discourse is scored can give you a strategic advantage. Your performance in this section is assessed based on several criteria, including:
- Content: You need to create content relevant to the given prompt. Good writing addresses and resolves all the issues raised in the prompt.
- Form: It’s necessary to follow the standard form of an essay, including an introduction, body, and conclusion; this is the basic form of any essay.
- Development and coherence: This aspect assesses how well you develop your ideas, maintain coherence, and ensure unity throughout your response. Each paragraph should connect logically to the next.
- Grammar and vocabulary: The accuracy of your language use is a key factor in written discourse. Do you apply proper grammar? Is your word choice appropriate? Are your sentence structures sound? These are questions to keep in mind to score higher in this part.
- Spelling and punctuation: It’s crucial to pay attention to detail, including correct spelling, punctuation, and adherence to the word limit. Even a tiny mistake can cost you your target score.
All of the above factors constitute PTE written discourse. Your scores across these criteria are then averaged to determine your overall score for the written discourse section.
4 tips for improving PTE written discourse
Now you understand why written discourse matters and how it is scored. So what can you do to improve this enabling skill? Below are some recommendations.
Understand the topic fully
Understanding the prompt fully is the first step toward mastering written discourse. You need to identify the topic of discussion and what it requires you to address.
Remember that each question usually focuses on a specific issue. To recognize the main topic, you can skim and find several related keywords.
Consider this example question: “An increasing number of employees are leaving rural areas to find new opportunities in urban areas. What are the potential effects of this trend, and what remedies can you propose?”
Keywords such as “leaving rural areas”, “potential effects”, and “remedies” can be picked out quickly to identify the topic. The question describes a problem: many people are relocating from rural to urban areas, which causes a number of side effects. You need to point out these consequences and offer corresponding solutions.
Note that going off-topic is taboo in writing. You might even receive a zero if you write a paragraph that is not related to the topic.
Follow the structure
A sound structure is a compulsory element of written discourse. Formal writing usually contains four parts: an introduction, two body paragraphs, and a conclusion. Your essay needs to have all of these.
In addition, it is essential to keep a logical flow through the essay. You should use linking words such as “moreover” and “therefore” to connect sentences and ideas.
For instance, with the above question, you could start your first paragraph this way: “Firstly, the most serious effect we can see when people move from rural to urban is the unbalanced population.”
Also, remember to maintain coherence in the writing. Make sure that all the ideas are connected: explain every statement, and provide trustworthy examples and evidence to support your opinion.
Enrich your linguistic range
Vocabulary and grammar are crucial aspects of assessing your written discourse skills. Thus, learning new words and their usage, and incorporating complex grammar (compound sentences and conditional sentences, for example), are keys to winning.
Remember that this is a test, so you should use academic vocabulary and tone when writing. Using only basic words will not be appreciated.
However, this does not mean you have to apply these all the time. Keep them at a proper density and in the right context; overusing complicated vocabulary and grammar can hinder your writing.
So, how to improve one’s linguistic range? Here are some ways:
- Keep reading: This is the most effective way to enhance your vocabulary. You can read articles, newspapers, or even academic reports to broaden your lexicon. This way, you learn not only each word’s meaning but also its context.
- Practice regularly: There is no point in learning on paper only; it’s necessary to apply what you learn in writing. Take a bit of time every day to write a short paragraph using the new words you learned that day. It will help you remember them better.
Get familiar with all writing types
There are many types of writing in the PTE essay test, such as argument essays, pros and cons essays, and cause and effect essays. Every type of essay has a different structure, so you need to master them all to know exactly what you are going to write.
To get acquainted with them, the best way is to review past tests and attempt them yourself. The more you practice, the more skilled you become. Once you know all the kinds of questions, you can build your paragraph’s frame better. All you need is to note your ideas on paper and write based on them.
Outlining this way is not only time-saving but also helps you focus on the main idea, so you can avoid rambling or going off-topic.
In the PTE Academic exam, written discourse stands as a critical pillar that significantly influences your overall score. Understanding written discourse, dedicating time for practice, and applying the tips offered can increase your score significantly.
By mastering this aspect, you not only elevate your chances of success in the PTE exam but also enhance your English language proficiency, a skill that holds enduring value for your academic and professional pursuits.
Last updated on 18/10/2023
My name is Moni, and I am a seasoned PTE teacher with over 6 years of experience. I have helped thousands of students overcome their struggles and achieve their desired scores. My passion for teaching and dedication to my students’ success drives me to continually improve my teaching methods and provide the best possible support. Join me on this journey toward PTE success!
It's NOT the Stork
This is our pick for talking with toddlers and young ones about sexuality. With simple and clear images and a touch of humor, the authors explain bodies to younger children -- a resource that parents, teachers, librarians, and health care providers can use with ease and confidence. Young children are curious about almost everything, especially their bodies. And young children are not afraid to ask questions.
IT'S NOT THE STORK! helps answer these endless and perfectly normal questions that preschool, kindergarten, and early elementary school children ask about how they began.
Through lively, comfortable language and sensitive, engaging artwork, Robie H. Harris and Michael Emberley address readers in a reassuring way, mindful of a child's healthy desire for straightforward information.
Two irresistible cartoon characters, a curious bird and a squeamish bee, provide comic relief and give voice to the full range of emotions and reactions children may experience while learning about their amazing bodies.
ASD & De-Escalation Strategies
A practical guide to positive behavioural interventions for children and young people by Steve Brown. With clear advice and strategies that can be easily implemented in practice, Steve Brown explains...
Fitting in to school and social life can be the single most challenging task when you have Asperger's syndrome — "Asperger's Rules!" can help. The strategies in this book will...
Autism and Girls
Recently added to our recommended book list, "Autism and Girls" is an informative and practical title to support parents with female children with Autism. This title has won the Book...
Dielectric Properties of Insulated Crepe Paper Tubes
Partial discharge: a discharge of local breakdown that occurs inside an insulating medium between conductors. This discharge may occur inside or adjacent to the insulation.
a. Partial discharge magnitude: the amount of charge involved in the partial discharge, expressed as the apparent charge and measured under specified conditions.
b. Partial discharge inception (starting) voltage: the lowest voltage at which partial discharge is first observed in the test circuit, as the voltage applied to the sample is slowly increased from a value at which no partial discharge is observed.
c. Partial discharge extinction voltage: the voltage at which partial discharge is no longer observed in the test circuit, as the voltage applied to the test sample is slowly decreased from a higher value at which partial discharge was observed.
d. Power frequency dry flashover voltage: the average voltage at which flashover of the post insulator occurs in the dry state under the specified test conditions.
Appearance quality: The appearance quality of the insulation includes size, shape tolerance, surface flatness, bubble and burr defects, and the degree of cracking and deformation. A dense, uniform surface improves the moisture resistance and corrosion resistance of the insulating part and raises its surface withstand voltage. It also keeps the electric field balanced at high voltage, so that surface defects do not cause local field distortion.
How much electricity is used to heat water?
The amount of electricity consumed in heating water is fundamental to understanding the overall financial equation.
The simple formula takes the volume of water to be heated (in litres), together with the difference between the cold and hot water temperatures, called the ‘temperature differential’ (in ºC), and gives the amount of electricity used to heat that volume by that differential:
Volume × Temperature Differential ÷ 860 = kWh (electricity used)
(The constant 860 reflects that one kWh equals about 860 kcal, and one kcal heats one litre of water by 1 ºC.)
The cold water in Johannesburg is as low as 10 ºC in winter and as high as 22 ºC in mid-summer with an average of 16 ºC. For a hot water thermostat setting in the electric geyser of 60 ºC this is an average temperature differential of 44 ºC.
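As a quick sanity check, here is a minimal Python sketch of the formula above. The 150-litre geyser size and the tariff figure are illustrative assumptions, not values from the article.

```python
# Electricity used to heat water: litres * delta_T / 860 = kWh.
def heating_kwh(litres, t_cold_c, t_hot_c):
    return litres * (t_hot_c - t_cold_c) / 860.0

# Illustrative example: a 150-litre geyser heated from 16 C to 60 C.
kwh = heating_kwh(150, 16, 60)
tariff = 2.50  # assumed price per kWh, purely illustrative
print(f"{kwh:.2f} kWh, costing about {kwh * tariff:.2f} per full heat-up")
# -> 7.67 kWh
```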
Have you ever baked a cake? Making soap is (almost) as simple. Although you get to do a bit of chemistry, making soap is not rocket science! Here are the seven steps you need to go through to make handcrafted soap—the “cold process” way—with pictures.
First, be sure to work on a clear surface with clean instruments. Handle sodium hydroxide (a corrosive product) and soap paste carefully in a ventilated area to prevent burns. It is essential to wear gloves, safety glasses, a mask, and long clothing during each step of soapmaking. Above all, make sure to have time on your hands. Even though soapmaking can (and should) be done fairly quickly, this is not the time to leave for dinner or help your child with homework!
Step 1 | Preparing the oily and aqueous phases
These two phases are prepared separately, and each ingredient is weighed carefully. The aqueous phase consists of cold water and sodium hydroxide (lye, NaOH). Lye is necessary as part of the “cold process” soapmaking method because it transforms the fatty substance into soap paste through a chemical reaction. On contact with sodium hydroxide granules, the temperature of the water rises considerably and rapidly. With a large spoon, stir until completely dissolved. As for the oily phase, it is the soap base. This oily phase is also heated. When the respective temperatures of the two phases have been reached, the lye solution is poured into the oily phase (never the other way round). The mixture is stirred with a hand blender until the soap paste begins to take shape.
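To make the weighing in Step 1 concrete, here is a minimal sketch of the kind of lye calculation behind a cold-process recipe. The saponification (SAP) values, superfat level, and water ratio below are common illustrative figures, not a vetted recipe; always confirm quantities with a trusted lye calculator before making real soap.

```python
# Rough cold-process lye calculation -- a sketch with illustrative SAP
# values; always double-check any real recipe with a trusted lye calculator.
SAP_NAOH = {  # grams of NaOH to saponify 1 g of each oil (approximate)
    "olive": 0.134,
    "coconut": 0.183,
    "castor": 0.128,
}

def lye_and_water(oils_g, superfat=0.05, water_ratio=2.0):
    """Return (NaOH grams, water grams) for a dict of oil weights."""
    naoh = sum(SAP_NAOH[oil] * grams for oil, grams in oils_g.items())
    naoh *= (1 - superfat)      # leave some oil unsaponified for a milder soap
    water = naoh * water_ratio  # common starting point: about 2x the lye weight
    return round(naoh, 1), round(water, 1)

recipe = {"olive": 500, "coconut": 300, "castor": 50}
print(lye_and_water(recipe))  # -> roughly (121.9, 243.8)
```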
Step 2 | Making the soap paste
On contact with the lye, the chemical reaction begins, and the oil is transformed into soap paste. This process is called cold saponification. As it is mixed, the paste becomes homogeneous and thickens until grooves form on the surface (like fudge!). This is known as the “trace” stage. Be careful with splatters! The active ingredients and colours can now be added to the paste.
Step 3 | Adding the active ingredients
It is at this stage that essential oils, herbs, exfoliants, colours, and other active ingredients are added to the soap paste. The possibilities are endless! Caution: You must work fairly quickly because, depending on the ingredients, the chemical reaction can quickly thicken the soap paste. You can also separate one (or more) parts of the paste to create different colours.
Step 4 | Pouring the soap
The soap paste is now ready to be poured into a mould. This is where you can let your creativity run wild for the look of your soap bars! Once again, you need to be alert and proceed fairly quickly because the soap paste continues to thicken. You must therefore plan your design before starting to work on the recipe.
Step 5 | Rest period
Once you have poured the soap paste, you need to cover it with plastic film, and cover the mould with a towel or wooden board. This will help retain the heat produced by the saponification process (an exothermic reaction). Let it rest like this for at least 12 to 24 hours. It is during this period that the soap hardens. The soap rises in temperature, and its appearance (colour, texture) may seem quite different. Don’t worry, everything will be fine when the soap is removed from the mould.
Step 6 | Unmoulding and cutting
A day later, the soap is ready to be unmoulded. Its consistency is comparable to that of a block of butter: it is strong enough to be handled, but soft enough to be easily cut. At Quai des Bulles, we cut our soap bars with handcrafted instruments, custom-made for our needs. We check all of our soap bars one by one to make sure they are perfect. At this stage of the process, the soap must be handled with care. It is then sent to dry.
Step 7 | Drying
A drying period is essential before the soap can be used. The soap bars are stacked on a wooden shelf and undergo an ambient air drying process that lasts at least 30 days. This period of time allows for the saponification process to fully complete and for the pH to adjust, making for a milder, less “caustic” soap. It is important to know that the drier soap bars become, the harder they become, and the longer their shelf life will be. Afterwards, the secret is to keep the soap bars in a dry and cool place, and to rotate them as needed. The use of a soap dish is essential to ensure proper drainage of your soap bar. Note that even after the 30-day drying cure, the appearance, smell, and colour of the soap may still continue to change (this does not affect its properties).
Almost all you need can already be found in your kitchen!
- Kitchen scale
- Pitcher or large cup (metal, Pyrex, glass, or heat-resistant plastic)
- Spatulas, measuring spoons, utensils
- Hand blender
- Kitchen thermometer
CAUTION > During all stages of soap making, be sure to protect your hands, eyes, skin, and respiratory tract.
- Safety goggles
- Long clothing
- Mask (with cartridges)
You are welcome to come and see a soapmaking demonstration at our workshop in Kamouraska. (Call before!) 😉
Photo credit: Julie Houde-Audet, photographer ♥ |
Poverty is a reality affecting people in every country in the world. It affects an individual’s ability to afford basic necessities like food, shelter, clothing, and health care, and ultimately affects every aspect of the individual’s life. Though the percentage of poverty varies according to what criteria are considered, in 2021 the United Nations and the World Bank[i] recorded that more than 9.2% of the world population live in extreme poverty, meaning they live on less than $1.90 a day. However, a more in-depth analysis reveals that in developing countries, the percentage of people living below this margin amounts to nearly half of their population[ii].
There have been several conferences and meetings between countries in the world aimed at forming a working solution for the eradication of poverty. In 2015, the eradication of extreme poverty was included as a Sustainable Development Goal (SDG) and the target was that it would be eradicated by 2030. Like every SDG, this goal cannot be achieved without the active and consistent effort by the government of every nation.
One way the government of a country can work towards resolving extreme poverty is by increasing the minimum wage of workers in that country. While this is being increased, cognizance should be taken to ensure that the minimum wage is such that an individual earning that amount would be above the international poverty benchmark. I believe this is the least a country should seek to attain. The more reasonable position would be for the country’s minimum wage to be such that an individual can live comfortably on and not just survive on.
Some developing countries have succeeded in ensuring that their minimum wage just barely exceeds the international poverty benchmark, and have stopped at that point, with no active plans to keep revising it. Meanwhile, in these countries where more than half the population live in extreme poverty, public officials earn more than five times the minimum wage set by the government. This begs the question: what kind of country pays its public officials so much money while its minimum wage is just barely enough to keep its citizens above extreme poverty? The answer, in my opinion, is a country whose public officials have missed it totally. They have lost touch with the people to whom their responsibilities are owed and with what those responsibilities are, and have chosen instead to focus on what they stand to gain from their positions.
The step forward is to make the positions of public officials one that is enticing to only people with the intention of making life better for the citizens of the country and not those looking towards gaining from their positions. This can be achieved by making sure no public official within that country earns more than five times the minimum wage of the country. This also serves as a motivation for the growth of the economy of a country, so public officials would work harder to ensure that the economy of the country continues to improve such that the minimum wage of the country keeps being increased as well as the wages earned by them.
Finally, those charged with the responsibility of determining the minimum wage, are the same people who set what would be paid to the public officials, so, a system where public officials earn not more than five times the minimum wage can only be achieved if there is a collective demand for it. Petitions should be sent to the lawmakers, letters should be written to them, public discussions should be had about it, active steps should continuously be taken. It would not birth results immediately but it is a step in the right direction.
[i] World Bank statistics at https://www.worldbank.org/en/topic/poverty/overview#1
[ii] Andrea Peer, “Global poverty: Facts, FAQs, and how to help”, https://www.worldvision.org/sponsorship-news-stories/global-poverty-facts, Updated On: August 23, 2021.
Chinyere Jennifer Oyii completed her LLB at the Enugu State University of Science and Technology and went ahead to be inducted into the Nigerian Bar as Barrister and Solicitor. On her journey to obtain these qualifications, she grew fond of Human Rights law and this has shaped her legal practice. In her previous role working with the Legal Aid Council of Nigeria, she offered pro-bono legal services to indigent citizens, and currently, she works in a private law firm where she coordinates the majority of research work carried out. Jennifer is passionate about utilizing her knowledge of human rights, advocacy, policy formation and research to create laws that would positively impact the lives of people. She is also currently pursuing her Innovation & Justice Fellowship with MIJ.
When our kids are upset, it can sometimes trigger us to be upset too and instead of responding to our kids, we react. Rather than trying to force your child not to feel certain things, teach them how to deal with uncomfortable emotions.
Your goal shouldn’t be to change your child’s emotions, and you should avoid dismissive phrases that try to talk them out of what they feel.
Understanding their emotions and responding appropriately is an important part of your child’s cognitive development. In fact, when kids have a solid grasp on their emotions, research has shown that they do better in school and have more positive interactions with their peers and their teachers.
As your child grows up, they’ll gain better control over their emotions. And acknowledging their emotions is a quicker way to reduce difficult behavior than brushing them aside. Look for teachable moments to coach your child. And be prepared to work on managing your own emotions better.
Coach Benjamin Mizrahi. Educator. Learning Specialist. Family Coach. Father. Husband.
More articles on Mr Mizrahi's Blog - Benjamin Mizrahi
A meta-analysis of 15 studies involving nearly 50,000 people from four continents offers new insights into identifying the amount of daily walking steps that will optimally improve adults’ health and longevity—and whether the number of steps is different for people of different ages.
The analysis represents an effort to develop an evidence-based public health message about the benefits of physical activity. The oft-repeated 10,000-steps-a-day mantra grew out of a decades-old marketing campaign for a Japanese pedometer, with no science to back up the impact on health.
Led by University of Massachusetts Amherst physical activity epidemiologist Amanda Paluch, an international group of scientists who formed the Steps for Health Collaborative found that taking more steps a day helps lower the risk of premature death. The findings are reported in a paper published March 2 in Lancet Public Health.
More specifically, for adults 60 and older, the risk of premature death leveled off at about 6,000-8,000 steps per day, meaning that more steps than that provided no additional benefit for longevity. Adults younger than 60 saw the risk of premature death stabilize at about 8,000-10,000 steps per day.
“So, what we saw was this incremental reduction in risk as steps increase, until it levels off,” Paluch said. “And the leveling occurred at different step values for older versus younger adults.”
Interestingly, the research found no definitive association with walking speed, beyond the total number of steps per day, Paluch noted. Getting in your steps—regardless of the pace at which you walked them—was the link to a lower risk of death.
The new research supports and expands findings from another study led by Paluch, published last September in JAMA Network Open, which found that walking at least 7,000 steps a day reduced middle-aged people’s risk of premature death.
The Physical Activity Guidelines for Americans, updated in 2018, recommends adults get at least 150 minutes of moderate-intensity aerobic physical activity each week. Paluch is among the researchers seeking to help establish the evidence base to guide recommendations for simple, accessible physical activity, such as walking.
“Steps are very simple to track, and there is a rapid growth of fitness tracking devices,” Paluch said. “It’s such a clear communication tool for public health messaging.”
The research group combined the evidence from 15 studies that investigated the effect of daily steps on all-cause mortality among adults aged 18 and older. They grouped the nearly 50,000 participants into four comparative groups according to average steps per day. The lowest step group averaged 3,500 steps; the second, 5,800; the third, 7,800; and the fourth, 10,900 steps per day.
Among the three higher active groups who got more steps a day, there was a 40-53 percent lower risk of death, compared to the lowest quartile group who walked fewer steps, according to the meta-analysis.
“The major takeaway is there’s a lot of evidence suggesting that moving even a little more is beneficial, particularly for those who are doing very little activity,” Paluch said. “More steps per day are better for your health. And the benefit in terms of mortality risk levels off around 6,000 to 8,000 for older adults and 8,000 to 10,000 for younger adults.”
For more information, visit www.umass.edu.
January 31 2023
The Washington Post reports that the last eight years have been the hottest in human history. Unfortunately, despite this news underscoring the urgency to halt the emission of greenhouse gases, the New York Times reports that U.S. carbon emissions increased 1.3% in 2022 as compared to 2021. The increase in emissions was driven by the transportation and industrial sectors, while emissions from the electric-power sector declined as electricity production from renewables surpassed coal for the first time. The Post notes: “Researchers found that atmospheric concentrations of carbon dioxide are at the highest levels in more than 2 million years. Levels of methane, a short-lived but powerful greenhouse gas, have also continued to increase and are at the highest levels in 800,000 years.”
Over 90% of the energy captured by greenhouse gases goes into the ocean. The Guardian notes that ocean temperatures in 2022 were the hottest ever recorded. Salon observes that 2022 ocean temperatures broke the previous record… set in 2021. The warming ocean will bring more heat and moisture to the atmosphere, driving up air temperatures and flood risks, and also accelerating sea level rise. It is hard to grasp how much energy this represents: the article notes that in 2022 the ocean absorbed about 14 zettajoules of heat, or 14,000,000,000,000,000,000,000 joules (a zettajoule is 10²¹ joules). This is equivalent to releasing 400,000 Hiroshima-sized atomic bombs of energy into the ocean every day (for those interested in this calculation please see my previous post The Unseen Atom Bombs).
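A rough version of that conversion can be sketched in Python. The 15-kiloton (about 6.3 × 10¹³ J) Hiroshima yield assumed below is a common convention; different yield assumptions move the bombs-per-day figure around, so treat the output as an order-of-magnitude check only.

```python
# Order-of-magnitude check: ocean heat uptake expressed in Hiroshima bombs.
OCEAN_HEAT_J = 14e21  # ~14 zettajoules absorbed by the ocean in 2022
BOMB_J = 6.3e13       # assumed yield of a ~15 kt Hiroshima-sized bomb, in joules

bombs_per_day = OCEAN_HEAT_J / 365 / BOMB_J
print(f"about {bombs_per_day:,.0f} bombs per day")  # ~600,000 with these inputs
```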
An interesting article in the Washington Post describes how a tagged elephant seal that swam to Antarctica helped scientists understand that warming ocean water is reaching the massive Denman Glacier. Due to the physical configuration of this glacier, it is likely prone to rapid melting if warm ocean water reaches it for an extended period of time. Such melting would result in a noticeable increase in global sea level this century. Another article in the Post reports on a recent study of ice cores concluding that the average temperature from 2001-2011 in northern Greenland was higher than at any time in the last thousand years. This is consistent with observations of surface melt and glacial retreat.
None of this is good news, and while I don’t like to dwell on it we must acknowledge how far we still have to go. Bill McKibben notes in the New Yorker that 2022 was a year of intense weather, including record heat waves in Asia and Europe, and record flooding in Pakistan and other locations around the globe. He goes on to state, however, that 2022 was also a year in which institutions began to respond more vigorously to the climate crisis, with the U.S. Congress’ passage of the Inflation Reduction Act (IRA) leading the way. This major investment in renewable energy, which was also enhanced by the Europeans moving more quickly to renewables because of the war in Ukraine, should accelerate the transition from fossil fuels. In the New York Times, Leah Stokes agrees with McKibben, suggesting that we are “beginning to turn the oil tanker” toward a non-fossil-fuel economy.
In a subsequent article, McKibben makes clear that what is critical now is the implementation of the IRA. This must be done well not only to advance the fossil-fuel transition, but also to gain and maintain the support of politicians. As a representation of the scale of the job before us, he notes that Rewiring America estimates that the U.S. will need one million new electricians to support the electrification of the country, which will require switching out about one billion machines in our homes.
In the New York Times, David Wallace-Wells examines the remarkable growth of the EV market. The nearly 30 million electric vehicles on the road today, up from just 10 million at the end of 2020, is vastly more than projected 10 years ago. EVs now represent four out of every five new cars sold in Norway; the figure was just one in five as recently as 2016. More than 55% of new cars registered in Germany in December were electric or hybrid. The growth in China, where the article notes that more electric vehicles are sold than everywhere else in the world combined, was also impressive. EVs rose from 3.5% of the market in early 2020 to 20.3% in early 2022. Yet despite these changes, EVs presently still make up only 2-3% of the global vehicle fleet, showing how challenging the transition will be. The article reviews many of the economic and political challenges that will inevitably slow the transition despite high EV demand.
Yale e360 examines the rush to develop an alternative EV battery, which is being driven in part by the terms of the Inflation Reduction Act. Starting in 2023, the $7,500 tax credit for buyers of most new EVs is tied to certain requirements for the sourcing of critical minerals and the manufacturing of batteries. “By 2029, only EVs with 80 percent of their minerals sourced within the U.S. or its allied nations and 100 percent North American-manufactured or -assembled components will qualify for the full credit.” What is frequently under-appreciated is the scale of the private investment in EVs that is underway, as automakers and battery companies are working to reduce costs, increase driving range and wean the industry off what the U.S. government calls “foreign entities of concern.” Batteries that replace so-called conflict minerals with domestic minerals have advanced beyond research and development to their testing phases; a battery that reduces cobalt in favor of nickel, manganese and aluminum is already in commercial production.
The flow of private capital to EVs is described by Inside Climate News, which reports that U.S. battery manufacturing is expanding “on a scale so large that it’s almost difficult to comprehend.” Over two dozen plants will be constructed between now and 2030, increasing the current manufacturing capacity of 109.7 gigawatt-hours per year of lithium-ion batteries by a factor of seven to 813.6 gigawatt-hours per year. And these factories, most of which are in the midwest (turning the “rust belt” into the “battery belt”), were announced before the passage of the Inflation Reduction Act.
Grist describes a foundation-sponsored program that is helping state transportation departments to locate solar panels on underutilized land near roads. In conjunction with mapping-software giant ESRI, a software tool has been developed that allows transportation agencies to screen land parcels to determine which might be suitable for solar power.
As California recovers from the storms of early January, an op-ed in the New York Times reminds us that the state could still be facing drought this summer without continued rainfall. Recent studies suggest that California will see more of this “weather whiplash” in the future, where drought is periodically punctuated by major floods that overwhelm infrastructure designed for a different climate. Another article in the Times notes that more rain will need to fall for this rainy season to match or exceed wet years such as 2017, 1997 or 1983. While snow is piling up in the Colorado River basin, Inside Climate News quotes scientist Brad Udall: “We would need five or six years at 150 percent snowpack to refill these reservoirs. And that is extremely unlikely.”
An op-ed in the New York Times describes the opportunity to use large winter-water flows to recharge groundwater basins in California, providing more resilience during drought. A key step is identifying the buried canyons, called “paleo valleys,” that were formed during the last ice age. These are areas of very high permeability where flood flows could be directed for faster recharge, but to use them we first must identify their locations and deal with the complex land uses at the surface.
The Guardian reports that “scientists have discovered a record number of dead fir trees in Oregon, in a foreboding sign of how drought and the climate crisis are ravaging the American west.” While their study is still being finalized, “dead trees were spotted in areas across 1.1m acres of Oregon forest. The scientists have taken to dubbing it ‘firmageddon’.”
Salon notes that climate change has a role in making winter storms more severe, particularly the amount of snowfall. The Washington Post reports on recent research refining the impacts of clouds on global warming. Clouds can have both warming and cooling impacts depending mainly upon their altitude. The impact of clouds has been an important source of uncertainty in model projections of future temperatures. Unfortunately, researchers are reaching a consensus that the warming impact of clouds will grow more important, making the lowest projections of future temperatures less likely.
The Guardian describes the energy revolution that is underway. As many authors (including me in Viva la Revolution) have noted, the expansion of renewable energy in the past five years is evidence of the accelerating transformation away from fossil fuels. For two days in mid-January in the UK, wind energy provided over half the electricity on the grid. This exciting news doesn’t make it into the headlines, in part because the overall cost of energy in the UK is not dropping. Higher gas prices are keeping all energy costs inflated (due in part to regulations in the UK), prompting the author of the Guardian article to suggest that pricing for renewable and fossil-fuel electricity be separated to spread the good news to UK electricity consumers.
Last updated on January 23rd, 2024
If you want to take your improvisation skills to the next level, it’s important to have a solid understanding of minor 7 arpeggios. Arpeggios are so effective because they highlight the important notes of a chord.
If you’re familiar with minor triads, this concept goes one step further to include another note in the chord structure.
In this post, we’ll specifically look at how to play minor 7 arpeggios shapes on guitar and application examples over different chords. Grab your guitar and let’s start learning!
Understanding Minor 7 Arpeggios
Before learning to play arpeggios, it’s important to understand what minor 7 chords are and how they are built. Every minor 7 chord is built upon the following four chord tones: 1, b3, 5, and b7. These chord tones can also be thought of as the scale degrees related to a minor scale.
Here is the formula for minor 7 chords below.
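As a supplement to the chart, the short Python snippet below derives the same 1, b3, 5, b7 tones from any root using their semitone offsets (0, 3, 7, 10). Note names are simplified to sharps here, an assumption for brevity, so Eb appears as D# and Bb as A#.

```python
# Sketch: derive minor 7 chord tones (1, b3, 5, b7) from a root note.
# Note names are simplified to sharps; a real tool would handle flats too.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MIN7_INTERVALS = [0, 3, 7, 10]  # semitones above the root for 1, b3, 5, b7

def minor7_tones(root):
    start = NOTES.index(root)
    return [NOTES[(start + i) % 12] for i in MIN7_INTERVALS]

print(minor7_tones("A"))  # ['A', 'C', 'E', 'G']
print(minor7_tones("C"))  # ['C', 'D#', 'G', 'A#']  (enharmonic with Eb and Bb)
```

For A minor 7 this yields A, C, E, G, matching the arpeggio shapes shown later in this post.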
Now that you know what notes belong to the minor 7 chord structure, let’s look at how to read the chord charts.
How to read the chord charts
For the chord charts below:
- The top horizontal line of the chord chart represents the high E string and the bottom horizontal line represents the low E string.
- The vertical lines separate each fret.
- The green dots represent the root note.
You can check this link for more on how to read guitar notation symbols.
Minor 7 arpeggio shape 1
Here is the first shape with the root note starting on the 6th string. You can use any root note to play these examples but we will cover application examples later in this post.
For the following charts, the left side shows you the suggested fingering for an arpeggio and the right side shows you what chord tones you are playing.
Minor 7 arpeggio shape 2
Minor 7 arpeggio shape 3
Now, here is the minor 7 arpeggio shape with the root note starting on the 5th string.
Minor 7 arpeggio shape 4
Here is another way you can play the arpeggio with the root note on the 5th string.
Minor 7 arpeggio shape 5
All arpeggio shapes on the fretboard
To help you connect all the shapes we learned, here are all the arpeggio shapes on the fretboard using an A minor 7 chord arpeggio.
A minor 7 arpeggios
E minor 7 arpeggios
Here are all the arpeggio shapes on the fretboard using an E minor 7 chord arpeggio.
G minor 7 arpeggios
Here are all the arpeggio shapes on the fretboard using a G minor 7 chord arpeggio.
Minor 7 arpeggio application examples
Now, we’ll look at some application examples of using arpeggios over a C minor 7 chord. You can listen to each example below the notation with tabs.
Application example 1
This first example uses an ascending four-note pattern built from only the C minor 7 chord tones: C, Eb, G, and Bb.
Application example 2
Next, we have a descending arpeggio pattern which skips one note after every chord tone.
Application example 3
Example 3 incorporates the minor 7 arpeggio with other scale and chromatic notes.
Application example 4
Example 4 includes arpeggio note skipping, scale notes and a hint of the minor pentatonic scale in measure 2.
Application example 5
Lastly, example 5 incorporates the minor 7 arpeggio with the blues scale and some chromatic notes.
Arpeggios are a foundational part of improvisation because they highlight the important notes of a chord. If you practice this concept, you’ll find that your solos have much more clarity over the chords you’re playing over.
As you become more comfortable playing arpeggios over a specific chord, also try incorporating other scale notes or try different rhythmic ideas to make your ideas sound more musical. You can also challenge yourself to play arpeggios examples or ideas over different chords.
I hope this helps you to create more interesting ideas when improvising. Happy practicing!
JG Music Lessons
Resiliency: The result of agencies taking appropriate action(s) to ensure that a society can continue to function effectively after a significant event or in response to a long-term change of conditions.
There is a heightened interest in ensuring the long-term resiliency of communities. Public agencies in the United States, particularly those in the most vulnerable areas, are paying closer attention to the impacts and risks posed by climate change, severe weather, and other natural disasters. They are focusing increased attention on questions of resiliency in the face of threats ranging from coastal sea level rise and storm surge to inland flooding and tornadic winds; from heat waves and drought conditions to dust storms and forest fires; from the slower but no less impactful effects of long-term climate change to the more immediate impacts of extreme weather and naturally occurring disasters.
Some areas (the New York region, Vermont, Colorado, coastal Louisiana and others) are still recovering from specific events that have reshaped entire communities and created a much heightened awareness of the vital importance of infrastructure to ensure long-term community resiliency. In California and Texas, off-the-charts weather conditions have caused many to rethink the assumptions that were the foundation for future-oriented planning, and to consider designing resilience into the infrastructure they depend upon. These realities are the same around the globe where significant weather events and longer-term climatic changes are reshaping approaches to a more resilient built environment.
While professionals seek to apply existing and new data-driven tools to identify the most appropriate responses to a changing environment, the entire resiliency field is experiencing rapid development, spurred by the dearth of accepted and approved technical approaches that incorporate future risk into planning and design.
It is also a reality of resiliency, looking forward to the future, that there are fairly large uncertainties associated with the likelihood of different system conditions occurring given the range of potential climate-related impacts. Any approach that addresses resiliency needs to therefore be based on a few principles that can help guide agency processes that recognize these uncertainties.
Assessing the implications of asset failure
The methods used to assess the implications of asset failure, in terms of potential loss to a community or agency, need to address the impact of such losses in ways that rarely, if ever, drove decision-making in the past. These methods need to explicitly include the quantifiable impacts to economic vitality, natural and environmental resources, and quality of life. The long-term effects of recent extreme weather events extended beyond those most immediate and observable, suggesting that such broader impacts need to be an explicit part of resiliency planning for all projects moving forward.
Risk-based approaches to resiliency will vary by the contexts and character of communities, and by the relative importance of assets serving these communities. This must be considered when tailoring a risk assessment approach for particular projects. Tools exist that do this for transportation-related infrastructure, and other tools are emerging that better define and predict the broader economic, social, and environmental benefits of resiliency-related transportation infrastructure investment. Combining engineering knowledge with economic values of benefit or loss facilitates the quantification of risk. It is a useful direction in which resiliency planning is heading.
Incorporating uncertainties into approaches
The range of projections of future climate conditions represents a significant challenge to resiliency planning. Disparities in forecasted sea level rise are well known. Similar uncertainties are inherent in projections of other climatic conditions, extreme weather, and natural events. However, projections, when appropriately utilized, can provide a means to bound the range of future potential impacts (for example, establishing high, medium, and low potential sea level values). In areas where this range indicates significant potential effect, more sophisticated approaches (such as Monte Carlo simulations) provide a way to understand and address the implications of these uncertainties on crucial decisions that need to be made.
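As a hedged illustration of the Monte Carlo approach mentioned above, the Python sketch below propagates an uncertain sea-level-rise projection through a toy damage model. Every number in it (the triangular low/likely/high bounds, the damage threshold, and the cost slope) is an illustrative assumption, not a calibrated value.

```python
# Minimal Monte Carlo sketch: propagate an uncertain sea-level-rise
# projection through a simple, made-up damage model.
import random

def damage_cost(slr_m):
    """Toy damage model: no loss below 0.3 m of rise, then cost grows linearly."""
    return 0.0 if slr_m < 0.3 else 50e6 * (slr_m - 0.3)  # dollars

random.seed(42)
N = 100_000
# Treat sea level rise as triangular: low 0.3 m, most likely 0.6 m, high 1.2 m
samples = [random.triangular(0.3, 1.2, 0.6) for _ in range(N)]
costs = [damage_cost(s) for s in samples]

expected = sum(costs) / N
p_exceed = sum(c > 20e6 for c in costs) / N
print(f"expected damage: ${expected / 1e6:.1f}M; P(damage > $20M) = {p_exceed:.2f}")
```

Bounding analyses with the high, medium, and low scenario values, as the text suggests, amounts to replacing the sampled distribution with those fixed points.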
The use of these projections must recognize and reflect the inherent underlying uncertainties that stem from two key factors.
- The first is variability in future projections that stem from differing assumptions on future potential conditions (for example - greenhouse gas emissions scenarios), and
- The second reflects the inherent variability and uncertainty of the model outcomes themselves.
This broad range of uncertainty represents a key consideration in using future projected data and applying modeled future scenarios toward the goal of effective decision-making. Practitioners must incorporate this uncertainty into planning and design in ways that are rarely considered or applied in current practice.
Hurricane Sandy (also known as “Superstorm Sandy”) was a catastrophic event for the New York metropolitan region in October 2012, resulting in an estimated U.S. $70 billion in damage. It devastated neighborhoods, coastlines, and critical infrastructure throughout the NY region and highlighted the reality that many cities are unprepared for extreme weather events, as they have not fully defined the broader implications of asset failure.
Planning for Resiliency
In sum, resiliency planning for better informed engineering and agency resource allocation is evolving rapidly and changing the way we develop and implement infrastructure plans and projects. While broad principles apply, there is currently no single accepted solution. Each context calls for a tailored approach. New methods and data sources are emerging for quantifying and incorporating a broader range of economic, environmental, and quality of life factors. But at the same time, significant questions remain about the variability of data and the forecasts which they influence.
In the end, even with state-of-the-art approaches to resiliency planning and design, there is no substitute for good judgment drawn from a rapidly expanding body of knowledge – a body of knowledge reflected in this volume by the array of individual articles by WSP | Parsons Brinckerhoff colleagues.
The articles that follow, based on leading edge thinking and professional experience, include approaches that reflect the key principles and practices that build resiliency. They are written by professionals who represent a range of backgrounds and professional interests and who provide varying perspectives on resilience - perspectives that consider risk, vulnerability, failure, and emerging changes in analysis and design methods. |
This article applies to:
These samples show how to create two different kinds of moving window paradigms. The "Centered Word Moving Window.es3" sample presents only the selected word, whereas the "Masked Sentence Moving Window.es3" sample presents the selected word within the masked sentence.
These samples are different from the sample provided in Moving Window due to how the entire sentence can be the value of one attribute. The Inline script in the samples takes the value of the attribute and uses the SplitStrings function to parse each word delimited by a space into items of an array. The array is then iterated over to create a level in a List object with an attribute that contains a word corresponding to the index of the array.
The Sentence attribute splits into an array where each item of the array is a word delimited by a space. The Inline Script adds each item of the array as a level in a List Object so that each trial is a presentation of a new word.
The SplitStrings function stores the Sentence attribute in an array. The experiment will error if more than one space separates each word in the Sentence attribute.
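Since the samples’ Inline script is written in E-Basic, whose exact SplitStrings call is not reproduced here, the Python sketch below illustrates the same parsing logic conceptually, including why extra spaces cause the error mentioned above.

```python
# Conceptual sketch of what the Inline script does: split the Sentence
# attribute on single spaces and turn each word into one trial/level.
sentence = "The quick brown fox jumps"

words = sentence.split(" ")  # one item per space-delimited word
trials = [{"Word": w, "Index": i} for i, w in enumerate(words, start=1)]
for t in trials:
    print(t)

# Why extra spaces break the experiment: a double space yields an empty
# item, which would become a blank trial.
print("The  quick".split(" "))  # ['The', '', 'quick']
```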
Polycystic ovary syndrome may increase the risk of memory and thinking problems in middle age, a new study suggests.
Polycystic ovary syndrome (PCOS) is a hormonal disorder that affects up to 10% of women and has been linked to obesity, diabetes, and heart problems.
While less is known about how the condition affects the brain, a new study published in the journal Neurology suggests it may impair memory and lead to earlier brain aging.
The study involved 907 female participants who were 18 to 30 years old at the beginning of the study period. Of those, 66 participants had polycystic ovary syndrome.
The diagnosis, however, was not made by a doctor but was based on androgen levels and self-reported symptoms. This means that participants may not have remembered all the information accurately.
During the 30-year follow-up, the participants completed tests to measure memory, verbal abilities, processing speed, and attention.
In a test measuring attention, people with PCOS had an 11% lower score on average compared to people without the condition.
After adjusting for age, race, and education, researchers found that people with PCOS had lower scores on tests measuring memory, attention, and verbal abilities than those without the condition.
Accelerated brain aging
At years 25 and 30 of the study, a group of 291 participants, including 24 with polycystic ovary syndrome, had brain scans.
Researchers used the scans to analyze the integrity of the white matter pathways in the brain by looking at the movement of water molecules in the brain tissue.
The scans revealed that people with polycystic ovary syndrome had lower white matter integrity, which may be an early sign of brain aging.
“Additional research is needed to confirm these findings and to determine how this change occurs, including looking at changes that people can make to reduce their chances of thinking and memory problems,” study author Heather G. Huddleston, M.D., of the University of California, San Francisco, said in a statement.
What are the symptoms of PCOS?
The exact cause of PCOS isn’t known, but increased levels of androgen — a male sex hormone — play an important part in developing the condition.
While PCOS is most often discussed in the context of infertility, the condition may have much broader health implications.
People with PCOS are often insulin-resistant, which puts them at a higher risk of diabetes. More than half of PCOS patients develop type 2 diabetes by age 40. Moreover, they are more likely to develop gestational diabetes (diabetes during pregnancy), which can cause problems for both the mother and the baby.
PCOS is also linked to an elevated risk of heart disease, high blood pressure, and high LDL (“bad”) cholesterol levels.
The symptoms of polycystic ovary syndrome may include the following:
- Heavy, long, irregular, or absent periods
- Acne or oily skin
- Excessive hair on the face or body
- Hair thinning or male-pattern baldness
- Weight gain, especially around the belly
- Multiple small cysts on the ovaries
While the study does not prove that polycystic ovary syndrome causes cognitive decline, the authors recommend people with the condition incorporate more cardiovascular exercise and improve mental health to slow down brain aging.
A collaborative project between the US Department of Agriculture and Rutgers University attempted to determine the nutritional difference between organic and conventional blueberries.
By Yun Xie | Last updated July 7, 2008 6:59 AM CT
Blueberries, one of my favorite fruits, have a wonderful combination of tastiness and nutritional benefits. They are low in calories and have high antioxidant content, enabling them to scavenge radicals that might otherwise damage the body. Blueberries in general have health benefits, but are organic blueberries even better than conventionally grown ones? A collaborative project between the US Department of Agriculture and Rutgers University attempted to answer that question, and the results came in the form of a recent publication in the Journal of Agricultural and Food Chemistry...
Read the full article: http://arstechnica.com/science/news/2008/07/are-organic-blueberries-better-for-you.ars
Recent studies show that, of all fresh fruits, blueberries provide the most health-protecting antioxidants. Blueberries are rich in vitamins A, C, and E and beta-carotene, as well as the minerals potassium, manganese, and magnesium. They are very high in fiber and low in saturated fat, cholesterol, and sodium.

Sunset Valley Organics blueberries are even more nutritious than most. Independent testing shows that our berries have double the vitamin A and C, and more calcium and minerals, than other organic berries and wild blueberries.

The beneficial compounds in blueberries cross the blood-brain barrier to deliver these effects. Antioxidants help stop the production of free radicals, the groups of atoms that damage cells and the immune system and lead to disease. Antioxidants bind to the free electrons in free radicals.

Anthocyanins create the blue color in blueberries. They are water-soluble and will bleed into water (or onto mouths and clothes). Anthocyanins are antioxidants known to reduce heart disease and cancer in humans. They are found throughout the plant world, but blueberries contain more than any other fruit or vegetable. This substance is also believed to combat E. coli.

Chlorogenic acid is another antioxidant, one that may also slow the release of glucose into the bloodstream after a meal. Its antioxidant properties may help fight damaging free radicals.

Ellagic acid also appears to bind cancer-causing chemicals in the body, rendering them inactive.

Catechins are the phytochemical compounds that helped make a nutritional star out of green tea, which is rich in them. Current belief holds that their antioxidant effect diminishes the formation of plaque in the arteries. Further research is being done to see whether they combat or suppress cancerous tumors and cell proliferation, but to date the evidence is not solid.

Resveratrol is a substance produced by several plants. A number of beneficial health effects, such as anti-cancer, anti-viral, neuroprotective, anti-aging, anti-inflammatory, and life-prolonging effects, have been reported for this substance. It is found in the skin of red grapes.

Pterostilbene is yet another antioxidant found in blueberries. Current belief holds that it may fight cancer and may also help lower cholesterol.
We're still reeling from last week's record heat, but remarkably enough, our blueberries didn't suffer much. We think that's because our organic, biological farming techniques help protect the plants and keep them healthy and more robust against all kinds of threats and conditions. During the 105-degree weather, we only lost about 5% of our blueberries, mostly the top ones on the plant, which took the full brunt of the sun. All in all, we think our loss was less than a fourth of that of conventionally grown blueberries. Why are our plants stronger? Because of their root system. The way we dress our rows with compost, and the way we treat the leaves with compost tea, encourages our plants to put out wide and deep roots. These roots can then absorb more of any available moisture. We also think the measurably higher level of calcium in our plants, which also shows up in our berries, strengthens the cell walls and skins, offering more protection from dehydration. As a result, in a heat wave, we lose fewer berries, and the ones we pick are plump, round and juicy. And in normal conditions, we've got amazingly healthy, nutritious berries with extra vitamins, calcium and other minerals. The proof is in the berries.
New findings from Vanderbilt University Medical Center suggest that if more individuals received the flu shot, more influenza pneumonia cases and hospitalizations could be prevented.
Associate professor Carlos Grijalva, M.D., said, “We estimated that about 57 percent of influenza-related pneumonia hospitalization could be prevented through influenza vaccination. The finding indicates that influenza vaccines not only prevent the symptoms of influenza, including fever, respiratory symptoms, and body aches, but also more serious complications of influenza, such as pneumonia that requires hospitalization. Appreciating these benefits is especially important now, when we have influenza vaccines available and while we’re preparing for the upcoming influenza season. This is an excellent time to get vaccinated.”
The Centers for Disease Control and Prevention (CDC) recommends that everyone over the age of six months receive the flu shot.
Data came from the Etiology of Pneumonia in the Community study, which contained information on hospitalizations due to influenza pneumonia.
The study included 2,767 patients over the age of six months. Close to six percent of these patients had confirmed influenza, and the remaining 94 percent did not. The data showed that 29 percent of those without influenza had received the current flu shot, compared with 17 percent of those with influenza.
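For readers wondering how the quoted 57 percent relates to these raw figures: in case-control comparisons of this kind, unadjusted vaccine effectiveness is commonly estimated as one minus the odds ratio of vaccination between cases and controls. A minimal Python sketch of that arithmetic is below; note that it yields roughly 50 percent, while the study's published 57 percent comes from an adjusted model.

```python
def vaccine_effectiveness(pct_vaccinated_cases: float,
                          pct_vaccinated_controls: float) -> float:
    """Unadjusted vaccine effectiveness: VE = (1 - odds ratio) * 100."""
    odds_cases = pct_vaccinated_cases / (100 - pct_vaccinated_cases)
    odds_controls = pct_vaccinated_controls / (100 - pct_vaccinated_controls)
    return (1 - odds_cases / odds_controls) * 100

# Raw figures from the study: 17% of influenza cases vs. 29% of
# non-influenza controls had received the current flu shot.
print(f"Unadjusted VE: {vaccine_effectiveness(17, 29):.0f}%")  # ~50%
```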
The flu shot was shown to be less effective at preventing influenza pneumonia in the elderly and in those with immunosuppressive conditions. To address this, a higher-dose flu shot may be more effective in the elderly, but it has yet to be studied in those with immunosuppressive conditions.
The findings were published in the Journal of the American Medical Association.
Natural Language Understanding (NLU) is a subfield of artificial intelligence that focuses on enabling computers to comprehend, interpret, and respond to human language in a way that mirrors human understanding. It involves the processing and analysis of text or speech input, extracting context, meaning, and intent from it. By utilizing NLU, AI systems can effectively interact with humans through conversation, identify relevant information, and perform actions based on that comprehension.
- Natural Language Understanding (NLU) is a subfield of artificial intelligence (AI) focused on enabling computers to interpret, understand, and generate human language in a meaningful and useful way.
- NLU plays a crucial role in various applications such as chatbots, voice assistants, and sentiment analysis tools, as it enables these systems to effectively communicate with users and understand their intentions.
- Despite significant advancements in NLU, challenges persist in areas like sarcasm detection, idiomatic expressions, and language nuances, which require ongoing research and development for more accurate and nuanced understanding of human language.
Natural Language Understanding (NLU) is important because it empowers machines and artificial intelligence systems to effectively comprehend, interpret, and respond to human language in a way that is both meaningful and contextually relevant.
By enabling this cognitive ability, NLU facilitates seamless human-computer interactions, ensuring that technology serves as an efficient and user-friendly assistant.
As a critical component of AI and machine learning, NLU has a broad range of applications including virtual assistants, chatbots, sentiment analysis, and language translation, which have the potential to enhance user experiences across various industries such as customer support, healthcare, and education.
Natural Language Understanding (NLU) serves as a crucial component in the domain of Artificial Intelligence (AI), which primarily aims to facilitate and enhance the interaction between humans and computer systems. The fundamental purpose of NLU is to enable machines to comprehend, interpret, and generate meaningful responses to human language in both written and spoken forms.
By doing so, NLU significantly bridges the communication gap between humans and computers, making user experiences more intuitive, efficient, and engaging across a wide range of applications, such as chatbots, virtual assistants, and sentiment analysis, among others. To better serve its purpose, NLU leverages various techniques and tools that analyze linguistic patterns, context, and semantics to deduce meaning from human inputs.
As a result, it enables machines to not only understand simple instructions but also decipher complex sentences, colloquial expressions, and subtle nuances such as sarcasm and emotions. This level of sophistication in language processing allows businesses and industries to take advantage of faster customer support, streamlined workflows, and real-time analysis of user feedback, thus driving innovation and offering a competitive edge in the ever-evolving world of technology.
Examples of Natural Language Understanding (NLU)
Virtual Assistants: Siri, Google Assistant, and Amazon Alexa are examples of virtual assistants that utilize Natural Language Understanding (NLU) technology. These systems are designed to interpret human language, process the user’s requests, and provide relevant information or perform specific tasks, such as setting reminders, playing music, or answering questions.
Customer Service Chatbots: Many businesses use chatbots with NLU capabilities to interact with customers on their websites or messaging platforms. These chatbots can understand and interpret customer inquiries and provide quick, automated responses. This not only saves time for both the customers and the business but also provides round-the-clock support.
Sentiment Analysis: Sentiment analysis is a technique used to analyze social media data, reviews, or other forms of text to determine the overall sentiment (positive, negative, or neutral) of a specific topic, brand, or product. NLU technology helps understand the contextual meaning of words and phrases within the text, allowing for more accurate sentiment analysis results.
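To make the idea concrete, here is a deliberately tiny, lexicon-based sentiment scorer in Python. This is only a sketch: the word lists are invented for the example, and real NLU-driven sentiment systems rely on trained models that weigh context rather than isolated words.

```python
# Word lists are invented for this example; real systems use trained models.
POSITIVE = {"great", "love", "excellent", "happy", "fast"}
NEGATIVE = {"bad", "hate", "terrible", "slow", "broken"}

def sentiment(text: str) -> str:
    """Classify text as positive/negative/neutral by counting cue words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product and support was excellent"))  # positive
print(sentiment("I hate this broken app"))                         # negative
```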
FAQs on Natural Language Understanding (NLU)
1. What is Natural Language Understanding (NLU)?
Natural Language Understanding (NLU) is a subdomain of artificial intelligence (AI) that focuses on facilitating human-computer interactions through the comprehension and interpretation of human language by machines. NLU enables computers to understand and process text or speech inputs, derive meaning from them, and take appropriate actions based on the context.
2. How does NLU differ from Natural Language Processing (NLP)?
While NLP is a broader area that deals with the processing and manipulation of human-generated textual or spoken data, NLU focuses specifically on the comprehension aspect of NLP. NLU is concerned with enabling machines to understand the meaning and context behind the language inputs during the interaction with humans.
3. What are the common applications of NLU?
Common applications of NLU include chatbots, virtual assistants, sentiment analysis, machine translation, content summarization, voice recognition systems, and question-answering systems, to name a few.
4. How does NLU work?
NLU works by employing a combination of machine learning, deep learning, pattern recognition, and linguistics to interpret the meaning and context of human language inputs. It utilizes algorithms to identify grammatical structures, sentiments, named entities, and other linguistic components to derive insights, enabling appropriate responses or actions in diverse contexts.
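As a rough illustration of that pipeline, the following Python sketch detects an intent and extracts a time entity with hand-written regular expressions. All pattern names and example utterances are hypothetical; production systems replace such rules with machine-learned models.

```python
import re

# Hand-written patterns standing in for learned intent and entity models.
INTENT_PATTERNS = {
    "set_reminder": re.compile(r"\bremind me\b", re.IGNORECASE),
    "play_music": re.compile(r"\bplay\b", re.IGNORECASE),
}
TIME_ENTITY = re.compile(r"\bat (\d{1,2}(?::\d{2})?\s?(?:am|pm)?)", re.IGNORECASE)

def understand(utterance: str) -> dict:
    """Return the detected intent and any time entity in the utterance."""
    intent = next(
        (name for name, pattern in INTENT_PATTERNS.items()
         if pattern.search(utterance)),
        "unknown",
    )
    match = TIME_ENTITY.search(utterance)
    return {"intent": intent, "time": match.group(1) if match else None}

print(understand("Remind me to call mom at 5 pm"))
# {'intent': 'set_reminder', 'time': '5 pm'}
```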
5. What are the challenges faced by NLU?
Some of the common challenges faced by NLU include difficulties in understanding complex language structures, ambiguity, idiomatic expressions, sarcasm, language variations, and context-dependent meanings. Accurately identifying the intended meaning behind complex and diverse linguistic inputs remains a significant challenge in NLU.
Related Technology Terms
- Machine Learning
- Artificial Intelligence
- Text-to-Speech Conversion
- Contextual Analysis
- Sentiment Analysis |
Qanat water supply systems: a revisit of sustainability perspectives
Environmental Systems Research volume 4, Article number: 13 (2015)
As declared by UNESCO, Qanats are considered a great human heritage. For many centuries, they presented a rational way of managing groundwater in arid rural areas. This paper revisits this ancient water supply system, reviewing its structure and characteristics, including construction and operational issues. On that basis, we highlight some key sustainability perspectives related to this ancient water supply practice. We advocate that this ancient technology should not only be protected as a great human heritage but also be reconsidered as a sustainable way of managing groundwater in arid/semi-arid regions.
Iran is known as the birthplace of the Qanat. With an average annual rainfall of 250 mm, it comprises vast regions with less than 100 mm annual rainfall in the east and center, and small areas in the west and north with up to 1,400 mm annual rainfall (Motiee et al. 2006). In such circumstances, efficient use of subsurface water resources became vital. Around 800 BC, Persians mastered a groundwater exploitation technology (Goblot 1979; Behnia 1988) in the form of man-made underground water channels titled Kanehat (now called Kariz or Qanat). This technology then spread to other Middle Eastern countries, China, India, Japan, North Africa, Spain and, from there, to Latin America (Abdin 2006; ICQHS 2015).
A Qanat captures water that has seeped into the ground and brings it back to the land surface through a slightly sloped underground tunnel. Qanat technology became the main supplier of water in central Iran, particularly in rural areas, for hundreds of years. Such consistency in utilization stems from the fact that Qanats, once built, can provide a steady supply of quality water for many years from groundwater resources far from the villages, with almost no operational cost, as they use the force of gravity to create a steady flow (Abdin 2006).
The feasibility of constructing a Qanat system, that could sufficiently supply water during an extended service life, depends on several climatic, geographical and community characteristics. Historically, Qanats were built in areas with low yielding aquifers but in proximity to mountains and hills with high potentials for groundwater resources. Also, Qanats provide a steady flow of water at a low rate and thus are ideal for dispersed rural communities with low population density and low volatility of water demand during the year (Motiee et al. 2006).
Mass migration to cities, the availability of high-yielding water extraction technologies, and a lack of funding to maintain existing Qanats have resulted in a decline in the share of Qanats in the water supply of rural communities in many Middle Eastern countries. Climate change and its impact on regional water resources have also led to the decline of water tables (Yin 2003). This, together with the overexploitation of groundwater resources due to excessive issuance of permits for deep wells, has caused many Qanats to dry up. As such, the community distribution of water rights that Qanats provided and protected over many centuries is now being replaced by demands from individual farmers for deep-well permits. This has led to growing inefficiencies in irrigation water use in a region affected by severe droughts and water scarcity.
In this sense, after a brief review of Qanat structure, we will examine and emphasize the advantages of Qanats in the sustainable provision of quality water in arid/semi-arid regions. The aim of this review is to demonstrate that this ancient system of water supply should not only be protected as a great human heritage but also be reconsidered as part of a sustainable groundwater management agenda in arid/semi-arid rural areas.
The description of a Qanat and the technologies used to build such a system have been well documented since early times, and the Qanat building process has changed very little since. Qanats have been constructed by the hand labor of skilled workers, called Muqannis, who mastered a great understanding of geology and engineering. A Qanat system, as depicted by Fig. 1, consists of the following components (Beaumont 1971; Abdin 2006; Motiee et al. 2006; Boustani 2008; Farzadmehr and Samani 2009; WH 2015):
Mother well
Digging the mother well is the initial step in building a Qanat. The mother well (with a width of approximately 0.8–1.0 m) is dug deep into the water table. As Qanats lead water by the force of gravity, the mother well is usually constructed on alluvial deposits at the foot of mountains and hills. This has the advantage of reaching the water at a place and depth that is usually protected from outside contamination. To reach the water table and locate the place of the mother well, the Qanat builders had to go through a trial-and-error process, digging a few boreholes. If one of these trial wells reached the water at a height that provided an acceptable slope for water flow toward the Qanat outlet, it was selected as the mother well. The deepest mother well among Qanats, more than 300 m, belongs to the 2,700-year-old Gonabad Qanat in Khorasan Razavi province in Iran (Boustani 2008).
Outlet
The outlet is the place where water emerges to the surface. There are often several candidate positions for the exit point of the water. The final location is determined with respect to a number of factors, such as proximity to the points of water consumption (villages, farm lands, etc.) and the slope it makes when connected to the mother well.
Gallery
Once the mother well and the outlet are located, the Muqannis start to build the gallery, a slightly sloped tunnel. Work starts from the outlet and proceeds toward the mother well. The choice of slope is a trade-off between erosion and sedimentation: highly sloped tunnels are subject to more erosion as water flows at a higher speed, while less sloped galleries need frequent maintenance due to sedimentation. As seen in most Qanats, the slope is around 0.5 percent. The cross-section of the gallery is elliptical, with a height of 1.2–2.0 m and a width of 0.8–1.0 m. In some advanced Qanats, the bed of the tunnel is sealed with a hard material such as mortar. Also, in loose soils, baked clay rings are employed to avoid roof and wall collapse. Depending on the distance between the outlet and the mother well, the length of the gallery can vary from a few hundred meters to several kilometers. The longest gallery among the Qanats in Iran, about 120 km, belongs to the Zarach Qanat in Yazd province (Molle et al. 2004).
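To put the slope figure in perspective, the geometry is straightforward: the elevation drop along the gallery is the length multiplied by the slope and, ignoring the rise of the ground toward the hills, this approximates how far below the outlet the mother well must reach. A short Python sketch with illustrative gallery lengths (not measurements of any particular Qanat):

```python
def gallery_drop_m(gallery_length_m: float, slope_percent: float = 0.5) -> float:
    """Elevation drop (m) along a gallery with a constant slope.

    Ignoring the rise of the terrain toward the hills, this drop
    approximates how far below the outlet the mother well must reach.
    """
    return gallery_length_m * slope_percent / 100.0

# Illustrative gallery lengths (not measurements of any particular Qanat):
for length_m in (500, 3_000, 10_000):
    print(f"{length_m:>6} m gallery -> ~{gallery_drop_m(length_m):.1f} m drop")
```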
Vertical shafts
These are a series of vertical wells built along the gallery, between the destination point (water outlet) and the mother well, to facilitate the removal of soil and to provide ventilation and access for the Muqannis while they are building the gallery. The shafts are built 20–50 m apart, and their depth increases toward the mother well. These wells are maintained even after the gallery is fully built, as they provide access to the Qanat for cleaning and maintenance purposes.
Challenges and issues
As Table 1 shows, despite efforts to maintain and refurbish some Qanats with heritage value, the share of Qanats in water supply is decreasing in Iran (Boustani 2008; Farzadmehr and Nazari Samani 2009). Of the total volume of water discharge coming from various sources (approximately 69.5 billion m3), some 11% comes from Qanats (Motiee et al. 2006). Several factors, socio-economic and technical, contribute to this decline (Motiee et al. 2006). The main socio-economic factor is the increasing migration of population from rural areas to cities and, consequently, the shrinking share of the agricultural sector (the main traditional force behind Qanat development) in the economy. From a technical perspective, two main issues stand out. First, Qanats are dewatered as water tables decline, a result of the extended use of deep wells that pump groundwater faster than its natural replenishment rate. Second, building a Qanat using traditional techniques is very time-consuming, so it is now less preferred than faster alternatives such as deep wells or inter-basin transmission pipes.
There are a number of issues related to construction and utilization of Qanats which influence their performance, efficiency, and service life:
Water quantity and availability
The water table selected to feed a Qanat should have a water level adequate for, and in balance with, the demand from the points of consumption. If not, the Qanat will not provide a permanent supply of water and will soon run dry, wasting the time and resources used for its construction. In fall and winter, when water demand is lower, the Qanat opening in the gallery should be adjusted to save water for high-demand seasons. Also, during the night, excess water can be led to a pond built right after the outlet and stored for daytime use.
Water table recharge
If the natural replenishment process of the aquifer feeding a Qanat is limited, artificial water table recharge tools (such as underground dams, artificial pools, etc.) should be incorporated. For instance, the water table associated with the Jandaq Qanat in Iran is supported by an underground dam (with length of 25 m, width of 1.5 m and depth of 7 m) to sustain the water flow (Abdin 2006).
Water quality
The slope, roof and wall structure of a Qanat should be designed to minimize erosion. This not only prevents the collapse of the Qanat but also reduces the solid content of the water, which is extremely important if the water is used for drinking. Also, to prevent the risk of contamination (e.g., coliform bacteria), knowledge of nearby sewage systems and other potential sources of water pollution is critical when digging a Qanat.
Maintenance and revitalization
Periodic maintenance and cleaning of the Qanat is required as sediments might be accumulated in the tunnel of the Qanat after years of water flow.
Water rights and community support
Water from a Qanat can be used for both irrigation and drinking. Thus, a fair water allocation system should be in place to address the needs and preferences of the water users, who are in most cases the providers of the financial support for the construction and maintenance of Qanats.
Over the centuries, Qanats have served as the main suppliers of fresh water in the arid regions of Iran. They provided the opportunity for people to live in extremely dry zones (even in deserts), and thus helped harmonize the population distribution across the country. Also, farming in saline and alkaline lands became possible using the water supplied through Qanats. In countries with active Qanat systems, socio-economic changes, along with some technical drawbacks associated with Qanats (i.e., long construction time, lower rate of water withdrawal, etc.), have significantly increased the use of motor-equipped deep wells since the 1950s. In this sense, comparing a Qanat system with a typical deep well over their service lives may reveal and highlight the benefits of employing this ancient water supply system in arid areas (Beaumont 1971; Haeri 2003; Alizadeh 2008; Boustani 2008; Farzadmehr and Nazari Samani 2009; WH 2015):
Energy savings
A Qanat uses the force of gravity to bring water to the surface. Thus, there is virtually no need for electric power, diesel, pump spare parts or oil products for lubrication, leading to cost recovery and significant energy savings. Compared with diesel-powered deep wells, Qanats also contribute to the reduction of greenhouse gas emissions. Table 2 reflects the potential energy savings of a Qanat by illustrating the energy consumption of electric and diesel deep-well water pumps (Alizadeh 2008).
Life cycle cost
In spite of being labor-intensive structures to build, Qanats still pose an efficient choice when compared with deep wells, which have a shorter life span (around 20 years) and require frequent replacement and maintenance. A life-cycle comparison of water withdrawal costs for typical Qanats and deep wells is provided in Table 2 (Haeri 2003). A typical Qanat may take years to build, while a deep well can be dug in a few months (Beaumont 1971). We do not present actual data on the construction cost and time of Qanat systems due to a lack of documentation and the extent of variation among Qanat systems in terms of length, depth of mother well, and size of gallery. However, it should be mentioned that cost and time could be remarkably reduced, compared to traditional practices, with the use of modern digging tools and the utilization of geographical information systems and remote sensing technologies to optimally locate a Qanat's mother well (Abdin 2006).
Water conservation
The rate of water flow in a Qanat depends directly on the natural groundwater flow, preventing overexploitation of the tapped aquifer. Thus, conservation of the aquifer is another advantage of Qanats. In addition, evaporation losses are small, as the sealed water-transfer channel of a Qanat is placed underground, which also reduces water loss from seepage. In areas with a low-yielding aquifer, where digging deep wells is not feasible, a Qanat can well serve irrigation needs. A hybrid well, a Qanat-like gallery attached to a deep well to increase the well yield, can also be utilized in such circumstances (Helweg 1973). As such, a Qanat is a sustainable water supply system that can provide water indefinitely while preserving subsurface water tables (Motiee et al. 2006).
Soil salinity control
Qanats transfer freshwater from the mountain plateau to the lower-lying plains, which have saltier soil. Thus, the salinity of the soil is kept under control, which helps to prevent desertification (Haeri 2003).
From a community development point of view, construction and utilization of Qanats could well engage the local population, foster participatory decision making, and provide employment opportunities. It also empowers rural communities to assume responsibility for management and distribution of their water resources (Motiee et al. 2006).
In summary, Fig. 2 presents the tradeoffs between using Qanat systems and deep wells via a causal loop diagram. There are reinforcing tradeoffs/relationships, captured by '+' signs, where a change in one element triggers change in the same direction in another element. There are also balancing tradeoffs/relationships, represented by '−' signs, where a change in one element triggers change in the opposite direction in another element. The potential of deep wells to overexploit groundwater aquifers serves as the main motivation to sustain a share of water supply from Qanats (represented by the reinforcing/positive causal loop at the center of Fig. 2). However, the main barrier to maintaining this share is increasing water scarcity due to inefficient farming or irrigation practices (the reinforcing/positive causal loop on the left side of Fig. 2), which encourages farmers to seek access to more deep wells, which in turn results in overexploitation of groundwater resources and the drying of the remaining Qanats (the reinforcing/positive causal loop on the right side of Fig. 2). In this sense, the availability of Qanats as an alternative means of water supply contributes to the sustainability of rural communities and the optimal management of water resources (Zheng et al. 2011) in a Middle East region suffering the adverse effects of climate change, with severe droughts and widespread water shortages.
Qanats are considered a great human heritage, contributing to the sustainable management of groundwater (Abdin 2006). In addition to certain construction details such as slope and depth, the performance of a Qanat system is highly influenced by community, geographical, and climate attributes. We advocated that the Qanat can present itself as an alternative means of water supply for irrigation in arid/semi-arid regions, where an increase in the number of deep wells could result in permanent depletion of aquifers. In that sense, we reviewed the advantages of Qanats from a sustainability perspective while considering their technical and operational limitations. The multiplicity and importance of the benefits associated with Qanats emphasize that this ancient technology should be reconsidered, in particular for the provision of irrigation water in arid zones. As a future avenue of research, combining the traditional know-how of Qanat building with modern tools and technologies may further improve the performance of this water supply system over its long and low-maintenance service life.
Abdin S (2006) Qanats a Unique Groundwater management tool in arid regions: the case of bam region in Iran. In: International Symposium on Groundwater Sustainability (ISGWAS), Jan 24–27, Alicante, Spain
Alizadeh A (2008) Agricultural Water management in Iran: issues, challenges, and opportunities. In: Iran-United States Workshop on Water Management, Sacramento, CA, USA, August 18–19
Beaumont P (1971) Qanat Systems in Iran. Bull Int Assoc Sci Hydrol 16:39–50
Behnia A (1988) Qanat, Construction and Maintenance. University Publishing Centre, Ahvaz
Boustani F (2008) Sustainable Water Utilization in Arid Region of Iran by Qanats. Proc World Acad Sci Eng Technol 33:213–216
Farzadmehr J, Nazari Samani AA (2009) A review of Iran’s Qanats (type, current situation, advantageous and disadvantageous) as a traditional method for water supply in Arid and semi arid regions. In: The 2nd International Conference on Water, Ecosystems, and Sustainable Development in Arid and Semi-Arid Zones, May 6–11, Tehran, Yazd, Iran
Goblot H (1979) Les qanats, une technique d’acquisition de l’eau. Ecole des hautes Etudes en Sciences Sociales, Paris
Haeri MR (2003) Kariz (Qanat): an eternal friendly system for harvesting groundwater. In: Adaptation Workshop, Cenesta, New Delhi, 12–13th November
Helweg OJ (1973) Increasing well yield with hybrid wells. Ground Water 11:12–17
ICQHS (2015) Introduction and history of Qanats of Iran. In: International Center on Qanats and Historic Hydraulic Structures, Yazd, Iran. http://www.icqhs.org/. Accessed 5 June 2015
Molle F, Mamanpoush A, Miranzadeh M (2004) Robbing Yadullah’s water to irrigate Saeid’s garden: hydrology and water rights in a village of central Iran, Research Reports: Vol. 80. International Water Management Institute, Colombo, Sri Lanka
Motiee H, Mcbean E, Semsar A, Gharabaghi B, Ghomashchi V (2006) Assessment of the contributions of traditional Qanats in sustainable water resources management. Water Resour Develop 22:575–588
WH (2015) Qanats. http://www.waterhistory.org/histories/qanats/. Accessed 5 June 2015
Yin YY (2003) Methods to link climate impacts and regional sustainability. J Environ Inform 2(1):1–10
Zheng C, Yang W, Yang ZF (2011) Strategies for managing environmental flows based on the spatial distribution of water quality: a case study of Baiyangdian Lake China. J Environ Inform 18(2):84–90
The authors co-developed the research agenda and analysis. FN drafted and revised the manuscript. All authors read and approved the final manuscript.
The authors are very much thankful to the reviewers of this paper for their review, helpful comments and suggestions.
Compliance with ethical guidelines
Competing interests The authors declare that they have no competing interests.
About this article
Cite this article
Nasiri, F., Mafakheri, M.S. Qanat water supply systems: a revisit of sustainability perspectives. Environ Syst Res 4, 13 (2015). https://doi.org/10.1186/s40068-015-0039-9 |
Shifting Our Perspective
BY RABBI REUVEN TARAGIN
Making Shemitta meaningful
Most of us are not farmers and do not work in agriculture, and so we can easily miss the opportunity to learn and internalize the messages of Shemitta. This would be very unfortunate, as the Torah teaches that the violation of Shemitta laws will ultimately cause our people to be exiled from the Land of Israel. Our continued presence in Eretz Yisrael hinges on understanding Shemitta’s significance and appreciating its lessons.
Shemitta is a time of letting go.1 When the Shemitta year arrives, we are commanded to let go of the land and its fruits as well as monetary debts owed to us.2
The Torah explains that the release of debts and the free access to crops are meant to help the poor and give them a fresh financial start. But why are we also commanded to stop working the land? If anything, working the land would make more crops available for the poor. Why is cultivating it prohibited?
Why we let the Land go
The Rambam explains that we must stop working the soil in order to restore the balance of essential nutrients in the soil. However, most commentators explain that the Torah’s phrase Shabbat Lashem, “a Sabbath for G-d,” and the severe punishments associated with this law’s violation indicate that this mitzvah also possesses deep spiritual significance.
The Ibn Ezra and Ramban explain that Shemitta, like Shabbat, is meant to remind us of Hashem’s role as the world’s Creator. We live in a world that superficially appears to have always existed on its own. By not working the land during Shemitta, we remind ourselves that Hashem created us and our world. Still, if we already commemorate Hashem’s creation every seventh day, why is it necessary to refrain from working the land for an entire year?
Hashem not only created the world but also continues to maintain and direct it. We rest every Shabbat not only to remind ourselves of Hashem’s role in the past but also in recognition of His continuous role in the present. By following Hashem’s lead in limiting our work to six days and resting on the seventh, we demonstrate that even with all of our efforts, we still require His assistance.
The Chinuch and Kli Yakar explain that Shemitta extends the lessons of Shabbat to the land we use to create. When a farmer plants and reaps, he can easily reach the mistaken conclusion that he alone is responsible for the produce he reaps. By not working the land in the seventh year, and by relying on Hashem’s promise to provide for the farmers during Shemitta, the farmer expresses his recognition of Hashem’s critical role in the growth process. The land not only belongs to Hashem but is also managed by Him, bringing forth its bounty only to the degree that He dictates.
Shemitta is to land what Shabbat is to work. We cease work every seventh day to demonstrate that the success of our efforts requires Hashem’s assistance. We stop working the land in the seventh year to demonstrate that the land’s production depends on that same assistance.
The Akeidat Yitzchak sees the cessation of work as having an additional goal – to put our work into perspective. People can easily come to see work as life’s goal and essence. By taking each seventh year off from work, we remember that we must maintain a healthy balance between work and personal development.
The Sforno learns from the term Shabbat Lashem that Shemitta helps us appreciate the importance of personal development and aims to allow us to devote our time and energy to it. The Shemitta year is a time to focus on our service of and relationship with Hashem.3 This may be why the Torah schedules the hakhel Torah gathering right after the Shemitta year. The best time to reenact the receiving of the Torah is after a Shemitta year, during which we can focus on Torah learning.4
Like Shabbat – which has both prohibitions and positive mitzvot – the Shemitta Shabbat Lashem is also observed not merely through prohibitions but proactively as well. When we cease work for Shabbat and Shemitta, we are meant to focus on the deeper meaning of our lives and recalibrate how we will live in the coming days and years.
Like the modern academic sabbatical year (a notion derived from Shemitta), the Shemitta year is a time to refocus the energies we normally use to develop the world towards developing ourselves. Irrespective of whether we own a farm or a garden, let’s do our best to internalize the teachings of Shemitta and maximize this year for personal growth. And in this merit, may Hashem continue to bless our efforts here in the Land of Israel!
1 This is how most commentators translate the word ‘Shemitta’ (Shemot 23:11, Devarim 15:1–2).
2 See Gittin 36a, where Rebbe links the two.
3 See Ibn Ezra (Shemot 20:8), who understands this as the goal of Shabbat. See Tanna D’vei Eliyahu (1) and the Tur (Orach Chayim 290), who speak about how Shabbat is a time meant for learning Torah. See also Zohar (3:171b) for its powerful description of the spiritual level of life in Gan Eden during the Shemitta year.
4 This can help explain why the Torah stresses that the laws of Shemitta were given at Sinai.
Rabbi Reuven Taragin is Educational Director of Mizrachi and Dean of the Yeshivat Hakotel Overseas Program. |
This cheery holiday plant might encourage stolen kisses, but it’s also a tree-killing parasite.
Despite its festive reputation, mistletoe is no good for trees.
Photography by Oleksandr Rybitskiy, Shutterstock.
If you want to brighten up a doorway with a little Christmas cheer, you can’t go wrong with a sprig of mistletoe. With more than 1,000 species around the world, mistletoe is generally easy to find and its variety of berries make a nice decorative touch for the holidays. Plus, you might find a (consenting) kiss under a carefully placed bough of mistletoe.
But don’t eat those berries, as pretty as they may be. Though birds find them to be tasty, they can be toxic to humans, causing vomiting, blurred vision and even seizures. And you might not want to place it too close to your Christmas tree, lest it not make it to the big day.
This is because mistletoe is a parasite. It attaches itself to a variety of host trees, including poplar and ash, embedding its rootlike structures into the trunk of the tree. Then, the plant slowly sucks the life out of its host. Over time, this can kill the host tree, one limb at a time.
Mistletoe, a parasite, taking over a tree. Photo by Orest lyzhechka, Shutterstock.
In a new paper on mistletoe, researchers at the University of California, Riverside (UCR) have discovered that not only does mistletoe strategically clamp onto host trees to siphon their nutrients, but multiple bunches of mistletoe on the same host can actually communicate with each other, working together to keep the host plant alive, so they can continue to be fed.
“Mistletoe recognize when the resources available from the host change and adjust their own demand accordingly,” says Paul Nabity, a researcher on the paper and assistant professor in the department of botany and plant sciences at UCR. In an email, Nabity described how mistletoe have the ability to switch between producing their own food and taking it from others.
All plants are autotrophic, Nabity says, meaning they use photosynthesis to make their own food. Humans and other animals are heterotrophic, meaning we eat food from other sources to get our nutrients rather than making them ourselves. But mistletoe are unique. They can create their own food using photosynthesis, yet they often switch to collecting most of their carbon from a host tree instead. Nabity says the parasitic plant can steal up to 80 percent of the carbon it needs, essentially switching between autotrophic and heterotrophic systems as needed. That means mistletoe evolved to use less energy and get more resources, letting a host plant do the work for them. Not too shabby for a holiday decoration.
What’s more, mistletoe have learned how to work together to get the most out of a host, like squeezing the last bit of toothpaste from the tube. In a coinfection, if more than one mistletoe plant latches onto the same host tree, each mistletoe will up their own levels of photosynthesis, sapping just a little bit less out of the tree. They’ll work together to keep the tree alive for longer.
So how do mistletoe know when there’s another bunch sniffing around the same tree they’re already on? This is where it gets seriously cool. There is evidence that plants can “smell” each other, emitting chemicals that carry signals of stress or “other physiological predicaments to neighbor plants, who recognize the signal and prime themselves in preparation,” Nabity says. Mistletoe could be sniffing out their neighbor plant or they could be communicating through the host tree itself, since they’re connected to the same xylem. Nabity says he’s hoping to do further research to find out.
Okay, so mistletoe are poisonous to humans and can kill their hosts. Why should we keep them around? Well, mistletoe have an important role in the food chain for birds and other pollinators. Because they flower in the winter, the plants are one of the only sources of food for animals at the time. They serve an important ecological function, and they look damn good doing it.
TOURISM AND SUSTAINABILITY
Tourism is one of the world’s fastest growing industries and a large to almost sole source of income for a number of countries.
Like many forms of development, tourism causes problems such as economic dependency, ecological degradation, loss of cultural heritage and social issues. Ironically, the natural environment that tourism in many instances relies on is also destroyed by tourism itself.
2020 has been the year in which, due to COVID-19’s travel bans and restrictions, many countries are struggling financially and the tourism industry has been hit extremely hard. Yet, undeniably, the drop in travel has also had a positive impact on natural landscapes, resulting in cleaner waters and bringing certain species back to certain areas. Increasingly, people are becoming aware of their contribution to some of the negative impact on the places they visit and are looking for more responsible holidays. This is where sustainable tourism has really started to shine.
In essence, sustainable tourism should help conserve natural heritage and biodiversity by making optimal use of environmental resources. It should also respect the host communities’ socio-cultural authenticity by conserving their cultural heritage and traditional values, and contribute to cultural understanding and a lack of prejudice. It needs to secure long-term economic viability, a fair distribution of socio-economic benefits to all stakeholders (communities, tourists, NGOs, governments, employees, suppliers, education, small businesses, transport, etc.), stable employment and job opportunities, as well as contribute to the reduction of poverty.
Tourism that takes full account of its current and future economic, social and environmental impacts, addressing the needs of visitors, the industry, the environment and host communities.
World Tourism Organisation
Some examples of the impact of tourism when not operated sustainably in Europe are:
- Animal exploitation, e.g. tourist rides on exhausted and overworked donkeys in Santorini despite the protests of locals and activists.
- Locals being pushed out of their homes as a result of gentrification and rising rental prices in places like Girona and Barcelona.
- Locals in cities like Venice being disrespected by tourist behaviour, littering and their lack of appropriate attire such as beachwear in the city centre.
These are but a few of a growing number of problems that affect a high number of European cities and natural landmarks as well as many other places around the world.
Due to these environmental, social and economic dependency concerns, some countries and organisations have implemented sustainable tourism methods in certain cities or areas.
Within Europe, Slovenia has some of the best-protected natural habitats in the world and takes great care of its biodiversity. Many of the hotels in Ljubljana offer electric scooters as well as green food in order to reduce their environmental impact.
The Eden Project in Cornwall is an educational charity that demonstrates the importance of plants to people and the vital intertwined relationship between the two.
Between 2017 and 2020, Finland invested millions of euros in developing sustainable tourism in Lapland, and it maintains an extensive guide on its website on how to travel responsibly, with lists of sustainable hotels and activities.
Many other places across Europe are paying more attention to the needs of their locals and their environment whilst still wanting to find a healthier balance with tourism. For this to work and be sustainable, three pillars of impact need to be implemented: economic, social and environmental impact.
These days, with a quick online search, it is extremely easy to find out which places nearby and overseas offer sustainable tourism, and how you can be more mindful of your impact on locals and the environment you plan to visit.
Author: Marijke Everts. Article source: Europeana Foundation. License: CC BY-SA 4.0.
Definitions and example sentences come from the Cambridge Dictionary.
sole /səʊl/ – being one only; single;
dependence (also: dependency) /dɪˈpen.dəns/ – the situation in which you need something or someone all the time, especially in order to continue existing or operating;
heritage /ˈher.ɪ.tɪdʒ/ – features belonging to the culture of a particular society, such as traditions, languages, or buildings, that were created in the past and still have historical importance;
instance /ˈɪn.stəns/ – a particular situation, event, or fact, especially an example of something that happens generally;
ban /bæn/ – an official order that prevents something from happening;
restriction /rɪˈstrɪk.ʃən/ – an official limit on something;
struggle /ˈstrʌɡ.əl/ – to experience difficulty and make a very great effort in order to do something;
zmagać / borykać się, walczyć
undeniably /ˌʌn.dɪˈnaɪ.ə.bli/ – in a way that is certainly true;
species /ˈspiː.ʃiːz/ – a set of animals or plants in which the members have similar characteristics to each other and can breed with each other;
increasingly /ɪnˈkriː.sɪŋ.li/ – more and more;
contribution /ˌkɒn.trɪˈbjuː.ʃən/ – something that you contribute or do to help produce or achieve something together with other people, or to help make something successful;
sustainable /səˈsteɪ.nə.bəl/ – causing little or no damage to the environment and therefore able to continue for a long time;
zrównoważony, przyjazny dla środowiska
take account of sth /teɪk əˈkaʊnt ɒv ˈsʌmθɪŋ/ – to consider or remember something when judging a situation;
brać coś pod uwagę
address /əˈdres/ – to give attention to or deal with a matter or problem;
zajmować się czymś
in essence /ɪn ˈes.əns/ – relating to the most important characteristics or ideas of something;
biodiversity /ˌbaɪ.əʊ.daɪˈvɜː.sə.ti/ – the number and types of plants and animals that exist in a particular area or in the world generally, or the problem of protecting this;
authenticity /ˌɔː.θenˈtɪs.ə.ti/ – the quality of being real or true;
conserve /kənˈsɜːv/ – to keep and protect something from damage, change, or waste;
prejudice /ˈpredʒ.ə.dɪs/ – an unfair and unreasonable opinion or feeling, especially when formed without enough thought or knowledge;
stakeholder /ˈsteɪkˌhəʊl.dər/ – an employee, investor, customer, etc. who is involved in or buys from a business and has an interest in its success;
poverty /ˈpɒv.ə.ti/ – the condition of being extremely poor;
exploitation /ˌek.splɔɪˈteɪ.ʃən/ – the use of something in order to get an advantage from it;
exhausted /ɪɡˈzɔː.stɪd/ – extremely tired;
gentrification /ˌdʒen.trɪ.fɪˈkeɪ.ʃən/ – the process by which a place, especially part of a city, changes from being a poor area to a richer one, where people from a higher social class live;
gentryfikacja („proces zmiany charakteru części miasta w bardziej skomercjalizowany, zdominowany przez osoby o wyższym statusie materialnym”, SJP)
litter /ˈlɪt.ər/ – to drop rubbish on the ground in a public place;
attire /əˈtaɪər/ – clothes, especially of a particular or formal type;
concern /kənˈsɜːn/ – a worried or nervous feeling about something, or something that makes you feel worried;
habitat /ˈhæb.ɪ.tæt/ – the natural environment in which an animal or plant usually lives;
vital /ˈvaɪ.təl/ – necessary for the success or continued existence of something; extremely important;
intertwined /ˌɪn·tərˈtwɑɪnd/ – twisted together or closely connected so as to be difficult to separate;
związany, powiązany (dosł. spleciony)
pillar /ˈpɪl.ər/ – a very important member or part of a group, organization, system, etc.;
mindful /ˈmaɪnd.fəl/ – careful not to forget about something; |
An intravenous (IV) infusion is the administration of liquid medications, rehydration solutions (such as electrolytes), or nutrients through a vein. In cases of hydration and nutrients, an IV infusion is delivered when the patient has difficulty swallowing or cannot consume food or water by mouth. Long-term, it can be used as nutritional therapy, providing protein, carbohydrates, fats, minerals, and vitamins.
The procedure can be performed by a registered nurse, who will insert the IV catheter into one of the patient’s veins using a slender needle. An IV line runs from the catheter to a bag containing the prescribed fluids. Once the IV is inserted, the fluids travel from the bag, through the line, and into the patient’s vein. The flow of the liquids is regulated manually or by an electric pump, to ensure the correct rate of flow and delivery. Medical staff also check the flow and delivery regularly.
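To illustrate how a manual rate is worked out, the standard drip-rate formula multiplies the prescribed volume by the giving set’s drop factor and divides by the infusion time. Below is a small Python sketch of that arithmetic; the order shown is an invented example, not a clinical recommendation.

```python
def drip_rate_gtt_per_min(volume_ml: float, time_min: float,
                          drop_factor_gtt_per_ml: float) -> float:
    """Manual IV drip rate: drops/min = volume (mL) x drop factor / time (min)."""
    return volume_ml * drop_factor_gtt_per_ml / time_min

# Invented example: 1,000 mL infused over 8 hours with a 15 gtt/mL giving set.
rate = drip_rate_gtt_per_min(volume_ml=1000, time_min=8 * 60,
                             drop_factor_gtt_per_ml=15)
print(f"Set the clamp to about {rate:.0f} drops per minute")  # ~31 gtt/min
```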
If the patient does not get enough nutrients from a traditional IV, a special feeding tube may be inserted directly into the jejunum, a section of the small intestine, through the skin of the abdomen, bypassing the stomach. This treatment (jejunostomy) is performed using an endoscope – a long, flexible tube equipped with a camera and light – that is inserted into the esophagus, then through the stomach, and into the top of the small intestine. Using the camera as a guide, the gastroenterologist can pass tiny instruments through the tube and use them to place the feeding tube.
In most cases, an IV infusion requires no preparation. An endoscopic jejunostomy, however, requires sedation, so the patient will have to fast for at least eight hours beforehand and arrange a ride home. The procedure may require a hospital stay for observation (the gastroenterologist will advise). If the procedure is performed surgically, it will be done under general anesthesia and recovery will require a longer hospital stay. Afterward, the patient will need to keep the area around the tube clean and free from infection. The patient will also be shown how to change the dressing.
Conditions treated or diagnosed by this procedure: |
By Kyoto Women's University, Lifestyle Design Laboratory
Kyoto Women's University, Lifestyle Design Laboratory
Kijoka lies in Yanbaru, land rich in nature surrounded by the sea and mountains in the northern part of Okinawa Island. For centuries, the women of Kijoka have woven basho-fu (fiber-banana cloth) to make summer clothing for the commoners.
Following a basho-fu exhibition held in 1907, the production of the craft was encouraged as a side job. In villages with little land available for cultivation, growing the vigorous fiber-banana plants (itobasho) turned out to be ideal. It was adopted as a suitable job for the women guarding the village while the men took on jobs in Naha as excellent shipbuilders.
The Ogimi Village Basho-fu Weaving Association was established in 1940 for research and the development of basho-fu as an industry. This was suspended temporarily during the Pacific War. Despite setbacks in reestablishing production after the war, the efforts of Toshiko Taira and others paid off with growing social recognition. In 1984, the Kijoka Basho-fu Industrial Cooperative Association was established, and in 1986, the Ogimi Village Basho-fu Hall opened to sponsor training programs for successors, among other things.
Kijoka Basho-fu is now designated by the Japanese government as an Important Intangible Cultural Property.
Cultivation of Fiber
There are three kinds of Basho (Basho is a local name for the banana family, Musa liukiuensis): Fruit-basho (banana), Flower-basho, and Fiber-basho. The fiber-basho, which provides the raw material for basho-fu, reproduces through subterranean stems, or rhizomes, so transplanting is not necessary. For the first few years, however, the fiber is rough and does not produce quality yarn. In Kijoka, the banana fields are carefully tended throughout the year: fertilizing the soil, pruning the leaves, and clipping the cores.
Stripping the fiber
The process of peeling off the skin of the harvested basho is called u-hagi. Holding the stem so the root of the basho faces upwards, the skin is peeled off strip by strip.
Stripping the fiber, Basho-fuKyoto Women's University, Lifestyle Design Laboratory
The fiber is categorized into four types: the outer top layer (uwa-ha) is used mainly for making sitting cushions (zabuton); the middle layer (nahau) is woven into kimono sashes (obi); the third layer (hanagu) has the finest fibers and is used to weave kimono fabric; while the core (kiyagi) is mainly used for dyeing.
Drying the Fibers
After several steps, the sorted fibers are dried in the shade avoiding exposure to the wind.
Ply-Joining the Fibers into Threads
Forming a continuous thread by joining the split fibers that have been wound into a ball (chingu) is called ply-joining (u-umi). The chingu ball is allowed to steep in a bowl of water for about thirty minutes and then squeezed. Next, the fibers are split to the desired thickness with a small knife. This is the most time-consuming step, because the quality of the final bolt of cloth depends on the uniformity of the ply-joined threads.
Adding Twist to the threads
To increase the strength and prevent loose fiber ends, twist is added to the warp and weft threads while spraying them with mist. This process requires skill. If the twist is not enough, the threads develop loose ends, making weaving difficult; but if the twist is too strong, the kasuri pattern becomes hard to match and the texture will not be pleasant.
In Kijoka, the main natural dyes used are Yeddo hawthorn (techi) and Ryukyu indigo (Ee).
Because basho threads are sensitive to dryness, the weft is soaked in water before weaving and the warp is sprayed with mist during weaving. Rainy days and the rainy season (May to mid-June) are best suited to weaving. It generally takes three to four weeks to weave one bolt of basho-fu.
In 1943 the book Basho-fu Story by Yanagi Soetsu (1889–1961) was published privately in a limited edition of 225 copies based on the detailed interviews he conducted when visiting the village of Kijoka. This masterpiece inspired Toshiko Taira to revitalize the production of basho-fu.
"It is rare to see such beautiful textiles in this age. Whenever I see the cloth, it seems as if it alone is truly authentic. Questioning where this beauty comes from, the reason seems obvious: the creation process itself makes the beauty inevitable." (From the foreword of Basho-fu Story)
Taira worked at a spinning mill in Kurashiki, Okayama (now known as Kurabo Industries, Ltd) after Japan surrendered at the end of WWII. During her days in Kurashiki, President Soichiro Ohara of Kurabo Industries introduced her to Kichinosuke Tonomura, the Founder of Kurashiki Museum of Folk Craft, so she could study the basics of dyeing and weaving. Highly influenced by the Mingei Movement of Soetsu Yanagi, she became determined to revive basho-fu production and returned to Okinawa in 1946.
Social and lifestyle changes made it difficult to build a viable industry, so hard times dragged on, but after repeatedly showcasing basho-fu in various exhibitions, she gained high acclaim. Then, in the year 2000, Toshiko Taira was designated a Living National Treasure.
Basho-fu (Kyoto Women's University, Lifestyle Design Laboratory)
Currently, Kijoka produces about 170 bolts a year. The weavers pour continuous effort into nurturing their successors. With Mieko Taira, president of the Kijoka Basho Business Cooperative Association, playing a central role, they have also been exhibiting actively, not only in Japan but internationally. In the autumn of 2016, the Victoria and Albert Museum in London added a basho-fu kimono by Toshiko Taira to its collection.
Village Bashofu Hall
The Basho-fu Hall is a facility open to the public. In the exhibition room on the first floor, they hold exhibitions, sell products, and show videos of the production process. Training sessions for those learning the craft are held in the workshop on the second floor. Observers are welcome.
Taira Mieko, Bashofu textile workshop, Kijoka Bashofu Association
Images provided by:
Taira Mieko, Kijoka Bashofu Preservation Society
English translation by:
Miyo Kurosaki Bethe
English edited by:
Melissa Rinne, Kyoto National Museum
This exhibition is created by:
Ikeda Yuuka and Ueyama Emiko, Kyoto Women's University
Dr Maezaki Shinya, Associate Professor, Kyoto Women's University
Dr Yamamoto Masako, Ritsumeikan University
Chapter Three - The Establishment of the Eighth Federal Reserve District
Would St. Louis have a Federal Reserve bank? Without a doubt, thought the city's bankers. It was the nation's fourth largest city, and one of only three central reserve cities in the national banking system. With 26 trunk line railroads, it was a major hub of the midcontinent and southwestern distribution systems, and it led the nation in shipping hardware, hardwood lumber and a variety of agricultural products. St. Louis was the world's largest fur market, a major livestock market, a brewing center and a leading distributor of dry goods. In manufacturing, the city was the national leader in shoes, stoves, streetcars and millinery. After being known primarily as a wholesaling and jobbing center for more than a half-century, St. Louis by the end of the nineteenth century had achieved parity between manufacturing and wholesaling.
In 1914, St. Louis was the nation's fourth-largest city, a major railroad hub, the world's largest fur market, a major livestock market, a brewing center and a leading distributor of dry goods. View looking west on Washington Ave. at Broadway.
In preparing for their appearance before the Federal Reserve Bank Organizing Committee, the St. Louis Clearing House's representatives concentrated on winning a generously sized district with a balance of economic interests. It seemed to them unlikely that they would be denied a reserve bank, but there was a chance that rival claimants might threaten the "natural" boundaries of their district-to-be. These ideal boundaries covered a lot of ground, as H. Parker Willis had pointed out. Willis's "preliminary committee" had sounded out aspirants for a Federal Reserve bank before the Organizing Committee's visit, and it had found, according to Willis, that New York, Chicago and St. Louis together wanted the whole country for their districts.
Secretary of the Treasury William G. McAdoo and Secretary of Agriculture David Houston
Though there is no evidence that Secretary Houston's St. Louis connections affected the Committee's deliberations, they did create a favorable climate for St. Louis.
Under the Federal Reserve Act, after hearing testimony from the interested cities, the Organizing Committee would choose the bank locations and draw up district boundaries. Secretary of the Treasury William G. McAdoo, a New Yorker; Secretary of Agriculture David Houston, a New England native who had been president of Texas A.&M. University before becoming chancellor of Washington University; and Comptroller of the Currency John Skelton Williams, a native of Richmond, Virginia, were the members of the Organizing Committee. All three of the members took part in the decisions, but the interviews in most cities were conducted by McAdoo and Houston.
The committee was to determine the number of districts, between eight and 12 under the Federal Reserve Act, set the boundaries of the districts according to the "customary course of business," and select the Federal Reserve cities. Since the act mandated a minimum capital of $4 million for each reserve bank, based upon an investment of 6 percent of their capital and surplus by the member banks, it followed that some western districts would have to be much larger in area than the eastern ones. Each national bank was required to join the Federal Reserve System within 30 days after notification by the Organizing Committee or surrender its federal charter. One-sixth of each bank's required investment was due immediately in gold or gold certificates, and similar amounts three and then six months thereafter. The remaining 50 percent was due upon the call of the Federal Reserve Bank.
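To make the capital floor concrete, a back-of-the-envelope calculation (the arithmetic here is mine, drawn only from the figures above) shows what the $4 million minimum implied. At the act's 6 percent subscription rate, a district's member banks needed combined capital and surplus of at least

$$\frac{\$4{,}000{,}000}{0.06} \approx \$66.7\ \text{million}.$$

In thinly banked western territory, assembling that much capital and surplus required drawing a district's boundaries around a very large area.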
Beginning in New York on January 4, 1914, the Committee held hearings in 18 cities, taking testimony from clearing house associations, chambers of commerce and business groups from more than 200 cities, 37 of which requested designation as Federal Reserve cities. Within the area that St. Louis considered to be its territory, there were eight other aspirants for that designation: Kansas City, Memphis, New Orleans, Indianapolis, Nashville, Dallas, Houston and Fort Worth. Outside cities, including Chicago, Birmingham, Cincinnati, Atlanta, Louisville, Omaha and Denver, also claimed some part of this territory.
The Organizing Committee sent ballots to 7,471 national banks and more than 16,000 state banks and trust companies, asking them for their preferences for a reserve bank connection. St. Louis received 299 first-choice and 580 second-choice votes from national banks, a majority of them from Missouri, Arkansas, southern Illinois and Oklahoma, with a scattering of first- and substantial second-choice support from Texas, Tennessee, Louisiana, Kansas, Mississippi and Indiana. Kansas City had more first-choice votes from national banks in Missouri than St. Louis, but when state-chartered banks were included, St. Louis ranked fourth nationally, after New York, Chicago and San Francisco.
Frank O. Watts laid out the St. Louis bankers' plan for an eight-district Federal Reserve System, along with Festus Wade.
Secretaries McAdoo and Houston conducted their St. Louis hearings on January 21 and 22, 1914. In preparation for this event, Festus Wade and Frank O. Watts, president of the St. Louis Clearing House Association and chairman of its special "Committee of 18," respectively, sent a letter to the clearinghouse's bank correspondents, asking them for their support for a St. Louis-based Federal Reserve district. The clearinghouse wanted a long north-and-south axis to ensure a balance of economic interests. The cotton-belt bankers from Tennessee through Arkansas, Mississippi and Louisiana to Texas, with their heavy seasonal demands for credit, should press the Organizing Committee to give St. Louis a self-sufficient district with a variety of economic interests, such as mining and manufacturing, and enough banking resources to absorb seasonal credit demands.
To persuade bankers in New Orleans, Dallas, Memphis and other cities that wanted their own reserve bank, the St. Louisans claimed that the system was designed to provide plenty of branches, so that all sections of a district would be well-served. No doubt there would be 10 to 15 branches in a St. Louis district, each of which would provide all essential services. There would be local control in each branch through a seven-man board selected by the reserve bank and the Federal Reserve Board, as good as having the bank itself, so the letter implied. As for St. Louis, it had been the center of commerce and finance for "this splendid district" for a half-century. Since the law was intended to give the natural flow of business "new and effective aid," St. Louis bankers assumed that their correspondents would want to be in a St. Louis district, and that they would so inform the Organizing Committee. The letter was signed by 19 St. Louis bank presidents.
David R. Francis' St. Louis Republic, "American's Foremost Democratic Newspaper" led the celebration following the announcement that St. Louis would get a Reserve Bank headquarters.
David R. Francis' St. Louis Republic, the oldest newspaper west of the Mississippi River, which carried the slogan "America's Foremost Democratic Newspaper" on its masthead, hailed the impending arrival of McAdoo and Houston as a major event in St. Louis history. St. Louis was prepared, according to the Republic, to make a showing that would give it one of the four largest regional banks, with 12 states within its district boundaries. Spokesmen for thousands of banks from Missouri, Kansas, Nebraska, Texas, Arkansas, Oklahoma, Kentucky, Tennessee, Louisiana, Mississippi, southern Illinois and southern Indiana would speak for St. Louis. The Republic had been told that civic and business groups everywhere in the lower Mississippi Valley had sent hundreds of letters and resolutions favoring St. Louis to the Committee. No other city in the Southwest could command such support, according to the jubilant editorialist.
On the 21st, a reception committee of the Businessmen's League headed by the league's president, A.L. Shapleigh, and including Festus Wade and Albert Bond Lambert, met McAdoo and Houston at the Union Station. After checking in at the Jefferson Hotel, the two officials were escorted to the Federal Building at Eighth and Olive streets, where the hearings were held in the United States Circuit Courtroom. The Republic reported that the crowd overflowed into the hall and adjacent rooms. The Committee of 18, which handled the arrangements for the stay, was not surprisingly an honor roll of the business leadership. In addition to Shapleigh and Wade, it consisted of Frank O. Watts, E.C. Simmons, Walker Hill, J.C. VanRiper, Edwards Whitaker, Jackson Johnson, Thomas H. West, James Barroll, Robert S. Brookings, David R. Francis, Murray Carleton, Breckinridge Jones, E.F. Goltra, H.F. Bush, D.C. Nugent and James Bulck.
Rolla Wells entertained McAdoo, Houston and 25 other guests at his home on Lindell Boulevard on their first evening in St. Louis.
McAdoo and Houston were entertained privately the first evening, along with 25 other guests, by Rolla Wells at his home on Lindell Boulevard. In addition to most of the members of the Committee of 18, Wells had invited Charles Nagel, a distinguished Republican attorney who had been Secretary of Commerce and Labor in the Taft administration, and James Campbell, a utilities magnate who was a major investor in Mexican silver mines, a matter of interest to the Wilson administration because of its heavy involvement in Mexican internal affairs, an involvement which led to the American seizure of Vera Cruz a few weeks later. On the second evening, the two cabinet members were guests of honor at a dinner for 600 people at the Planters Hotel.
St. Louis, like every city on the schedule, used every advantage it could muster to impress the visitors. Ex-mayor Wells, David R. Francis, Edward F. Goltra and Breckinridge Jones were nationally prominent Democrats with close ties to the Wilson administration. Wells had been the president's campaign treasurer in 1912. Francis was not only publisher of a major Democratic newspaper; he had been mayor of St. Louis, governor of Missouri, and Secretary of the Interior, and he was soon to be named Minister to Russia by Wilson. Goltra, a Democratic national committeeman, had been an early Wilson supporter in 1912. Banker Breckinridge Jones was an important Democratic fund-raiser. From another angle, Secretary Houston, who was on leave as chancellor of Washington University, knew most of the welcoming committee personally. When he sat down to dinner at Wells' home, he must have thought it was a meeting of his board of trustees. Francis, an alumnus, had been a trustee for years, as had Jackson Johnson, A.L. Shapleigh and several other committee members. Robert S. Brookings, one of Houston's predecessors as chancellor, was Washington University's greatest benefactor, having given it a fortune in money and land. In addition to their interest in university affairs, Rolla Wells and Houston saw a lot of each other at their summer homes in Wequetonsing, Michigan. While there is no evidence that any of these considerations affected the Organizing Committee's deliberations, this web of relationships certainly did not create an unfavorable climate for St. Louis's case.
Festus Wade, whose standing among American bankers and whose important, if at times irritating, role in the formative period of the Federal Reserve Act were well understood by the Organizing Committee, and Frank O. Watts, president of the Third National Bank and chairman of the Clearing House's presentation committee, laid out the bankers' case for McAdoo and Houston. Wade, the first witness, requested the committee to create eight banks, the minimum under the law, so that each would have sufficient capital to serve its district adequately and so that excessive decentralization of reserves might be avoided. Branches could meet the needs of distant areas in a district. Wade's argument reflected his confidence that St. Louis was high on the list for a regional bank, and it was a characteristic view of big bankers who had favored the Aldrich plan and still wanted as much concentration of reserves as possible.
As usual, Wade stressed financial balance. Both borrowing and lending areas should be included in a St. Louis district, with credit-hungry cotton and other agricultural territory offset by cities with large banking resources. Reaching out in all directions, he pleaded for a district broad and long enough to include a variety of crops harvested at different times. Even the touted "natural course of business" should give way if necessary to achieve balance. In short, St. Louis's district would be extended beyond its existing trade patterns. At this point, Wade presented his proposed "District Five," built around St. Louis and including Missouri, Arkansas, Oklahoma, Texas, Louisiana, southern Illinois (including Springfield), southern Indiana (including Indianapolis), western and central Tennessee (including Nashville), and southeastern Iowa with its Keokuk dam.
This ambitious proposal, which had been "tamed down" from Wade's original version as printed in the newspapers, was not unreasonable if there were to be eight districts of similar size and financial strength. As of October 31, 1913, there were 1,483 national banks and 1,806 state banks and trust companies eligible for membership in the Federal Reserve System. The national banks had an aggregate capital and surplus of $262.7 million, providing the reserve bank with $15.8 million in capital subscriptions. If all of the eligible state banks became members, they would add $9.4 million to the reserve bank's capital. The 62 banks and trust companies of the St. Louis Clearing House had a combined capital and surplus of $78.6 million, one-seventh of the aggregate in the proposed territory, and deposits of $302 million, one-sixth of the total. This was twice the capital and surplus of the banks in New Orleans, despite talk that the Crescent City had been gaining on St. Louis in the lower Mississippi Valley.
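As a quick check of these figures (the computation is mine, not the author's), the act's 6 percent subscription rate applied to the national banks' aggregate capital and surplus gives

$$0.06 \times \$262.7\ \text{million} \approx \$15.8\ \text{million},$$

matching the quoted subscription; with the $9.4 million from the eligible state banks added, Wade's proposed district would have opened with roughly $25.2 million in reserve bank capital.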
Frank Watts, who followed Wade before the committee, laid out the remainder of St. Louis's plan for the entire system. District One would be New England (Boston); District Two, New York and bits of New Jersey and Connecticut (New York City); District Three, the seaboard-South; District Four, the Ohio Valley; District Six, the North Central States (Chicago); District Seven, the Great Plains and Rocky Mountains; District Eight, the Pacific Coast (San Francisco). This plan severely restricted New York in area to keep its capital down to $24.1 million. Chicago, Boston, St. Louis and the Ohio Valley (including Philadelphia!) would have reserve banks with capitals ranging from $14.7 to $17.9 million. The two western banks and the seaboard-south would be smaller, but well above the $4 million mandated in the Federal Reserve Act.
This banking plan would have distributed banking capital far more evenly than the 12-district plan finally adopted. Ironically, it would have reduced New York's financial dominance, in contrast to the larger number of districts favored by Carter Glass and Parker Willis, for whom the concentration of reserves in New York was the major reason for banking reform. Paul Warburg had argued before the Glass Committee that there should be no more than five reserve banks, warning that a larger number would guarantee that New York would dominate the system, exacerbating the condition the committee was trying to rectify.
A.L. Shapleigh of the Shapleigh Hardware Company, one of the nation's largest wholesale firms; Jackson Johnson, president of the International Shoe Company, the largest shoe manufacturer in the country; and Murray Carleton, president of the Ferguson-Carleton Dry Goods Company, made the St. Louis case for businessmen. Shapleigh had a larger view than Wade or Watts: he expanded their plan to include Kentucky and Kansas as far west as Wichita, both areas being a part of the St. Louis trade territory. Shapleigh also reminded the committee that one-third of the United States' population was within 12 hours of St. Louis by train. St. Louis firms had sold $568 million worth of goods in 1913, chiefly in Missouri, Illinois, Texas, Indiana, Kansas, Arkansas, Oklahoma, Iowa, Louisiana, Mississippi, Tennessee and Kentucky, in that order. Johnson and Carleton agreed with Shapleigh, stressing that St. Louis's trade was even more far-ranging than its banking influence, and that since banking followed trade, that influence was certain to grow. McAdoo interposed after Carleton's statement, saying "Yes, and trade follows transportation." This had to be considered a friendly comment, since St. Louis' transportation facilities were unsurpassed.
Several other St. Louis speakers followed, making the case that St. Louis was one of the great grain and livestock markets in the world, the third largest manufacturing city in the nation and the largest wholesaler in many lines. David R. Francis, after praising the committee for its objectivity, submitted a map illustrating in detail that St. Louis was the hub of the greatest producing area in the country. Kansas City was in that territory and it would be a shame to divide Missouri between two districts. As an inveterate world traveler, Francis stated that St. Louis was better and more favorably known in Europe, China and Japan than any other American city, a not-too-subtle allusion to his own contribution to that end. J.C. VanRiper advised the committee that St. Louis, Chicago and San Francisco could handle the entire West, beginning with Ohio's western boundary. Edwards Whitaker, president of Boatmen's Bank, noted that St. Louis had been the ultimate lender for the area in question for more than 50 years. Not a single national bank had failed in St. Louis since 1887, which could not be said of Kansas City, Pittsburgh, Chicago, New York, Boston or Philadelphia.
In a second appearance before the committee, Frank O. Watts made the point that while St. Louis had received deposits because it was a central reserve city, most of its out-of-state deposits were the products of St. Louis investments. On October 21, 1913, St. Louis banks' investments outside of Missouri had been $63.5 million, and they had held deposits of $32.4 million from non-Missouri banks. Texas, Illinois, Oklahoma and Arkansas were St. Louis' principal partners, followed by Kansas, Louisiana, Tennessee, Mississippi and Indiana. Festus Wade added that St. Louis had relatively more banking capital than any city in the United States with a population of 200,000 or more, with its aggregate capital and surplus constituting more than 25 percent of its deposits on that October date. "There had never been a day, a week, or month," according to Wade, "when any banker, planter, or farmer in the Southwest, banking in St. Louis and entitled to credit, was delayed one hour in getting all of the cash or credit to move crops . . . not excepting the panicky days of 1907." St. Louis had been the source of development funds for Southwestern hotels, street railways, and utility plants.
After the St. Louisans had completed their testimony, the Organizing Committee heard from guests from the trade territory. O.H. Leonard of the Tulsa Exchange Bank testified to Oklahoma's dependence on St. Louis for long-term capital. He did not wish to be attached to a Texas bank. Kansas City was a little closer, but "when we want anything we usually come to St. Louis and we usually get it." H.V. Bird of Ryan, Oklahoma (on the Texas border near Wichita Falls) said that southwestern Oklahoma was more closely allied with St. Louis than any other city. J. C. Reynolds, of Moody, Texas (near Waco) said "we would prefer St. Louis to New Orleans or any other city save Dallas." This sentiment was echoed in writing by bankers from Corpus Christi, Denison, Brownwood and several other cities and towns from all sections of Texas except the El Paso area. More than 50 Arkansas cities and towns endorsed St. Louis, as did R.L. Pennfox of Boyle, Mississippi (near Greenville) who wrote, "St. Louis can serve us better than Memphis. Memphis feels the burden of making a cotton crop just as we do, and is so dependent on the cotton industry that its funds are low at the same times our funds are low."
Many Illinois and Missouri bankers were present at the hearing to testify for St. Louis. J.K. McAlpen of Metropolis, Illinois, said southern Illinois was unanimous in favor of St. Louis. H. W. Harris of Sedalia believed that three-fourths of Sedalia's business was done with St. Louis. A.H. Waite of Joplin acknowledged that he had signed a petition for Kansas City, but he knew St. Louis would be better for his territory. "You signed with repugnance, then?" asked McAdoo. "Yes," Waite admitted, "the K. C. boys are full of pep, and they are nice fellows and we have nothing against them."
According to one history of Missouri banking, there was considerable doubt that St. Louis would get a reserve bank when McAdoo and Houston conducted their hearings in St. Louis. While Secretary Houston did write in his Eight Years with Wilson's Cabinet that he was surprised that St. Louis did not have more first-place support in Texas and Oklahoma, there is no evidence that this did any more than constrict St. Louis' territory. There were subtleties in the local situation that apparently eluded some observers. During the hearing, Houston asked aloud, of no one in particular, whether "a community that would not accommodate itself to a task like finishing the free bridge ought to have a reserve bank?" The authors of the study mentioned above apparently accepted this as a serious question, reflecting Houston's distaste for "St. Louis's spoils-dominated administration." The infamous Butler machine that had ruled the city at the turn of the century had been routed by crusading district attorney Joseph W. Folk and, rather more quietly, by Mayor Rolla Wells during Wells' first term (1901-05). St. Louis in 1914, in comparison to its past and that of other major cities, was a "clean" city, and Houston knew it. He also knew that his friends Wells and Francis, Festus Wade, and several other insiders in the hearing room had been fighting the free bridge for years. Even if Houston were a free-bridge advocate, which is by no means certain, the question was irrelevant to the discussion, merely a needling comment.
More to the point, Houston recalled in his memoirs that he had entered into the hearing process with the idea that Boston, New York, Chicago and San Francisco were obvious choices, followed by St. Louis, New Orleans, and either Washington, Baltimore or Philadelphia. Richmond had never entered his head, and New Orleans was fatally weakened by having virtually no financial or trade connections with Texas. That state related primarily to St. Louis, but the Texans wanted a reserve bank themselves. As for Kansas City, which was favored in Kansas, Missouri and Oklahoma, Houston thought it was too near St. Louis, which was a more impressive banking center. He had favored having only eight banks in the beginning, but it soon became obvious that if they did not go to the maximum, "the Reserve Board would have no peace until that number was reached." The hearings demonstrated that a great deal of local pride was involved, and that the committee was "in for a great deal of roasting no matter what we decided."
As Secretary Houston had predicted, a great deal of local pride was involved in the reserve bank city selection, and the committee was "in for a great deal of roasting no matter what we decided."
Cities and states acted as if their very survival depended upon their being selected. St. Louisans were outraged when Chicago claimed East St. Louis, and Chicagoans resented St. Louis's pretensions to their state's capital. When Secretary McAdoo suggested in Kansas City that it might become part of a St. Louis district, bankers there protested that he had it backward, since Kansas City's clearings had been growing at a much faster rate than St. Louis'. They did not mention absolute increases nor the fact that St. Louis had three times Kansas City's banking capital. The president of the Kansas City Clearing House Association agreed that St. Louis should have a reserve bank, but he thought it "would be fatal to attach Kansas City to it." The Kansas City Journal denounced the "effrontery of St. Louis" in claiming the Kansas City trade territory. To follow up their protests at their hearing, Kansas Citians sent a delegation to Washington to plead their case with the third Organizing Committee member, Comptroller John Skelton Williams.
Many factors affected the final choices. Clearly, the members paid a great deal of attention to the bankers' preferential ballots. In some instances, seemingly illogical selections had been based upon future prospects rather than upon present conditions. Texas was still heavily dependent upon St. Louis, Chicago and New York financially, and Dallas, its largest city, had less than one-seventh of St. Louis' population (687,000) in 1910, but the state was huge and it was growing rapidly. In the main, it opposed being attached to an out-of-state bank, most fiercely to a New Orleans bank. A San Antonio clearinghouse official suggested a district including Texas, Louisiana, Arkansas, Oklahoma and Missouri, with its reserve city in Texas. Under questioning, he conceded that St. Louis would be a better choice for such a district. St. Louis had received the largest number of first-choice votes in Texas except for Texas cities; it was behind only Dallas in second-place votes; and it had by far the largest number of third-place votes. The committee's main problem with attaching Texas to St. Louis was the distance from St. Louis to points in West Texas and along the Rio Grande. Bankers Wade and Watts had urged in vain that branches would take care of the distance problems, conceding that at some time in the future Texas would need its own bank.
Parker Willis, as an author of the Federal Reserve Act and chairman of the preliminary technical committee, had considerable influence on the Organizing Committee's deliberations, though at times he gave contradictory advice. He advocated districts relatively similar in strength, urging the Committee especially to avoid creating a large bank which would dominate the rest. Neither should it set up two classes of banks, with one class very strong and the other dependent on it. He said that the historic volume of clearings was unimportant, since that volume would be rearranged by the system itself once it was in operation. Banking capitalization was relatively unimportant, but railway facilities were of the utmost importance. Paradoxically, Willis dismissed the idea that large borrowing and lending areas should be included in one district. Since one reserve bank might rediscount the paper of another, self-sufficient districts were unnecessary. Carter Glass shared this view with Willis, as he did on nearly every point, a strange position for the framers of the Glass-Owen bill. If the districts, irrespective of size, were not to be relatively equal in financial strength, why have districts at all? What had happened to the concept of regionally controlled central banking?
The Organizing Committee, with Willis's advice, created its own monster. The Federal Reserve Act required a minimum capital of $4 million for each reserve bank. Since banking capital was heavily concentrated in the East, especially in the New York area, the Committee's decision to create 12 reserve districts made it virtually impossible to approach parity among them without reducing New York's territory to Manhattan Island. Because of the $4 million minimum, the shortage of capital in the South and West forced the Committee to extend some district boundaries deep into areas where they were not wanted and where the reserve bank city had never had a commercial and financial presence. Prompt enrollment in the system by eligible state banks would have helped, but at the time the committee made its decisions, very few had done so. Willis thought there would be no harm done if a few districts could not meet their minimum capital, but the committee chose to follow the law.
In the end, the committee's selection of reserve cities and district boundaries reflected a combination of city size, preference ballots, some banking realities and a lot of politics. The eagerly awaited announcement came on April 2, 1914. Some of the decisions had been easy, according to the Committee report. New York, Chicago, Philadelphia, St. Louis, Boston and Cleveland were the largest cities in the United States; their accessibility and banking strength justified their selection. As the only major metropolis on the Pacific Coast, San Francisco was an obvious choice. Portland, Oregon, had been considered, but it finally had been rejected because it lacked banking capital, a consideration the committee was less sensitive to in Minneapolis and Atlanta. By including the Northwest in the San Francisco district, the Committee had achieved a balance of borrowers and lenders, a standard that it had rejected in principle and did not apply consistently.
The original districts--modified later, principally to enlarge the New York district--were as follows:
[Table: capital in millions of dollars and area in square miles for each of the original 12 districts.]
Editorial cartoons, particularly in Missouri, pointed out the state's unusual catch: two Federal Reserve Bank headquarters in the same state.
In St. Louis, the press hailed the selection in its lead articles as a great victory for Missouri and for St. Louis. The Democratic St. Louis Republic published a front-page cartoon on April 4, showing the symbolic Missourian, a black-hatted, frock-coated, mustachioed southern colonel, smoking an enormous black cheroot which had emitted two puffs of smoke, the one labeled "St. Louis," the other "Kansas City." The caption read, "D'you All notice Ouah smoke?" The accompanying editorial was more restrained, reflecting the conflicting reactions of bankers and businessmen. Following the lead of Frank O. Watts, the editorialist called St. Louis's selection "a foregone conclusion." Twelve banks instead of eight "has somewhat reduced the area of which the city felt sure." The district's eastern limits were "about what was forecast . . . our territory to the West and Southwest is deeply cut into by the Dallas and Kansas City district."
The bankers were disappointed that they had lost Texas, Oklahoma and the western tier of Missouri counties, but they did have a district in which they quickly discovered formerly hidden virtues. Now they realized that Arkansas had the greatest potential of any Mississippi Valley state, with new cotton land being reclaimed every day from its northeastern swamps. Kentucky was "a big surprise," a delightful one. Now the great Mammoth Caverns and most of Kentucky's white tobacco-growing area were in the St. Louis district, as well as western Tennessee and northern Mississippi. Louisville and Memphis were fine catches, though the former was a reluctant captive.
Within a few hours of the announcement, Festus Wade could find virtue in the previously unthinkable. He told the Globe-Democrat that any disappointment over the loss of Texas and eastern Oklahoma was overbalanced by Missouri getting two banks. "All Missourians should rejoice," he said. "Each will augment the other, give a financial strength to this section of the country, and make us a great lending power." Besides, Kansas City's district lay chiefly far to the north of St. Louis's trade territory, extending as it did all the way to Yellowstone Park, in the extreme northwestern corner of Wyoming. As for the St. Louis district, it had a compact appearance, compared to some others.
Reflecting this overnight conversion, the Republic advised one and all to "cease wondering why Atlanta received a bank instead of New Orleans, Cleveland instead of Cincinnati, Richmond instead of Baltimore," and so on, "and devote our thought to things that may be made out (understood)." Dallas had received the cream of the St. Louis territory, but it should be remembered that banking follows trade; trade does not follow banking. St. Louis manufacturers and jobbers sell millions of dollars' worth of goods in the territory of the Dallas Federal Reserve Bank. "Our Texas customers give promissory notes for their purchases. But they do not give these notes in Texas, they give them to the St. Louis manufacturer or jobber. They will be discounted by St. Louis banks and rediscounted by the St. Louis Federal Reserve Bank." In the end, the "result will be the same as if Texas were in the St. Louis district." Texas merchants and shippers do business in St. Louis "because it pays them to do so." The trade that St. Louis has in Texas would build up the St. Louis bank rather than the Dallas bank.
Perhaps also reflecting its status as the nation's "foremost Democratic newspaper," the Republic stressed Carter Glass and Parker Willis' major argument for the district reserve system. New York would still be the greatest financial center. St. Louis would handle as much Oklahoma, Texas and Louisiana paper as ever. "It is the artificial elements in finance that will be done away--the vast accumulations of money in New York, not sent there by purchases of New York business men, but heaped up for stock exchange speculation because the call loan market was the only place in the United States where great sums of money could earn interest and still be subject to instant demand. No longer will New York monopolize the country's credit." Determined to make the best of the situation and consoled by not having been shut out as Baltimore, Pittsburgh, Cincinnati and New Orleans had been, St. Louis bankers had decided to take the high ground and look to the future. Prophecy was not their strong point, but they were among the winners, after all.
As David Houston had predicted, the disappointed cities and states cried foul. Not only did New Orleans, Baltimore, Pittsburgh, Cincinnati, Denver, Omaha and Washington raise a ruckus, so did the New York bankers, frustrated by their squeezed-down condition. A look at the map of the districts, with its variety of contorted shapes, supported the view that the Organizing Committee had done a hard job poorly, but some of the charges went far beyond that, to allegations of favoritism and base motives. Among the milder criticisms was that of James Forgan, president of the First National Bank of Chicago, who claimed that the committee had ignored the overwhelming opinion of the nation's bankers by creating 12 districts instead of eight. Wall Street agreed, and there was some talk of seeking an injunction to prevent the plan from being carried out, but that project died after the bankers were assured by someone, perhaps McAdoo, that their district's boundaries would be expanded by the Federal Reserve Board. New York's major complaint, their bankers said, was that political considerations had invaded the selection process, which boded ill for the future of the Federal Reserve System. Was it a coincidence that two of the reserve banks were to be in Missouri, the home of Secretary Houston? Was not Atlanta the birthplace of Secretary McAdoo, and Richmond the native city of Comptroller Williams? Were Missouri, Georgia and Virginia solidly Democratic? Indeed they were.
Republican Senator John W. Weeks of Massachusetts echoed these charges, alleging that only one of the four cities in question (presumably St. Louis) was entitled to a Federal Reserve bank. These charges were readily accepted by the disappointed or cynical, but they lacked substance. The Organizing Committee reacted by explaining its decisions, but it ignored the political slander. McAdoo had been born in Atlanta, but he had lived in Tennessee as a youth, and he had made his career in New York City. Houston had only lived in Missouri for a few years, he had been in Texas for a longer time, and he was a native New Englander. Even if he had favored St. Louis unfairly, the critics agreed that St. Louis was a logical choice, and he was no more pro-Kansas City than the St. Louis bankers were before April 2, which was not much. The complaint about Williams and Richmond was more persistent, but it was still speculative.
New Orleans could hardly believe, and much of the country wondered with it, that it had been denied a Federal Reserve bank. Sol Wexler, a prominent member of the A.B.A.'s currency committee throughout the Federal Reserve System's gestation period, drew up a slashing set of resolutions which were adopted at a mass meeting in New Orleans on April 4, and read into the Congressional Record the next day. The resolutions dismissed Richmond as an insignificant trade center, and charged that it had been selected for political and personal reasons. As for Atlanta, the Federal Reserve city for the district including New Orleans, it had neither the population nor the banking resources that New Orleans had, its only commercial connections with the Crescent City were of a tributary nature, and it had received no Louisiana votes in the bankers' poll, not even third-place votes. St. Louis had been the only outside city receiving first-place votes in Louisiana. Memphis had attracted some second- and third-place support.
Baltimore hated to be in the Richmond district. The Maryland metropolis was the seventh-largest city in the United States, five times the size of Richmond, which ranked 39th. It had been a major commercial center since colonial days, while Richmond's claim to fame was having been the Confederate capital. Baltimoreans thought they were the victims of a political payoff, and they thought it no coincidence that John Skelton Williams was a native of Richmond and Carter Glass a near neighbor. The Globe-Democrat quoted unnamed local bankers in support of this position, noting that St. Louis, Baltimore and New Orleans had done most of the banking business east of the Mississippi and south of the mouth of the Ohio since the Civil War. The Globe was in an equivocal position. Most of the political charges were being made by Republicans, and it was a self-styled Independent Republican newspaper. But St. Louis had been awarded a reserve bank, and the editors approved the Organizing Committee's effort. It did give more space to the negative news than the Republic, perhaps because the latter was Democratic.
Denver and Omaha were outraged that Kansas City had been given a reserve bank "at their expense." Denver bankers asked why the 10th district, which covered one-sixth of the country, was the only district whose reserve bank was at its extreme eastern edge. They furnished their own answer. Senator John Thomas of Colorado had traded Denver's chances for an appointment in Secretary McAdoo's office! His son-in-law, William P. Malburn, had just been named Assistant Secretary of the Treasury. Until they heard that the appointment was coming, they had felt certain of a bank if there were 12 districts, and thought it a possibility if there were only eight. But with the appointment certain, they "threw up their hands," knowing that their city had been traded for "a mess of pottage." Omaha bankers resolved to campaign for a reserve bank of their own or to be transferred to the Chicago district. "Nothing in the world but politics" dictated the Committee's "disgraceful" choices, according to the president of the Nebraska National Bank.
Pittsburgh newspapers charged politics, too; Cleveland was selected, Pittsburgh's bankers believed, because of its connections in the Wilson administration. Secretary of War Newton D. Baker was one of its own. Cincinnati, also placed in the Cleveland district, ridiculed the choice. Some of its bankers suggested that Cincinnati's inveterate Republicanism contrasted unfavorably with Cleveland's affinity for the Democracy. Milwaukee was unhappy, too, but not for the usual reasons. It had been put in the Chicago district, which was agreeable, but most of the rest of Wisconsin had been required for the Minneapolis district, which cut off Milwaukee from its own constituency.
Whatever its reasons, the Committee's decision to establish two districts in the Southeast created a host of difficulties. Baltimore was too close to New York and Philadelphia to be considered for a reserve bank, the Committee reasoned. It could not go north, and it had no support in the Carolinas. Richmond had to have the richer part of the seaboard South, making it necessary to extend the capital-poor Atlanta district far to the West to enable it to meet the $4 million minimum capital requirement. Atlanta had to have a reserve bank, it has been charged, because of powerful pressure exerted on the committee by the Bryanite Senator Hoke Smith of Georgia, a reform-minded ex-governor of that state. New Orleans was the big prize for Atlanta but it could not be isolated from the rest of the district, which meant that southern Mississippi had to be in the Atlanta district to provide a corridor. With New Orleans out of the picture, these Mississippians would have preferred St. Louis, in company with the rest of their state and western Tennessee. Without eastern Oklahoma, Missouri's Joplin lead district, or southern Mississippi, St. Louis's district had insufficient capital, a condition the Committee remedied by giving it southern Indiana and western Kentucky including Louisville, both of which had preferred Cincinnati. Without these areas, Cincinnati was not a viable candidate for a Federal Reserve bank.
At first, despite the indignity of being charged with "tangoing about the country asking the people if they wanted a reserve bank" (by Senator Weeks), the Organizing Committee declined to respond to the avalanche of complaints. But on April 10, Senator Gilbert M. Hitchcock of Nebraska, whose obstreperousness as a member of the Banking Committee during the Glass-Owen hearings was well-remembered, launched a stinging assault on the committee's judgment and its motives. He demanded to see the documents it had used; he thought it contemptible that Kansas City had been chosen as a reserve bank city, especially for a district that included Omaha and all of Nebraska; and he questioned the choices of second-rank cities such as Richmond and Atlanta while omitting New Orleans.
This attack from the Senate floor from such a prominent politician forced McAdoo's hand. He released a 4,000-word statement, stressing the committee's hard work and careful attention to the claims of Omaha, Lincoln, Denver and Kansas City. Denver had wanted Montana, but Montana preferred Minneapolis or Chicago. Neither Kansas, west Texas nor Nebraska wanted Denver, and Idaho favored Portland or San Francisco. Only Nebraska among the eight plains and mountain states Omaha asked for cared to be in an Omaha district. Kansas City banks served a vast territory and they had loans and discounts totaling $91.7 million, more than Denver, Omaha and Lincoln combined. McAdoo did not mention that Kansas City also had Senator James A. Reed, whose late conversion had broken the deadlock in the Senate Banking Committee, allowing the Glass-Owen bill to pass. Reed was a powerful friend and a dangerous enemy, he had the administration's attention, and he had given the Organizing Committee the benefit of his views.
As for New Orleans, it had selected a district extending from New Mexico to the Atlantic Ocean. Texas had no trade with New Orleans and its bankers preferred St. Louis or Kansas City after one of its own. New Orleans had a larger capital and surplus than Atlanta or Dallas, but its national banks had a smaller total in loans and discounts than either of them. McAdoo's letter made it clear that there would be no reversals of the Organizing Committee's selection, but that point hardly needed to be stated. President Wilson had told the press on April 6 that he had "unqualified confidence" in the Organizing Committee's decisions on the 12 Federal Reserve districts, a statement intended to quiet the clamor from the disgruntled.
The Globe-Democrat, now that the protests "swelling into wails" had been heard, was sure that the banking community had confidence in its new system, attested to by the fact that nearly every national bank in the country had applied for membership well within the 60-day grace period provided after the passage of the Federal Reserve Act. Now the chief concern was the caliber of the Federal Reserve Board. "Superb ability and high character" were needed. The editor believed that President Wilson would meet the challenge, especially now that members of Congress had promised that they would make no recommendations for appointments. Even ex-Senator Aldrich hoped for the best. He was quoted in the press as saying there was a chance the system might succeed, depending upon the "character and wisdom" of those who controlled the banks, especially the Federal Reserve Board. By ability, character and wisdom, the Globe-Democrat and Aldrich meant conservative men acceptable to the major bankers. Wall Street's grumbling reaction to the districting plan served notice that it had better be satisfied with the president's appointments. Paul Warburg advised his friends to mute their criticisms until the Board was in place. As usual, Warburg was in close touch with Colonel E.M. House, Wilson's closest adviser.
At the White House, Secretary McAdoo and Colonel House battled for influence over Board appointments. McAdoo pleaded with the president for men who would work with him to break Wall Street's grip on the nation's credit. House wanted a Board that would satisfy the bankers. Wilson agreed with House, who claimed that the president feared his future son-in-law was trying to subordinate the Federal Reserve Board to the Treasury Department. Accordingly, House dominated the selections, with one or two exceptions. The extent of House's victory was apparent when Wilson offered an appointment to Richard Olney, a noted Boston railroad attorney. As Cleveland's attorney-general in 1894, Olney had broken the Pullman strike near Chicago, and then had jailed the American Railway Union's president, Eugene V. Debs. Rewarded by a promotion to Secretary of State, Olney in 1895 faced down the British government in a confrontation over the boundary line between Venezuela and British Guiana, thereby adding a corollary to the Monroe Doctrine. Bankers and businessmen were delighted with the Olney nomination, but Olney was nearly 80 years old and eventually he turned Wilson down, as did Henry A. Wheeler of Chicago.
On May 4, 1914, the President submitted his nominees to the Senate. In addition to Olney and Wheeler, the list included W.P.G. Harding of Birmingham, president of Alabama's biggest bank; Paul Warburg of Kuhn, Loeb, and Company; and Thomas D. Jones of Chicago, a director of the International Harvester Company, who had been a trustee of Princeton University when Woodrow Wilson was its president. Progressives and conservatives alike were stunned. When they recovered, financial and business spokesmen gave the nominees their delighted approval.
Carter Glass and Parker Willis, who had not been consulted, were dismayed. They feared their federal reserve system had been handed over to its enemies, the Aldrich plan crowd. One midwestern progressive senator thought Frank A. Vanderlip of the National City Bank must have selected the nominees: "a more reactionary crowd could not have been found with a fine-tooth comb."
To replace Olney and Wheeler, Wilson nominated Charles S. Hamlin, a Boston attorney, and Adolph Miller, a former professor of economics at the University of California. When the revised list of nominees reached the Senate on July 15, Senator Reed of Missouri, with his Kansas City reserve bank safely in hand, loosed his heavy artillery on the Jones and Warburg appointments. Warburg was a target for the obvious reasons: he was a Wall Street banker and a noted advocate of central banking. Jones was worse. His "Harvester Trust" was the most hated of all businesses by midwestern farmers, and it was under indictment as an illegal combination. Even ex-president Taft joined the chorus, saying that if he had nominated such a man for an important position, "the condemnation that would have followed it staggers my imagination."
President Wilson fought hard for Jones, alleging that his friend had joined the Harvester Board to clean up the organization, but under grilling by Reed, Senator Hitchcock and other banking committee members, Jones admitted that he had approved all of the company's policies since he had joined its board in 1909. Noting that the president had just persuaded the Senate to approve an anti-trust bill (the Clayton Act), Hitchcock wondered how he could ask senators to approve "a maker of trusts." Finally Wilson asked Secretary Bryan to intercede, which he did at no small cost to his conscience. Hitchcock had no use for Bryan anyway, and Reed was not persuaded. Ironically, by opposing Jones, Hitchcock was helping McAdoo, whom he had so bitterly denounced for not giving Omaha a reserve bank. The banking committee refused to move, and the president withdrew the nomination. Wilson had suffered his first defeat in Congress, and he was angry.
The Senate committee also refused to confirm Warburg unless he appeared before it for questioning. Members wanted him to explain how a Wall Street banker proposed to conquer the Money Trust. His pride wounded, Warburg refused to appear and asked Wilson to withdraw his nomination. The president would not do it, and Senator Hitchcock broke the stalemate by asking the imperious banker to come before the banking committee, not for a grilling but for a "conference." Warburg conferred with the committee on August 1 and 3, 1914. Either he satisfied the senators or they did not wish another confrontation with the president. Paul Warburg was confirmed by the Senate on August 7, along with Frederic A. Delano, a Chicago railroad man who had replaced Jones as a nominee. Since Adolph Miller and Charles Hamlin had already been confirmed, the board was completed, and the bankers were well-satisfied.
Of the five appointed members, only Charles Hamlin allied himself in policy matters with ex-officio members McAdoo and Williams. One immediate issue facing the Board was that of redistricting. All members agreed that some alterations in district boundaries had to be made, especially in the New Jersey counties within eyeshot of Manhattan, which had not been included in the New York district. McAdoo, as chairman, appointed Delano, Harding and Warburg to a redistricting committee. In McAdoo's view, boundary readjustments were all that was necessary, but the committee and Adolph Miller were determined to reduce the number of districts, perhaps to as low as eight. Warburg believed that the language of the Federal Reserve Act (Section 2, paragraph 1), which stated that the Organizing Committee's decisions "shall not be subject to review except by the Federal Reserve Board when organized," gave the Board the power to reduce the number of districts if it thought it necessary. The power to review included the power to consolidate, in the opinion of the majority of the Board. Six strong districts (One, Two, Three, Four, Seven, and Twelve) had been created. The other six were weak. If the ideal of self-sufficient districts were to be realized, their number should be reduced to eight or nine. Atlanta (Six) and Minneapolis (Nine) were especially vulnerable, followed by Kansas City (Ten) and Dallas (Eleven).
Board Chairman McAdoo and members Hamlin and Williams, and Carter Glass and Parker Willis as well, saw an Aldrich-plan conspiracy in the effort to consolidate districts. This reaction seems unjustified if not ridiculous. The minimum was eight districts under the law; only Congress could change it. Warburg, Delano, and Harding had supported the Aldrich plan, but they had lost the battle. In their view, they were simply trying to strengthen the Federal Reserve System--to make it work. In their redistricting committee report, they warned that decentralization would defeat its purpose unless the regional banks were "strong enough in themselves to be effective, large enough to command respect, and active enough to exert a continuous and decisive effect on banking affairs in their districts."
Since he was outnumbered on the Board, McAdoo looked for outside help. Not surprisingly, he found it in one of his cabinet colleagues. He requested an opinion from Attorney General T.W. Gregory on the question of the Federal Reserve Board's power to alter the Organizing Committee's districting decisions. As expected, Gregory took a narrow view of the statute, ruling on November 22, 1915, that the power to readjust districts "does not carry with it the power to abolish districts and banks." In April, 1916, Gregory gave the opinion that the Board could not change the location of any Federal Reserve bank.
On May 4, 1915, the Board transferred 12 counties in northern New Jersey from the Philadelphia to the New York district; two counties in northern West Virginia from the Richmond to the Cleveland district; and 25 counties in southern Oklahoma from the Dallas to the Kansas City district. These moves were made in response to petitions from the areas affected. One county in western Connecticut was transferred from the Boston to the New York district in March, 1916, and in October of that year 20 counties in eastern Wisconsin were shifted from the Minneapolis to the Chicago district. St. Louis picked up a Mississippi county in 1920, at the expense of Atlanta.
On May 11, 1914, the Organizing Committee designated the German National Bank of Little Rock; the Ayers National Bank of Jacksonville, Illinois; the Second National Bank of New Albany, Indiana; the National Bank of Kentucky at Louisville; and the First National Bank of Memphis to execute the Eighth Federal Reserve District's organizing certificate. Representatives of these banks met in St. Louis on May 18, signed the certificate, and sent it to the Comptroller of the Currency. The Federal Reserve Bank of St. Louis was now a corporate body.
The Federal Reserve Act provided that each reserve bank should have nine directors, divided into three classes. Three Class A directors were to be bankers representing the stockholding banks; three Class B directors were also to be elected by the stockholding banks, from persons "actively engaged in their district in commerce, agriculture, or some other industrial pursuit." The district's member banks were to be divided into three groups according to size, and each group was entitled to elect one Class A and one Class B director. Three Class C directors were to be appointed by the Federal Reserve Board, one of them to be the chairman of the district bank's board and Federal Reserve Agent. No Class C director could be an officer, director or stockholder of any bank, although the one named Chairman and Federal Reserve Agent had to be a person of "tested banking experience." The term of office for directors was three years, staggered so that one director in each class would complete his term each year.
On June 4, 1914, member banks of the Eighth District met in St. Louis to determine the procedure for electing directors, and then to elect them. Festus Wade of the Mercantile Trust Company, the only state-chartered bank in the district that had joined the Federal Reserve System, was elected temporary chairman. In turn, Wade appointed a Rules Committee consisting of one member from each of the seven states in the district. The committee ruled that there would be no proxy voting, which gave rise to charges that St. Louis would dominate the choices because it had more delegates present. A motion that would have nullified that ruling was defeated.
Walker Hill, president of the Mechanics-American National Bank of St. Louis, was elected a Class A director by Group One banks (those with more than $100,000 in capital and surplus). For its Class B director, Group One selected Murray Carleton of the Ferguson-Carleton Hardware Company of St. Louis. Group Two ($50,000 to $100,000 in capital and surplus) elected Frank O. Watts of the Third National Bank of St. Louis as its Class A director and W.B. Plunkett, president of the Plunkett-Jewell Grocery Company of Little Rock, as its Class B director. Group Three (banks with under $50,000 in capital and surplus) named Oscar Fenley, president of the Kentucky National Bank of Louisville, to its Class A position, and former United States Senator Leroy Percy of Greenville, Mississippi, to the Class B seat. There was no requirement that Class A directors be selected from their own group. Both Watts and Fenley were presidents of large banks.
First Board of Directors, Federal Reserve Bank of St. Louis. Seated: John W. Boehne, Rolla Wells, William McChesney Martin, Walker Hill, and W.B. Plunkett. Standing: Murray Carleton, Oscar Fenley, F.O. Watts, Walter W. Smith, and Leroy Percy.
This situation was corrected on September 26, 1918, by an amendment to the Federal Reserve Act, requiring Class A directors to be members of the group that elected them. On the same day, to give banks voting power commensurate with their stock ownership in their reserve banks, the Federal Reserve Board reclassified the groups. Group One was defined as those with over $599,000 in capital and surplus; Group Two, $100,000 to $599,000; and Group Three, under $100,000. In the Eighth Federal Reserve District on that date there were 34 Group One, 168 Group Two and 307 Group Three banks. Primarily because nine more large state banks and trust companies, including the Mississippi Valley Trust Company, the District's second largest bank, had joined the Mercantile Trust, the largest, as member banks, the St. Louis Federal Reserve Bank's authorized capital had increased from the original $6.2 million to $7.6 million. At that time, the Mercantile Trust Company's capital and surplus was $9.5 million and the Mississippi Valley Trust Company's $6.5 million.
The Federal Reserve Board announced St. Louis' Class C directors' appointments on September 30. William McChesney Martin, a 40-year old native of Lexington, Kentucky, was named Chairman of the Board and Federal Reserve Agent. Martin, a graduate of Washington and Lee University, had come to St. Louis in 1896 as secretary to his uncle, William S. McChesney, the superintendent of the Louisville and Nashville Railroad's St. Louis terminals. After graduating from the Washington University School of Law and being admitted to the St. Louis bar in 1900, Martin entered the trust department of the Mississippi Valley Trust Company. He became vice president of the company in April, 1914.
On September 15, 1914, Chairman McAdoo had offered the Chairman-Agent position at St. Louis to Rolla Wells, hoping that Wells would serve at least until "things were in good working order." Wells declined, but at McAdoo's request he agreed to find someone for the position. As Mayor of St. Louis, Wells had worked closely with Martin's uncle William McChesney, who had become president of the St. Louis Terminal Railway Association in 1903, in an effort to block the municipal free bridge movement. As a director of the Mississippi Valley Trust Company, Wells had been impressed with Martin's performance as a trust officer. Accordingly, he took Martin to Washington, where they had a successful interview with McAdoo. The other Class C appointees were Walter W. Smith of St. Louis, Deputy Federal Reserve Agent, and John W. Boehne of Evansville, Indiana.
Each reserve bank's operating officers, under the law, were to be elected by its board of directors, but Secretary McAdoo took a hand in selecting them, at least in St. Louis's case. He wired Rolla Wells on October 27, asking him to accept the governorship. "You will render great public service by so doing. I do not think it will burden you heavily, and it will not be necessary for you to give up your business interests or investments . . . Have telegraphed to Watts and Martin." The wording of McAdoo's telegram suggests that he expected Wells to be an impressive figurehead, with the management of the bank, in its daily routine if not in all matters, in the hands of others. Either its subordinate officers or the federal reserve agent would run the bank.
The board of directors held its first regular meeting on October 28, 1914, in the boardroom of the Mississippi Valley Trust Company in St. Louis. After adopting a set of bylaws, the board elected Rolla Wells governor, W.W. Hoxton deputy governor and secretary, and C.E. French cashier. Gold arriving from member banks to pay for their reserve bank stock was stored in a vault at that location until the reserve bank's temporary quarters were ready. The St. Louis Federal Reserve Bank opened for business on November 16, 1914, on the fourth floor of the Boatmen's Bank on the northeast corner of Olive Street and Broadway, with six officers and 17 other employees.
Ignoring the Organizing Committee's suggestion that district boards' executive committees should consist of three elected board members, the Eighth District board chose a five-man executive committee made up of the governor, the federal reserve agent, and three board members elected from Classes A and B. Walker Hill, Murray Carleton, and Frank O. Watts joined Wells and Martin on the committee. All of the members were from St. Louis, presumably because they would be readily available. In most of the other districts, the Federal Reserve Agent was not on the executive committee. By including Martin, the St. Louis board added to his status and power. Since Rolla Wells had accepted the governorship with the understanding that it would not seriously disrupt his other activities, the way was open for Martin to assume the primary managerial responsibilities, which he did with Wells' approval.
After stating publicly that it would have to pay good salaries to attract able men, the Federal Reserve Board set the agents' salaries at less than the going rate for top-level bank officials. Only one of the district agents was paid more than the $12,000 annual stipend for Board members. In New York the agent was paid $16,000; in Dallas and Atlanta, $6,000; in the other districts from $7,500 to $12,000. Martin's was near the average at $10,000. Governors' salaries were determined by the district boards with the approval of the Federal Reserve Board, and most of the governors were paid much more than their agents, supporting the view that theirs was the most important office. Their salaries ranged from $30,000 for Benjamin Strong in New York to $7,500 for the Kansas City governor. While the agents' salaries reflected local conditions and the relative importance of their districts, it is hard to explain why Kansas City valued its governor so little except that his stipend matched that of its Federal Reserve Agent. Rolla Wells' salary, at $20,000, was among the four highest paid to governors. St. Louis was a larger banking center, despite the relative weakness of the Eighth District, than most of the other Federal Reserve cities, but the major reason for Wells' high salary was probably his standing as one of the most powerful men in St. Louis. He was an important national political figure as well, with intimate ties to the Wilson administration.
Looking north on Broadway, where the Federal Reserve Bank would rent its first quarters. View from Market Street.
Eight days after it opened, the St. Louis bank offered to collect for member banks checks and drafts drawn on any Federal Reserve bank or any member bank in the Eighth District. To take care of its clearing responsibilities, the bank's staff was expanded from six officers and 17 other employees to six officers and 40 other employees. This proved to be more than was necessary, and a few weeks later the staff was reduced to five officers and 34 employees. The board met on the first and third Wednesday of each month, and during the first year the average attendance was seven of the nine directors. In his First Annual Report, Chairman-Agent Martin noted that it had been rumored that directors were paid $5,000 a year. This was not true; the directors were paid only their travel expenses and the "usual fee" for attending meetings, a modest compensation that was hardly adequate: one of the directors, Leroy Percy, spent a night and a day traveling from his home to St. Louis.
In December 1914, the board gave the executive committee power to fix and change the rediscount rate for the district, subject to the approval of the Federal Reserve Board. The executive committee met at 10:30 A.M. on Monday and Thursday of each week and at other times when necessary. By December 31, 1915, the committee had met 150 times. From the beginning, the board regarded adjustment of the rediscount rate as the most important of its functions. When the bank opened, the rate was set at 6 percent for all maturities. In January, 1915, money became more plentiful, and the committee decided to lower the rate for shorter maturities, to 4.5 percent for 30-day paper, 5 percent for 60 days, and 5.5 percent for 90 days. Agricultural loans running from 90 days to 6 months remained at 6 percent. During the first year, demand was disappointingly low, and on several occasions rates were dropped to attract more business.
Partly because of pressure from the Federal Reserve Board, Martin and the directors tried to encourage the use of trade acceptances by lowering their rate to 3.5 percent, but there was very little response to this and other preferential rates. From the opening in November 1914 to December 31, 1915, the St. Louis Federal Reserve Bank accepted 3,828 notes for rediscount, totaling $8.2 million. One hundred thirty-one banks were accommodated, just over a fourth of the district's member banks. Smaller banks in Missouri, Tennessee, Arkansas and Illinois made the heaviest use of their rediscounting privilege. Large banks in St. Louis and Louisville seldom did so. A year after it opened, the reserve bank held 25 percent of the Eighth District's total loans, including 48 percent of the loans in its part of Tennessee and 30 percent of the Illinois paper. The Indiana and Kentucky banks had made the least use of their Federal Reserve Bank, but Missouri was disappointing as well. Eighth District member banks had placed one-third of their loans outside of their district, in most cases at rates that were as high as or higher than the rates offered at their reserve bank.
An early example of currency issued by the Federal Reserve Bank of St. Louis.
The greatest problem faced by the St. Louis Federal Reserve Bank during its first year of operation, according to Chairman Martin, was to get member banks to understand the facilities available and the ease with which they could be used. Banks often thought they had no paper eligible for rediscount, "when in fact the greater part of their paper was eligible." Many bankers thought there was a lot of red tape, that it was difficult to do business with the reserve bank. This impression arose largely from the fact that the reserve bank would not accept paper for rediscount that was unaccompanied by a statement, either from the maker of a note or the banker offering it, revealing the customer's assets and obligations. The bank issued a circular letter to all member banks covering eligible paper and giving specific examples of the types of paper that were eligible for discount; the Chairman delivered 22 addresses on the topic throughout the district; and the deputy agent made many visits to individual banks. Even so, after a year a substantial minority of the banks were still "uninformed." Apparently some did not read their mail and others did not understand it.
In December 1915, the St. Louis Federal Reserve Bank moved into new quarters in the New Bank of Commerce building, on the northeast corner of Broadway and Pine Streets, one block south of its former location. The building was renamed the Federal Reserve Bank Building, and it furnished a "light, commodious, and convenient" banking area on the second floor, with plenty of vault space.
During its first year of operation, chiefly because rediscounting volume had not met expectations, the St. Louis Federal Reserve Bank had operated at a loss, though it did show a month-to-month profit beginning in the fall of 1915. Far more important, according to Chairman Martin, "a much higher service to the district than the making of money has been rendered. It has stabilized conditions and made it possible for any customer in the district to get money at a reasonable rate." It had also operated a clearing system that had eliminated exchange charges on a majority of the checks drawn on member banks in the district.
No doubt Martin, Rolla Wells and the board of directors of the St. Louis Federal Reserve Bank were consoled by remarks made by Paul Warburg at a conference of reserve bank governors on October 22, 1915.
Earning Capacity must never be the test of the efficiency of Federal Reserve banks . . . I should have felt heartily ashamed had all our banks, considering the circumstances . . . earned their dividends in the past year . . . [it] would have been proof that they had completely misunderstood their proper function and obligations.
Despite these comforting words, Chairman Martin installed a Spartan discipline at the bank in 1916. Rediscounting volume actually decreased, but economies and large and profitable purchases of bankers' acceptances in New York and Boston resulted in a profit of $145,000 on total earnings of $286,000. The bank declared a 6 percent dividend on its capital stock, covering the period from the opening of the bank to March 15, 1915. Relationships between the district bank and its constituency were still in a tentative, formative stage, but by the most conservative standard, the St. Louis Federal Reserve Bank was on a sound footing. St. Louis bankers had played a prominent role in the banking reform movement, and they could congratulate themselves that their city and its area were assured of a significant role in the future of the Federal Reserve System.
- Frank A. Vanderlip, "The Modern Bank," in The Currency Problem and the Present Financial Situation (New York, 1908), 3, cited in Robert A. Degen, The American Monetary System (Lexington, Massachusetts, 1987), 15-16. See also Gabriel Kolko, The Triumph of Conservatism (New York, 1963), 152; and Robert H. Wiebe, Businessmen and Reform (Cambridge, Massachusetts, 1962), 63-65.
- Milton Friedman and Anna Jacobson Schwartz, in A Monetary History of the United States (New York, 1963), 165-167, suggest that the restriction of payments was "therapeutic," giving time for the panic to "wear off."
- Kolko, Triumph of Conservatism, 152; Wiebe, Businessmen and Reform, 65.
- Kolko, Triumph of Conservatism, 149.
- Hubbard and Davids, Banking in Mid-America, 122.
- W.B. Stevens, A Centennial History of Missouri (St. Louis, 1922), III, 56-60; Hubbard and Davids, Banking in Mid-America, 86.
- Stevens, A Centennial History of Missouri, III, 56-60; Primm, Lion of the Valley, 423.
- Kolko, Triumph of Conservatism, 183-184.
- Ibid., 184; Wiebe, Businessmen and Reform, 76.
- Degen, American Monetary System, 26-27.
- Ibid., 27; Kolko, Triumph of Conservatism, 184.
- Degen, American Monetary System, 25; Robert Craig West, Banking Reform and the Federal Reserve (Ithaca, 1977), 76-77, 82, 84-85; Paul Warburg, The Federal Reserve System: Its Origin and Growth (New York, 1930), II, 201-214.
- Thibault de Saint Phalle, The Federal Reserve: An International Mystery (New York, 1985), 49; Nathaniel W. Stephenson, Nelson W. Aldrich (New York, 1930), 340.
- In each branch district and in each association, three-fifths of the governing directors would be chosen by member banks, each having one vote. The remaining two-fifths were to be chosen with each member bank voting in proportion to its capital. See J. Laurence Laughlin, Banking Reform (Chicago, 1912), 13-14.
- West, Banking Reform, 74-75; Warburg, The Federal Reserve System (New York, 1930), I, 90-91.
- West, Banking Reform, 84.
- Wiebe, Businessmen and Reform, 77; Laughlin, Banking Reform, 16-18.
- Wiebe, Businessmen and Reform, 77-78.
- Kolko, Triumph of Conservatism, 185-186.
- Ibid., 187-189.
- Paolo E. Coletta, William Jennings Bryan (Lincoln, 1969), II, 126; Kolko, Triumph of Conservatism, 189.
- Carter Glass, An Adventure in Constructive Finance (New York, 1927), 29.
- From the Progressive Party platform of 1912, quoted in Arthur S. Link, Wilson: The New Freedom (Princeton, 1956), 201.
- Testimony of J.P. Morgan, senior partner of J.P. Morgan and Company, December 19, 1912; Final Report of the Pujo Committee, February 28, 1913, in Herman E. Krooss, editor, Documentary History of Banking and Currency in the United States (New York, 1969), III, 2107-2122, 2143-2195.
- Glass, Constructive Finance, 29.
- William Diamond, The Economic Thought of Woodrow Wilson (Baltimore, 1943), 101.
- Kolko, Triumph of Conservatism, 218, 226.
- Ibid., 223-226; Link, The New Freedom, 202; West, Banking Reform, 92-96.
- Glass, Constructive Finance, 61, 83-84.
- Quoted in Kolko, Triumph of Conservatism, 226.
- Glass, Constructive Finance, 81-84.
- Ibid., 91; Kolko, Triumph of Conservatism, 225-226.
- Glass to Festus Wade, July 31, 1913, quoted in Ibid., 222; Glass, Constructive Finance, 157.
- Krooss, Documentary History, 2207-2209; Glass, Constructive Finance, 90; Kolko, Triumph of Conservatism, 227; J. Laurence Laughlin, The Federal Reserve Act: Its Origins and Problems (New York, 1933), 136.
- Coletta, Bryan, II, 130-131; Glass, Constructive Finance, 94-96.
- Krooss, Documentary History, 2196-2206.
- Glass, Constructive Finance, 100-110.
- Ibid., 123-126; Coletta, Bryan, II, 130-133; David F. Houston, Eight Years With Wilson's Cabinet (Garden City, N.Y., 1926), I, 47-48.
- Brandeis suggested that bankers could assist the Federal Reserve Board as technical advisers. He believed they should have no voice in policy matters; their interests were "irreconcilable" with administration goals. Brandeis had been Wilson's first choice for Attorney General, but New York and Boston bankers and railroad interests, fearing that he would press anti-trust prosecutions, had persuaded Wilson to look elsewhere. Brandeis' views were generally consistent with those of Bryan, La Follette, and other "radicals," but unlike them he did not attack business concentration as undemocratic or oppressive, but simply as unwieldy and inefficient. Henry L. Higginson, a Boston investment banker and Wilson supporter, orchestrated the attack on Brandeis' nomination. President A. Lawrence Lowell of Harvard also actively opposed Brandeis. See Coletta, Bryan, II, 133; Link, The New Freedom, 10-13, 212; Kolko, Triumph of Conservatism, 208.
- Krooss, Documentary History, 2207-2229.
- Link, The New Freedom, 216.
- Ibid., 217; Glass, Constructive Finance, 116.
- Kolko, Triumph of Conservatism, 232-233.
- Glass, Constructive Finance, 134-136.
- Ibid., 138-143; Link, The New Freedom, 218-223.
- Coletta, Bryan, II, 135.
- Wiebe, Businessmen and Reform, 130-131; Link, The New Freedom, 225.
- Wiebe, Businessmen and Reform, 131-133.
- Ibid., 134-135; Link, The New Freedom, 225-227.
- Ibid., 228-229.
- Ibid., 229-232; Glass, Constructive Finance, 166-167; Kolko, Triumph of Conservatism, 239.
- Link, The New Freedom, 232-234; Wiebe, Businessmen and Reform, 136; Glass, Constructive Finance, 166-167.
- New York World, October 25, 1913, cited in Link, The New Freedom, 233-235.
- Kolko, Triumph of Conservatism, 240; Glass, Constructive Finance, 168-169.
- St. Louis Republic, November 20, 1913; Glass, Constructive Finance, 158.
- Ibid., 196, 220, 242-243.
- Link, The New Freedom, 237.
- Ibid., 237-238; West, Banking Reform, 132-135.
- New York Times, December 24, 1913, cited in Link, The New Freedom, 237-238.
- Coletta, Bryan, II, 138-139. The Nation, November 26, 1914, 622; the New York Times, May 30, 1915; and the New York Tribune, December 24, 1913, all anti-Bryan for decades, conceded that his support for the Federal Reserve Act had been crucially important.
- Glass, Constructive Finance, 235; Kolko, Triumph of Conservatism, 242.
A new Australian study shows how many reptiles become cat food in the country. Many people probably know that mice and birds are on the menu for most house cats.
But now Australian scientists have shown that feral and domestic cats in Australia eat about 2 million reptiles a day, based on the dietary habits of more than 10,000 cats around the country. Extrapolating from the data, the researchers estimated total kills across Australia's feral cat population, which numbers between two and six million animals.
“On average each feral cat kills 225 reptiles per year,”
“Some cats eat staggering numbers of reptiles. We found many examples of single cats bingeing on lizards, with a record of 40 individual lizards in a single cat stomach.”
– John Woinarski of Charles Darwin University, lead author of the study in the journal Wildlife Research
The cats snack on 258 reptile species and could push some to the brink of extinction, according to the researchers. This new knowledge is important for understanding the impact cats can have on native wildlife and endangered species.
Australia is home to more than 10 per cent of the world’s reptile species, and while Professor Woinarski said cat predation is unlikely to be the biggest threat these animals face, it has clearly been taking a significant toll.
J. C. Z. Woinarski, "How many reptiles are killed by cats in Australia?" Wildlife Research (2018). DOI: 10.1071/WR17160.
Cultural Competence in Crisis Intervention
Cultural competence is defined as a set of congruent behaviors, attitudes, and policies that come together in a system, agency, or among professionals and enable that system, agency, or those professionals to work effectively in cross–cultural situations (King, 2009).
Operationally defined, cultural competence is the integration and transformation of knowledge about individuals and groups of people into specific standards, policies, practices, and attitudes used in appropriate cultural settings to increase the quality of services, thereby producing better outcomes (King, 2009).
“Culture” in Cultural Competence
To understand cultural competence, it is important to grasp the full meaning of the word culture first. According to Chamberlain (2005), culture represents “the values, norms, and traditions that affect how individuals of a particular group perceive, think, interact, behave, and make judgments about their world.”
In the field of human services, cultural competence is important because it can provide a framework for connecting with others in a genuine way, as well as providing services to clientele in an authentic, respectful, and truthful manner that assists in establishing a foundation of trust.
This is particularly true of crisis management in human services.
The Role of Culture in "Crisis"
Given the immediate demands placed upon the professional during a crisis situation, factors of culture and cultural identity are often neglected. Yet the professional and the client in crisis often come from different cultures, differing along lines of age, gender, race, ethnicity, language, nationality, religion, occupation, income, education, and mental and physical abilities.
To this end, crisis intervention often requires an immediate development of trust between two people from different cultures for purposes of restoring the client’s coping mechanisms to a pre-crisis level of functioning. The quick development of rapport and trust between people of different cultures often requires the professional to communicate, both nonverbally and verbally, a demeanor that one is knowledgeable about and accepting of cultural differences (Dykeman, 2005).
Most professionals in the field of human services would agree that this is often easier said than done during a crisis situation. Yet many have witnessed the potent power of cultural competency in action when observing a colleague de-escalate a client whom no one else could assist. More often than not, cultural competence can account for many of the powerful connections we have witnessed between professionals and clients.
As human beings, it is important for us to feel validated and respected. This is particularly true in crisis situations. It is important to note that a professional's lack of cultural competency is not indicative of a lack of validation and respect for his or her client, but rather of a lack of the repertoire of skills that allows professionals to genuinely and effectively communicate this validation and respect to the clients they serve. When professionals in the field of human services lack cultural competency, limits may be placed on the level of connectedness and trust the client develops toward the professional serving them.
In an article written by James Cunningham (2003), it is suggested that culture, socialization, and race impact our thinking, feelings, and behavior during crisis intervention by playing an integral role in determining what a crisis is, and how, when, and if we intervene in a crisis situation. Cunningham (2003) explains that professionals who are not aware of these critical variables risk failures in cross-cultural interactions. These failures can be defined as a series of crisis situations that result in negative outcomes.
Issues of Diversity That Impact Crisis Situations
As professionals work with people in crisis, it is crucial that they be aware of their own issues, and when intervening in cross-cultural situations, it is important that they ask themselves silent questions such as "What am I feeling now?" (Cunningham, 2003). Additionally, when professionals intervene during a crisis situation, it is paramount that they develop an awareness of their own prejudices around cultural diversity.
This can present challenges for professionals. Self-awareness work around issues of cultural diversity requires a willingness to engage in deep reflection on how prejudice can manifest in one's work during crisis intervention. For example, if you believe your clients of Latino heritage only respond to Latino professionals, it is likely that you will carry this belief into a crisis situation, where it could impede your ability to serve the client effectively.
Cunningham defines this awareness and recognition as “recognizing the dynamics of difference.” He describes the dynamics of difference as a fruitful way to conceptualize the components of cultural competence (Cunningham, 2003).
The dynamics of difference encompass three principal objectives for professionals to embrace when working from a framework of cultural competence during crisis intervention. First, professionals must seek to increase their self-awareness by becoming comfortable with their own issues around diversity. Additionally, agencies must examine their past histories with diverse cultures and consider how culture may shape the way staff are hired and services are delivered.
Second, professionals must gain an accurate and working knowledge of their clients’ cultures thus acknowledging they have taken time to learn about them, while simultaneously communicating a willingness and openness to continually learn. Finally, professionals should be able to adapt their skills to different cultures, refraining from a one-size-fits-all approach to crisis intervention.
If professionals are willing to engage in the necessary work required in practicing from a culturally competent framework, such as developing self-awareness of their own cultural biases, they assist clients in feeling validated and respected during crisis situations. Providing services from a framework of cultural competency may also assist professionals in helping clients reach their pre-crisis state after intervention.
Cultural competence is a value that must be embraced by both professionals and the agencies they work within in order to effectively manifest at a level that will be meaningful to clients during crisis intervention. Effective crisis intervention practiced with cultural competence results in positive outcomes for all involved in the crisis intervention.
About the Authors
Dr. Nasiah Cirincione-Ulezi holds a Master’s degree in Special Education from the University of Illinois at Chicago and a Doctorate in Curriculum and Instruction from Loyola University of Chicago. Currently, Dr. Cirincione-Ulezi is an Assistant Professor of Special Education at Chicago State University.
Dr. Angelique Jackson holds a Master’s degree in Urban Education and Accelerated Brain Based Learning from Cambridge College and a Doctorate in Curriculum and Instruction from Loyola University of Chicago. Dr. Jackson is an Assistant Professor of Elementary Education at Chicago State University.
Cunningham, J. (2003). A “cool pose”: Cultural perspectives on conflict management. Reclaiming Children and Youth: The Journal of Strength-based Interventions, 88-92.
Dykeman, B. F. (2005, March). Cultural implications of crisis intervention. Journal of Instructional Psychology, 45-48.
King, M. A. (2009, January 21). How is cultural competence integrated in education. Retrieved from http://cecp.air.org/cultural/Q_integrated.htm#def
Lopes, A. S. (2001). The student national medical association cultural competency position statement. Atlanta, Georgia: Student National Medical Association.
Sullivan, M. A., Harris, E., Collado, C., & Chen, T. (2006). Noways tired: Perspectives of clinicians of color on culturally competent crisis intervention. Journal of Clinical Psychology, 987-999.
Originally published in the Journal of Safe Management of Disruptive and Assaultive Behavior, March 2010. © 2010 CPI. |
For an overview of English Language, the study design, what’s involved in the exam and more, take a look at our Ultimate Guide to English Language.
How To Effectively Build an Essay Evidence Bank
Essays in English Language require contemporary examples of language being used in Australia in order to justify your response to the topic. English Language essays are often said to be only as good as the examples used, so it follows that your essays will only be as strong and interesting as the evidence you find. It's a really good idea to start collecting examples, or evidence, in a "bank" from day one and throughout the year as you prepare for essay SACs and the final exam. Great examples not only lead your discussion but also make your essay more interesting, helping it stand out.
What Makes a Good Piece of Evidence?
Primarily, you want your evidence to comprise examples of how language is being used within a specific context in contemporary Australia. For instance, you might explore how leaders in Australia use overtly prestigious language, such as Victorian Premier Daniel Andrews' use of the formal vocative phrase 'my fellow Victorians' at a press briefing. You may not always be able to find a specific instance of a particular language feature being used, which is especially true for language that is not frequently used in public contexts, such as slang and ethnolects. In these cases it is okay to discuss more general examples; perhaps the ellipsis (omission of understood words) of auxiliary verbs in varieties such as Greek Australian English. What is important is that the majority of your examples are actual instances of language features being used, and not simply quotes of someone else's analysis of language, such as a linguist's commentary. Such quotes can be used in essays, but should complement your own discussion of your own examples.
Good examples must also be 'contemporary', as per the majority of essay prompts. As a general rule of thumb, ask yourself if the example you have is older than two years, and if so you may want to think of something newer. This does not mean you can never employ an older example. For instance, you may want to discuss language change in an essay, which sometimes necessitates discussing the historical context of certain language features.
How To Build an Example Bank
Many students find it highly beneficial to create a table or list of examples that they will practice and get comfortable with – you cannot bring this into the exam of course, but it is a very effective tool for preparation. In your table or list, consider including the following:
- Your example itself (this may not always be just a quote, sometimes you might have a phonetic transcription, for instance)
- The context that surrounds the example
- The metalanguage that you can use to analyse it
- The areas of the study design and essay topics it can cover
- A few short sentences of analysis
An example is given below, built from the 'my fellow Victorians' evidence mentioned earlier: the example itself (the formal vocative phrase 'my fellow Victorians'), its context (Victorian Premier Daniel Andrews addressing the public at a press briefing), the metalanguage you could apply (vocative, formal register, overt prestige), the areas of the study design it can cover (formal language and public discourse), and a short sentence of analysis (the formal, inclusive vocative builds solidarity with the audience while maintaining the authority expected of a leader).
These examples do not necessarily have to be something that you put a huge effort into going out and finding, so long as you make sure that you write down interesting language features that you come across in your day-to-day life. Keep an eye on places like the news, social media (including emojis and text speak), and any Australian television, radio, podcasts you watch or listen to. You will of course also discuss different examples of contemporary language use in class too, so make sure to add them as well.
Getting evidence is only step one of preparing for essay writing in English Language, but is the most important step for writing interesting and engaging essays. Keep in mind that this doesn’t have to be a solo activity; collaborating with classmates and group discussions, especially as you prepare for the exam can be a great way to make evidence collection fun. Be sure to check out our other blog, What Is an English Language Essay? for other tips and tricks to make your essays stand out. |
“Creationists Aren’t Scientists. They Don’t Get Published.”
A useful aspect of modern scientific research is the peer review process. Scientists write articles on the status of their research and submit them to peer-reviewed scientific journals to be published for others to read and use. If their research is accepted, the research is given attention by the public, which helps the scientist gain credibility and financial support for further research. Before an article can be accepted, however, a handful of independent scientists who are knowledgeable about the subject read and critique the article. If their recommendations are not implemented by the scientist(s), the paper will likely be rejected for publication by the journal. In order for scientists to get their research funded, they have to prove to their supporters that they are making progress and accomplishing the goals that prompted their supporters to give them grant money. There is a natural pressure, therefore, for scientists to interpret and report their results in a way that will gain attention and please their supporters. The peer review process helps keep them honest, since it ensures that other independent scientists who are knowledgeable about the research have “signed off” on the legitimacy of the work. Hence, if the scientific research of Creation scientists is not peer-reviewed, how can it be considered legitimate?
It is true that peer review can be very helpful in filtering out bad science and research, and making sure the public is aware of it. Some Creation science research is not legitimate, in the same way that some naturalistic science research is not legitimate—as has been highlighted in various science magazines in recent years.1 A person should always be careful not to be too quick to believe what he is told by others, regardless of who they are. One should only draw those conclusions that are warranted by the evidence, as the Law of Rationality says.2
That said, while peer review can be helpful, peer review does not make a scientific statement or scientific research right or wrong—nor does it make something “scientific” or “unscientific.” Many scientists through the ages would not have had the luxury of peer review (e.g., Isaac Newton, Leonardo da Vinci, Galileo Galilei, Archimedes, Aristotle, Hipparchus, and Hippocrates), considering the fact that the modern peer review process did not begin until 1731 and did not become mainstream until the 1900s.3 In some cases in the past, there would have been few, if any, other scientists a researcher was in contact with who could review his/her work—much less scientists who would be knowledgeable enough about a specific scientific discipline to be of much help in a review process. Should the research of such scientists be rejected outright? Of course not. One should weigh the evidence presented by those scientists and assess whether their conclusions are trustworthy.
Also keep in mind that, just because research is peer reviewed and accepted by a journal, it does not make the research accurate or its conclusions legitimate. The reviewers could be wrong or biased. Just because research is peer reviewed and rejected by a journal, it also does not make the research inaccurate or its conclusions illegitimate. Again, the reviewers could be wrong. Peer review does not establish truth. The reproducibility crisis in recent years is evidence of that fact. If research is legitimate, a separate scientist or lab should be able to follow the same steps carried out by the researchers and achieve the same results. It has been discovered in recent years, however, that a large amount of published research has not been able to be reproduced by others.4 And yet, the research survived the peer review process and was published.
All of that said, it simply is not the case that Creation scientists do not publish in peer-reviewed journals. Some choose not to do so, in the same way that some evolutionary scientists do not, while others do. Granted, most of the papers Creation scientists submit to secular peer-reviewed journals do not directly mention biblical Creation. Why? Because belief in biblical Creation presupposes the existence of the supernatural realm, and secular journals today are, by and large, overtly naturalistic. Obviously, research concerning a model whose explanations of the natural world presuppose the occurrence of supernatural phenomena in the past would not be accepted by journals that, by edict, demand that only natural explanations be used for the natural world. It would be nonsensical, therefore, for biblical Creation scientists even to try to submit papers on Creation to journals that, as a rule, will not accept such papers. Biblical Creation research, therefore, cannot be published in such journals, not because biblical Creation is false, but because such journals possess biased presuppositions that cause them to reject the supernatural proposition outright.
A fairer question would be: do Creation scientists get their research on Creation science published in actual peer-reviewed journals? In other words, are there legitimate peer-reviewed journals for biblical Creation scientists and their biblical Creation scientist peers? The answer to that question is unequivocally "yes." This very journal is peer-reviewed by biblical Creation apologists, since it is an apologetics journal. Other peer-reviewed journals published by biblical creationists include Answers Research Journal, the Journal of Creation, and Creation Research Society Quarterly, among others. Regardless of whether an idea or paper has been peer-reviewed, however, one is under obligation to God to consider the evidence: "But examine everything carefully; hold fast to that which is good" (1 Thessalonians 5:21, NASB).
1 E.g., Marcia McNutt (2014), “Reproducibility,” Science, 343:229, January; Monya Baker (2016), “Is There a Reproducibility Crisis?” Nature, 533:452-454, May; Todd Pittinsky (2015), “America’s Crisis of Faith in Science,” Science, 348:511-512, May; Donald S. Kornfield and Sandra Titus (2016), “Stop Ignoring Misconduct,” Nature, 537:29-30, September.
2 Lionel Ruby (1960), Logic: An Introduction (Chicago, IL: J.B. Lippincott), pp. 130-131.
3 Hadas Shema (2014), “The Birth of Modern Peer Review,” Scientific American On-line, April 19, https://blogs.scientificamerican.com/information-culture/the-birth-of-modern-peer-review/.
4 See Endnote 1.
From 1940 to 1945, in a secluded marshy forest in northwest Poland, Nazi scientists conducted top-secret ballistic tests for short and long range missiles. What was kept from public view for decades is now the Muzeum Wyrzutni Rakiet, or the Rocket Launcher Museum.
The museum is within the grounds of the Słowinski National Forest, just a few miles from the Baltic Sea. The site had originally been a World War II testing ground for the Germans, and in 1967 was turned into a national public park. But access to the missile site was still restricted, and occupying Soviet forces began testing and experimenting with meteorological rockets. Those trials lasted until 1974, when the grounds were abandoned.
Prototypes of the rockets on display ultimately led to the development of two missiles used during WWII: the R1, R2, and R3 models of the Rheintochter surface-to-air missile, and the V4 Rheinbote short-range ballistic rocket. Technology from the tests and launch pads was adapted to the V2 rocket, the Nazis' foundational guided ballistic missile design. After the war, when German engineers such as Wernher von Braun came to the US, their knowledge and designs were used by NASA in the development of the Saturn rocket that took the first men to the moon in the Apollo 11 mission.
In a way, these launch tests were the great-grandparents of the successful Saturn program, and the outdoor museum doesn’t let you forget it. They have a shell from a V2 rocket and several information panels explaining the progression of the technology, from the Nazis in the 1940s to the eventual NASA launch on Cape Canaveral in 1969.
Know Before You Go
This site is tucked away inside the Słowinski National Forest and rarely advertised. The entrance is halfway along the main trail to the sand dunes, about 2.5 km (a mile and a half) from the other end of the park.
From the entrance of the park, where you can leave your car, there is a main trail. Walking takes between 30 and 45 minutes along a beautiful trail through the marsh. You can pay for an electric golf cart to ferry you down the path, or you can arrive by actual ferry. By far the easiest option is to rent a bike at the entrance, or from any of the vendors in the area, and take the ride yourself. A small fee gets you in, about 7.50 złoty (about $2 US), cash only.
Once there, in addition to the V2 rocket shell there is a wide variety other rockets and missiles on display which were catapulted into the heavens from this location, German and Soviet alike. |
Allergy or hypersensitivity is a condition in which an individual overreacts to a substance that would normally be considered harmless, for example body spray, dust mites, or pollen. These substances are known as "allergens," and they trigger the allergic reaction in hypersensitive persons. The hypersensitive reaction can manifest in different symptoms such as hay fever, urticarial rash, asthma, atopic dermatitis, and anaphylaxis. Read on for more about the symptoms and causes of allergy and about homeopathy medicine for allergy.
An allergen can be any ordinary substance that is harmless to most individuals. In a hypersensitive person, exposure to an allergen stimulates the circulating antibody Immunoglobulin E (IgE), part of the body's defense mechanism. This, in turn, triggers white blood cells (basophils) to release a chemical called histamine, which is responsible for the allergic symptoms. Histamine causes blood vessels to dilate, leading to local swelling and heat along with an itching sensation.
What stimulates an allergic reaction in some people and not in others is a matter of considerable debate. It was earlier believed that allergies run in families, making allergy a genetic disorder, but it is now believed that both hereditary and environmental factors play a role. The allergens could be almost anything: food items like peanuts, almonds, and shellfish; synthetic clothes; soap; watch straps; artificial jewelry; tobacco smoke; pollen; dust mites; aerosol sprays; and so on.
There are many different forms of allergy, depending on the location and trigger of the reaction. Common types of allergic reactions are:
The symptoms of an allergic reaction vary depending on the site and severity of the allergic reactions. Some of the common symptoms are:
Though most allergic symptoms are mild and transient in nature, severe reactions can be life-threatening. Allergies also affect a person's lifestyle and morale.
The principle behind homeopathy treatment is to cure the person, not the disease. It is a highly individualized approach based on a person's mental, physical, and emotional symptoms, modalities, and trigger factors. This not only helps to shorten an attack and reduce its intensity, but also reduces the frequency of attacks and the dependency on anti-allergic drugs; homeopathy medicine for allergy gives long-lasting relief. It helps asthmatic persons lead a normal and healthy lifestyle without any side effects. At Dr.Care Homeopathy, our team of well-experienced doctors and dynamic medicines has helped many in overcoming their allergies. For more about homeopathy medicine for allergy, contact us.
How it all began: optical discs and their history
Optical compact discs appeared in 1982; the prototype saw the light of day even earlier, in 1979. Compact discs were initially developed as a replacement for vinyl records, as a better and more reliable medium. Laser discs are generally credited to the joint work of teams from two technology corporations: Japan's Sony and the Netherlands' Philips.
At the same time, the base technology of "cold lasers," which made laser discs possible, was developed by the Soviet scientists Alexander Prokhorov and Nikolai Basov, who were awarded the Nobel Prize for their invention. The technology continued to evolve, and in the 1970s Philips developed a method for recording optical discs that gave rise to the CD. First, the company's engineers created ALP (audio long play) as an alternative to vinyl records.
ALP discs were approximately 30 centimeters in diameter. A little later, the engineers reduced the diameter, and the playback time dropped to 1 hour. Philips first demonstrated laser discs and playback devices for them in 1979. After that, the company began searching for a partner for further work on the project: the developers saw the technology as international, and it would have been difficult to develop it to the necessary level and popularize it alone.
The beginning of everything
Management decided to try to establish contacts with technology companies from Japan, which at the time was at the forefront of high-end technology. Philips delegates traveled to the country and managed to meet with the president of Sony, who became interested in the technology.
Almost immediately, a joint Philips-Sony engineering team was formed, and it developed the first specifications of the technology. Sony's vice president insisted on increasing the disc's capacity: he wanted a compact disc to accommodate Beethoven's Ninth Symphony, so the playing time was extended from 1 hour to 74 minutes (there is also an opinion that this is just a beautiful marketing story). The amount of data that fit on such a disc was 640 MB. The engineers also fixed the sound-quality parameters: audio was to be sampled at 44.1 kHz per stereo channel with a resolution of 16 bits per sample. This is how the Red Book standard appeared.
The name of the new technology did not appear out of nowhere: it was chosen from several options, including Minirack, Mini Disc, and Compact Rack. In the end, the developers combined two of the names into the hybrid Compact Disc. Not least, this name was chosen because of the growing popularity of audio cassettes (Compact Cassette technology).
Philips and Sony also played a crucial role in developing the first digital CD specification, called the Yellow Book or CD-ROM. The new specification made it possible to store not only audio but also text and graphics on discs. The disc type was determined automatically when the header was read. The problem was that a CD conforming to the Yellow Book standard could only be used with certain types of drives, which were not universal.
On August 17, 1982, the first CD was pressed at the Philips factory in Langenhagen, Germany; recorded on it was ABBA's album The Visitors. It is worth noting that the lacquer coating of the first discs was not of very high quality, so buyers often damaged them. Over time, the quality of the discs improved. For the first few years they were used exclusively in hi-fi equipment, as a replacement for vinyl records and cassettes.
Starting in 2000, 700 MB discs, which could hold up to 80 minutes of audio, began to appear on the market, and they completely displaced 650 MB discs. There were also 800 MB media, but they were not compatible with all drives, so such discs were never particularly popular. The extra storage space was gained by reducing the distance between tracks: for 650 MB discs the track pitch is 1.7 microns, while for 800 MB discs it is reduced to 1.5 microns. The scanning velocity also differs: 1.41 m/s for the former and 1.39 m/s for the latter.
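As a back-of-the-envelope check on these figures, playing time follows from the geometry: the length of the data spiral is roughly the program area divided by the track pitch, and dividing that by the scanning velocity gives seconds of audio. A minimal sketch in Python, assuming the standard program-area radii of a 120 mm disc (25 mm inner, 58 mm outer), which are not quoted in this article:

```python
import math

# Assumed program-area radii for a standard 120 mm CD (not from the article).
R_INNER = 0.025  # meters
R_OUTER = 0.058  # meters

def playing_time(track_pitch_m, velocity_m_s):
    """Approximate the unrolled spiral length and the resulting playing time."""
    program_area = math.pi * (R_OUTER**2 - R_INNER**2)  # m^2
    spiral_length = program_area / track_pitch_m        # meters of track
    return spiral_length / velocity_m_s                 # seconds

# Track pitch and scanning velocity figures quoted above.
for label, pitch, velocity in [("650 MB", 1.7e-6, 1.41), ("800 MB", 1.5e-6, 1.39)]:
    seconds = playing_time(pitch, velocity)
    # CD audio consumes 44,100 samples/s x 2 channels x 2 bytes = 176,400 B/s.
    megabytes = seconds * 176_400 / 1e6
    print(f"{label}: ~{seconds / 60:.0f} min, ~{megabytes:.0f} MB of audio")
```

The results land in the right ballpark (roughly an hour and 70 minutes of audio, respectively); with the canonical Red Book values of a 1.6 μm pitch and 1.2 m/s, the same arithmetic yields the famous 74 minutes. Exact marketed capacities also depend on sector formatting and error-correction overhead.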
How it works
The disc consists of several layers. The substrate is polycarbonate, 1.2 mm thick and 120 mm in diameter. On the substrate lies a metal layer (it can be gold or silver, but is most often aluminum). The metal layer is in turn protected with a varnish, onto which the label graphics are printed. The substrate reliably protects the metal layer, so only very deep scratches interfere with reading. The hole in the center of the disc is 15 mm in diameter.
The data storage format for these discs is the Red Book (mentioned above). Reading errors are corrected using Reed-Solomon codes, so light scratches do not reduce the readability of the disc.
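A real CD uses the Cross-Interleaved Reed-Solomon Code (CIRC), which spreads two Reed-Solomon codes across neighboring frames so that a scratch damages only small parts of many codewords. The minimal sketch below skips the interleaving and shows just the core idea, that parity symbols allow corrupted bytes to be repaired; it assumes the third-party reedsolo Python package.

```python
# pip install reedsolo  (third-party Reed-Solomon library, assumed available)
from reedsolo import RSCodec

rs = RSCodec(10)  # 10 parity bytes per codeword -> corrects up to 5 bad bytes

frame = b"one frame of audio data read from the spiral track"
encoded = bytearray(rs.encode(frame))

# Simulate a light scratch: flip a few bytes of the encoded stream.
encoded[4] ^= 0xFF
encoded[20] ^= 0xFF

decoded = rs.decode(bytes(encoded))[0]  # recent reedsolo returns (data, data+ecc, errata)
assert bytes(decoded) == frame          # the original frame is recovered intact
```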
The data on the disc is recorded as a spiral track of so-called pits (depressions) pressed into the polycarbonate base. The depth of each pit is about 100 nm, the width 500 nm, and the length from 850 nm to 3.5 μm. Pits scatter or absorb light, while the flat areas between them (the "lands") reflect it. A recorded disc is thus an excellent example of a reflective diffraction grating.
The disc is read with a laser beam of 780 nm wavelength emitted by a semiconductor laser. Reading works by registering changes in the intensity of the reflected light. The laser beam is focused on the information layer into a light spot about 1.2 μm in diameter. The maximum signal is recorded between pits; when the spot falls on a pit, a lower light intensity is registered. These changes in intensity are converted into an electrical signal that the equipment processes.
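To make that last step concrete: in the channel code used on CDs, each pit edge, that is, each transition between high and low reflectivity, represents a binary 1, and the spacing between edges encodes runs of 0s. The toy decoder below, a deliberate simplification that ignores EFM modulation and clock recovery, thresholds a series of photodetector samples and emits a 1 at every transition.

```python
def decode_transitions(samples, threshold=0.5):
    """Toy CD channel decoder: a reflectivity transition (pit edge) = 1, none = 0."""
    levels = [1 if s > threshold else 0 for s in samples]  # land = 1, pit = 0
    return [1 if cur != prev else 0 for prev, cur in zip(levels, levels[1:])]

# Simulated photodetector readings: high = land (reflective), low = pit.
readings = [0.9, 0.9, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.1]
print(decode_transitions(readings))  # -> [0, 1, 0, 0, 1, 0, 0, 1]
```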
How to create a disc
- The first step is to prepare the data for mass production (premastering);
- Photolithography, the second stage, is the process of creating the disc stamper. First, a glass master disc is created and coated with a layer of photoresist, a material that changes its physical and chemical properties under the action of light; information is then recorded onto it;
- Data recording is performed with a laser beam. Where the laser power is increased (where a pit is needed), the chemical bonds of the photoresist molecules are broken and the material hardens;
- The photoresist is etched (by various means, from plasma to acid), and the areas not exposed to the laser are removed from the matrix;
- The disk is placed in a galvanic bath, where a nickel layer is deposited on its surface;
- The discs are molded by injection molding, with the stamper made from the original glass disc serving as the template;
- Next, metal is sputtered onto the information layer;
- A protective varnish is applied to the outside, onto which the graphic image is printed.
What about CD-RW?
CD-RW is a type of CD that appeared in 1997. The standard was initially called CD-Erasable (CD-E, erasable CD).
It was a real breakthrough in recording and storing information: an inexpensive, capacious, rewritable medium had been the dream of thousands of engineers and users. A CD-RW is similar in structure and operation to a regular CD, but its recording layer is different: a specialized chalcogenide alloy, most commonly silver-indium-antimony-tellurium (Ag-In-Sb-Te). When heated above its melting point, the alloy changes from a crystalline to an amorphous state.
The phase transition is reversible, which is the basis of the rewriting process. The active layer is only about 0.1 μm thick, so the laser can easily act on the material. During recording, the areas struck by the laser beam melt; the heat then diffuses into the substrate, and the melt solidifies in the amorphous state. In the amorphous segments, properties such as the dielectric constant and the reflection coefficient change, and with them the intensity of the reflected light; these changes carry the recorded information. Reading is performed with a lower-power laser that cannot affect the active layer. During recording, the active layer is heated to temperatures of 200 degrees Celsius and above.
Repeated use of CD-RW leads to mechanical fatigue of the working layer. Therefore, the engineers who developed the technology used substances with a low fatigue rate. A CD-RW can sustain about a thousand rewriting cycles.
DVD - even more capacity!
The first DVDs appeared in Japan in 1996, in response to the demands of users and businesses who needed ever more capacious media. Initially, high-capacity discs were developed by several companies at once, and two independent lines of development emerged: the Multimedia Compact Disc (Philips and Sony) and the Super Disc (eight large corporations, including Toshiba and Time Warner). The two camps later merged under the influence of IBM, which convinced the partners not to repeat the events of the "format war," when the "Video Home System" (VHS) and "Betamax" videotape standards battled for priority.
The technology was announced in September 1995, and in the same year, the developers published specifications. The first DVD burner was released in 1997.
Recording capacity was increased, while keeping the same physical dimensions, by using a red laser with a wavelength of 650 nm. The track pitch is half that of a CD, at 0.74 μm.
Blu-ray - the most modern optical media
Another type of optical media, with a much higher data recording density than a CD or DVD. The standard was developed by the Blu-ray Disc Association (BDA), an international consortium. The first prototype appeared in October 2000.
The technology involves the use of a short-wavelength laser (405 nm), hence the name. The letter "e" was dropped because the ordinary English expression "blue ray" could not be registered as a trademark. The blue (blue-violet) laser made it possible to narrow the track to 0.32 μm, increasing the data recording density. Media reading speed increased to 432 Mbps.
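As a rough illustration of how a narrower track raises data density, the sketch below compares the track pitches quoted above. Treating density as inversely proportional to track pitch is a deliberately crude proxy that ignores pit length, lens aperture, and coding efficiency; the CD pitch of 1.6 μm is implied by the statement that a DVD's pitch is about half of it.

```python
# Relative track density from track pitch alone (a deliberately crude proxy).
# DVD (0.74 um) and Blu-ray (0.32 um) pitches are quoted in the article;
# the CD value (1.6 um) follows from DVD's pitch being about half the CD's.
track_pitch_um = {"CD": 1.6, "DVD": 0.74, "Blu-ray": 0.32}

for name, pitch in track_pitch_um.items():
    tracks_per_mm = 1000 / pitch           # spiral turns that fit in 1 mm
    rel = track_pitch_um["CD"] / pitch     # track-density gain relative to CD
    print(f"{name:8s} {tracks_per_mm:7.0f} tracks/mm  ({rel:.1f}x CD)")

# Actual capacity gains (about 700 MB -> 4.7 GB -> 25 GB per layer) are
# larger still: shorter pits and more efficient coding add further factors.
```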
UDF - Universal Disk Format
UDF is a file system format specification that is independent of the OS. It is designed for storing files on optical media - CD, DVD, and Blu-ray alike. UDF does not impose the 2 GB and 4 GB limits on recordable files found in older file systems, so this format is ideal for high-capacity media such as DVD and Blu-ray.
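The 2 GB and 4 GB figures correspond to signed and unsigned 32-bit file-size fields in older file systems (FAT32, for instance, caps files just under 4 GiB). A quick calculation shows why that matters for disc-sized files; the 8.5 GB dual-layer DVD figure below is a typical capacity added for illustration, not a number from this article.

```python
# Why 32-bit file-size fields are too small for DVD-era files.
signed_32_max = 2**31 - 1      # ~2 GiB (signed 32-bit size field)
unsigned_32_max = 2**32 - 1    # ~4 GiB (unsigned field, e.g. FAT32's cap)
dvd_video_image = 8.5 * 10**9  # typical dual-layer DVD image, in bytes

for name, limit in [("signed 32-bit", signed_32_max),
                    ("unsigned 32-bit", unsigned_32_max)]:
    print(f"{name}: max {limit / 2**30:.1f} GiB, "
          f"holds the DVD image: {dvd_video_image <= limit}")
# Both lines print False: a single file spanning the disc needs UDF.
```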
Optical discs and the Internet
Technology companies continue to improve optical discs. In 2016, Sony and Panasonic managed to increase the capacity of optical media to 3.3 TB and, according to Sony representatives, the discs retain their data for up to 100 years.
Nevertheless, all types of optical discs are gradually losing popularity - with the development of the Internet, users have less need to accumulate data on discs. Information can be stored in the cloud, which is much more convenient (how much safer it is is another question). Compact discs are not nearly as popular as they were several years ago, but unlike audio cassettes they are probably not threatened with complete oblivion - they will still be used to create archives of information important to business.
If terabyte optical discs go into mass production, their use will be limited - perhaps they will be used to distribute 4K films and modern games with sets of various extras. But most actively they will be used to create backups. And if Sony is telling the truth about century-long preservation of recorded data, business will make heavy use of the new technology. |
What are cookies?
Cookies are small text files that are stored in a user’s browser. Most cookies contain a unique identifier called a cookie ID: a string of characters that websites and servers associate with the browser that stores the cookie. This allows websites and servers to distinguish that browser from other browsers, which store different cookies, and to recognize each browser by its unique cookie ID.
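The sketch below shows, in generic terms, how a server might mint such an identifier and hand it to the browser as a cookie, using only the Python standard library. The cookie name and attributes are illustrative choices, not the format any particular ad server uses.

```python
# Minting a browser cookie with a unique ID, standard library only.
import uuid
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["id"] = uuid.uuid4().hex               # the unique "cookie ID" string
cookie["id"]["path"] = "/"                    # send it back on every request
cookie["id"]["max-age"] = 60 * 60 * 24 * 365  # keep it for about a year

# The header a server would attach to its HTTP response:
print(cookie.output())
# e.g.  Set-Cookie: id=3f2a9...; Max-Age=31536000; Path=/
```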
Cookies are widely used by websites and servers to provide many of the basic services available online. If you shop on a website, a cookie allows the website to remember the items you’ve added to your virtual shopping cart. If you set preferences on a website, a cookie allows the website to remember them for your next visit. If you sign in to a website, it might use a cookie to recognize your browser later, so that you do not need to enter your credentials to log in again. Cookies also allow websites to collect data about user activity, such as the number of unique visitors a page receives each month. All these uses depend on the information stored in cookies.
The cookie ID in each DoubleClick cookie is essential to these uses. For example, DoubleClick uses cookie IDs to keep a log of which ads have been displayed to which browsers. When it’s time to serve an ad to a browser, DoubleClick can use the browser’s cookie ID to check which DoubleClick ads that particular browser has already been shown. In this way, DoubleClick avoids showing users ads they have already seen. Similarly, cookie IDs allow DoubleClick to log conversions related to ad requests, such as when a user views a DoubleClick ad and later uses the same browser to visit the advertiser’s website and make a purchase.
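Here is a minimal sketch of the "has this browser already seen this ad?" bookkeeping that the paragraph describes. The in-memory dictionary, function name, and cap behavior are assumptions for illustration; a real ad server would use a distributed store.

```python
# Frequency-capping sketch: log ads per cookie ID, skip ones already shown.
from __future__ import annotations

shown_ads: dict[str, set[str]] = {}   # cookie ID -> ads already displayed

def pick_ad(cookie_id: str, candidates: list[str]) -> str | None:
    """Return the first candidate this browser has not yet seen."""
    seen = shown_ads.setdefault(cookie_id, set())
    for ad in candidates:
        if ad not in seen:
            seen.add(ad)          # log the impression against this cookie ID
            return ad
    return None                   # every candidate was already displayed

print(pick_ad("cookie-123", ["ad-A", "ad-B"]))   # ad-A
print(pick_ad("cookie-123", ["ad-A", "ad-B"]))   # ad-B
print(pick_ad("cookie-123", ["ad-A", "ad-B"]))   # None
```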
DoubleClick cookies contain no personally identifiable information. With the user’s permission, information associated with the DoubleClick cookie can be added to that user’s Google Account.
If a user who is not logged in opts out of personalized ads in the Settings section, the unique DoubleClick cookie ID in the user’s browser is overwritten with the phrase “OPT_OUT” and is no longer used to personalize ads. Since there is no longer a unique cookie ID, the opt-out cookie cannot be associated with a specific browser.
If a user who is logged in opts out of personalized ads in the Settings section, the unique DoubleClick cookie ID in the user’s browser is not overwritten. However, the DoubleClick cookie ID is not used to personalize ads while the user is logged in.
When are cookies sent to a browser by DoubleClick?
DoubleClick sends a cookie to the browser after an impression, click, or other activity that results in a call to the DoubleClick servers. If the browser accepts cookies, the cookie is then stored in the browser.
Most commonly, DoubleClick sends a cookie to the browser when a user visits a page that shows DoubleClick ads. Pages with DoubleClick ads include ad tags that instruct the browser to request ad content from the DoubleClick ad server. When the server delivers the ad content, it also sends a cookie. The page does not even have to display DoubleClick ads for this to happen; it only needs to include DoubleClick ad tags, which might load a click tracker or an impression pixel instead.
First-party and third-party cookies
Cookies are classified as first-party or third-party according to their association with the domain of the site the user is visiting. Keep in mind that the name and contents of the cookie itself do not change; it is the domain that set the cookie, relative to the page the browser is on, that determines whether the cookie is first-party or third-party.
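In other words, a first-party cookie is one whose domain matches the site in the address bar; anything else is third-party. Here is a simplified classifier; real browsers compare registrable domains using the Public Suffix List, which this sketch approximates with a plain suffix match.

```python
# Simplified first-party vs third-party check. Real browsers compare
# registrable domains using the Public Suffix List; this plain suffix
# match is only an approximation for illustration.
def classify(cookie_domain: str, page_domain: str) -> str:
    cookie_domain = cookie_domain.lstrip(".")  # ".example.com" -> "example.com"
    same_site = (page_domain == cookie_domain
                 or page_domain.endswith("." + cookie_domain))
    return "first-party" if same_site else "third-party"

print(classify(".example.com", "www.example.com"))     # first-party
print(classify("doubleclick.net", "www.example.com"))  # third-party
```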
|
- Autodesk's BrickBot, which is programmed with 3D models to erect Lego structures, demonstrates how robots can use artificial intelligence to autonomously carry out complex assembly tasks, reported Fast Company.
- The development team behind BrickBot hopes the technology can be built out and scaled up for factory and construction applications, according to the Autodesk blog. By fitting industrial robotic arms with cameras and sensors and creating neural networks that enable the robots to process and respond to corresponding data, the system can infer what's happening in its environment and then adapt to it in order to complete a task.
- Software architect Yotto Koga said the next steps involve working with manufacturing and construction industry customers to see how BrickBot can be applied in practice.
Although BrickBot is relatively uncharted technology, robots in construction aren't a new concept. Some companies, such as Gramazio Kohler Research, develop robots that perform construction tasks like stacking materials in nonstandard arrangements and working with timber construction and metal meshes. Rabren General Contractors used a bricklaying robot, capable of placing 3,000 bricks per day, to lay the foundation for a performing arts center at Auburn University last month.
Other robotic innovations for construction include Tybot, which ties rebar on bridges and can save hours of labor, as well as wear and tear on a person, and RoboMiner, which explores and mines areas considered unsafe for human exploration.
Robots may also help ease Japan's labor shortage. Shimizu Corp. in May announced it was developing several models of autonomous robots it plans to use on a high-rise construction site in Osaka this year. The robots will carry material, work on floors and ceilings, and weld steel columns. Stateside, technology may ease the labor shortage by helping attract and retain younger workers who expect technology to be part of their jobs.
Technology and the corresponding automation that comes with it isn't without consequences, though. A report from the Midwest Economic Policy Institute issued earlier this year estimated that automation could displace up to 2.7 million construction workers by 2057, with operating engineers, cement masons and painters being especially at risk of job displacement. The study’s authors hope that, in light of the changing work environment, state policymakers will address workforce redevelopment and re-skill construction employees as their trades become more automated. |
Nutrition experts advise diversifying your dietary intake with a broader range of fruits, vegetables, whole grains and protein sources when compared to childhood diets.
Re-evaluating your dietary way of life starts with examining your current eating habits. While on a path to a more conscious awareness of your dietary choices, it’s worthwhile to examine why your old dietary habits may no longer benefit you.
Assess your diet and consider these factors
Age. What your body needed during childhood may be needed in lesser amounts in adulthood, e.g., milk protein. As we age, our metabolism slows. What we ate as children may not have made us corpulent kids but as we age, these same foods, in these same amounts, will likely make us ample adults.
Different life stages require different nutritional needs. Children, adults and seniors have varying requirements for nutrients like vitamins, minerals and protein. Adapting your diet to your age can help ensure you meet these specific needs.
Illness and disease. Certain medical conditions may require dietary adjustments, such as monitoring carbohydrate intake for diabetes or adopting a low-sodium diet for hypertension. Adapting your diet to manage or prevent specific illnesses can promote good health. If you have a chronic condition, your previous diet may worsen it. Even if you’re healthy, consider modifying current habits to prevent potential issues. Just because illness has not yet been diagnosed, doesn’t mean it’s not in the making.
Food sensitivities and reactions. People can develop food allergies and sensitivities later in life. This is noteworthy because, unlike allergies, sensitivities often involve a less severe immune response and can manifest as digestive issues, skin rashes or other symptoms. Adult-onset sensitivities arise for various reasons; the factors below can all contribute:
- Changes in digestive enzymes: As we age, the production of certain digestive enzymes can decrease, making it more challenging for the body to break down certain foods.
- Altered gut microbiota: The composition of the gut microbiota can change over time, impacting how the body processes different foods. Imbalances in gut bacteria may contribute to food sensitivities.
- Hormonal changes: Hormonal fluctuations, especially in women during menopause, can influence digestion and impact how the body reacts to certain foods.
- Weakening immune system: The immune system tends to weaken with age, and this can affect how the body responds to various substances, potentially leading to sensitivities.
- Medication use: Some medications may affect the digestive system or interact with certain foods, leading to sensitivities.
- Environmental factors: Exposure to different environmental factors, pollutants and chemicals over time may contribute to the development of food sensitivities.
- Stress: Chronic stress can affect the digestive system and may contribute to the development of sensitivities to certain foods.
New scientific findings. In the ever-evolving field of nutritional science, new information emerges every day. For example, we know now that we should avoid foods containing unhealthy trans fatty acids. At one time, the USDA encouraged Americans’ consumption of margarine, which contained these unhealthy fats.
Lifestyle. Sedentary? Active? Somewhere in between?
Factors such as these help determine the kinds of foods you need to promote health and wellness. A health-promoting diet excludes regular consumption of high fat, artery-clogging foods such as pizza, hamburgers, bacon, French fries and hot dogs. Avoiding or minimizing highly refined flour products such as cakes, cookies, donuts and crackers keeps you on the path to dietary wellness. And avoiding or minimizing sugary, high-fat foods like ice cream helps keep the pounds off. Most who eat poor diets probably realize their diets should include more fresh whole foods, including fruits and vegetables.
Decide which dietary behaviors you should abandon
Your awareness of a problem, an imbalance, a need may be signaling that it’s time for a round of growth and change in your life. Re-examine your eating habits every now and then and decide if those same habits still serve you. If they don’t, then it’s time to upgrade your food choices and form new habits. Start with minor changes. Bigger changes can come later.
If you found value in this article, please share it. |
Welcome to the wonderful world of histology. Histology is how we study the body at the cellular level. In the world of medicine, we have histopathology, which is the diagnosis and study of disease through the examination of tissues under a microscope.
The histopathologist is a specialised type of doctor with a very important role: determining whether tissues and cells are healthy or abnormal. They reach a diagnosis by examining a small piece of tissue collected through a biopsy. Once they recognise an abnormality, they must determine what it is and how severe it is. For example, if it is cancer, they must identify the type of cancer and what stage of cancer the patient has.
Around 20 million histopathology slides are examined in the UK each year! That’s a lot of biopsies and looking into a microscope!
Our friends down in the lab play a very important role in the care of our living patients, as well as those that are deceased. They examine tissues collected in an autopsy to help in deciding a cause of death. They would have played an important role in the autopsy of Anton Orlov! |
Also known as Purple Shamrock, the Oxalis triangularis is a ground growing, bulbous, and perennial plant native to Brazil and other parts of South America. The plant has an average height of 12 inches. It’s a member of the Oxalidaceae family, known for having divided leaves that close up at night or in low light.
The Oxalis triangularis is also called false shamrock and love plant, all on account of the color and shape of its leaves, which are deep purple with triangular leaflets.
This plant will do well either as an indoor or outdoor plant, provided the conditions are favorable. As an outdoor plant, the purple shamrock can be an invasive weed. Still, the roots and leaves of the false shamrock are edible, and the purple foliage often serves as a dressing for salads.
The purple shamrock is famous as a houseplant, due to the color and shape of its foliage. These features are unmistakable in any space you find the flower. The following is a guide on how to grow the purple shamrock.
Caring for Oxalis Triangularis
Soil, water, light, and dormancy periods are major areas of interest in this plant’s care.
Oxalis triangularis thrives in well-drained and aerated soils. Heavy soils that retain water will cause the bulbs and roots of the plant to rot. Standard potting mixtures will work well for the plant, but ensure that it’s not one that retains water.
The purple shamrock doesn’t like too much water, as it can survive for extended periods without it, especially in cold weather. The soil should consistently remain moist but not so wet that you can see water on the surface of the soil.
There are ways to tell how often to water houseplants. One such way is to push a finger into the soil and only water if the finger comes out dry. In fall and winter, when the plant experiences no active growth, you may water the plant once every other week.
Oxalis triangularis is not a low light house plant; as such, position the plant such that it receives ample light. Insufficient light will hamper the growth of the plant and may cause it to appear leggy. A south-facing window offers the best lighting, while north-facing windows provide next to no sunlight. In the absence of a south-facing window, east or west-facing windows may suffice.
The purple leaves of the plant are light-sensitive, and in a process referred to as photonasty, they turn towards light sources and will open to their fullest with bright light. Partially closed leaves are an indication that the plant is not getting enough light. You can augment poor lighting by taking the plant outside for a few hours. Too much sun can burn the foliage, so limit sun exposure.
Unlike many house plants, such as the Hindu rope plant, purple shamrocks do not demand much humidity. Average household humidity is enough for this plant. If the air in the room is excessively dry as a result of central heating or climate control, you may use a humidifier or pebble and water tray to improve humidity.
As with watering, cut back on fertilizing the plant in the fall and winter. From spring through summer, when the plant is in active growth, apply a balanced, diluted liquid fertilizer once every two weeks.
With continued fertilizer application, salts may build up in the potting soil, resulting in burned leaves. To get rid of the excess salt, flush the soil by allowing water to flow through it and out of the drain holes.
The oxalis triangularis is famous for its purple foliage and not its flowers. Still, the plant produces white, trumpet-like flowers with a tinge of purple. The flowers will remain on the plant for a few weeks, growing in clusters slightly above the purple leaves.
The flowers bloom mostly in the summer, as there is more light and warmth available. Still, they can bloom anytime in the right light and temperature. It is best to trim off the flowers as they start to die, else they will dry out before falling off and make for a messy looking plant.
The ideal temperature for the purple shamrock is between 65 to 75 degrees Fahrenheit. Under a higher temperature than this, the foliage will begin to wilt. In the cold season, position the plant away from artificial heat sources. Fifty degrees Fahrenheit is the allowable minimum temperature for most house plants, including Oxalis triangularis.
The purple leaves will start to drop after the growth period, as the plant enters dormancy. At this point, you may cut off whatever vegetation remains. Other than this, the plant does not require much pruning.
Typically, after summer and the months of active growth, the leaves on the Oxalis may become dry and start to turn brown. These changes are not signs of illness, but indications that the plant is entering its dormancy phase.
Temperatures exceeding 80 degrees Fahrenheit can also trigger dormancy. In outdoor purple shamrock plants, dormancy is more easily predicted than for indoor plants, as the plant may experience several periods of growth before becoming dormant.
During dormancy, water the plant less frequently and allow the browning leaves to fall off or dry out before trimming them. Relocate the plant to a cold and dark place and give it two to four weeks of rest.
After four weeks, move the plant to its original position or an area with sufficient light. Water the plant and apply fertilizer as you would typically do to encourage regrowth.
Although the purple shamrock is edible and people have consumed the plant for years, it can be poisonous to animals. As such, keep the plant away from pets.
The Oxalis triangularis is, to a large extent, free from any severe diseases. Root rot, the result of overwatering or soils with poor drainage, can cause severe problems for the plant.
Signs of root rot are black, softened bulbs in the soil and limpness of the plant. When decay occurs, it is best to dispose of the plant, as the disease will eventually kill it. If the soil caused the problem, change it before planting a new bulb, ensure the container has drainage holes at the bottom, and water less.
Other than root rot, fungal diseases, such as leaf spot/rust and powdery mildew, arising from excessive amounts of humidity and poor light may affect the plant.
Leaf rust comes up as yellow streaks appearing on the leaves. Leaf spot manifests as red, tan, brown, or black spots on the leaves. The foliage may eventually turn yellow and drop.
Leaf spot is frequently an aesthetic problem and can be handled by cutting off the affected leaves and relocating the plant to a warmer and brighter place. If the disease persists, use a fungicide as per the instructions on the label.
Powdery mildew appears as white patches over the entire plant. The leaves may consequently turn yellow and fall off, as the mildew leeches nutrients from the stems and impairs the plant's ability to photosynthesize.
Similar to leaf spot, powdery mildew is a fungal disease, the primary culprit being humidity. Remediation is the same as with leaf rust/spot.
Spider mites and mealybugs are the primary pests of the purple shamrock. In extreme cases and if left unattended, these pests will kill the plant and likely spread to other house plants.
Spider mites spin delicate webs over the Oxalis while sucking its sap, and mealybugs resemble white, cotton-like material but are actually tiny insects that congregate.
While planting purple shamrock bulbs, ensure that whatever container you use has holes at the bottom to allow easy drainage. Also, make sure the soil won’t retain water to the extent of being soggy. Follow the steps below to pot Oxalis triangularis.
- Fill three-quarters of the pot with suitable potting soil and water it.
- Depending on the size of the pot, place two to three bulbs on the soil.
- Cover the bulbs with one and a half inches of soil.
- Water the soil again until water flows out of the holes in the container.
- Position the pot in a place with sufficient bright light and expect to see sprouts in two to three weeks.
Purple shamrocks grow to about one foot all around. As such, repotting is only necessary to change the soil or grow new plants from the offsets that have developed. It’s best to repot the plant in winter when the plant is dormant. The steps for repotting are the same as for potting.
By fall, the bulbs will have multiplied in the soil. Propagate the Oxalis triangularis by repotting the offsets to create new plants. It's always better to propagate the plant while it's in its dormant state. To grow fresh purple shamrocks, take the plant out of the pot, carefully separate the bulbs, and plant them in new pots.
Allow the leaves to die off entirely before you attempt to propagate, and do not cut off the dying leaves while they still have some color. If you do, it's likely the bulbs will not have finished gathering nutrients, and prematurely trimmed foliage will result in weak bulbs and, by extension, weak plants. |
Not everyone likes to think of their vagina as being like a garden. But when you are pregnant, it turns out that many of the bacteria in your vagina, like plants in a garden, help keep your baby healthy. Babies who are born by cesarean section do not travel through the vagina on their way into the world. That means they miss picking up your helpful bacteria. At least, that is the hypothesis. Vaginal seeding is the term used to describe transferring some of those healthy plants to your baby even if you don’t have a vaginal delivery.
Vaginal seeding, according to the American College of Obstetricians and Gynecologists (ACOG), is the swabbing of a newborn’s mouth, nose, or skin with a cotton gauze or swab soaked in its mother’s vaginal fluids. The purpose of vaginal seeding is to transfer the mother’s vaginal flora to her newborn infant.
Why would I need to do vaginal seeding?
Cesarean delivery, antibiotics given before and during labor, and formula feeding can prevent the normal transfer of bacteria from mom to baby.
We know that bacteria are very important to our health and well-being in many ways. They are necessary for:
- Healthy digestion
- A strong immune system to keep us healthy
- Preventing the growth of disease-causing bacteria
- Helping our bodies make important vitamins
While in your womb, your baby’s gut (gastrointestinal tract) does not have any bacteria. Through vaginal birth or once your water breaks, bacteria are transferred from you to your baby. You can also pass on your healthy bacteria to your baby when holding them skin-to-skin, after your delivery and when breastfeeding.
What are some of the possible benefits of vaginal seeding?
Scientists are just beginning to study the importance of this mother-baby bacterial transfer. We don’t yet know how the makeup of bacteria in an infant’s gut may impact its health and well-being for the rest of its life. The crazy assortment of bacteria that live in a healthy woman’s vagina may help prevent allergies, asthma, and immune disorders like diabetes in children.
What are the possible risks of vaginal seeding?
Unfortunately, not all microbes in the vagina are good for babies. Some types can cause infection of an infant’s eyes or lungs. Fortunately, your doctor has tests to screen for some of the bacteria and viruses that can cause infection. These include:
- Herpes simplex virus
- Group B streptococci (usually tested on all women at 36 weeks gestation with a vaginal swab; about 20-30% of pregnant women are group B strep positive at the time of delivery)
- Chlamydia trachomatis (usually screened for at the beginning of all pregnancies)
- Neisseria gonorrhea (usually screened for at the beginning of all pregnancies)
Women who test positive for these bacteria or infections should not perform vaginal seeding.
Other reasons you may not choose vaginal seeding:
- Research shows that the mix of bacteria in babies’ guts was the same by 6 months of age, regardless of the type of delivery (cesarean or vaginal) or feeding (bottle or breast). Bacteria differed at birth and at three months of age. We don’t know if this difference in bacteria in a baby’s first 3 months of life makes a big difference in terms of health benefits.
- We don’t have enough scientific data to definitively say that vaginal seeding is risk-free.
- We also don’t have conclusive data to show that vaginal seeding actually makes it less likely that children will develop allergies, asthma, or immune disorders. Studies have only observed that more of these conditions in babies born by cesarean section.
- Your doctor or midwife may refuse to allow you to perform vaginal seeding.
- Most women are given antibiotics before a cesarean section to reduce their risk of infection. Researchers believe this treatment also kills all of mom’s good bacteria. Thus, swabbing after cesarean may not make any difference because there wouldn’t be any bacteria to transfer.
Is vaginal seeding recommended?
Studies are underway, but we don’t yet know whether vaginal seeding should be a part of every cesarean delivery.
For this reason, ACOG recommends OBGYNs do NOT routinely practice vaginal seeding. Instead, they suggest that OBGYNs could choose to allow their own patients to do vaginal seeding themselves, but only after a thorough conversation about the possible risks.
What should you do if you think vaginal seeding makes sense?
- Have a thorough discussion with your OBGYN, midwife, doula, birth partner, and any other people who will be present at your birth about the risks and benefits, why you are requesting this be done, and who can do the swabbing at the time of delivery if you are unable to.
- Put your plan for vaginal seeding in your birth plan and confirm with any labor and delivery nurses, operating room nurses, or medical staff.
- Tell your pediatrician about your plans so they can look for early signs of infection in your baby.
Ultimately, whether or not your baby has a bacterial baptism depends on a combination of factors:
- Your childbirth provider’s opinion
- Whether or not you have a cesarean section
- If you have any of the infections listed above
- The circumstances of your delivery
- Most importantly, whether anyone even remembers to do it in the excitement of your baby being born!
Like so many of the delivery details expectant moms worry about while pregnant, the idea of vaginal seeding may fade once you hold your healthy newborn in your arms. Learning about delivery preferences like vaginal seeding can help you be a more informed patient and parent. Remember though, until we have more answers, try not to add vaginal seeding to your list of pregnancy worries.
Sources for this post:
- Vaginal seeding. Committee Opinion No. 725. American College of Obstetricians and Gynecologists. Obstet Gynecol 2017:130:e274–8. Available at: https://www.acog.org/Clinical-Guidance-and-Publications/Committee-Opinions/Committee-on-Obstetric-Practice/Vaginal-Seeding?IsMobileSet=false |
Inigo Jones (July 15, 1573 – June 21, 1652) was the first significant architect in England in the early modern period. He revolutionised the concept of English building, introducing the style of Italian mannerist buildings onto the streets of London. Among his works are St Paul’s Church in Covent Garden (he was also involved in designing the piazza); his lavish banqueting house in Whitehall; and his elegant Queen’s Chapel at St James’s Palace. Locally he designed the perfectly proportioned Queen’s House at Greenwich. This road lies within the former grounds of the Charlton House estate, which includes features by or in the style of Jones. Jones was born in Smithfield into a Welsh-speaking family. Little is known of his early life, but he served as an apprentice on the rebuilding of St Paul’s Cathedral. His obvious talents secured him the patronage of a wealthy benefactor who sent him to Italy to study. His career came to a crashing end with the defeat of the king in the English Civil War. |
Topmarks EYFS: Access to games to practise key skills in English and Maths. Some of these games are also tablet friendly.
Phonics phase one: Fun games for children in Nursery and Reception to help them 'get ready' for learning letters and sounds.
Phonics Play Phase 2: Access phonics games for Phase 2 - learning the letter sounds ck, ll, ff and ss, and blending sounds to read simple words containing those sounds.
Phonics Play Phase 3: Access games to practise the remaining letter sounds and graphemes - zz, qu, ch, sh, th, ng, ai, ee, igh, oa, oo, ar, or, er, ur, ow, oi, ear, air, ure. Children will also practise blending sounds to read words containing these graphemes.
CBeebies: Access videos, stories, games and songs linked to different areas of the EYFS curriculum (including Numberblocks for Maths).
Busy Things: Access to a selection of games and activities that are accessible through a computer or tablet. A username and password will be needed to access this site, which will be sent to you via an email from Mrs Naffati. |
Inverter Topology and Transfer Time
The term Inverter comes from the physics principle of “inversion,” which is the conversion of direct current (DC) to alternating current (AC). Inverters have been deployed as a source of emergency power for many years, and while the physics has remained the same, different inverter topologies have been developed that are differentiated by the speed at which the equipment transfers from normal to emergency power. This “transfer speed,” or “transfer time,” measures the amount of time it takes the inverter to begin supplying power to the emergency load from its battery supply once a power loss occurs.
Historically, transfer time was one of the most important considerations when designing an emergency lighting system with an Emergency Inverter. Some legacy lamp types, such as HID, require a constant arc be maintained to ensure proper operation after power is lost, thus requiring a short or immediate transfer to the inverter battery supply (typically less than 4ms). In modern LED lighting applications, transfer time becomes less of a concern. LED diodes will provide instant illumination when supplied by an emergency power source no matter how much time has passed since the AC utility power failed. Some emergency lighting designs, however, still specify that the transfer time be zero or near-zero based on yesterday’s plans. A common product solution to eliminate transfer time is utilizing an inverter with “Double Conversion” topology. This article will discuss the theory of operation behind Double Conversion and compare its usefulness in emergency lighting applications against the Line Interactive Fast Transfer topology utilized by IOTA IIS FT Inverters.
What is Double Conversion?
The Double Conversion topology eliminates transfer time by keeping the inverter circuitry continuously energized whether a power loss condition has occurred or not. This differs from a line interactive system where utility power is simply routed through the inverter to be supplied directly to the lighting load until a power loss is detected. In a double conversion system, the emergency lighting load never receives AC utility power directly and is instead always powered by way of the inverter circuitry. This means that there is never a measurable transfer time between AC utility and the battery supply feeding the inverter circuit.
The name “Double Conversion” comes from the fact that the incoming AC utility power is “converted” twice before reaching the emergency load (see Illustration 1). During normal power conditions, the incoming AC utility is first converted (or “rectified”) to DC power. This DC power travels through a DC link that routes the power to the Batteries for charging and to the inverter circuitry. The Inverter circuitry again converts, or in this case, “inverts” the DC power back to AC line voltage to supply the emergency lighting load. If a power loss occurs, the DC link ensures that battery voltage is immediately available to supply the emergency load without requiring time to transfer from AC utility to the battery supply.
In addition to eliminating transfer time, the double conversion process filters the output power to the connected load by de-constructing the AC utility and then re-constructing it to be a clean AC waveform. This provides protection for the load against power anomalies (such as surges or transients) which can be necessary in some applications with highly sensitive equipment (such as a data center). However, the voltage protection/filtering provided by double conversion is not critical in lighting applications. Emergency luminaires are designed to operate in typical AC utility power conditions and have a higher tolerance for power anomalies than more sensitive loads. This is evident through features such as universal input tolerances and healthy surge ratings. Considering that double conversion systems tend to be costlier than other alternatives, it is important to keep in mind a few limitations about double conversion that outweigh its value in lighting applications.
Limitations of Double Conversion in Lighting Applications
A major drawback of double conversion systems is lower power efficiency when compared to other Inverter topologies. Every time power is converted from AC to DC or vice versa, some energy is lost in the form of circuit inefficiencies or heat. Since double conversion systems are continuously converting the AC utility to DC and then back again, the energy loss can be significant. To compensate, double conversion systems must draw more power than other alternatives, increasing the energy cost to the building owner. Additionally, the heat generated by the continuous AC to DC to AC conversions requires forced-air cooling fans to run continuously. This produces extra noise, further increases power consumption and may add maintenance costs associated with replacing fans and/or filters.
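To put the efficiency penalty in concrete terms, here is a back-of-the-envelope comparison. Every figure below (the 1 kW load, the per-stage efficiencies, the electricity price) is an illustrative assumption, not a measurement of any particular product.

```python
# Rough annual energy-cost comparison of the two topologies.
# Every figure below is an illustrative assumption, not a product spec.
load_w = 1000.0          # connected emergency lighting load
rectifier_eff = 0.95     # AC-to-DC stage (assumed)
inverter_eff = 0.95      # DC-to-AC stage (assumed)
passthrough_eff = 0.99   # line-interactive pass-through (assumed)
usd_per_kwh = 0.12       # electricity price (assumed)
hours_per_year = 24 * 365

def annual_cost(efficiency: float) -> float:
    input_w = load_w / efficiency   # power drawn from the utility
    return input_w / 1000 * hours_per_year * usd_per_kwh

double_conversion = annual_cost(rectifier_eff * inverter_eff)  # both stages run continuously
line_interactive = annual_cost(passthrough_eff)

print(f"double conversion: ${double_conversion:,.0f}/yr")
print(f"line interactive:  ${line_interactive:,.0f}/yr")
print(f"premium:           ${double_conversion - line_interactive:,.0f}/yr")
# The gap widens further once continuous cooling-fan power is added.
```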
Another limitation of double conversion systems is that the electronics are continuously energized. Most alternative system designs pass-through AC utility power during normal operation and only engage the inverter circuity during power loss conditions. Keeping the inverter circuitry energized in a double conversion system can reduce component lifespan and require building owners to more frequently replace the emergency lighting equipment. Additionally, operating the inverter circuitry at all times produces additional electrical noise on top of what is generated by forced-air cooling.
Lastly, double conversion is most prominent in applications that require an auxiliary power system listed to UL 1778, the UL standard for uninterruptible power supplies. The UL standard for Emergency Lighting Systems, by contrast, is UL 924, which involves a separate (and often more rigorous) set of equipment safety and reliability tests that must be passed to obtain a UL 924 listing. It is a safe assumption that many double conversion inverters are re-purposed UL 1778 systems that have been modified to meet the rigorous standards for life safety. This may affect the reliability of the equipment, especially when compared to an Emergency Lighting Inverter that was designed from the ground up to meet the UL 924 standard.
In emergency lighting applications, the disadvantages of Double Conversion often outweigh its value, especially when compared to the next alternative: Line Interactive Fast Transfer Systems.
Line Interactive Fast Transfer Systems
IIS FT (Fast Transfer) Inverters offer a more cost-effective solution for near-zero transfer time (2ms nominal). These systems were built specifically to the UL 924 standard and feature a line interactive design that rapidly engages the inverter circuitry when AC utility power is lost (see Illustration 2). During normal power conditions, IIS FT Inverters will pass-through AC utility power to the lighting load, while drawing power to charge its batteries as needed. This topology optimizes system efficiency and only adds the battery charging requirements to the bulk load rating of the system. If a power loss occurs, the inverter will disconnect the AC utility and rapidly engage the inverter circuit, allowing power from the batteries to energize the emergency lighting load.
In IIS FT systems, the transfer from AC utility to the battery supply occurs in only 2ms, which is adequate for any lighting load (including HID). Critically, this line interactive fast transfer design only converts power from AC to DC and then back when it is needed. This makes these systems more energy efficient than double conversion systems because the incoming power is not subjected to continuous energy loss from the double conversion process before it reaches the lighting load. Additionally, the inverter circuitry on IIS FT systems is only engaged during power loss conditions, causing less wear on the components than double conversion designs. IIS Fast transfer systems are ideal for today’s lighting loads and eliminate excess power loss, noise, and cost, while providing a reliable source of emergency power.
Conclusion: Are Double Conversion Systems Ideal for Emergency Lighting Applications?
Double Conversion systems do offer advantages with regards to protecting sensitive equipment against power anomalies, but these advantages are not realized in lighting applications. The limitations of double conversion in the form of reduced energy efficiency and reliability make it less attractive than alternative inverter designs. IIS Fast Transfer Emergency Lighting Inverters are a more ideal solution for a reliable and cost-efficient emergency lighting system.
To learn more about IIS FT Inverter Systems, visit us online at iotaengineering.com. If you have any additional questions about the content of this article, or about Inverter Systems in general, please give us a call at (800) 866-4682, or email us at firstname.lastname@example.org. |
Yugyd Va National Park. Public domain photo.
Hat-tip: Die kalte Sonne here.
The DLF reports:
Ural: snow causing the tree line to rise.
Climate change does not only mean that temperatures are increasing; it can also change precipitation patterns. In Russia's Ural Mountains, significantly more snow falls in the wintertime than 100 years ago. The development is having a surprising consequence: the greater amounts of snow are causing the tree line to rise. […]
In the summertime in the Urals it has not gotten notably warmer over the past 100 years. The wintertime temperatures, however, have increased from minus 18°C to minus 16°C. Warmer low-pressure systems are bringing more precipitation to the mountains. In the Urals today, twice as much snow falls as 100 years ago. And that is having an impact on the tree line.”
According to the DLF, a team of German and Russian scientists say the tree line is currently rising at a rate of about 4 to 6 meters per decade.
The scientists believe that the doubled snowfall protects young saplings during the winter and creates soil conditions that foster growth during the summer. Photos of the region have allowed the scientists to determine tree lines that today are up to 60 meters higher than 100 years ago, consistent with a rise of 4 to 6 meters per decade sustained over a century. |
Time hands us the bill for what we do to nature, so it is essential that we be as kind as possible: plant a tree and give greater impetus to reforestation. This is our second article in TCRN Verde. Remember that in the previous one we suggested, as part of the project, that when we eat fruit it is preferable not to throw away the seeds, but to let them dry and store them in a bag so that we can scatter them along the way, letting Mother Nature take the next step.
Restoring green life in Latin America
Many will think that restoring green life in Latin American countries is a challenge, a task that will take time, but it is not impossible. It is worth noting that part of the loss of green life has come through forest fires, changes in land use, and the felling of trees, among other causes.
In fact, actions by various countries such as Costa Rica, Colombia, Ecuador and Bolivia are an example for the recovery of green life; everything is oriented towards planting trees, cleaning beaches and rivers, and caring for species.
Loss of nature
According to the United Nations (UN), each year at least 4.7 million hectares of tropical forest are lost in the world. Rivers, lakes and ponds suffer from pollution and overfishing; while mountains and oceans are exposed to the degradation and loss of their ecosystems. Until now, it is known that half of the planet’s wetlands have disappeared and the same has happened with 50% of coral reefs.
It is necessary to start thinking that the planet is losing its essence, its green life, and that we have been partly responsible. In Latin America, experts emphasize that ecosystems are being lost to deforestation and road building, but the main driver has been the change in land use that converts forests into fields, pastures for livestock, mining operations and illicit crops.
Beyond all this, the intention is to promote planting as one way to raise awareness and contribute to nature, while making the call to denounce harmful activities. The green life of nations is everyone’s responsibility: their waters, their soils, their forests and even the air we breathe. |
A study from Oregon Health & Science University has discovered that by alternating the arm in which you receive your vaccine doses—your immune response could be significantly enhanced, up to four times stronger, in fact!
This research examined nearly a thousand individuals and found those who switched arms for their second dose showed a much more robust immunity against both the original virus and variants like omicron, and this heightened immune response lasted for over a year. It seems that activating immune cells in both arms, rather than just one, can create a more powerful and long-lasting defense mechanism against the virus.
The science behind it suggests that using both arms might engage more areas of your immune system, essentially creating a more comprehensive memory of the virus across your body. This could be a simple yet effective strategy to improve vaccine efficacy without any additional medical interventions.
While this is an exciting development, it’s important to remember that more research is needed to fully understand the implications and to see if this approach applies to other vaccines, especially in children. However, it’s promising news, and we’ll be keeping a close eye on how these findings might influence vaccination recommendations in the future. |
It's getting hot, hot, hot! Could that be a good thing? When a fever shows up, it's easy to freak out and grab the Tylenol IMMEDIATELY. However, since acetaminophen is rough on the liver (especially for babies and kids), let's take a step back and talk about how fevers can actually be GOOD. A fever isn't only a sign of illness; it can also be a sign of nutritional deficiencies. Understanding how important nutrients like calcium, vitamin D, and vitamin C play into the physiological process of a fever can help you and your family take a more natural route next time the thermometer gets toasty. Here's how to support a fever with nutrition, and get over illness faster:
Like, literally HOT. Contrary to popular belief, fevers are generally safe and helpful. A fever is a well-regulated way your body controls its immune response. High body heat slows the growth of bacteria and viruses, and stimulates your immune system's cells. Therefore, stopping a fever with medication immediately can actually make you sicker, longer! Blowing your mind? Check out this recent article in the Journal of the American Academy of Pediatrics for more clarity on the benefits of letting fevers run their course. Now, how does nutrition play into this? Calcium, vitamin D, and vitamin C are going to be your big hot-body helpers!
Not like a "hey, girl hey!" kind of wave, but more like doing "the wave" at a football game. The calcium wave is an important process in your immune system. In fact, low calcium is extremely common in the chronically sick population. How? One weird word: Phagocyte. A phagocyte is a cell our immune system uses to chomp up bacteria, viruses, and dysfunctional cells. It's basically the destroyer and garbage collector in one, and you'd be a goner without this guy. These cells take calcium from your body and send it in a wave around the perimeter of the cell by splitting in two. The waves make a calcium circle around microbes that needs to be chewed up and disposed of. The circle acts like a little compartment so the phagocyte can unleash some enzymes and destroy the target (bacteria, virus, etc.). Obviously, the cell has to get this calcium somewhere, so if it's not readily available in your bloodstream, it's going to steal from your bones (please, no!). During a fever it's best to give your body plenty of what it needs so it doesn't have to work as hard or steal from your bones. If you have a fever, your body is working hard enough already! So should you start chugging milk? Nah. Dairy does have some calcium, but milk sugars can also prolong illness by feeding bacteria. In this case, a calcium supplement or lots of rich green foods are your best bet. Taking vitamin D with a quality source of calcium will increase its absorption, helping your body use it more effectively.
You know to rest when you're sick, but why? During a feverish illness, working muscles use calcium. Decreasing muscle work by being a couch potato all day (hi, Netflix) allows immune cells to use the calcium in your body rather than using it for muscle movement. You can also make calcium more available by decreasing refined carbohydrates. Bread, sugar, and pasta LOVE to bind minerals like calcium so your body can't use them.
More good news? High doses of calcium have even been found to combat E. Coli, a bacteria famously known to cause food poisoning and other not-so-fun tummy troubles.
(a picture of the calcium wave in action... looks riveting, right? It's totally keeping you alive, and you didn't even know it!)
A fever can also be a sign of low vitamin C. When microbes invade, it may mean our phagocytes are struggling to keep up. In response, our body raises its temperature as a way of telling the phagocytes, "HEY! WAKE UP AND GET TO WORK." Vitamin C is a heat-free way to stimulate the phagocytes so the temperature can lower again. Vitamin C is so crucial that kids and adults low in vitamin C and calcium will experience more severe fevers than similar people who are nutritionally sound. Use this list of top 10 vitamin C foods to feed your body during your illness. Together, calcium, vitamin D, and vitamin C work as partners to protect us from infection, both viral and bacterial.
When should you take the next active step? If you've tried the natural route and the following is happening:
Understanding how your body functions is key for long term health! Pat yourself on the back for acquiring one more piece of information that will keep your immune system strong. |
As the days grow shorter and the weather colder, many wonder how cold can chickens tolerate.
Baby chicks cannot be in temperatures below 85 degrees for the first few weeks of life. 90 to 95 degrees is optimal, and anything colder may kill them.
In this blog post, we’ll explore the temperatures that baby chickens can safely tolerate and what you can do to keep them warm during extreme weather conditions.
*This post may have affiliate links, which means I may receive commissions if you choose to purchase through links I provide (at no extra cost to you). As an Amazon Associate I earn from qualifying purchases. Please read my disclaimer for additional details.
What is the Cold Tolerance of Baby Chickens?
Baby chickens are not born with the ability to tolerate cold temperatures.
They must be slowly introduced to cooler weather and given time to develop their feathers.
There is a process of getting baby chicks to tolerate cold temperatures.
It typically involves exposing the chicks to colder temperatures 5 degrees at a time, so that they can grow in their feathers and get used to the colder climate.
However, all baby chickens should be fully feathered before exposure to cold and should remain in 90 to 95-degree temperatures for the first few weeks of life. NOTE: Wind chill can significantly reduce the cold tolerance of chickens. Baby chickens should always be protected from drafts and windy conditions.
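The step-down process is often summarized as a rule of thumb: start near 95 degrees in the first week and lower the brooder by about 5 degrees each week until the chicks are feathered or room temperature is reached. The sketch below encodes that rule as a planning aid consistent with the figures in this article; the 70-degree floor is an assumed target, and none of this replaces watching your chicks' behavior.

```python
# Brooder temperature rule of thumb: ~95 F in week 1, about 5 F less each
# week after that. The 70 F floor is an assumed "fully feathered" target.
def brooder_temp_f(week: int, start: float = 95.0,
                   step: float = 5.0, floor: float = 70.0) -> float:
    """Suggested brooder temperature for a chick's age in weeks (1-based)."""
    return max(start - step * (week - 1), floor)

for week in range(1, 8):
    print(f"week {week}: {brooder_temp_f(week):.0f} F")
# week 1: 95 F, week 2: 90 F, ... week 6 onward: 70 F
```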
How to Keep Baby Chickens Warm in Cold Weather
Baby chickens are delicate creatures and must be kept warm, even in cold weather.
Here are some tips on how to keep baby chickens warm in cold weather:
Keep the chicken coop well-insulated to protect the chickens from drafts.
Use a safe heat lamp to provide additional warmth for the chickens.
Baby chickens are unable to tolerate extreme cold very well. If they become too cold, they can die.
To keep baby chickens warm, you must provide them with a heat source.
The simplest way to do this is to use a heat lamp.
However, you will need to be sure that the heat lamp is not too close to the baby chickens, as they can quickly overheat and die.
Professional chicken breeders often use brood boxes.
Whatever type of heat source you use, monitor the temperature carefully to ensure that the baby chickens are kept warm without being at risk of overheating. Read our related article, What Temp is Too Hot for Chicks?, where we explore the opposite temperature extreme.
Why it’s Essential to Keep Baby Chickens Warm
Baby chickens cannot generate their own body heat and must rely on an external source to stay warm.
For this reason, it is essential to keep baby chickens warm, especially during the first few weeks of life.
The ideal temperature for baby chickens is between 95 and 100 degrees Fahrenheit.
However, they can tolerate temperatures as low as 85 degrees Fahrenheit for short periods.
If the temperature drops below 85 degrees Fahrenheit, the risk of death increases significantly.
It is essential to monitor the temperature closely and make adjustments to ensure that the baby chickens are kept warm and comfortable.
Read More: Best Bedding for Chickens. Bedding can help to keep your chicks warm; here are our favorites!
Dangers of Cold Weather for Baby Chickens
One of the dangers of cold weather is that it can be deadly for baby chickens.
When chickens are young, they cannot tolerate freezing temperatures like adult chickens.
This is because they have not yet developed their feathers, which provide insulation against the cold.
Chicks should be kept warm by providing a heat source in their coop, such as a heat lamp.
The temperature in the enclosure should be maintained between 95 and 100 degrees Fahrenheit.
If the temperature drops below this, the chicks need to be moved to a warmer location.
In conclusion, we found that baby chicks need to be kept at a temperature of 95 to 100 degrees Fahrenheit in order to be safe.
Baby chicks don't yet have the feathers that help keep them warm.
Once chickens are adults, they can withstand temperatures as low as 45 degrees Fahrenheit.
Hopefully, this article can help you maintain your chickens' temperatures so that you can raise happy and healthy chicks.
William is a 5th-generation farmer whose passion for farming stretches far beyond the barnyard. When he’s not tending to cows and picking cherry ripe tomatoes, he’s sharing his ideas with fellow farmers and homesteaders. |
In the previous section, we discussed various preliminary atom models, Rutherford’s alpha particle scattering experiment and the Bohr atom model. These played a vital role in understanding the structure of the atom and the nucleus. In this section, the structure and properties of the nucleus, and its classifications, are discussed. |
Professor John Sodeau, UCC. Photo: Emmet Curtin.
26 Aug 2016
Professor John Sodeau discusses climate change and what Ireland can do to play its part in the fight against the phenomenon termed global warming.
97% of climate scientists believe that we humans have a direct influence on our climate and are responsible for the phenomenon termed global warming. After many years of dithering, politicians have recognised that something needs to be done, and done quickly, to avoid globally catastrophic events such as drought, extreme weather events, desertification, food shortages and mass-scale emigration from the most likely affected countries, such as Bangladesh. But the general public is still skeptical, in part due to some politicians who hold strident opinions not based on scientific facts.
Fortunately, a fight-back by people who are knowledgeable has begun to happen. For example, a striking temperature spiral graphic was shown at the recent opening ceremony of the Rio Olympic Games to show how close we now are to breaching the 1.5 °C barrier beyond which our planet might not be able to recover environmentally. And so here is my own small contribution to the debate.
Q. Is a greenhouse a good way of explaining to the public why climate change is happening?
A. It is good for explaining the natural (or baseline) Greenhouse Effect that allows planet Earth to be habitable. As we all know seedlings, flowers and vegetables thrive in the glass structures often found in our gardens. And we thrive in a world that has an atmosphere like ours. Ultraviolet and visible light from the Sun gets through to the surface turning into heat, which then tries to escape the planet. But much of it is effectively trapped by an atmosphere that contains water vapour, carbon dioxide and ozone.
However, there is a better analogy for the enhanced Greenhouse Effect that we have experienced since the Industrial Revolution began in about 1830. That is to think of our atmosphere as a woolen shawl with holes in it. Then, if we increase the thickness of the wool (the carbon dioxide content) or fill in the holes (with other “Greenhouse” materials like methane or nitrous oxide or soot-like particles), we get hotter. And that is the global warming experience we read about most days of the week now, although that is just one of the changes in our climate system that we are currently experiencing.
Q. Who is to blame for global warming?
A. We all are, because we all want three cars, four TVs and five laptops/iPads/mobile phones in every household (go on, count them!) Using all that energy in a fossil fuel-based economy leads to carbon dioxide production. But if we want to identify some particular whipping boys besides prevaricating politicians and decision makers around the world, then one of them would be me, for keeping quiet for too long about the dangers we face from climate change. The reason I have held back - though it’s not a good enough defence really - is that scientists tend to use safe, precise, remote vocabulary that has any emotional resonance stripped away. And so we often lose the ability to communicate effectively with the general public. That has meant demagogues and village idiots have been allowed to occupy (until recently) an empty playing field in order to seize the climate change agenda for their own purposes.
Q. What current scientific data is available about climate change that worries you the most?
A. There are three, likely connected, sets of measurements from the last couple of years that have worried me. The first is from an observatory sited in a remote location in Hawaii called Mauna Loa. Measurements of carbon dioxide levels in the atmosphere have been taken there since 1958. Between 1960 and the mid-seventies the increase was about 1 ppmv (parts per million by volume) on average per year. This rose to an average of about 1.5 ppmv per year between 1975 and 1995, and increased again after that, reaching about 2 ppmv per year by 2015. But between 2015 and 2016 the figure doubled to 4 ppmv. That worries me.
Also, there are now measurements from NASA showing that the 10 warmest years in the 134-year record available (since 1880) have all occurred since 2000, with the exception of 1998. And the year 2015 ranks as the warmest on record!
And then there are the record lows in Arctic sea-ice that have been observed this past year.
Q. Is it too late for us to prevent the environmental problems (such as extreme weather events, drought, food insecurity and mass migrations) predicted to accompany climate change over the next 30 years?
A. It might be.
Q. What can Ireland do to play its part in the fight against global warming?
A. Individuals can always do more by reducing their carbon footprint in a fossil fuel-based economy. But every country can always do more by continually assessing their policies about agriculture, transportation and generation of energy that underpin their respective economies. However, we should be wary of making political deals that put the strength of the national economy first and foremost. No matter what the calculations about “Greenhouse” gas reductions are and the relative fairness to national economies, we should all aim to reduce our footprints to zero. Otherwise triumphant politicians may simply win Pyrrhic victories because the various national economies will not be there if we are under water or live in a desert or on an island that has no fields or wildlife or agriculture. Just trees.
John Sodeau is an atmospheric scientist who has performed research in the area since the late 1970s. When he worked at the University of California, Irvine (UCI), he had coffee most mornings with Sherry Rowland and Mario Molina, who were soon to win a Nobel Prize in Chemistry for making the connection between chlorofluorocarbons (CFCs) and stratospheric ozone depletion. He came to UCC in 1998, where he set up the Centre for Research into Atmospheric Chemistry, CRACLab, which is part of the Chemistry Department and the ERI, alongside John Wenger. He had an epiphany about two years ago when he realised how little scientific knowledge breakfast-time radio presenters possessed and decided to become much more active in communicating with the public about the problems and challenges we face with air pollution and global warming. Check out the crac.ucc.ie website for much more information on air pollution and climate change. |
Get Lost! Navigating the Brain
Combining virtual reality and treadmills to map navigation in the brain. We ask how starlings flock in such a synchronised murmuration. And in the news, inducing creativity by electrically tweaking human brains, how video games could help national security, plus we find out what's been keeping Professor Gage up all night!
In this episode
01:28 - Mapping navigation in the brain
Mapping navigation in the brain
with Professor John O'Keefe, University College London
Professor John O'Keefe discusses how researchers are combining virtual reality and treadmills to map navigation in the brain.
Hannah - Kicking off the programme, let's map our way around the brain. Arguably, the most important structure for navigation is the hippocampus. So, let's kick start with some facts about it.
The hippocampus is buried deep within your brain. It’s shaped like a seahorse, which is why it’s called the hippocampus, from the Greek for seahorse, and there’s one hippocampus on each side of the brain. Each one is just smaller than the size of your curled little finger, and each one is packed with about 40 million nerve cells. Each of these cells can be connected with up to 10,000 others, so it’s essentially a very complicated circuit board sending spikes of electrical information within the region and to other parts of the brain. Why does it do this? To help you find your way around the world, learn your place in it and let you process where you want to go.
But how exactly do all of these cells work together to do this? I spoke to Professor John O'Keefe from University College London......
John - What's happening now, and what's very exciting in the field, is that we're now beginning to gain the tools (to study navigation). With an artificial virtual reality environment we can begin to assess exactly what cues and what stimuli in the environment are causing the cells to fire in particular places. So, we're optimistic that in the near future, coupling virtual reality environments to new recording techniques, we'll be able to record from large numbers of cells and to see how the cells are interacting with each other.
Hannah - So, we've got particular cells and they all seem to work together in order for us to have an idea of where we are and even where we may want to go. What signals are coming into the brain in order to help change the activity of those cells? Are there particular things like reward or pleasure that might affect those cells in the hippocampus?
John - That's a very good question and there is no simple answer. The hippocampus is located very far away from the sensory inputs and is quite a bit away from the motor outputs. We do know anatomically that it receives information from many different areas of the sensory neocortex, and under certain circumstances it's been demonstrated that the cells are capable of using any combination of sensory inputs that are available to the animal.
Hannah - And is the hippocampal formation made up of other cells? They're not just these navigation cells - are there other cell types in the hippocampus?
John - So that's a contentious question, and over the years there have been reports that there are other cell types in the hippocampus. However, if you take an animal and put it into a succession of different environments, you will find that there are cells in the second environment which didn't respond in the first environment. And there are cells in the third environment which didn't respond in the first two environments. So when you place the animal into a multitude of environments - certainly by the time you've placed it into, say, 6 or 7 environments - you will find that the majority of cells in the hippocampus have a place response, a location field, in one of those environments.
Hannah - So, the majority of these cells within the hippocampal formation are involved in spatial navigation, and people with Alzheimer's, for example, find it very difficult to navigate their way around, particularly in new environments. They have fewer connections between cells in the hippocampus, and they also have cell death in this region. Could this be a reason why?
John - Well, yes. There are two questions really that you're asking. One is, what's the relationship between the human hippocampus and the animal hippocampus? And the other is, how much of the human hippocampus is involved in functions similar to, say, those of the hippocampus in the rat? And the story isn't a simple one. It's clear that in humans, the hippocampus is involved in other types of memory and other types of processes in addition to spatial processes.
Our own thinking - and I guess the simplest way of characterising this - is that the human hippocampus is involved in what we call episodic memory. That is the memory for what you did at a particular time in a particular place. Now, our own thinking is that this is an extension of the spatial functions of the animal hippocampus. So, as well as being primarily a spatial memory system, it also processes information about the time at which the animal went into the location and what it did in that location. If you put those together, you have the beginnings of what we would call - if we were looking at human memory - an episodic memory system.
And it's known that among the first things to go wrong early in Alzheimer's disease are deficits in navigation ability - patients tend to get lost in familiar environments - and also deficits in the episodic memory system. So, we think that your hypothesis, that you could demonstrate a relationship between loss of function in the hippocampus and deficits in spatial and, more broadly, episodic memory, is a very good one.
We have looked at the changes in these cells in a mouse model of Alzheimer's disease and we do in fact see that the mice, as they get older - not when they're younger, but as they get older - show changes in their spatial memory ability, and that these changes are very highly correlated with changes in the function of the hippocampal place cells and also, incidentally, with the accumulation of plaque - the typical Alzheimer plaque burden seen in patients with Alzheimer's disease.
Hannah - That was Professor John O'Keefe from University College London.
Can I improve my navigation skills?
Hannah - I believe that you were involved with some work with Eleanor Maguire, looking at taxi drivers and their ability to navigate their way around really complicated, convoluted road systems in London, for example, and how their brains might have changed as a result of the training they experienced in navigating their way through London?
Hugo - Yes, that's right. It was fascinating when Eleanor Maguire discovered that London taxi drivers, the ones that drive the black cabs, actually have an enlarged posterior hippocampus and a shrunken anterior hippocampus as part of the job. She compared their brain structure to that of normal healthy people who aren't taxi drivers and found all these differences: the posterior end of the taxi drivers' hippocampi seemed to expand, and the longer they had driven a taxi in London, the larger it became. It was interesting to see it expand physically, but what is it actually used for? We presumed it's involved in spatial navigation, but we were quite keen to tie down when. So, to do that, we got hold of a simulation of London which was available at that time on the PlayStation 2. There's a game called The Getaway which simulates the experience of driving through London, and so we were able to have London taxi drivers drive through and do their job on a regular basis, but in a controlled virtual setting. Because it was on a computer screen, we could also examine what was going on inside their brains while they were driving, using functional magnetic resonance imaging. What we found was that the taxi drivers really use their hippocampus maximally when they're first thinking where to go - when you get into a taxi and give the driver a destination, that's when they'll be using their hippocampus. But once they've done that - and when they saw the neuroimaging data for themselves, it backed up what they had to say - they kind of switch off. They don't really think; they go onto automatic pilot and just get you there. Other bits of the brain take over and do all sorts of fine-tuning, and so on. But what we found was that the hippocampus was really key in that initial moment of planning a route.
Hannah - So, going back to Dave Collier's point, it may be that he could exercise his hippocampus by testing himself with new navigation and new routes?
Hugo - That's right. There is some evidence. If Dave left his books and went off and played a video game - or, in fact, went out of his office, perhaps even better, every day to navigate through a new bit of city or environment - it's quite possible he might expand his hippocampus as a consequence.
How do birds fly in a flock?
We posed this question to hippocampal researcher Dr Hugo Spiers from University College London.
Hugo - Paul is right to ask about it. It's really interesting. If you think about animals navigating as they travel over the Earth from one point to another, they do have to synchronise their movements. They don't knock into each other in the air, but it's less of a navigation question. Actually, it's more of a coordinated motor problem. But I guess the key thing is: how do they know that the bird at the front is going in the right direction? And do they all decide the direction, like a voting system, to make the flock go in the right direction? There must be something going on there. Very hard to study - certainly very hard to get at the neuroscience of how their brains do that.
Hannah - Thanks, Hugo and if you've got any burning questions about your brain and the nervous system, just email them to email@example.com, you can tweet us @nakedneuroscience, or you can post on our Facebook page, and we'll do our best to answer them for you.
12:22 - Picking out features in a crowd
Picking out features in a crowd
PhD Students, David Weston from Cambridge University describes his top papers of the month!
David - The first paper this week ties into the podcast's theme of navigation. Researchers from the University of Toronto have just published evidence that playing action-based video games can improve your ability to navigate your environment, specifically your ability to pick out features in a crowd.
Sijing Wu and Ian Spence, in the journal Attention, Perception & Psychophysics, showed that people who had played action games such as Call of Duty, Counter-Strike and Halo - so-called First-Person-Shooter or FPS players - for at least 4 hours a week in the previous six months were much faster at finding specific objects in a typical visual search task. The FPS players were also more accurate at identifying objects in their peripheral vision when taking on tasks in their central vision.
These results collectively suggested that FPS players are better at searching visual scenes and splitting their attention across their visual field. These experiments used self-confessed action video game players, but what the authors of the paper really wanted to know was whether these abilities are trainable, i.e. can playing video games make a non-player better at these tasks?
A group of 60 students, who didn't have any experience with video games were split into groups and given different video games to play: two groups were given action video games and a third group was given a 3D puzzle game. All of the participants were asked to play the video games for 10 hours, as part of their training.
The results showed that even after only 10 hours of training, the participants who played the two action games showed remarkable improvements in their visual search abilities compared to those given the puzzle game. Although these experiments may seem trivial, the investigators highlighted the importance of these findings. We use searching behaviours in many critical aspects of life, from looking for a face in a crowd to poring over complex MRI scans. Using video games may be a great way to train people's visual searching skills! Sounds like a fun way to train to me!
Wu & Spence. Playing shooter and driving videogames improves top-down guidance in visual search. Attention, Perception, & Psychophysics, March 2013.
http://link.springer.com/article/10.3758%2Fs13414-013-0440-2
14:21 - Getting creative!
Hannah - So, my top paper for the month is to do with electrically tweaking brains to induce creativity, and it's been published in the Journal of Cognitive Neuroscience by Sharon Thompson-Schill and her colleagues at the University of Pennsylvania.
So, they did this really quite neat experiment. They took 48 volunteers and they presented them with everyday objects.
In fact actually David, I'm going to make you one of the volunteers for this experiment. I'm going to present you with an everyday object. I'm going to ask you to tell us what it is and what you would do with it, and then I want you to think outside the box, and just off the top of your head, tell me other things that you could do with this particular object. Ready? So, I'm presenting David with a picture of - I'm going to turn over a piece of paper now. David, what is this and what would you do with it?
David - It's a rolling pin and I guess you'd bake with it, so you can roll out pastry. That's the kind of conventional task I guess.
Hannah - And so, what else could you use it for outside the kitchen?
David - Well, you could hit someone over the head with it, you could use it as a rounder's bat, if you heat it up, you could use it as an iron. If it's made of wood, you could burn it and have it as a source of heat.
Hannah - So, would you say that you're creative?
David - I don't know. Not really. I find it very difficult to come up with those out-of-the-box solutions right on the spot there like that, but I came up with some reasonable ones, I think.
Hannah - Yeah, I think you did pretty well.
So, what the researchers that published this paper wanted to do was find out if they could make people more creative - that is, come up with more out-of-the-box ways of using that rolling pin or other objects. In order to foster creativity in their volunteers, they decided to apply a small electric current, just 1.5 milliamps or 1.5 thousandths of an amp, to the left side of the front of the brain. They also had a control group where they applied the same current to the right side of the front of the brain, and a placebo group that just had a very small electric stimulus right in the middle, for a very short amount of time.
David - So, why would you want to look at the left versus right side of the brain? What's the impetus for doing that?
Hannah - So, they were stimulating the pre-frontal cortex, the bit just behind your forehead that's involved in cognitive functions - learning, reasoning, planning, flexibility and thought. And the left side of the brain, the left hemisphere, is thought to be involved in more linear kinds of thinking, while the right side is thought to be involved in more global, creative thinking.
David - So, what did the researchers find from this experiment?
Hannah - Well amazingly, they did see quite a dramatic increase in creativity, as measured by the task we just demonstrated, when they applied a small inhibitory electrical current to the left side of the brain - so, by inhibiting the left pre-frontal cortex, they actually fostered more creativity in these volunteers.
David - So, what does that tell us about the way that the brain is setup to come up with creative ideas then?
Hannah - Scientists think that the pre-frontal cortex, and in particular the left side, is usually involved in filtering information and kind of dampening down how your brain works. So, this study really supports that hypothesis.
David - Excellent! That's really interesting.
Hannah - So, do you think we should all wander around with little electrical thought caps on our heads when we need to be creative?
David - Maybe - and I was thinking, a lot of companies seem to value people who can think outside the box. So, I'm interested to see what implications this kind of research would have for how people view creativity, and whether you can force somebody to be creative when they don't think they are.
Hannah - Hummm, maybe teachers can bring it into art classes?
David - Yeah, it would certainly make me better at art, I think!
17:31 - Locating the GPS system in the brain
Locating the GPS system in the brain
PhD student David Weston reports on his top stories for the month....
David - The second paper this week marks a significant milestone for brain navigation research. A group of scientists at Princeton University have been working towards locating the human GPS system, an area of the brain that is responsible for allowing us to successfully navigate our environment.
David Tank and his team focused on the medial entorhinal cortex, an area of the brain that has previously been associated with the way in which we map physical space.
Within the entorhinal cortex we find neurons called grid cells that fire electrical signals at particular places within the space. Remarkably these grid cells form a pattern of activation that looks like the hexagonal spaces on a Chinese Checkers board. So a single grid cell will fire when you occupy specific hexagons within a room for example.
Now, two competing theories have predicted how these grid cells electrically encode the physical landscape. The first, the so-called oscillatory interference model, proposes that individual grid cells produce oscillations in their electrical activity that inform you of where you are, while the competing attractor model suggests that the ramping electrical patterns of grid cells communicating with one another are responsible for the positional information.
The authors decided to test which of these theories is most valid by taking electrical recordings from the entorhinal cortices of mice navigating a virtual-reality environment. Although they found that both the oscillations and ramps were present in these cells, the ramping electrical activity much more reliably predicted the positional information, giving support to the attractor model.
These important findings go some way towards understanding how the complex patterns of electrical activity in the brain coordinate to give us reliable and accurate information about the ways in which we can interact with our world.
20:10 - How did you beat addiction?
How did you beat addiction?
with Anonymous, Squeaky Gate student
Hannah - Moving to the back of a bus, where I'm returning from the Royal Albert Hall with the Cambridge based music charity, Squeaky Gate, that I volunteer with, having just performed live as part of a series of concerts to help destigmatise mental health issues.
I'm speaking with one of the anonymous lead vocalists about his experiences with addiction, a condition that also involves the hippocampus.
Male - I discovered alcohol at 15 and it seemed to turn the lights on in my head and it did everything that I wanted it to do.
It gave me confidence. It enabled me to talk to people. At first, it was just going to pubs just like everybody else, socialising and it was fun.
It led to 27 hospital visits, 17 detox visits, various psychiatric evaluations. It eventually led to me throwing up 4 pints of blood because I had the blood vessels in my throat burst.
I remember coming back from that experience, and I went back to alcoholics anonymous and for the first time, I opened my ears and I listened. I sat and I listened, and discovered the reasons why I was doing what I was doing.
So, I went to old pubs that I used to drink in, I just forced myself to walk in and either just stand in them or occasionally, I would speak to a couple of people that are still there, and just rewrote a different memory. I walked out and I've never been back.
Hannah - Next, we meet up with a scientist to find out about a new treatment for addiction and posttraumatic stress disorder, and about re-writing the script in your brain.
21:43 - Eternal sunshine on a spotless mind?
Eternal sunshine on a spotless mind?
with Professor Barry Everitt, Cambridge University
Hannah - Next, we meet up with a scientist to find out about a new treatment for addiction and posttraumatic stress disorder.
Barry - I'm Barry Everitt. I'm a Professor of Behavioural Neuroscience in the Department of Experimental Psychology at the University of Cambridge.
Hannah - Hello, Barry. So, we've just been talking to somebody who experienced quite severe addiction for a number of years and he spoke about overcoming his addiction and rewriting the script as it were, so removing associations that he had whether it be particular environment, particular social situations, with his addiction to alcohol. I wondered if you could tell us a little bit about the research that's going on in this area of rewriting the script in your brain.
Barry - Yes, and that's an interesting thing that this person has done. So, when people begin taking drugs and then make a transition to becoming addicted to them, what happens is, generally, they take their drugs in a very ritualistic way, in a very restricted range of environments, and in the presence of rather few specific stimuli that are related to their drug taking. Obviously, drinking alcohol often happens in bars and with certain people, and it's true for people taking cocaine as well: they have certain kinds of equipment that they use in certain places, sometimes with certain people.
And quite naturally, those stimuli become associated with the effects of taking the drug or drinking alcohol through a learning process that we've known about for well over a hundred years called Pavlovian conditioning - everybody remembers Pavlov's dogs and bells, and salivation. So, a restricted range of otherwise innocuous stimuli in the environment become alcohol cues or cocaine cues, or heroin cues.
Now, it turns out what we've come to understand from our work and others in a clinical situation is those stimuli become very, very significant in the course of addiction because they can evoke craving.
Hannah - So, if somebody may have gotten over the physiological addiction to the particular drug and then after that, there's a second wave where they have to get over this association with the habit?
Barry - Yes. In fact, you don't ever really unlearn those associations except recent research has suggested that it might be possible to unlearn or eradicate their meaning.
Now, this has been known for many years in some treatment centres for addiction, where clinicians have used something called cue exposure therapy. Here, what happens is that people seeking to become abstinent - for example, to stop drinking - go into the clinic; through questioning and discussion, their particular cues are identified, and those are presented repeatedly, but without alcohol. This is what the person you were describing earlier has been doing by visiting places where he used to drink, but hasn't drunk. And so you keep repeating, over and over again, the presentation of these cues, and it turns out that they elicit less and less craving - whether you measure that in terms of how people feel, or by some objective physiological measure like heart rate or skin conductance, a measure of sweating on the palms.
And this process in psychology is called extinction. Now, that's all well and good, but it turns out that extinction is very context or situation dependent.
So, if you bring somebody to the clinic and extinguish those cues so they're not responding to them, when they go back out into the real world and encounter those same cues in the context in which they're normally present, you find they haven't been extinguished at all. So, it's not a very successful treatment.
Hannah - So, it's a bit like taking someone into the clinic and presenting them maybe with some beer bottles or particular advertising associated with alcohol which is a very different situation to meeting up with your friends on a Friday night in a restaurant where there's wine or there's beer available?
Barry - Exactly so. So, how do you get over that? A couple of things have happened recently which suggest that the memories elicited by drug-associated stimuli and drug use can indeed be erased, and this was a big surprise when it was first appreciated in about 2000.
When you retrieve a memory, at the point where the memory is active again - in the case of drugs, when the cue has elicited your memories of drinking and caused you to crave - that memory becomes labile, entering a newly active and transient state in the brain neurochemically. It has to be re-stabilised in the brain.
It turns out this happens through another round of protein synthesis in cells in the brain, and that's similar to, but not exactly the same as, the process that would've occurred when those memories were being formed in the first place. So, we talk about consolidating new learning to form memories, and this process, where the memory becomes plastic after retrieval and must be re-stabilised, is called reconsolidation.
Hannah - And so, that reconsolidation of the memory that point when the memory is quite labile and quite plastic may be a point where therapeutically, you could intervene and start some kind of treatment to dissociate that habit, that association with the drug?
Barry - Exactly, right. Now, it turns out that a very common drug used to treat hypertension, propranolol which is a beta receptor blocker can block that reconsolidation, that re-stabilisation process.
So, if you retrieve a memory in the presence of a beta receptor blocker and then come back and look a day or a week or a month later, you find that memory has been erased.
Now, that's being translated to a clinical setting in the case of posttraumatic stress disorder, people who are greatly affected by horrific and intrusive memories of terrible events like war or accidents, or rape, and it turns out if you retrieve the memory under beta receptor blockade, not just once but two or three times, there's an enormous reduction in the intrusiveness of the memory and people are much less bothered by it.
Now, that particular approach to treatment hasn't yet been applied to drug addiction, but it's something we are planning to do here in our alcohol study.
Hannah - And how do you make sure that you're not going to be erasing any other memories at the same time whilst administering this beta blocker because that must be a bit of a concern that you would erase a person's memory?
Barry - Like the eternal sunshine of the spotless mind, that's one of those fears that people always bring up, but actually, the conditions under which you can make the memory labile and can influence it are incredibly precisely defined. So, you would only ever be able to modulate a memory by focusing on the conditions specific to that memory and reactivating it.
Hannah - And which areas of the brain are these memories, these associations, being affected by the beta blocker?
Barry - The amygdala which is well-known to be associated with emotional learning and memory. It is certainly the case for drug cue and fear cue memories, but also the hippocampus which is much more to do with spaces and places, and contextual memories.
So, that's a treatment that's based on something that is now called reconsolidation blockade, but it's also the case that in the last couple of years, it's become clear that you can take the first process that I spoke about which is extinction and the second process I spoke about which is reconsolidation and put them together because they interact in a very interesting and unexpected way.
Now, what's recently been investigated in a very exciting way is that if you briefly retrieve a memory and then wait a little while, 10 minutes or an hour, and then do extinction, that combination of brief retrieval, delay, and extinction training actually results in the true erasure of the memory - particularly if it's done a couple or three times - and the memory never spontaneously recovers. You can never cue it back into existence again.
Now, that combination of treatments has been done experimentally in a drug setting, in a lab in China, and this was published in Science. There, they've taken heroin addicts and brought them into a treatment centre, briefly exposed them to heroin cues to retrieve the memory, waited a little while and then taken them through extinction training. They've done that two or three times and discovered that subsequently those cues have lost their ability to elicit craving, and they've lost their ability to activate the physiological measures of the impact of those cues - reduced heart rate and reduced skin conductance (sweating).
Hannah - And do you think that holds promise in terms of translating into real-world situations where you have more of these social cues?
Barry - The evidence is, that reconsolidation blockade and this extinction following brief memory retrieval which some people call super extinction is not constrained by context. So, you can do it in one place and it transfers to other places. So, that's really quite an exciting development because it suggests you could take advantage of what already goes on.
For example, in the treatment of PTSD or phobias you would conduct a normal cognitive behavioural therapy session, where you'd bring someone in and actually go through an extinction process, but you'd do it under these conditions of brief memory retrieval followed by extinction. In a consultation that's already going on, just a subtle change in the way you conduct it might have a profound impact on the success of the extinction training.
Hannah - That was Professor Barry Everitt from Cambridge University.
28:05 - What keeps a Californian Prof up?
What keeps a Californian Prof up?
with Professor Fred Gage, Salk Institute, California
And closing this month's show, Professor Fred Gage from the Salk Institute, California, describes what's been keeping him up all night. Fred also researches the hippocampus, looking at a specific area of it called the dentate gyrus. This region is one of the few brain areas where new brain cells continue to be born throughout life.
Fred - One of the things that I find remarkable right this minute - there are a lot of things - is that there are individual differences between neurons of the same brain structure. So here, we're talking about the dentate gyrus of the hippocampus, and at first pass it looks like all these cells are exactly the same; and in another area of the hippocampus called CA1, there are pyramidal neurons that all look the same. But if you go in and record from those cells individually, or look at the expression patterns of RNA, or even at the DNA of those cells, they're different. All are different, and the next wave of interest, I predict, will be looking at these individual differences that occur between individual neurons within the brain.
Hannah - That was Professor Fred Gage from the Salk Institute California. That's all for now. I'll be back again next month to pay attention to ADHD - Attention Deficit Hyperactivity Disorder. |
Africander cattle (also known as Afrikaner) are a native South African breed, developed from the native Khoi-Khoi cattle of the Cape of Good Hope, which are thought to have arisen from the longhorned Zebu and the Egyptian longhorn. The Africander belongs to the Sanga type. Sanga-type cattle, in huge herds, were owned by the Hottentots when the Dutch established the Cape Colony in 1652. The animals were obtained by the colonists, who improved them for use as draft animals. It was Africander oxen that drew the wagons which carried Boer farmers and families on the Great Trek of 1835-36 from the Cape of Good Hope to the Orange Free State, Natal and the Transvaal to escape British rule. The word trek is originally Afrikaans, meaning draft.
Photo courtesy of the Afrikaner Cattle Breeders’ Society of South Africa, studbook.co.za
The breed was almost exterminated when huge numbers died of rinderpest (viral disease of cattle) or were destroyed during the South African War.
Since then the breed has been developed into a competitive beef breed, and it was the first indigenous South African breed to form a breed society, in 1912. Nowadays it is probably the most popular indigenous cattle breed in South Africa. Though the population size in 1998 was reported to be 20,465, its cryogenic conservation in semen form, and the keeping of animals at the various improvement stations in South Africa, are expected to ensure that it is not at risk.
The breed has been bred to breed standards for many generations and shows a high degree of uniformity in colour and conformation rarely encountered in other African livestock breeds; it is among the largest breeds in Africa. The breed is typically red, varying from light tan to deep cherry red. They have long lateral horns, flesh to creamy white in colour with amber tips. A polled type has also been developed.
Africander cattle exhibit good resistance to heat, a high level of tick resistance, a quiet temperament and a satisfactorily high level of fertility under harsh conditions. It is a heavy beef-type animal with good meat quality, but cows show lactational anoestrus in times of environmental stress. Mature cows are of medium size, weighing approximately 525 to 600 kg (1,150–1,350 pounds), and bulls weigh 750 to 1,000 kg (1,650–2,200 pounds). They have loose skin and large drooping ears. The bulls have the Zebu-like hump of muscle and fat on the neck, which can rise to 7 cm or more above the topline.
There is considerable variation in the performance of pure Africander cattle, especially in weaning weight and growth rate to slaughter age; but in general they tend to be slow maturing with comparatively low fat cover. The dressing percentage is 54%.
It is not considered to be a milk animal in a country where European dairy breeds supply most of the milk.
– Excellent mothering ability, easy calving and low calf mortality rates
– Weans heavy cross-breed calves
– Capable of producing 10 or more calves in a lifetime
– Are virile, active and prolapse-free
– Has a long productive life (up to 12 years and older)
– Produces outstanding mother-line progeny in any cross-breeding program
– Produces top class slaughter oxen on the hoof as well as on the hook with any breed of cow
This hardy, no-nonsense breed has a number of outstanding traits, its value in cross-breeding programmes being particularly appreciated. The Afrikaner is well named the no-nonsense breed. Livestock specialists say the Afrikaner doesn’t have the compact, block-like conformation of many other beef breeds: it has longer legs, yet good depth, and a muscular back, loins, rump and thigh, but a fairly shallow body.
As a purely beef-producing breed, the Afrikaner cow yields ample milk for her calves. Experiments have shown that, during a suckling period of 210 days, the calf on average consumes 900 litres of milk. The cow has excellent mothering abilities: it’s common on many farms to see a lone cow surrounded by several calves, which she guards while their own mothers are grazing or on their way to distant watering points. Given good grazing and ample fresh water, cows calve regularly once a year. Due to the cow’s slightly drooping rump and wide vaginal passage, there are few if any calving problems. Calves at birth weigh only about 34 kg, which also ensures an easy, uncomplicated calving. The breed thrives under extreme heat.
The age at first calving is about 36 months and the calving interval about 445 days; calving percentages are variable and have been compared unfavourably with those of other indigenous breeds (e.g. Tswana, Tuli, Angoni, Mashona); however, the cows have been used extensively for commercial crossbreeding with bulls of European breeds, especially Hereford, Sussex, Devon, Simmental and Charolais (Maule 1990).
The breed is currently found in Africa, Australia and tropical countries.
|
A Brief History of Baseball
Though it is now considered America’s national pastime, baseball did not always enjoy such popularity. In fact, the game has its roots in England, where it was being played as early as the 1500s. By the early 1800s, it had become quite popular in the United States as well. However, the game underwent a number of changes before it became the baseball we know today. For example, originally there were ten players on each team, and the game wasn’t divided into innings. It wasn’t until 1845 that baseball began to resemble the game we play today. With only nine players on each side and the game divided into innings, the modern game of baseball was born. Since then, baseball has gone on to become one of America’s most beloved traditions. From small-town sandlots to Major League stadiums, millions of people across the country enjoy playing and watching this timeless game. |
Life on Earth can be viewed as a complex network of interactions between living organisms and their respective environments. By parsing the natural world into various ecosystems and biomes, the extent and significance of such interaction among species and between organisms and their natural habitats becomes abundantly clear. The study of ecology forms the heart of this engaging volume, which explores the formation of ecological communities and examines the biological diversity that forms the backbone of life on the planet.
Series: The Environment: Ours to Save
Interest Level: Grades 7-12
Guided Reading Level: Z
Lexile Level: 1310
ISBN: 9781615305568 (E-book) |
In engineering and material handling, understanding key terms like Safe Working Load (SWL) and Working Load Limit (WLL) is essential to ensure safety, efficiency, and compliance with best practices. Whether you’re a seasoned engineer, a safety officer, or new to the industry, getting a firm grasp on these concepts will greatly contribute to a safer working environment.
SWL and WLL are not just industry jargon; they define the maximum load lifting equipment can safely handle. Over time, the usage of these terms has evolved, and their interpretation varies across different international standards. In this blog post, we aim to clarify these crucial terms, delving into their definitions, calculations, and implications and highlighting their differences.
What exactly do SWL and WLL mean? How are they calculated? What distinguishes them from each other, and are they interchangeable? This post will answer all these questions and more. So, let’s unpack these terms and understand how a subtle change in terminology can significantly influence the interpretation and application of safety standards.
What is Safe Working Load (SWL)?
Safe Working Load (SWL), sometimes also referred to as Normal Working Load (NWL), is the maximum force or load that a piece of lifting equipment, such as a crane, winch, hoist, or an accessory, can safely handle without the risk of failure or breaking.
SWL is determined by the equipment manufacturer and is calculated based on the Minimum Breaking Strength (MBS) or Minimum Breaking Load (MBL), which is the lowest load that would cause the piece of equipment to fail or break under force. This MBS is then divided by a safety factor to determine the SWL.
The safety factor is a number that’s typically between 4 and 6 and is chosen based on the type of equipment, its intended use, and the potential risks associated with equipment failure. In cases where equipment failure could harm people, the safety factor may be as high as 10.
In the formula form:
SWL = MBS / Safety Factor
SWL aims to ensure safety in operations that involve lifting heavy objects. It provides a guideline for operators, helping them understand the limits of their equipment and preventing situations where the equipment is overloaded, which could lead to equipment failure and potentially serious accidents or damage.
Please note that it’s crucial not to exceed the SWL of a piece of lifting equipment. Even if the equipment seems to be handling a load heavier than its SWL, that does not mean it’s safe or advisable to continue operating in that way. Overloading equipment can cause wear and tear, reduce the lifespan of the equipment, and increase the risk of unexpected failure.
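To make the calculation concrete, here is a minimal sketch of the SWL formula in Python. The MBS figure and safety factor below are hypothetical values chosen purely for illustration, not data for any real piece of equipment; in practice, always use the manufacturer’s rated values.

```python
def safe_working_load(mbs_kg: float, safety_factor: float) -> float:
    """Return the SWL given a Minimum Breaking Strength (MBS) and a safety factor."""
    if safety_factor <= 0:
        raise ValueError("safety factor must be positive")
    return mbs_kg / safety_factor

# Hypothetical example: a hoist chain with an MBS of 12,000 kg
# and a safety factor of 6 gives an SWL of 2,000 kg.
print(safe_working_load(12_000, 6))  # 2000.0
```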
What is Working Load Limit (WLL)?
The Working Load Limit (WLL) is the maximum load a piece of lifting equipment, such as a crane, hoist, sling, chain, or any other lifting device, can handle under normal service conditions. This limit is set by the manufacturer based on rigorous testing and should not be exceeded to ensure the safe operation of the equipment.
Like the Safe Working Load (SWL), the WLL is derived from the Minimum Breaking Strength (MBS) or Minimum Breaking Load (MBL) of the equipment, divided by a safety factor. The safety factor varies depending on the type of equipment and its intended use, but it generally ranges between 4 and 6, and can go up to 10 in cases where the failure of the equipment could pose a risk to human life.
In simple mathematical terms:
WLL = MBS / Safety Factor
WLL is crucial in construction, shipping, and any industry where lifting and moving heavy loads is a regular part of operations. It ensures the safe use of lifting equipment, helping to prevent equipment failure, accidents, and potential injuries.
It’s important to note that the WLL can change based on the configuration of the lifting equipment. For example, the angle at which a sling or chain is used can affect its WLL. Therefore, all components in a lifting configuration, including hooks, shackles, and slings, should have a WLL suitable for the load being lifted.
In many regions, it’s a legal requirement to clearly mark the WLL on lifting equipment, and it’s also recommended that regular inspections are carried out to ensure equipment is in good condition and safe to use.
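Because sling angle matters, riggers derate a multi-leg sling as its legs flatten out. The sketch below is a simplified illustration, not a substitute for a manufacturer’s load chart: it assumes the common rigging approximation that, for a symmetric two-leg sling, the tension in each leg is load / (2 × sin θ), where θ is the sling angle measured from the horizontal, and the leg WLL is a hypothetical value.

```python
import math

def max_load_two_leg_sling(wll_per_leg_kg: float, angle_deg: float) -> float:
    """Largest total load a symmetric two-leg sling can lift without any leg
    exceeding its WLL, assuming leg tension = load / (2 * sin(angle))."""
    if not 0 < angle_deg <= 90:
        raise ValueError("sling angle must be between 0 and 90 degrees")
    return wll_per_leg_kg * 2 * math.sin(math.radians(angle_deg))

# Hypothetical legs rated at a WLL of 1,000 kg each:
for angle in (90, 60, 45, 30):
    print(angle, round(max_load_two_leg_sling(1_000, angle)))
# 90 -> 2000, 60 -> 1732, 45 -> 1414, 30 -> 1000
```

At 30 degrees the pair can lift no more than a single vertical leg could, which is why shallow sling angles are avoided in practice.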
Difference Between Safe Working Load (SWL) and Working Load Limit (WLL)
Safe Working Load (SWL) and Working Load Limit (WLL) are used in engineering and material handling fields to denote the maximum capacity equipment or hardware can safely handle. While they have been used interchangeably in some contexts, they have distinct meanings in different regions and standards. Here’s a brief explanation of the differences:
- SWL, or Safe Working Load, is an older term indicating the maximum weight or force that equipment or machinery can safely handle - the maximum load that can be applied to a component or piece of equipment without causing deformation or failure. It is calculated as the Minimum Breaking Load (MBL) or Minimum Breaking Strength (MBS) divided by a safety factor, which typically ranges from 4 to 6.
- WLL, or Working Load Limit, is a term that has largely replaced SWL in the United States, European, and ISO standards due to the more specific and legally clear nature of its definition. The WLL is the maximum mass or force a product can support in general service when the pull is applied inline.
In terms of practical usage, both define the same concept — the maximum safe load a piece of lifting equipment can handle. However, the shift from SWL to WLL has been due to an attempt to provide more specificity and clarity in safety regulations and to avoid potential legal implications associated with the term “safe.”
It’s important to note that these terms provide a guideline for safe operation, and loads should not exceed these limits. Proper training, inspection, and maintenance of lifting equipment are essential in ensuring safe operations. |
The media focus tends to be on fluvial (river) flooding, as it tends to be more predictable and longer lasting than flash flooding, making it easier to film and report on.
This often means other types of flooding can go ignored by the media, resulting in both less public and government interest.
Of all the flood risks to the UK - from coasts, rivers, groundwater, sewers and surface water – it is surface water flooding which threatens more people and properties than any other form of flood risk.
Over three million properties in England are at risk of surface water flooding, which is greater than the number at risk from rivers or the sea (2.7 million).
The most worrying thing here is that most do not know they are at risk. If you don’t live near a river or the sea, it’s not unreasonable to think that you are not at risk of flooding; however, this may not be correct.
Surface water flooding occurs when intense rainfall overwhelms drainage systems. It is reported that 35,000 properties were affected by surface water during the major floods of 2007.
It is also important to note that surface water flood maps are not designed to be used to identify if an individual property will flood. They do not take into account drainage connections and therefore an experienced flood risk consultant is needed to translate what the flood mapping is showing to what would likely happen on the ground, at property level.
It is important to remember that surface water flooding may not come with any warning.
Surface water flooding does not just hit homes and businesses; it also has far-reaching effects across society, disrupting the roads, rail and utilities of towns and cities. It is a risk area which is growing.
An increasing population and drive to build new homes will mean more concrete and fewer areas for rainfall to safely drain away. The government set a target in 2015 to build 200,000 houses a year, so that by 2020 one million new homes would have been built.
Incredibly, this target was deemed too small and in 2017 it was increased to 250,000 homes a year with the suggestion that the target should rise to 300,000 a year.
Alongside new developments, changing land uses and deforestation will exacerbate flood risk as interception and infiltration rates are reduced. Climate change will bring more frequent and intense rainfall, bringing more flash flooding and overloading of our ageing sewer network.
Government-funded Environment Agency flood mitigation schemes are not designed to protect against surface water flooding. It is a risk which tends to affect built-up urban areas. Poor urban areas are the most susceptible of all, because they are densely populated, with paved drives and roads which don’t absorb the rainwater.
If there is a possible upside to surface water flooding, it would be that it generally does not last as long as river flooding, and therefore there is a greater chance of keeping the water out of properties using property flood resilience measures, such as flood barriers, water pumps, and anti-flood airbricks.
Moving forwards, we urgently need to reduce carbon emissions to limit climate change, or the risk of surface water flooding will keep increasing. Alongside this, warnings need improving, as generally surface water flooding occurs without warning.
Any new developments should be resilient to flooding, and not exacerbate the risk elsewhere through well-executed (and maintained) sustainable urban drainage schemes.
Alongside this, catchments should be managed better to reduce overland flow, by reforesting and contour ploughing to slow the flow. It is important to remember that there will always be a residual risk, and therefore property flood resilience is a key part of the flood mitigation jigsaw.
With 67% of the population not knowing their flood risk, we have a long way to go to drive changes in behaviour!
*Simon Crowther is a Civil Engineer and Chartered Water & Environmental Manager. Simon founded Flood Protection Solutions Ltd in 2012, and as a 2007 flood victim he has huge empathy with his clients. |
A 2014 study published by the American Medical Association shows a positive association between full-day preschool interventions and school readiness, attendance, and parent involvement.
Herman (1984) describes in detail the advantages of full-day kindergarten. He and others believe full-day programs provide a relaxed, unhurried school day with more time for a variety of experiences, greater opportunity for screening and assessment to detect and deal with potential learning problems, and more occasions for good quality interaction between adults and students.
While the long-term effects of full-day kindergarten are yet to be determined, Thomas Stinard's (1982) review of 10 research studies comparing half-day and full-day kindergarten indicates that students taking part in full-day programs demonstrate strong academic advantages as much as a year after the kindergarten experience. Stinard found that full-day students performed at least as well as half-day students in every study (and better in many studies) with no significant adverse effects.
A recent longitudinal study of full-day kindergarten in the Evansville-Vanderburgh, Indiana, school district indicates that fourth graders maintained the academic advantage gained during full-day kindergarten (Humphrey 1983).
Despite often expressed fears that full-day kindergartners would experience fatigue and stress, school districts that have taken care to plan a developmentally appropriate, nonacademic curriculum with carefully paced activities have reported few problems (Evans and Marken 1983; Stinard 1982).
What are the Disadvantages of Full-Day Programs?
Critics of full-day kindergarten point out that such programs are expensive because they require additional teaching staff and aides to maintain an acceptable child-adult ratio. These costs may or may not be offset by transportation savings and, in some cases, additional state aid.
Other requirements of full-day kindergarten, including more classroom space, may be difficult to satisfy in districts where kindergarten or primary grade enrollment is increasing and/or where school buildings have been sold.
In addition to citing added expense and space requirements as problems, those in disagreement claim that full-day programs may become too academic, concentrating on basic skills before children are ready for them. In addition, they are concerned that half of the day's programming in an all-day kindergarten setting may become merely child care. |
Gather the necessary tools and materials
Repairing a zipper on a jacket is a relatively simple task that can be done at home. To get started, you will need a few tools and materials:
- Needle-nose pliers
- Scissors
- Sewing needle
- Thread (matching the color of the jacket)
- Seam ripper (optional, but helpful for removing stitches)
- Replacement zipper (if the zipper is beyond repair)
Assess the damage
The first step in repairing a zipper on a jacket is to assess the damage. Carefully examine the zipper to determine what is causing the problem. Common issues include a missing or damaged zipper pull, a misaligned zipper track, or a broken or separated zipper.
If the zipper pull is missing or damaged, you may be able to replace it by attaching a new one. If the zipper track is misaligned, you can try realigning it by gently pulling the teeth back into place with the pliers. If the zipper is broken or separated, it may need to be replaced entirely.
Fixing a zipper pull
If the zipper pull is missing or damaged, you can easily replace it by attaching a new one. Start by removing any remaining pieces of the old zipper pull with the pliers or scissors. Then, thread the new zipper pull onto the zipper track, making sure it is securely attached.
Realigning a zipper track
If the zipper track is misaligned, start by gently pulling the teeth back into place with the pliers. Be careful not to force the teeth too much, as this could cause them to break. Once the teeth are aligned, test the zipper to ensure it is working properly.
If the misalignment persists, you may need to use a seam ripper to remove the stitches holding the zipper in place. This will allow you to reposition the zipper and sew it back into the jacket in the correct position. Use a needle and thread to sew the zipper back in place, making sure the stitches are secure.
Replacing a broken or separated zipper
If the zipper is broken or separated and cannot be repaired, it will need to be replaced. Start by using the seam ripper to remove the stitches holding the zipper in place. Once the zipper is removed, measure the length of the old zipper and purchase a replacement zipper of the same length.
Align the new zipper with the opening in the jacket and use a needle and thread to sew it in place. Make sure to secure the zipper firmly to prevent it from coming loose. Test the zipper to ensure it is working smoothly before finishing up.
By following these steps, you can successfully repair a zipper on a jacket and save yourself the expense of purchasing a new one.
The researchers’ conclusions are “still very much in the discovery phase”, said David Carlson, the IPY’s international programme office director. However, he said, “we have enough to say the whole ice/ocean/atmosphere system in both hemispheres is changing faster than we thought”, that is, faster than computer models had projected. Carlson added: “It would be scientifically a failure and we would be remiss if we didn’t call attention to this and get back in there and look at it again quickly.”
The WMO released its preliminary report, The State of Polar Research, and some of the projects in the 60-country, US$1.2-billion scientific collaboration are ongoing. The summary of results outlines what has been learned so far from the polar-year projects.
Research dealt with sea-level rises due to the melting of ice sheets, sea-ice decreases in the Arctic, anomalous warming in the Southern Ocean, and the storage and release of methane in permafrost.
In addition to the detailed study of the geophysical and climatic systems of the poles, the projects also studied biodiversity, epidemiology and sociological issues in the Arctic. Indigenous Arctic peoples were, and are, actively involved in climate monitoring, measuring local wildlife populations and other aspects of the research.
Can Epsom salt be beneficial for aquarium plants?
If you’re an avid aquarium hobbyist, you’ve probably heard about how beneficial Epsom salt can be for your aquarium plants. But what exactly is Epsom salt, and how can it help your aquatic vegetation thrive? In this article, we will explore the benefits of using Epsom salt for aquarium plants and provide some useful tips on its application.
What is Epsom salt?
Epsom salt, also known as magnesium sulfate (MgSO4), is a naturally occurring mineral compound composed of magnesium, sulfur, and oxygen. It gets its name from the town of Epsom in England, where it was first discovered in natural springs. While it is widely known for its therapeutic properties, Epsom salt can also be used as a fertilizer for plants, including those in your aquarium.
Why is Epsom salt beneficial for aquarium plants?
Epsom salt contains essential nutrients that can promote the healthy growth of aquarium plants. Here are some reasons why it can be beneficial for your submerged botanical beauties:
1. Magnesium source: Magnesium is a vital nutrient for plants as it plays a crucial role in chlorophyll production, enzyme activation, and overall plant metabolism. Epsom salt provides a readily available source of magnesium, ensuring that your aquarium plants have an adequate supply of this essential element.
2. Increased nutrient uptake: The presence of magnesium in Epsom salt can enhance the uptake of other essential nutrients by aquarium plants. It improves the plant’s ability to absorb nutrients such as nitrogen, phosphorus, and potassium, which are necessary for healthy growth.
3. Improved photosynthesis: Magnesium is an essential component of chlorophyll, the pigment that enables plants to carry out photosynthesis. By providing an adequate supply of magnesium, Epsom salt can enhance the photosynthetic process, leading to better growth and vitality of aquarium plants.
4. Prevention of nutrient deficiencies: Magnesium deficiency can manifest in aquarium plants as yellowing or browning of leaves, stunted growth, and poor overall health. By adding Epsom salt to your aquarium substrate or water, you can help prevent magnesium deficiency and maintain the optimal health of your aquatic vegetation.
How to use Epsom salt for aquarium plants?
Now that we understand the benefits of using Epsom salt for aquarium plants, let’s explore how to incorporate it into your aquatic ecosystem. Here are some tips to help you use Epsom salt effectively:
1. Dosage: It is important to use Epsom salt in the correct dosage to avoid any adverse effects on your aquarium plants or fish. As a general guideline, you can add around 1-2 teaspoons of Epsom salt per 20 gallons of aquarium water. However, it’s always best to start with a lower dosage and gradually increase if needed, based on the specific needs of your plants (a small dosing sketch follows after this list).
2. Application methods: There are different ways to apply Epsom salt to your aquarium plants. You can either dissolve it in water and directly add it to your aquarium, or you can apply it to the substrate around the plants. If you choose the latter method, make sure to mix the Epsom salt with the substrate thoroughly to ensure even distribution and avoid direct contact with the plant’s roots.
3. Frequency: The frequency of Epsom salt application will depend on the specific requirements of your plants. In general, it is recommended to apply Epsom salt once every couple of months or as needed. Monitor the health and growth of your plants closely to determine if they need additional doses of Epsom salt.
4. Water parameters: Before adding Epsom salt to your aquarium, it is crucial to test your water parameters, including the magnesium levels. This will help you assess whether Epsom salt supplementation is necessary or if your aquarium already has sufficient magnesium levels.
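To make the dosage guideline above concrete, here is a minimal dosing sketch in Python. The figure of roughly 5 g of Epsom salt per level teaspoon and the helper's name are our assumptions, not an aquarium-keeping standard; weigh your own salt if precision matters.

```python
# Rough Epsom salt dosing helper based on the guideline above
# (1-2 teaspoons per 20 gallons). The ~5 g per level teaspoon
# conversion is an assumed figure, not a standard.
GRAMS_PER_TEASPOON = 5.0

def epsom_dose_grams(tank_gallons: float, tsp_per_20_gal: float = 1.0) -> float:
    """Grams of Epsom salt for a tank, starting at the low end of the range."""
    teaspoons = tsp_per_20_gal * tank_gallons / 20.0
    return teaspoons * GRAMS_PER_TEASPOON

# Example: a 55-gallon tank at the conservative 1 tsp / 20 gal rate.
print(f"{epsom_dose_grams(55):.1f} g")  # ~13.8 g, i.e. about 2.75 teaspoons
```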
Frequently Asked Questions
1: Can Epsom salt be harmful to aquarium plants if used in excess?
Like any fertilizer, Epsom salt should be used in moderation. Excessive use of Epsom salt can lead to imbalances in your aquarium’s nutrient levels, potentially causing harm to your plants. It is essential to follow the recommended dosage and monitor the health of your plants to ensure they are not being over-fertilized.
2: Does Epsom salt benefit all types of aquarium plants?
While Epsom salt can be beneficial for most aquarium plants, it is essential to consider the specific needs of each plant species. Some plants may require different nutrient ratios or may not respond positively to Epsom salt supplementation. Researching the specific requirements of your plants will help you determine if Epsom salt is suitable for them.
3: Can Epsom salt be used alongside other fertilizers?
Yes, Epsom salt can be used in conjunction with other fertilizers to provide a balanced nutrient supply for your aquarium plants. However, it is important to ensure that the combination of fertilizers does not lead to excessive nutrient levels, which can be detrimental to your plants and fish.
4: Can Epsom salt be used for treating plant diseases in aquariums?
While Epsom salt is primarily used as a nutrient supplement for aquarium plants, it does have some antimicrobial properties. It may help to prevent or alleviate certain plant diseases caused by pathogens. However, if you suspect a disease or infection in your aquarium plants, it is best to consult a professional for proper diagnosis and treatment.
Using Epsom salt in your aquarium can be a beneficial practice for promoting the healthy growth of your plants. Its magnesium content provides essential nutrients, enhances nutrient uptake, and improves photosynthesis, leading to vibrant and robust aquatic vegetation. By following the proper dosage and application methods, you can effectively incorporate Epsom salt into your aquarium’s care routine and enjoy the beauty of thriving plant life in your underwater paradise.
A fuel pump draws between 5 and 7 amps when it is operating, which is a meaningful load whenever the engine (and therefore the alternator) is not running. Leave the ignition on with the engine off for long enough and the pump alone can run the battery down in a matter of hours. To avoid this, make sure to keep your car's battery charged and check it regularly.
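As a back-of-the-envelope check, runtime off the battery alone is simply capacity divided by draw. The 50 Ah rating below is an assumed nominal value for an ordinary car battery, not a figure from this article:

```python
# Idealized runtime: amp-hours divided by amps. Real-world endurance is
# lower, since voltage sags as a starter battery discharges.
def runtime_hours(battery_ah: float, draw_amps: float) -> float:
    return battery_ah / draw_amps

for amps in (5.0, 7.0):
    print(f"{amps:.0f} A draw: ~{runtime_hours(50.0, amps):.1f} h")
# 5 A draw: ~10.0 h
# 7 A draw: ~7.1 h
```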
A fuel pump is a mechanical device that moves fuel from the tank to the engine. It is usually located near the fuel tank, and it consists of a pump, a filter, and a pressure regulator. The pump draws fuel from the tank and pushes it through the filter to remove any contaminants.
The filtered fuel then passes through the pressure regulator, which controls the pressure of the fuel system.
Voltage at Fuel Pump
If your car’s fuel pump has failed, it may be due to a problem with the voltage. The fuel pump needs a certain amount of voltage in order to function properly, and if there is not enough, the pump will not be able to do its job. There are a few things that can cause low voltage at the fuel pump, including:
1. A problem with the battery – If the battery is not providing enough power, it can affect the voltage at the fuel pump. This is most likely to be an issue if the battery is old or damaged.
2. A problem with the alternator – The alternator charges the battery and also provides power to other systems in the car, including the fuel pump. If there is a problem with the alternator, it can cause low voltage at the fuel pump.
3. A problem with wiring – If there are any loose or damaged wires between the battery and fuel pump, it can disrupt the flow of electricity and cause low voltage at the fuel pump.
How Much Power Does a Fuel Pump Need?
This is difficult to answer without knowing the specific pump, but the horsepower figures sometimes quoted are misleading for ordinary cars. A stock electric fuel pump drawing 5-10 amps at 12-14 volts consumes roughly 60-140 watts, which is well under one horsepower. Larger high-performance pumps draw more current, and ultimately the power required depends on the size and capacity of the pump and the fuel pressure it must maintain.
How Do You Check Amp Draw on a Fuel Pump?
To check the amperage draw on a fuel pump, you will need a multimeter that can measure current. First, with the ignition off, locate the fuel pump relay in the engine bay and remove it.
With the relay out, identify the socket's load terminals from the diagram printed on the relay: on a standard Bosch-style relay the load circuit runs between terminals 30 and 87, while 85 and 86 are the coil terminals, which carry only the small relay-switching current rather than pump current.
Set the multimeter to amps and bridge the two load terminals in the socket with the meter leads so that the pump current flows through the meter. Turn the key to ON (engine off) so the pump runs its priming cycle, and note the reading. A healthy stock pump typically draws somewhere in the 4-10 amp range; a common rule of thumb is roughly one amp per 10 psi of fuel pressure.
If the reading is significantly higher or lower than that, there may be an issue with your fuel pump or wiring.
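If you log readings over time, a tiny helper like the following can flag out-of-range values. The function, its name and its fault hints are entirely hypothetical, written around the nominal window described above:

```python
# Hypothetical sanity check for fuel pump current readings.
def classify_pump_draw(measured_amps: float,
                       nominal: tuple[float, float] = (4.0, 10.0)) -> str:
    low, high = nominal
    if measured_amps < low:
        return "low draw: check connections, wiring and supply voltage"
    if measured_amps > high:
        return "high draw: check for a clogged filter, pinched line or worn pump"
    return "within nominal range"

print(classify_pump_draw(12.5))  # high draw: check for a clogged filter, ...
```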
What Causes High Fuel Pump Amperage Draw?
A fuel pump is responsible for moving fuel from the tank to the engine. An electric pump uses a motor to drive a pumping element (a turbine, gerotor or, in older mechanical designs, a diaphragm) that pressurizes the fuel and forces it through the system.
The current draw of a fuel pump is affected by a number of factors, including the size of the pump, the speed at which it is operated, and the resistance of the fuel line.
A higher current draw indicates that the pump is working harder to move the same amount of fuel, which can be caused by any of these factors. If you suspect that your fuel pump may be drawing too much current, there are a few things you can check. First, make sure that all connections are tight and there are no leaks in the system.
Next, check the pressure regulator to ensure that it is set correctly. Finally, inspect the electrical wiring to ensure that there are no damaged or frayed wires.
How Many Amps Does a Gm Fuel Pump Draw?
A GM fuel pump draws approximately 10 amps.
A fuel pump typically draws between 5 and 10 amps while it is running. When you turn the key to ON, the pump primes the fuel system within a second or two, building the pressure the engine needs to start.
Individualism concerns the way people relate to others. In an individualist culture, the wants of the person supersede the wants of other people in the group. In an individualist community, people are expected to look after themselves and their immediate family (Van Hoorn 270). Conversely, in a collectivist populace, individuals from infancy onward are integrated into strong, cohesive in-groups and extended families.
Leadership enhancement is crucial because companies take on the character of their leaders (Van Hoorn 273). Leadership is an intricate matter. However effortlessly some leaders may appear to go about the work of management, a leader's trajectory is usually accompanied by persistent difficulties and contingencies. The work of a leader is not to tackle every problem personally, but to motivate followers to address the challenges. The best leaders recognize that they do not have solutions to every problem, and so they continually re-educate themselves.
Individualism is an old concept. It took root about 250 years ago in the United States (Van Hoorn 275). The country was founded upon the principles of individual responsibility and self-reliance. These virtues were key to the Founding Fathers' mentality, and they shaped the cultural, economic and political institutions of the young country. The Founders placed particular emphasis on individual liberty (Zaharna 200).
It is normal for conflict to emerge once in a while. Everyone perceives the surrounding environment differently, and thus we make different choices. Always keep a positive mind when looking at conflict (Zaharna 205). Conflict is a natural part of development and shapes the culture of the company. Creating a formal grievance procedure accessible to all employees is important: workers at all ranks should be able to voice their opinions, and they should receive fair feedback promptly.
The mass media has influenced, and continues to shape, the evolution of today's individualism. The culture of individual ecstasy, the importance of pleasure and the model of intimate satisfaction are dispersed through mass culture, television and the press. These media have made it possible to fulfill such wants, and self-love has become a socially acceptable practice, held up as a model for the masses.
Providing discounts for new registrants will encourage more people to enlist. Involving employees and administrators to increase visibility on campus builds cohesion. Adopting different approaches to sending invitations is a great way to win over college students. Incentivizing learners with physical rewards tends to bring students together (Zaharna 208), and issuing refreshments at events keeps students happy and engaged during discussions.
Team satisfaction is cardinal in assessing whether an approach is successful. Much of the time this issue is ignored on the assumption that people will always show up when required. The quality of a venture is another strong metric of a technique's effectiveness; since the outcome of one project normally affects the next, it is key to monitor quality and adjust future undertakings accordingly.
Companies create groups to accomplish goals that enhance the quality of commodities, decline wastes or eliminate inefficiencies in a system. Leaders ensure that there is effective communication between team members. Also, leaders should regularly update team members on the progress of an undertaking. Leaders should inspire their followers. They do this by leading by example. Leaders also have the responsibility of establishing a plan to attain goals of a given venture.
Van Hoorn, Andre. "Individualist-Collectivist Culture and Trust Radius: A Multilevel Approach." Journal of Cross-Cultural Psychology 46.2 (2015): 269-276.
Zaharna, R. S. "Beyond the Individualism-Collectivism Divide to Relationalism: Explicating Cultural Assumptions in the Concept of Relationships." Communication Theory 26.2 (2015): 190-211.
The beloved children’s author and illustrator, Dr. Seuss, was born March 2, 1904. Children around the world celebrate his birthday by reading his treasured books and spending time in school and at home doing fun activities that reflect his brilliant work. Don’t miss out on the chance to have some fun with your kids!
Here are some fun, fan-seussical activities to do with your kids:
- The National Education Association has chosen Dr. Seuss’s birthday for its Read Across America Day, a nationwide reading celebration that shines a spotlight on the importance of reading to children. The obvious activity of choice for the day – read to your kids! There are 46 Dr. Seuss children’s books to choose from!
- Dr. Seuss books are ideal for reading aloud to children, even infants who will be mesmerized by the rhymes and preschoolers who will get big kicks out of the nonsensical nature of the stories. Early readers get a boost of confidence from tackling “Hop on Pop,” or other beginner favorites.
- The official Dr. Seuss website by Random House is chock full of activities for kids to do on their own and a Parents section to provide tips on maximizing both the quality and the quantity of reading time in your home. The website’s Books section provides a search tool to help you select Seuss books by your child’s age, a favorite character or series, and other criteria.
- Create a Reading Chart to track reading time and progress. The blogger of Mom Endeavors crafted a simple DIY chart centered on a favorite Seuss quote:
“The more that you read, the more things you will know. The more that you learn, the more places you’ll go.”
- Use these free printables to have kids color and create bookmarks that encourage them to read well beyond today’s festivities.
- Make and play with Oobleck. This ooey, gooey not-quite-liquid-not-quite-solid play gunk is inspired by the strange stuff that falls from the sky in “Bartholomew and the Oobleck.” The non-edible recipe: mix together 2 cups cornstarch, 3 drops of food coloring and up to 1 cup of water in a medium-sized bowl. Add the water slowly, as you may not need the entire cup.
- One Fish, Two Fish Matching Game and Magnets. Before you can play this simple matching game with your kids, you’ll need to bust out a bit of DIY. You’ll need foam board, magnets, a Sharpie, and images from either printables or those you’ve traced or sketched yourself. Detailed instructions are here.
- Record your early reader reading “Green Eggs and Ham.” This handy app makes it fun and easy to record and the accompanying games and activities will hold your child’s attention long enough for you to create the matching game listed above.
- Host a playdate or a movie night (or afternoon) by popping some popcorn and showing a Seuss classic. “Horton Hears a Who” is entertaining for little kids and “The Lorax” is tops with older kids who can appreciate the environmental message, as much as the characters and cool animation.
- A celebration isn’t a celebration without food! If you aren’t up to tackling an elaborate Lorax cake, there are many other simple treats to make the day even more memorable and fun.
- Truffula Trees – Adorable mini cupcakes you can create with a cake mix, pretzel sticks and cotton candy, along with a few extras to make them look like the trees from The Lorax movie.
- Do You Like Green Eggs and Ham? Here’s a Green Eggs and Ham recipe that will be just as appealing to adults as kids (and it doesn’t involve icky food coloring!). Try it, try it, you will see!
And, for those with wee little ones who shouldn’t miss out on the fun, how about this photo op for the baby book?!
This article by Dr David Frawley (Pandit Vamadeva Shastri) was first published by DailyO.
India has been primarily Hindu in terms of culture and religion for many centuries, extending to thousands of years. Hinduism has endured remarkably through long periods of foreign invasion and hostile rule, though other ancient religions have long since perished.
We find this vast spiritual and cultural tradition comprehensively explained as early as the Mahabharata, and synthesised philosophically in the Bhagavad Gita more than 2,000 years ago. The Mahabharata describes the geography of the entire subcontinent of India relative to the worship of Krishna, Rama, Vishnu, Shiva and Durga, explaining the main deity forms and yogic teachings of later Hinduism, as well as delineating the rule of kings. Other important dharmic traditions, notably Buddhism and Jainism, share a common culture, values and practices with the Hindu.
Christianity arrived in India at an early period but was a minor influence until the colonial period. Islam began inroads in the eighth century and became a strong force after the thirteenth century. Yet these religions, in spite of great efforts, could not replace Hinduism as the dominant cultural tradition.
Composite culture and cultural continuity
Culture has an identity and continuity that evolves over time. In this regard, we can speak of an Indian culture and identity that is predominantly Hindu, just as we can speak of a European culture and identity that is predominantly Judeo-Christian, or a Middle Eastern culture that is predominantly Islamic.
There is certainly much beautiful art, profound philosophy, transformative yoga practices and deep experiential spirituality in Hindu and related dharmic traditions. This ancient dharmic culture spread to East Asia, Indochina and Indonesia, but also to Central Asia and influenced West Asia and Europe.
Yet Hindu dharma has not been frozen in time and continues to assimilate not only other religions, but also science, democracy and other modern trends, without losing its identity as promoting the spiritual quest above outer forms or dogmas.
It is crucial that India recognises its past, which has a strong Hindu component, in order to understand its cultural heritage. There may be aspects of older traditions that are not politically or scientifically correct in terms of current standards or may need reform, just as is the case with older cultures of the world. But there is much of tremendous value that should not be forgotten.
The fear of Hindu majoritarianism
There is a fear in India that highlighting its Hindu past may alienate non-Hindus or make Hindus intolerant today. There is a fear of Hindu majoritarianism in India, just as there is a fear of Christian majoritarianism in the West, or Islamic majoritarianism in the Middle East.
Yet Hinduism has never had a single book, church, or religious law, nor any single savior or religious leader. It recognises that the Divine dwells in the hearts of all beings as the very power of consciousness. Its views of religion and culture are pluralistic and synthetic, not exclusivist or monolithic. Hinduism has not been an aggressive religion, but one often under siege owing to its emphasis on inner spiritual practice over seeking power in the external world.
The British tried to eradicate pride in India’s past through denigrating Hindu teachings starting with the Vedas. Though they preserved certain Sanskrit texts, their interpretations were condescending and inaccurate. Marxist and Freudian scholars have continued with demeaning interpretations of Hinduism and miss its sublime art and spirituality.
The great gurus of modern India since Vivekananda have kept the teachings alive and expanding in spite of such concerted efforts that have even targeted them personally.
India’s characteristic culture and yogic spirituality that the world honours owes a great deal to its Hindu background. India has more peace and tolerance today than Pakistan and Bangladesh that have rejected their Hindu past and where the percentage of Hindus in the country has been radically reduced. Muslims have greater religious freedom in India than in Pakistan, with Islamic groups like Shias and Ahmadiyyas that are often attacked in Pakistan able to operate freely in India.
Mahatma Gandhi referred to himself as a “proud Hindu”. Yet such a term will rarely be found repeated in media and academic circles in the country today.
Hindu dharma has supported the timeless spirit of India and should be respected for its role. Hinduism remains one of the greatest cultural, religious and spiritual traditions in the world. An India without Hindu dharma would not be India.
ISSN (International Standard Serial Number) is an eight-digit numeric code that uniquely identifies the titles of periodicals and other so-called continuing resources published anywhere in the world. ISSN records are stored in a reference database – the ISSN Portal. The numeric code itself has no meaning, no connotation to the origin or content of the serial.
What is ISSN for?
- You can use ISSN in citations from peer-reviewed journals.
- ISSN is used as an identification code for computer processing, retrieval and data transfer purposes (a check-digit sketch follows at the end of this section).
- ISSN is used by libraries to identify and order journals, for interlibrary loan and file cataloging needs.
- The ISSN is the basic data for efficient electronic delivery of documents.
- The ISSN can be used to create a GTIN 13 barcode according to CSN 97 7116 for the distribution of periodicals. Detailed information on the GS1 Czech Republic website
- ISO 3297 (Information and documentation – International standard serial number) defines a standard code (ISSN) to uniquely identify serial publications (continuing resources).
- No copyright is derived from the assignment of ISSNs or from their use in place of or in connection with the publications they represent (see ISO 3297 clause 5.10).
- Act No 46/2000 Coll., as amended (the Press Act), regulates the obligations of publishers of periodicals, including the obligation to register printed periodicals (distributed publicly and published at least twice a year or more frequently) with the Ministry of Culture, Department of Media and Audiovisual. This Act does not yet regulate the obligations of publishers of online periodicals (continuing resources), nor does it authorise any institution to register them or keep a register of them.
- CSN 97 7116 (Barcodes – Marking serial publications with the GTIN system barcode).
- For more information on ISSN and its general assignment criteria, please refer to the ISSN Manual of the ISSN International Centre.
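The machine-processing role of the ISSN rests on a check digit defined in ISO 3297: the first seven digits are weighted 8 down to 2 and summed, and the complement of that sum modulo 11 (with 10 written as X) becomes the eighth character. Below is a minimal sketch in Python; the helper names and the example ISSN are ours, not text from the standard.

```python
def issn_check_digit(first7: str) -> str:
    # ISO 3297: weights 8..2 over the first seven digits, complement mod 11.
    total = sum(int(d) * w for d, w in zip(first7, range(8, 1, -1)))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def is_valid_issn(issn: str) -> bool:
    digits = issn.replace("-", "").upper()
    return len(digits) == 8 and issn_check_digit(digits[:7]) == digits[7]

print(is_valid_issn("0028-0836"))  # True (the ISSN of the journal Nature)
```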
Why is a banana not living?
Like other fresh fruit and vegetables, bananas stay alive after picking. Like people, bananas breathe, or respire, taking in oxygen and releasing carbon dioxide ― but through their skin. The more a banana respires, the quicker it ripens.
Is fruit a living thing?
The fruits and vegetables we buy in the grocery store are actually still alive, and it matters to them what time of day it is. “Vegetables and fruits don’t die the moment they are harvested,” said lead researcher Dr. Janet Braam, Professor of Biochemistry and Cell Biology at Rice University in Houston, Texas.
What is a banana classified as?
Bananas are both a fruit and not a fruit. While the banana plant is colloquially called a banana tree, it’s actually an herb distantly related to ginger, since the plant has a succulent tree stem, instead of a wood one. The yellow thing you peel and eat is, in fact, a fruit because it contains the seeds of the plant.
Are fruits non-living?
Fruits and vegetables grow while they are part of the plant, and in that sense they are living things. Once plucked from the plant or tree, they no longer grow, and on that view they eventually become non-living things.
Is Onion a living thing?
Onions are living things: an onion bulb is made of living cells and, given soil and water, it will sprout and grow. (Being made of plant cells alone is not the test; paper also comes from plant cells, but those cells are dead.)
Is Carrot a living thing?
A: No! When you harvest a carrot it begins turning into a non-living thing. With nothing left to feed on, it cannot keep living, and it will only stay in good condition for a couple of days, depending on where you store it.
Is a banana a fruit or a berry?
Bananas develop from a flower with a single ovary and have a soft skin, fleshy middle and small seeds. As such, they meet all botanical requirements of a berry and can be considered both a fruit and berry.
Is banana considered a fruit?
Yes. Botanically speaking, the banana is a fruit, since it develops from a flower and contains the plant’s seeds (see above: it even qualifies as a berry).
Is it true that bananas are alive when you eat them?
Bananas are considered a fruit, while plantains are considered a vegetable. A picked banana is no more or less alive than any other fruit with seeds in it. I have a feeling that the tiny seeds in dessert bananas aren’t viable, though, so I guess the case could be made that bananas are slightly less alive than apples are when you eat them.
Are there any seeds in a ripe banana?
The only thing that might be considered “alive” in a picked ripe banana is any seeds it contains. However, the most popular banana type (the Cavendish) does not have viable seeds. Cavendish bananas are specifically bred not to produce them and are instead propagated from a specialized stem of the banana plant called a rhizome.
Are there any health benefits to eating bananas?
Bananas are one of the world’s most appealing fruits, and a wide variety of health benefits are associated with the curvy yellow fruit. Bananas are high in potassium and pectin, a form of fiber, said Laura Flores, a San Diego-based nutritionist. They can also be a good way to get quick energy.
First digital morphing
Silent, black and white film
1) A ten-minute computer-animated film by Charles Csuri and James Shaffer, which was awarded a prize at the 4th annual International Experimental Film Competition in Brussels, Belgium, and is in the collection of The Museum of Modern Art, New York City. The subject was a line drawing of a hummingbird, for which a sequence of movements appropriate to the bird was programmed. Over 30,000 images comprising some 25 motion sequences were generated by the computer.
2) To facilitate control over the motion of some sequences, the programs were written to read all the controlling parameters from cards, one card for each frame. Curve-fit or other data-generating programs were used to punch the parameter decks. Each line of the bird was distributed at random: the computer drew the chaotic version first, and in progressive stages brought the bird back together.
3) Digital morphing is a computer technique that distorts one image at the same time as it fades into another, by marking corresponding points and vectors on the "before" and "after" images used in the morph. For example, one would morph one face into another by marking key points on the first face, such as the contour of the nose or the location of an eye, and marking where these same points exist on the second face. The computer then distorts the first face into the shape of the second face while cross-fading the two. To compute the transformation of image coordinates required for the distortion, the algorithm of Beier and Neely can be used (a minimal sketch of this transform follows after this list).
4) First primitive photorealistic morphing was in NYIT demo 2 (1980).
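Paragraph 3 describes the Beier-Neely field-warping transform; below is a minimal single-line-pair version in NumPy. The function names and the toy example are ours, and a production morph would blend displacements from many line pairs (weighted by line length and distance) and use smoother resampling than nearest-neighbour:

```python
import numpy as np

def perp(v):
    # Rotate a 2D vector by 90 degrees.
    return np.array([-v[1], v[0]], dtype=float)

def warp_single_line(src, P, Q, P2, Q2):
    """Inverse-warp `src` so that the source line P2->Q2 maps onto P->Q.

    This is the single-line-pair core of Beier & Neely (1992); the full
    algorithm repeats it per line pair and averages the displacements
    with weights of the form (length**p / (a + dist))**b.
    """
    h, w = src.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    X = np.stack([xs, ys], axis=-1).astype(float)   # destination pixel coords
    PQ = Q - P
    u = ((X - P) @ PQ) / (PQ @ PQ)                  # position along the line
    v = ((X - P) @ perp(PQ)) / np.linalg.norm(PQ)   # signed distance from it
    P2Q2 = Q2 - P2
    Xs = P2 + u[..., None] * P2Q2 + v[..., None] * perp(P2Q2) / np.linalg.norm(P2Q2)
    xi = np.clip(Xs[..., 0].round().astype(int), 0, w - 1)  # nearest-neighbour
    yi = np.clip(Xs[..., 1].round().astype(int), 0, h - 1)
    return src[yi, xi]

def morph_frame(img_a, img_b, line_a, line_b, t):
    """One morph frame: warp both images toward the interpolated line, then blend."""
    Pt = (1 - t) * line_a[0] + t * line_b[0]
    Qt = (1 - t) * line_a[1] + t * line_b[1]
    wa = warp_single_line(img_a, Pt, Qt, line_a[0], line_a[1])
    wb = warp_single_line(img_b, Pt, Qt, line_b[0], line_b[1])
    return (1 - t) * wa + t * wb

# Toy smoke test on synthetic data: the halfway frame blends both inputs.
a, b = np.zeros((64, 64)), np.ones((64, 64))
la = (np.array([10.0, 32.0]), np.array([54.0, 32.0]))
lb = (np.array([20.0, 20.0]), np.array([44.0, 44.0]))
print(morph_frame(a, b, la, lb, t=0.5).mean())  # 0.5
```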
Treatment of Water Systems Contaminated with Pesticide (Monocrotophos) and Using Them for Recharging Ground Water
Owing to the large-scale use of monocrotophos, an organophosphate (OP) compound, contamination of water systems has been reported from all parts of the world. Despite strict environmental laws such as The Water (Prevention and Control of Pollution) Act, little has been done to tackle this problem, and therefore decontamination and detoxification of pesticide-polluted water is essential. Floodwaters contribute to high levels of pesticide residues in streams, lakes, rivers and ground water. Chemical and physical methods of decontamination are costly and time-consuming; bioremediation provides a suitable alternative. Contaminated water can be pumped to a storage tank for land application, where the pesticide degrades, and the water can later be used to recharge the ground water. The role of soil microorganisms in the attenuation of monocrotophos in two major varieties of soil was determined by comparing the relative rates of its degradation in sterilised and non-sterilised soil samples. The degradation of this widely used organophosphorus insecticide, monocrotophos (dimethyl (E)-1-methyl-2-methylcarbamoyl vinyl phosphate), by soil microorganisms was studied in the laboratory in two major varieties of Indian agricultural soil (red sandy loam and black loam) at concentration levels of 22-25 ppm, under aerobic conditions at 60% water-holding capacity and 28 ± 4 °C. Degradation of monocrotophos at these concentrations in the non-sterilised black and red soils was rapid, accounting for 100% degradation of the applied quantity within 16 and 20 days, respectively. In contrast, 31% and 22% of the initial amount of MCP remained in the sterilised soils even after 30 days of incubation.
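To put the sterile-soil figures on a comparable footing, one can back out the decay rate they imply. The first-order kinetics below are our assumption; the abstract itself reports only residual percentages:

```python
import math

def first_order_k(fraction_remaining: float, days: float) -> float:
    """Rate constant k (per day) implied by C/C0 = exp(-k * t)."""
    return -math.log(fraction_remaining) / days

# Sterilised soils retained 31% and 22% of the applied monocrotophos
# after 30 days of incubation.
for label, frac in (("31% remaining", 0.31), ("22% remaining", 0.22)):
    k = first_order_k(frac, 30.0)
    print(f"{label}: k = {k:.3f}/day, half-life = {math.log(2) / k:.1f} days")
# k = 0.039/day (half-life ~17.8 days) and k = 0.050/day (~13.7 days),
# far slower than the complete degradation seen within 16-20 days in
# the non-sterilised soils.
```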
Today a quiet park adjacent to a railroad line takes one back one hundred and fifty-seven years, for this woodland of longleaf and slash pine was the scene of a fierce battle between forces of the Confederacy and the Union. It was 1864, and five thousand Union troops had moved inland from Jacksonville toward Lake City following an existing rail line. At Olustee they were met by an equal number of men under Confederate General Joseph Finegan.
The ensuing battle was the largest to take place in Florida and not of major importance to the war as a whole. But it was a bloody affair indeed and a complete defeat for the Union forces that suffered casualties of one third of the number that took part. The Confederates lost twenty per cent of their soldiers to death and wounds.
Our visit to the Olustee Battlefield Historic State Park was on a Sunday, a one-hour drive from Jacksonville. We were greeted by Park Ranger Frank Loughlan, who immediately turned on the television monitor for a fifteen-minute video documentary that recreates the battle. Each year a reenactment takes place in February, the month of the battle, with thousands of volunteers dressed as Confederate and Union soldiers; most of the scenes depicted in the film were shot during a reenactment.
Then Frank left us to wander around, taking photos and browsing through the museum, which was once the Olustee railroad depot. The Park and Museum are open daily from 9 a.m. to 5 p.m. The surrounding land is managed by the U.S. Forest Service. There are tables for picnics and trails that follow the battle lines. Called the Battlefield Trail, the main path loops through the park, allowing one to see the tactics of the battle and what followed the defeat of the Union forces. The Park is on U.S. 90, a short distance from I-10. For more information, call 386 758-0400 or visit www.floridastateparks.org.
A short distance from the Park is Ocean Pond Campground operated by the U.S. Forest Service. Ocean Pond played a role in the Olustee battlefield by restricting the movements of the Union advance toward Lake City.
Strokes usually occur in people over the age of 65. According to a recent report, however, young people are now also at high risk for the condition.
Published in the journal Neurology, the consensus report analyzed the management of stroke in people between the ages of 15 and 44 in the US.
This analysis revealed that 15% of the most common type of strokes occur in young people. More young people are also showing higher risk factors for strokes.
Figures reveal that between 532,000 and 852,000 people between the ages of 18 and 44 have had a stroke in the US.
Statistics show that US hospital discharges for strokes among people between 15 and 44 years of age increased by 23% to 53% between 1995/96 and 2007/08, depending on age group and stroke type.
Eighty-five percent of all strokes are ischemic which is caused by blockages that restrict blood flow to the brain.
The risk factors for ischemic strokes are higher for young people. The risks include high blood pressure, high cholesterol, diabetes, smoking, obesity, and congenital heart disease.
There have been limited public health efforts to address the issue of strokes in young people today.
Due to lack of awareness, early diagnoses of the condition remains challenging.
Researchers know that the issue needs to be addressed by making young people more aware of the risks and effects of a stroke.
The Shoebill Stork: Lifespan and Where to See the Shoebill Stork
The shoebill stork is one of the most elusive, rarest and most famous birds in Africa, and by reputation one of the world's ugliest. It is a critically endangered species best observed on an African safari in Uganda, where it can be spotted far more easily than in other African safari destinations. The shoebill is one of the most majestic, prehistoric-looking birds still in existence; indeed, its appearance has fed the popular claim that it is related to the now-extinct dinosaurs.
What is a Shoebill Stork?
The shoebill is a very large, long-legged wading bird, also commonly known as the whalebill, whale-headed stork or shoe-billed stork (scientific name Balaeniceps rex). The elusive bird takes its name from its enormous shoe-shaped bill, ranked as the third-longest bill among living birds after those of pelicans and the large storks. It was classified with the storks in the order Ciconiiformes on the basis of its morphology, but genetic evidence places it with the pelicans and herons in the Pelecaniformes. The shoebill is a tall bird, typically standing 110 to 140 cm and weighing 4 to 7 kg, and its bill is straw-coloured with erratic greyish markings. Its exceptionally large feet enable it to stand on aquatic vegetation while hunting, and its neck is relatively shorter and thicker than those of other long-legged wading birds.
Why is the Shoebill Stork a unique bird?
The shoebill stork has the third-longest beak in the world and is one of the most sought-after birds on an African birding safari thanks to its unique features: prehistoric, dinosaur-like looks; striking pale blue eyes; a shoe-shaped bill spanning seven inches or more, resembling a Dutch wooden clog; a long lifespan of 36 years or more; one of the slowest flight speeds of any bird; extreme patience; and an ambush hunting style, standing motionless like a statue for long periods before striking any prey that comes within a safe distance.
How rare is it to see a shoebill stork?
Shoebill storks face a serious risk of extinction due to several factors, including hunting, habitat destruction, livestock ranching and climate change, which have reduced their numbers, so the birds are rarely seen in the wild. The shoebill is a critically endangered bird whose population is decreasing and now stands at fewer than 5,000 individuals.
Where to see the Shoebill stork in Uganda
Among all the birding destinations in Africa, Uganda is ranked as the top birding destination with the highest chances of spotting this critically endangered species in its natural habitat of freshwater swamps, marshes, lakes and rivers with mixed vegetation. Below are the best birding places in Uganda where the shoebill can be seen.
Mabamba Swamp
This is one of the best places to spot the elusive shoebill stork. The swamp is situated west of Entebbe on the northern shores of Lake Victoria in Uganda. It covers an area of 2,424 ha and holds thick marshes of papyrus and water lilies. The swamp was named after the lungfish commonly found in its waters, which attracts the shoebill, since lungfish is the shoebill's primary food.
Semuliki National park
Semuliki National Park is one of Uganda's finest birding destinations. Gazetted in October 1993, it is situated in the western region, in Bundibugyo district, mainly in Bwamba County, under Uganda Wildlife Authority management. While in Semuliki, the shoebill stork can be easily viewed close to the Lake Albert area.
Queen Elizabeth National park
Queen Elizabeth National Park is undoubtedly Uganda's most famous park, known for harbouring a large number of wildlife species. It is located in Kasese district, about 370 km south-west of Kampala. The park is famed for the majestic tree-climbing lions of the Ishasha sector, and the shoebill stork can be spotted along the Lake Edward flats in that same sector.
Murchison Falls National Park
This is one of the most visited national parks in Uganda, situated in the north-west of the country and spreading inland from the shores of Lake Albert along the Victoria Nile up to Karuma Falls. Murchison Falls National Park is famously known for having the most powerful falls in the world. While in this park, the shoebill can be spotted on a boat cruise in the delta where the Victoria Nile empties into Lake Albert.
Lake Mburo National Park
Lake Mburo National Park is the only park in Uganda with eland, impala and klipspringer, and it is home to the country's largest population of zebras. The park is situated in Nyabushozi County, Kiruhura district, near Mbarara. The shoebill can be spotted on a boat cruise on Lake Mburo.
Makanaga Bay Swamp
After Mabamba swamp, Makanaga swamp is the next incredible birding site for observing the shoebill stork and other water birds. This wetland is situated on Lake Victoria in Mpigi district and can be easily reached by canoe from Entebbe or Kampala.
Lugogo Swamp
Lugogo swamp is another wonderful wetland for shoebill watching. It lies within Ziwa Rhino Sanctuary in Nakasongola, in the River Kafu basin north of Kampala along the Kampala-Gulu highway, and is about 10 km wide.
Uganda Wildlife Education Centre
The Uganda Wildlife Education Centre is an exciting place to see the much-cherished shoebill. It is located in Entebbe, a ten-minute drive from Entebbe International Airport, and stretches down to Lake Victoria. The centre covers approximately 70 acres comprising three ecosystems (savannah, forest and wetland), where the shoebill stork and other bird species can be sighted throughout the day.
Can the Shoebill stork fly?
Despite their weight and height, shoebills can fly just like other bird species, though they are among the slowest fliers of any bird. Their long flights are rare, and they seldom fly more than 100 to 500 m at a time. The shoebill flies with its neck retracted, and its flapping rate is estimated at only 150 flaps per minute, with the wings held flat while soaring.
What do the Shoebill storks feed on?
The shoebill stork preys on wetland vertebrates. It favours aquatic species, mainly marbled and African lungfish, eels and catfish, but it also relishes Nile monitor lizards, water snakes, frogs, turtles, small waterfowl and baby crocodiles, which it hunts in the swamps. The shoebill hunts patiently, standing motionless like a statue on floating vegetation and striking at its prey at the right moment with its beak. The species hunts entirely by sight, stalking its prey silently without engaging in tactile hunting.
What is the lifespan of a shoebill stork?
Shoebills are solitary birds with an extraordinarily long lifespan compared to many other bird species and animals; they can live for more than 35 years in the wild.
Are shoebills friendly to humans?
Shoebills are docile, shy birds, especially around humans. They are not aggressive in the presence of people; the most they will do is stare back, and there are no records of shoebills attacking humans, though they are quite sensitive to human disturbance. The shoebill is considered one of the five most desirable birds in Africa by birders, and researchers have been able to observe shoebills on their nests from as close as 2 metres without provoking any threatening behaviour.
Why do shoebills bow and shake their heads?
Shoebills communicate with each other by bowing, and they also bow to attract mates during their mating ritual. They shake their heads to expel the water and algae they scoop up along with their main meal, which they keep held tightly in the bill.
What happens if you bow to a shoebill?
Shoebills are intriguing in this respect: if a human takes a deep bow, a shoebill will reportedly bow back as a sign of respect and may even allow people to approach and touch it, whereas if you do not bow, it flies away.
Why is a shoebill stork scary?
The shoebill is ranked first among the seven scariest-looking bird species because of its strange, death-like stare. Its prehistoric appearance can seem terrifying and creepy thanks to its oversized shoe-shaped beak, and it is a giant bird with a large body and huge wings.
As the first international standards developed for IDPs, the principles were presented to the United Nations Commission on Human Rights in April 1998 by the Representative of the Secretary-General on Internally Displaced Persons. The 53-member Commission, in a resolution adopted unanimously, took note of the principles and acknowledged the Representative's stated intention to make use of them in his work. It requested him to report on these efforts and on the views expressed to him by governments, intergovernmental organisations and NGOs. The resolution further noted that the Inter-Agency Standing Committee (IASC), composed of the heads of the major international humanitarian and development organisations, had welcomed the Guiding Principles and encouraged its members to share them with their Executive Boards. The IASC's March 1998 decision had also encouraged its members to share the principles with their staff and to apply the principles in their activities on behalf of IDPs.
Reinforcing the IASC decision, UNHCR, UNICEF, WFP, ICRC and IOM made statements before the Commission emphasising the importance of the Guiding Principles to their work. UNICEF described the principles as "an excellent reference point which will serve as the international standard for the protection and assistance of IDPs." WFP observed that the principles would increase international awareness of the specific problems IDPs face as well as the legal norms relevant to addressing their needs. NGOs, in interventions to the Commission, urged effective action in the field on the basis of the principles' provisions.
Although the Commission was not asked or expected to adopt the principles, it took an important step toward advancing protection for IDPs by acknowledging the principles and their expected use in the field.
The need for principles
The Guiding Principles consolidate into one document all the international norms relevant to IDPs, otherwise dispersed in many different instruments. Although not a legally binding document, the principles reflect and are consistent with existing international human rights and humanitarian law. In re-stating existing norms, they also seek to address grey areas and gaps. An earlier study had found seventeen areas of insufficient protection for IDPs and eight areas of clear gaps in the law.(1) No norm, for example, could be found explicitly prohibiting the forcible return of internally displaced persons to places of danger. Nor was there a right to restitution of property lost as a consequence of displacement during armed conflict or to compensation for its loss. The law, moreover, was silent about internment of IDPs in camps. Special guarantees for women and children were needed.
The principles, developed by a team of international lawyers, do not create a new legal status for IDPs. Since IDPs are within their own country, they enjoy the same rights and freedoms as other persons in their country. They do, however, have special needs by virtue of their displacement which the principles seek to address.
They apply to both governments and insurgent forces since both frequently cause displacement and subject IDPs to abuse. They also deal with all phases of displacement. Most intergovernmental organisations and NGOs become involved only after displacement takes place, or during the phase of return and reintegration. But the principles also address the prevention of unlawful displacement. In the introduction to the principles, IDPs are described as "persons or groups of persons who have been forced or obliged to flee or to leave their homes or places of habitual residence, in particular as a result of or in order to avoid the effects of armed conflict, situations of generalised violence, violations of human rights or natural or human-made disasters, and who have not crossed an internationally recognised frontier." This is the broadest definition of IDPs in use at the international or regional level.
The content of the principles
The first section of the principles deals with protection against displacement and explicitly states the grounds and conditions on which displacement is impermissible and the minimum procedural guarantees to be complied with, should displacement occur. The principles make clear, for example, that displacement is prohibited when it is based on policies of apartheid, 'ethnic cleansing', or other practices "aimed at or resulting in altering the ethnic, religious or racial composition of the affected population." They also consider as arbitrary displacement "cases of large-scale development projects, which are not justified by compelling and overriding public interests." It is also made clear that displacement should not be carried out in a manner that violates the rights to life, dignity, liberty, or the security of those affected. States, moreover, have a particular obligation to provide protection against displacement to indigenous peoples and other groups with a special dependency on, and attachment to, their lands [see article by Chatty].
The section relating to protection during displacement covers a broad range of rights. In most instances, general norms are affirmed, followed by the specific rights needed by IDPs to give effect to these norms. For example, after the general norm prohibiting cruel and inhuman treatment is affirmed, it is specified that IDPs must not be forcibly returned or resettled to conditions where their life, safety, liberty and/or health would be at risk. Similarly, after the general norm on respect for family life, it is specified that families separated by displacement should be reunited as quickly as possible. And the general norm recognising a person before the law is given effect by specifying that IDPs shall be issued all documents necessary to enable them to enjoy their legal rights and that authorities must facilitate the replacement of documents lost in the course of displacement.
Special attention is paid to the needs of women and children, including a prohibition against gender-specific violence and provisions calling for the full participation of women in the planning and distribution of food and basic supplies. Access by women to female health providers and to reproductive health care is also affirmed, and the equal rights of women to obtain documents and to have such documentation issued in their own names is provided. The forcible recruitment of children into armed forces is prohibited and special efforts are called for to reunite children with their families.
Of particular importance are the principles relating to the provision of humanitarian assistance, given the frequent efforts of governments and insurgent groups to obstruct relief and deliberately starve populations. The principles prohibit starvation as a method of combat. They affirm the right of IDPs to request humanitarian assistance, the right of international actors to offer such assistance and the duty of states to accept such offers. Indeed, consent on the part of governments and other authorities to receiving humanitarian assistance cannot be arbitrarily withheld. "Rapid and unimpeded access to the internally displaced" is insisted upon.
Another innovative provision concerns the role of humanitarian organisations. In providing assistance, these are asked to "give due regard to the protection needs and human rights of internally displaced persons," and to "take appropriate measures in this regard." Since many humanitarian and development organisations have provided assistance to IDPs without paying sufficient attention to their protection and human rights needs, the emphasis here on protection is a welcome change. Indeed, the futility of feeding people without attention to their protection needs has been demonstrated time and again, in Bosnia and in crises around the world. Acknowledging this, UN Secretary-General Kofi Annan has called for a more integrated approach in humanitarian emergencies so that protection and assistance are addressed comprehensively.
The final section of the principles relating to resettlement and reintegration makes clear that IDPs have a right to return to their homes or places of habitual residence voluntarily and in safety and dignity, or to resettle voluntarily in another part of the country. This is especially pertinent since IDPs are often forced to return to their homes whether or not the areas are safe and irrespective of their wishes to resettle in other parts of the country. Another necessary provision is the one providing for the recovery of property and possessions lost as a result of displacement and for compensation or reparation if recovery is not possible.
Application of the principles
The important next step is to give wide dissemination to the principles, so as to increase international awareness of the needs of IDPs and of the legal standards pertinent to their needs. While the principles alone cannot prevent displacement or the violation of the rights of IDPs, they do serve notice to governments and insurgent forces that their actions are being monitored and that they bear responsibility not to create conditions causing displacement and to protect persons already displaced.
UN agencies have begun to publish and circulate the principles and to translate them into languages other than English.(2) The Under-Secretary-General for Humanitarian Affairs, who chairs the inter-agency process, has moved quickly to disseminate the principles and will publish 10,000 copies for use in the field. The Global IDP Survey (Norwegian Refugee Council) is also circulating the principles. But reaching millions of IDPs and the organisations assisting them will require a sustained, global effort in which regional organisations and international and local NGOs should become involved.
Training will also be needed. Although the principles are set forth clearly and are easy to comprehend, training in their specific provisions needs to be made a part of the UN disaster management training programme and comparable NGO programmes. UN peacekeepers and police forces also need to be trained in protection and human rights of IDPs. NGOs have suggested a popularised handbook based on the principles to assist in the training of field workers and local authorities; one is currently being prepared under the auspices of the Brookings Institution Project on Internal Displacement.
Monitoring application of the principles is critical to their effectiveness. Since there is no monitoring body to oversee the implementation of the principles, UN agencies, regional bodies and NGOs will have to perform this function.
The Inter-American Commission on Human Rights of the Organization of American States has already used the principles to evaluate the conditions of IDPs in Colombia. In addition, the Representative of the Secretary-General on IDPs used them in his discussions on a mission to Azerbaijan in May. But systematic monitoring will be needed to ensure that the principles are applied on a worldwide basis. The IDP database being developed by the UN in cooperation with the Norwegian Refugee Council/Global IDP Survey should prove an important means of monitoring their application. NGOs such as the Women's Commission for Refugee Women and Children could also perform a valuable service by monitoring the extent to which the principles are being implemented in the case of women and children. Local groups closest to IDPs need to be brought into the process, and affected populations themselves should be encouraged to monitor their own conditions in light of the principles.
Advocacy and intercessions, especially by the UN, with governments and insurgent groups will prove essential to increased protection. Even in cases where the combatants do not feel bound by accepted standards, the principles can serve notice that their conduct is open to scrutiny. In the case of governments interested in developing national law for IDPs, the principles should prove especially instructive. They should also help local authorities in dealing with problems of displacement.
But first, of course, the word has to get out. NRC's Global IDP Survey, the Brookings Institution Project on Internal Displacement and the US Committee for Refugees will all be highlighting the Guiding Principles in regional gatherings they are planning to focus attention on IDPs. UN agencies will likewise be promoting increased attention to the principles in the field. Indeed, everyone working with IDPs should become acquainted with the principles and how best to apply them to enhance protection for the displaced.
Roberta Cohen is Co-Director of the Brookings Institution Project on Internal Displacement and co-author with Francis M Deng of Masses in Flight: The Global Crisis of Internal Displacement (Brookings, 1998).
1. See Compilation and Analysis of Legal Norms, Report of the Representative of the Secretary-General on Internally Displaced Persons, E/CN.4/1996/52/Add.2, United Nations, December 1995. See also the chapter on ‘Legal Framework’ by Walter Kaelin and Robert Kogod Goldman in Roberta Cohen and Francis M Deng, Masses in Flight: The Global Crisis of Internal Displacement, Brookings, 1998.
2. For a copy in English or French, contact: Allegra Baiocchi, Office for the Coordination of Humanitarian Affairs, DC 1-1568, 1 UN Plaza, New York, NY 10017, USA. Fax: +1 212 963 1040. Email: email@example.com. Also available at wwwnotes.reliefweb.int. For a copy in Spanish, Russian, Arabic or Chinese, contact: Erin Mooney, Office of the UN High Commissioner for Human Rights, Palais des Nations, 1211 Geneva 10, Switzerland. Email: firstname.lastname@example.org
Lightning crackles to life and evolves on Jupiter the same way it does on Earth, a new study finds.

Jovian lightning, which occurs about as frequently as the phenomenon does on Earth, was first spotted by NASA's Voyager 1 spacecraft over 40 years ago. Back then, the well-traveled probe picked up faint radio signals spanning a few seconds — nicknamed whistlers — of the kind expected from lightning strikes. At the time, those bolts of electricity certified Jupiter as the only planet besides Earth known to host lightning strikes. Their evolution on the gaseous world, however, has puzzled scientists for decades.

Now, a team studying five years' worth of data from NASA's Juno spacecraft, which has been orbiting Jupiter since 2016, has found that Jovian lightning develops in the same “step-wise” manner as it does on Earth. The new observations show that despite the two planets being polar opposites in size and structure — our rocky planet is far smaller than Jupiter and has a solid surface, which the gas giant lacks — both host the same kind of electrical storms.
On Earth, lightning originates inside turbulent clouds, whose upward winds carry water droplets aloft and freeze them into ice, while downward winds push the frigid blobs back toward the bottoms of the clouds. Where the falling ice meets the rising water drops, electrons are stripped off the former, resulting in a cloud whose base is negatively charged and whose top is positive, separated by insulating air. When these charges build up, the familiar lightning bolt zaps within a cloud, or sometimes from a cloud's base to the ground. Earlier research had found that the same process unfolds in the Jovian atmosphere.

Though lightning strikes on Earth look like long, smooth bolts from afar, researchers know that each spark of electricity is in fact made up of distinct steps. Each step beams out isolated radio emissions, whose detection is often the only way to understand what is happening inside thunderclouds.

“It was not clear if such a stepping process also occurs in Jovian clouds,” Ivana Kolmašová, a senior research scientist at the Institute of Atmospheric Physics of the Czech Academy of Sciences in Prague and lead author of the new study, told Space.com.

That is because the earlier spacecraft that studied lightning on Jupiter — NASA's Voyager 1 and Voyager 2, Galileo and Cassini — did not carry instruments sensitive enough to capture the radio signals in granular detail. The Waves instrument onboard Juno, however, collected 10 times more radio emissions than its predecessors. It did so by picking up lightning signals separated by as little as one millisecond, which revealed step-like behavior in which air in Jovian clouds is charged up and forms lightning — the same way it does on Earth.

“The most challenging and also the most time-consuming part of the work was the search for lightning signals in the data of the Waves instrument,” Kolmašová said.

On Jupiter, one such lightning step may span anywhere from a few hundred to a few thousand meters, although this is hard to confirm with existing Juno data, Kolmašová and her team wrote in the new study.

While the new findings shed more light on the early stages of lightning on Jupiter, much remains to be revealed. For example, while lightning develops in similar ways on Earth and Jupiter, where the phenomenon occurs is wildly different on the two worlds. On the gas giant, a large share of thunderstorms is found at midlatitudes and higher, including the polar regions; they are absent at the giant planet's equator. That is the opposite of the pattern back home, where areas close to the equator report the most lightning strikes.

“We have almost no lightning activity close to the poles on Earth,” Kolmašová told Space.com. “It means that the conditions for the formation of Jovian and terrestrial thunderclouds are probably very different.”

The lightning on Jupiter is also distributed lopsidedly, with the northern hemisphere hosting more strikes than the southern half. The reason for this, however, is not yet clear.

“We also do not know why we have not seen any lightning coming from the [Great] Red Spot so far,” she added.

This research is described in a paper published Tuesday (May 23) in the journal Nature Communications.
Sporting excellence excites and inspires. Whether it is Serena Williams dominating women's tennis year after year, Cristiano Ronaldo splitting the defense with a perfectly timed goal-assist, or LeBron James faking a drive, pulling up and shooting the “dagger,” moments of sporting brilliance are at once captivating and mind-boggling. Our fascination with and enthusiasm for sports are deeply rooted, with historians tracing the origin of sports back to an early form of wrestling in Mongolia, which surfaced around 7000 B.C.
Since its advent, humans have pursued sporting excellence, embracing competition and striving to hone and refine their athletic skills. Part of what makes professional sport particularly engaging is that it represents the upper limits of human achievement: the fastest, most accurate, most skillful our species can be. As spectators, we mere mortals can only wonder how they do it.
Performing at elite levels requires athletes to execute feats of exceptional strength, speed, endurance, and agility. Undeniably, an athlete’s success is influenced by their physical capabilities. The likes of Tom Brady, Lionel Messi, and Ronda Rousey are not only faster, fitter, and stronger than the general public, they are faster, fitter, and stronger than most of their competitors. Yet a growing body of psychological research has revealed that athletes not only act or move differently than nonathletes, they think differently, too.
Dr. Jocelyn Faubert, psychophysicist and research chair of the Natural Sciences and Engineering Research Council of Canada at the University of Montreal, has investigated the superiority of professional athletes’ perceptual-cognitive abilities. In his study published in Nature’s Scientific Reports, Faubert investigated the ability of professional athletes to learn complex and dynamic visual scenes, when given no contextual or background information.
One group of test subjects included professional soccer, ice hockey, and rugby players. Their performances were compared to those of amateur athletes and nonathletes. Initially, participants were each presented with an arrangement of eight spheres, four of which were briefly highlighted. For the next eight seconds, all eight spheres were continuously rearranged. Participants were then asked to identify the four initially highlighted spheres.
Faubert found that professional athletes performed exceedingly well on this task. They were able to begin the task with the spheres moving at a faster rate than they moved for the other participants, and, importantly, the learning curve for professional athletes was much steeper than it was for amateur athletes and nonathletes. Interestingly, the amateur athletes learned the task faster than nonathletes did, suggesting that these refined perceptual and attentional abilities may develop through sporting experience.
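To make the trial structure concrete, here is a minimal Python sketch of a session of this kind of multiple-object-tracking task. It is an illustration only: the sphere and target counts match the description above, but the staircase rule, the speed values and the random stand-in “observer” are assumptions made for this sketch, not Faubert's published implementation, which used animated 3D scenes and real participants.

```python
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible

NUM_SPHERES = 8   # spheres shown on each trial, as described above
NUM_TARGETS = 4   # the four briefly highlighted targets
# (the real task also animates the spheres for eight seconds;
# the motion itself is omitted from this sketch)

def run_trial(speed):
    """One trial: choose four target spheres, then score a response."""
    targets = set(rng.sample(range(NUM_SPHERES), NUM_TARGETS))
    # A real participant watches the spheres move at `speed` and then
    # clicks four of them; a random guesser stands in for the observer.
    response = set(rng.sample(range(NUM_SPHERES), NUM_TARGETS))
    return targets == response

def staircase(start_speed=1.0, trials=50, step=0.05):
    """One-up/one-down staircase (an assumed rule): raise the speed
    after a correct trial, lower it after an error, converging on the
    fastest speed the observer can reliably track."""
    speed = start_speed
    for _ in range(trials):
        speed *= (1 + step) if run_trial(speed) else (1 - step)
    return speed

print(f"converged speed after 50 trials: {staircase():.2f}")
```

In a setup like this, a higher converged speed means better tracking — the sense in which the professional athletes could begin the task with the spheres moving faster than they did for the other participants.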
Faubert emphasizes, “To achieve high levels on this task, one requires exquisite selective, dynamic, distributed, and sustained attention skills for brief yet intense periods.” Athletes need to be able to track and remember relevant information, in a fast and dynamic context, often under severe time and psychological pressure. In sports such as tennis, baseball, or cricket, athletes usually have less than half a second to respond to the ball once it is in play, leaving no time for slow and deliberative thinking or planning. Elite athletes, the study found, appear to be able to hyperfocus for short periods of time, which results in extraordinary learning functions. Psychological capacities of this kind are likely to underpin sporting brilliance.
Evidence of the extraordinary attentional and perceptual capabilities of athletes has been corroborated by neuroimaging data, suggesting that there is also something special in the brains of elite athletes. Working with expert basketball players, Dr. Ana Maria Abreu and colleagues from the Social Neuroscience Lab at the Santa Lucia Foundation in Rome used functional MRI to explore the brain activity associated with players' predictions of the outcome of free throws.
Strangers in a Strange Land: Newport’s Slaves
Newport was the hub of New England’s slave trade, and at its height, slaves made up one-fifth of its population.
Yet little is known about their day-to-day lives.
Ledger documents traced to Caesar Lyndon, a slave for one of the Colony’s early governors, provide one rare glimpse into the private life of an 18th-century slave. But, overall, the slaves left few, if any, journals or diaries to illuminate what they thought or how they felt.
The absence of written material forces historians to rely on tombstones, newspaper accounts, wills, court records and the documents of slave owners and abolitionists to piece together an account of their lives.
* * *
On a cold day in 1768, Pompe Stevens told his brother’s story on a piece of slate. Both men were slaves. A gravestone polisher and carver, Pompe worked for John Stevens Jr., who ran a well-known masonry shop on Thames Street in Newport.
Carefully gouging the stone, Pompe reduced his brother’s life to a single sentence:
This stone was cut by Pompe
Stevens in Memory of his Brother
Cuffe Gibbs, who died Dec. 27th 1768
Little else is known about Gibbs.
Experts say he probably came from Ghana, on the west coast of Africa. His given name, Cuffe, is an Anglicized version of Kofi, a traditional name given to Ghanaian boys born on Friday.
But it’s uncertain who owned Gibbs or what he did in Newport, the hub of New England’s slave trade.
More is known about his brother.
Pompe Stevens outlived three wives and eventually won his freedom.
Theresa Guzman Stokes, who wrote a booklet on Newport’s slave cemetery, says Cuffe’s gravestone tells us even more.
Cuffe and Pompe served different masters and lived apart — and Pompe wanted others to understand that they were human, not unfeeling pieces of property, she says.
“He was trying to make it clear. He was saying, ‘This is who I am and this is my brother.’”
* * *
A gravedigger buried Cuffe Gibbs in the northwest corner of the Common Burying Ground, on a slope reserved for Newport’s slaves.
Already, many headstones dotted the hill.
Newporters had been importing slaves from the West Indies and Africa since the 1690s. By 1755, a fifth of the population was black. Only two other Colonial cities — New York and Charleston, S.C. — had a greater percentage of slaves.
Twenty years later, a third of the families in Newport would own at least one slave. Traders, captains and merchants would own even more. The wealthy Francis Malbone, a rum distiller, employed 10 slaves; Capt. John Mawdsley owned 20.
On Newport’s noisy waterfront, enslaved Africans cut sails, knotted ropes, shaped barrels, unloaded ships, molded candles and distilled rum. On Thames Street, master grinder Prince Updike — a slave owned by the wealthy trader Aaron Lopez — churned cocoa and sugar into sweet-smelling chocolate.
Elsewhere, Newport’s slaves worked as farmers, hatters, cooks, painters, bakers, barbers and servants. Godfrey Malbone’s slave carried a lantern so that the snuff-loving merchant could find his way home after a midnight dinner of meat and ale.
“Anyone who was a merchant or a craftsman owned a slave,” says Keith Stokes, executive director of the Newport County Chamber of Commerce. “By the mid-18th century, Africans are the entire work force.”
* * *
Some of the earliest slaves came from the sugar plantations of the West Indies, where they had been “seasoned” — developing a resistance to European diseases and learning some English.
Later, slaves were brought directly from slave forts and castles along the African coast. Newporters preferred younger slaves so they could train them in specific trades.
The merchants often sought captives from areas in Africa where tribes already possessed building or husbandry skills that would be useful to their New World owners, Stokes says.
Newly arrived slaves were sometimes held in waterfront pens until they were sold at public auction. Others were sold from private wharves. On June 23, 1761, Capt. Samuel Holmes advertised the sale of “Slaves, just imported from the coast of Africa, consisting of very healthy likely Men, Women, Boys, Girls” at his wharf on Newport harbor.
In the early 1700s, lawyer Augustus Lucas offered buyers a “pre-auction” look at a group of slaves housed in his clapboard home on Division Street.
Many more were sold through private agreements.
The slaves were given nicknames like Peg or Dick, or names from antiquity, like Neptune, Cato or Caesar. Pompe Stevens was named after the Roman general, Pompey the Great.
* * *
The slaves were thrust into a world of successful merchants like William and Samuel Vernon, who hawked their goods from the docks and stores that rimmed the waterfront. From their store on John Bannister’s wharf, the Vernons offered London Bohea Tea, Irish Linens and Old Barbados Rum “TO BE SOLD VERY CHEAP, For Cash only.”
On Brenton’s Row, Jacob Richardson offered a “large assortment of goods” from London, including sword blades, knee buckles, pens, Dutch twine, broadcloths, buff-colored breeches, gloves and ribbons.
As property, slaves could be sold as easily as the goods hawked from Newport’s wharves. In December 1762, Capt. Jeb Easton listed the following items for sale: sugar, coffee, indigo — “also four NEGROES.”
Although Newport was growing — in 1761 the town boasted 888 houses — it was a densely packed community. Most homes, crowded on the land above the harbor, were small.
Slaves slept in the homes of their masters, in attics, kitchens or cellars. In some instances, African children even slept in the same room or bed as their masters.
The opportunity for slaves to establish families or maintain kinship ties was almost impossible in Colonial Newport, says Edward Andrews, a University of New Hampshire history student studying Rhode Island slavery.
His theory is that slaves and servants were discouraged from marrying or starting families to curb urban crowding. Also, some indentured servants had to sign contracts forbidding fornication or matrimony, he says, because Newporters wanted to restrict the growth of the destitute and homeless.
Many slaves had to adopt their master’s religion. Slaves owned by Quakers worshipped at Newport’s Meeting House. Slaves owned by Congregationalists heard sermons from the Rev. Ezra Stiles. The slave Cato Thurston, a dock worker, was a “worthy member of the Baptist Church” who died “in the faith” while under the care of the Rev. Gardner Thurston.
But even in religion, Africans could only participate partially; most sat in balconies or in the rear of Newport’s churches.
Increasingly restrictive laws were passed to control the slaves’ lives.
Under one early law, slaves could not be out after 9 p.m. unless they had permission from their master. Offenders were imprisoned in a cage and, if their master failed to fetch them, whipped.
Another law, passed in 1750, forbade Newporters to entertain “Indian, Negro, or Mulatto Servants or Slaves” without permission from their masters, and also outlawed the sale of liquor to Indians and slaves. A 1757 law made it illegal for ship masters to transport slaves outside the Colony.
Some fought back by running away. In 1767, a slave named James ran away from the merchants Joseph and William Wanton. It wasn’t unusual.
From 1760 to 1766, slave owners paid for 77 advertisements in the Newport Mercury, offering rewards for runaway slaves and servants.
Newport was unlike the huge plantations in the South, where slaves worked the hot fields by day and crowded into shacks behind the master’s columned mansion at night.
“People sometimes think slaves were better off here because they weren’t picking cotton, but on the other hand, psychologically and socially, they were very much dominated by European life,” says Stokes.
While oppressed, Newport’s slaves still emerged better equipped to understand and navigate the world of their masters.
They learned skills, went to church and became part of the social fabric of the town, achieving a kind of status unknown elsewhere, Stokes says.
“You can’t compare Newport to the antebellum South,” he says. “These are not beasts of the field.”
In fact, many in Newport found ways to forge new lives despite their status as chattel. Some married, earned money, bought their freedom and preserved pieces of their culture.
Caesar Lyndon, an educated slave owned by Gov. Josiah Lyndon, worked as a purchasing agent and secretary. With money he managed to earn on the side, he bought good clothes and belt buckles.
In the summer of 1766, Caesar and several friends, including Pompe Stevens, went on a “pleasant outing” to Portsmouth. Caesar provided a sumptuous feast for the celebrants: a roasted pig, corn, bread, wine, rum, coffee and butter.
Two months later, Caesar married his picnic companion, Sarah Searing, and a year later, Stevens married his date, Phillis Lyndon, another of the governor’s slaves.
Slaves often socialized on Sunday, their day off.
And many slaves worked on trade ships, even some bound for Africa. At sea, they found a new kind of freedom, says Andrews. “They were mobile in a time of immobility.”
Slaves and freed blacks preserved their culture through funeral practices, bright clothing and the revival of their African names.
Beginning in the 1750s, Newport’s Africans held their own elections. The ceremony, scholars say, echoed elements of African harvest celebrations.
During the annual event, slaves ran for office, dressed in their best clothes, marched in parades and elected “governors” and other officials.
White masters, who loaned their slaves horses and fine clothes for the event, considered it a coup if their slaves won office.
Historians disagree on the meaning of the elections. Some historians say those elected actually held power over their peers. Others say it was merely ceremonial.
“Election ceremonies are common in all controlled societies,” says James Garman at Salve Regina University. “They act as a release valve. But no matter whose purpose they serve, they don’t address the social inequities.”
* * *
On Aug. 26, 1765, a mob of club-carrying Newporters marched through the streets and burned the homes and gardens of a British lawyer and his friend. A day earlier, merchants William Ellery and Samuel Vernon burned an Englishman in effigy. The Colonists were angry about the English Parliament’s proposed Stamp Act, which would place a tax on Colonial documents, almanacs and newspapers.
Eventually Parliament backed off, and a group of Newporters again hit the streets, this time to celebrate by staging a spectacle in which “Liberty” was rescued from “Lawless Tyranny and Oppression.”
As historian Jill Lepore notes in her recent book on New York slavery, New England’s Colonists championed liberty and condemned slavery. But, in their political rhetoric, slavery meant rule by a despot.
When they talked about freedom, Newport’s elite were not including freedom for the 1,200 African men, women and children who lived and worked in the busy seaport. Many liberty-loving merchants — Ellery and Vernon included — owned or traded slaves.
“I call it the American irony,” says Stokes of the days leading up to the American Revolution. “We’re fighting for political and religious freedom, but we’re still enslaving people.”
Some did not miss the irony.
In January 1768, the Newport Mercury stated, “If Newport has the right to enslave Negroes, then Great Britain has the right to enslave the Colonists.”
By the end of the decade, a handful of Quakers and Congregationalists began to question Newport’s heavy role in the slave trade. The Quakers — often referred to as Friends — asked their members to free their slaves.
And, a few years later, the Rev. Samuel Hopkins, pastor of the First Congregational Church, angered some of his congregation when he started preaching against slavery from the pulpit, calling it unchristian.
Nearby, white school teacher Sarah Osborn provided religious services for slaves.
By 1776, the year the Colonies declared their independence from English rule, more than 100 free blacks lived in Newport. Some moved to Pope Street and other areas on the edge of town, or to Division Street, where white sympathizers like Pastor Hopkins lived.
In 1784, the General Assembly passed the Negro Emancipation Act, which freed all children of slaves born after March 1, 1784. All slaves born before that date were to remain slaves for life. Even the emancipated children did not get freedom immediately. Girls remained slaves until they turned 18; boys were slaves until they were 21.
That same year, Pastor Hopkins told a Providence Quaker that Newport “is the most guilty respecting the slave trade, of any on the continent.” The town, he said, was built “by the blood of the poor Africans; and that the only way to escape the effects of divine displeasure, is to be sensible of the sin, repent, and reform.”
After the American Revolution, Newport’s free blacks formed their own religious organizations, including the African Union Society, the nation’s first self-help group for African-Americans.
Pompe Stevens was among them.
No longer a slave, he embraced his African name, Zingo.
The society helped members pay for burials and other items, and considered various plans to return to Africa. In time, other groups formed, including Newport’s Free African Union Society.
In 1789, the society’s president, Anthony Taylor, described Newport’s black residents as “strangers and outcasts in a strange land, attended with many disadvantages and evils … which are like to continue on us and on our children while we and they live in this Country.”
Paul Davis is a former staff writer for The Providence Journal. |